\section{Introduction} The nature of the first supermassive black holes (SMBHs), powering the most luminous quasars observed at $z\sim 6$, is still far from being understood. These actively accreting BHs of $10^9-10^{10}$ M$_\odot$ must have formed and grown in less than 1 Gyr. The Eddington luminosity $L_{\rm Edd}$, defined as the maximum luminosity that a black hole (BH) can achieve as a result of the balance between radiation and gravitation, classically provides a limit to the rate at which a BH can accrete gas. If we assume that the BH accretes a fraction $(1 - \epsilon_r)$ of the infalling material, at the Eddington rate $\dot{M}_{\rm Edd,1} = L_{\rm Edd}/c^2$, its mass growth can be described as \begin{equation}\label{eq:massevo} M_{\rm BH}(t) = M_{0} e^{\frac{1 - \epsilon_r}{\epsilon_r}\frac{t}{t_{\rm Edd}}}, \end{equation} \noindent where $\epsilon_r$ is the radiative efficiency, $M_{0}$ is the initial mass of the seed BH and $t_{\rm Edd} \sim 0.45$ Gyr is the Eddington time. Two main seed formation mechanisms have been proposed (see e.g. \citealt{Volonteri2008,Volonteri2009,Volonteri2010,Volonteri2012} and \citealt{Latif2016} for a review). One scenario predicts {\it light seeds} of $M_0 \sim 100 \, M_\odot$, consisting of Population III (Pop~III) stellar remnants (\citealt{Madau2001,Volonteri2003}). The second model predicts a higher seed mass, formed via the {\it direct collapse} of gas into a BH of $M_0 \simeq [10^4 - 10^6] \, M_\odot$ (\citealt{Haehnelt1993, Bromm2003, Begelman2006, Lodato2006}). The Eddington limit provides a tight constraint on the value of $M_0$. To reproduce the mass of ULAS J1120, $M_{\rm SMBH} \sim 2 \times 10^9 \, M_\odot$, the most distant quasar currently known at $z \sim 7$ \citep{Mortlock2011}, the initial seed has to be $M_0 \gtrsim 4 \times 10^3 \, M_{\odot}$ if $\epsilon_r \sim 0.1$ and the BH has accreted uninterruptedly since $z = 30$\footnote{Hereafter we adopt a Lambda Cold Dark Matter ($\Lambda$CDM) cosmology with parameters $\Omega_{\rm M} = 0.314$, $\Omega_{\Lambda} = 0.686$, and $h = 0.674$ (Planck Collaboration et al. 2014).} at the Eddington rate. The assumption of such uninterrupted mass accretion is unrealistic. In fact, the accretion rate is limited by the available gas mass and by the radiative feedback produced by the accretion process itself. An alternative possibility is to have short, episodic periods of super-Eddington accretion, which allow a SMBH to grow even starting from light seeds \citep{Haiman2004, Yoo2004, Shapiro2005, Volonteri2005, Pelupessy2007, Tanaka2009, Madau2014, Volonteri2015, Lupi2016, Pezzulli2016}. The detection and characterization of $z>6$ quasars fainter than the ones currently observed would be extremely helpful to improve our understanding of the high-$z$ SMBH formation process. Several observational campaigns in the X-ray band have been carried out to discover the faint progenitors of SMBHs at $z \gtrsim 5$. \citet{Weigel2015} searched for active galactic nuclei (AGNs) in the \textit{Chandra} Deep Field South (CDF-S), starting their analysis from already X-ray selected sources within the \textit{Chandra} 4 Ms catalogue \citep{Xue2011}. They combined GOODS, CANDELS and \textit{Spitzer} data to estimate the photometric redshifts of their sources, but no convincing AGN candidate was found at $z \gtrsim 5$.
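As a side note, the seed mass quoted above can be recovered directly from Eq.~(\ref{eq:massevo}). The short Python sketch below is purely illustrative and not part of the original analysis; the redshift $z \simeq 7.1$ adopted for ULAS J1120 and the \texttt{astropy} cosmology call are our assumptions:

\begin{verbatim}
# Seed mass implied by Eddington-limited growth: epsilon_r = 0.1,
# uninterrupted accretion from z = 30 down to z ~ 7.1 (ULAS J1120).
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.4, Om0=0.314)        # footnote cosmology
t = (cosmo.age(7.1) - cosmo.age(30.0)).to('Gyr').value
eps_r, t_Edd = 0.1, 0.45                         # efficiency; Gyr
M0 = 2e9 * np.exp(-(1.0 - eps_r) / eps_r * t / t_Edd)
print(f"required seed mass ~ {M0:.1e} Msun")     # a few x 10^3 Msun
\end{verbatim}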
The absence of convincing $z \gtrsim 5$ AGN candidates in the CDF-S has been confirmed by the independent analysis of \citet{Georgakakis2015}, who combined deep \textit{Chandra} and wide-area/shallow XMM-Newton survey fields to infer the evolution of the X-ray luminosity function at $3 \lesssim z \lesssim 5$. They find strong evolution at the faint end; extrapolating this trend to $z \gtrsim 5$, they predict $< 1$ AGN in the CDF-S. A complementary approach was followed by \citet{Treister2013}, who started from a sample of photometrically selected galaxies at $z \sim 6$, $7$, and $8$ from the {\it Hubble Space Telescope} Ultra Deep Field (HUDF) and CANDELS, and then combined these data with the 4 Ms CDF-S. None of the sources was detected in X-rays, either individually or via stacking, placing tight constraints on black hole growth at these redshifts\footnote{These authors estimate an accreted mass density $ \rm < 1000 \, M_{\odot}\, Mpc^{-3}$ at $z \sim 6$.}. More recently, \citet{Vito2016} investigated the X-ray emission of samples of CANDELS selected galaxies at redshift $3.5 \leq z \leq 6.5$, stacking the data from the 7 Ms CDF-S. Assuming that all the X-ray stacked emission is due to X-ray binaries, the authors find that their inferred star formation rate density is consistent with UV-based results in the literature. This suggests that most of the X-ray emission from individually undetected galaxies is due to binaries. However, by improving the multi-dimensional source detection technique developed by \citet{Fiore2012}, \citet{Giallongo2015} identified three faint AGN candidates in the GOODS-S field, with photometric redshifts $z>6$. Very faint $z>4$ galaxies are selected in their sample from the near infrared (NIR) H-band luminosity, down to $H\leq 27$ (which at these redshifts corresponds to a UV rest-frame selection). Then, AGN candidates with soft X-ray ($[0.5-2]$ keV) fluxes above $F_{\rm X}\sim 1.5\times 10^{-17} \rm erg \, s^{-1}\,cm^{-2}$ are extracted from the sub-sample. NIR-based selection methods allow fainter X-ray fluxes to be reached than direct blind X-ray selections. By means of a novel photometric method, supported by numerical simulations, \citet{Pacucci2016} identified two of these high redshift AGN candidates, object 33160 at $z\sim 6$ and object 29323 at $z\sim 9.7$, as possible hosts of direct collapse BHs. In contrast, none of the $z>6$ NIR-selected sources identified by \citet{Giallongo2015} is found by \citet{Cappelluti16} in the same area, using a similar approach to that of \citet{Giallongo2015} but different thresholds and energy bands. Besides the poor statistics and the large uncertainties related to photometric redshift estimates\footnote{An example is the source 29323 with the highest photo-z$\,=9.7$ selected by \citet{Giallongo2015} but excluded from the \citet{Cappelluti16} sample because of artifacts in the spectral energy distribution.}, the authors underline that the actual number of high redshift AGN candidates is very sensitive to the adopted selection procedure. The analysis of future surveys carried out with the next generation X-ray observatory \textit{ATHENA+} will extend the systematic search for high redshift AGNs to lower luminosity sources. Possible explanations for the very limited number (or even the lack) of $z>6$ detections reported in these studies are strong gas and dust obscuration \citep{Gilli2007, Treister2009, Fiore2009} or a low BH occupation fraction (i.e. a low fraction of halos containing a BH in their centres).
For this reason, several authors have proposed to search for SMBH progenitors through far-infrared emission lines that are unaffected by dust obscuration (e.g. \citealt{Spaans2008}, \citealt{Schleicher2010}, \citealt{Gallerani2014}). Additionally, short episodes of mildly super-Eddington growth, followed by longer periods of quiescence, with duty cycles of $20-50\%$ \citep{Madau2014}, may further decrease the probability of observing accreting BHs, resulting in a low \textit{active} BH occupation fraction. It should also be noted that BHs cannot be detected by X-ray observations if their growth is driven by BH-BH mergers rather than mass accretion, since it is the accretion process that powers the emission in this band (see the detailed discussion by \citealt{Treister2013}). In this work, we aim to understand which of these explanations most plausibly accounts for the shortage of detections of high-$z$ faint BHs. To this aim, we investigate the detectability of progenitors of $z\sim 6$ SMBHs in the super-critical growth scenario, by constructing a model for the optical/UV and X-ray emission of an active BH. We consider the dependence of the X-ray spectrum on the Eddington ratio $\lambda_{\rm Edd} = L_{\rm bol}/L_{\rm Edd}$ (i.e. the bolometric-to-Eddington luminosity ratio). We apply the emission model to the sample of $z > 6$ BH progenitors of $z \sim 6$ quasars analysed in \citet[][hereafter P16]{Pezzulli2016}. The sample has been generated using the data-constrained semi-analytical model \textsc{GAMETE/QSOdust}, which allows us to simulate a statistically meaningful number of hierarchical histories of $z \sim 6$ quasars, following the star formation history, chemical evolution and nuclear black hole growth in all their progenitor galaxies. The model has been thoroughly described in \citet{Valiante2011,Valiante2012,Valiante2014} and P16. In P16, we analysed the importance of super-Eddington accretion for the formation of $z \sim 6$ quasars, assuming that Pop~III BH remnants of $\sim 100 \, M_\odot$ grow via radiatively inefficient \textit{slim} accretion discs \citep{Abramowicz1988}. We found that $\sim 80\%$ of the final SMBH mass grows via super-critical episodes, which represent the most widespread accretion regime down to $z \sim 10$. Moreover, rapid accretion in dense, gas-rich environments allows BHs to grow, on average, to a mass of $10^4 M_{\odot}$ by $z \sim 20$, comparable to that of direct collapse BHs. The paper is organized as follows: in Section \ref{Sec1} we describe the model developed for the spectrum of accreting BHs, in Section \ref{sample} we analyse the properties of the simulated BH sample, while in Section \ref{results} we present our results for the observability of faint SMBH progenitors with current and future surveys. Finally, conclusions are drawn in Section \ref{conclusions}. \section{The Spectral Energy Distribution of accreting BHs}\label{Sec1} The spectral energy distribution (SED) of AGNs has been modelled in the literature using empirical models inferred from observations (e.g. \citealt{Marconi2004,Lusso2010}) or by calibrating physically motivated prescriptions with observations \citep{Yue2013}. These models have also been applied, when necessary, to super-critical growth regimes \citep{Pacucci2015}. Simulations of \textit{slim} discs have also been developed, taking into account the vertical disc structure and predicting the SED of the emitted radiation \citep{Wang1999,Watarai2000,Ohsuga2003,Shimura2003}.
The typical spectrum of a radio quiet AGN can be approximately divided into three major components: the Infrared Bump (IB), the Big Blue Bump (BBB), and the X-ray region. Under the assumption of an optically thick disc, a large fraction, up to $\gtrsim 50 \%$, of the bolometric emission is expected to be in the form of optical/UV thermal disc photons, producing the BBB continuum that extends from the NIR at $1 \mu m$ to the UV at $\sim 1000$ $\AA$, or to soft X-ray wavelengths in some cases. In the hard X-ray band the AGN flux per unit frequency $F_{\nu}$ is well described by a power law with spectral index $\sim 0.9$ \citep{Piconcelli2005,Just2007}. This emission is due to Compton up-scattering of optical/UV photons by hot electrons in the corona above the disc. Superimposed on the continuum, there is also a strong emission line at 6.4 keV, a prominent narrow feature corresponding to the K$\alpha$ transition of iron, and a reflection component, usually referred to as the \textit{Compton hump}, around $30 \, \rm keV$ \citep{Ghisellini1994, Fiocchi2007}. The Fe-K$\alpha$ line is attributed to fluorescence in the inner part of the accretion disc, $\sim$ a few Schwarzschild radii from the central BH, while the Compton hump is due to Compton down-scattering of high energy photons by a high column density reflector, $N_{\rm H} \gtrsim 10^{24} \rm \, cm^{-2}$. Finally, the IB extends from $\sim 1$ $\mu m$ to $\sim 100$ $\mu$m, and it is thought to arise from BBB emission reprocessed by dust. In this section, we focus on the emission in the optical/UV and X-ray bands\footnote{The normalization of the final SED is $L_{\rm bol}$, computed for each active galaxy simulated in \textsc{GAMETE/QSOdust} (see P16 for details).}. \subsection{Modeling the primary emission}\label{section primary} \begin{figure} \centering \includegraphics[width=8cm]{figures/SEDslim} \caption{ Examples of thermal emission spectra for BHs with masses of $10^6 M_{\odot}$ (blue lines) and $10^9 M_{\odot}$ (orange line) normalized to a common bolometric luminosity of $L_{\rm bol} = 10^{12} L_\odot$. Standard thin disc and slim disc models are shown with solid and dashed lines, respectively. For this luminosity, we find that $r_0 > r_{\rm pt}$ for the $10^9 M_{\odot}$ BH, so that the slim and the thin disc models lead to the same emission spectrum. } \label{figslim} \end{figure} We parametrize the emission from the hot corona as a power law \begin{equation} L_{\nu} \propto {\nu}^{-\Gamma + 1}e^{-h\nu/E_c} , \end{equation} \noindent where $E_c = 300\, \rm keV$ is the exponential cut-off energy \citep{Sozonov2004, Yue2013} and $\Gamma$ is the photon index. We include the reflection component using the PEXRAV model \citep{Magdziarz1995} in the XSPEC package, assuming an isotropic source located above the disc, fixing the reflection solid angle to $2\pi$, and the inclination angle to $60^{\circ}$. Observations show evidence of a dependence of the photon index $\Gamma$ of the X-ray spectrum on the Eddington ratio $\lambda_{\rm Edd} = L_{\rm bol}/L_{\rm Edd}$ \citep{Grupe2004, Shemmer2008, Zhou2010, Lusso2010, Brightman2013}. Although this correlation seems to be present in both the soft and hard bands, measurements of $\Gamma_{\rm 0.5-2keV}$ can be contaminated by the presence of the soft excess, hampering any strong claim of a correlation between the primary emission in this band and $\lambda_{\rm Edd}$. This contamination is less important in the hard band $[2 - 10]\rm \, keV$.
\citet{Brightman2013} measured the spectral index $\Gamma_{\rm 2-10 keV}$ of radio-quiet AGNs with $\lambda_{\rm Edd} \lesssim 1$ up to $z \sim 2$, finding that: \begin{equation}\label{gammabrig} \Gamma_{\rm 2-10 keV} = (0.32 \pm 0.05) \log\lambda_{\rm Edd} + (2.27 \pm 0.06). \end{equation} Here we adopt the above relation to model the dependence of the X-ray spectrum on $\lambda_{\rm Edd}$. We assume the primary emission in the optical/UV bands to be described by a multicolour black body spectrum $L^{\rm BB}_{\nu}$, the sum of contributions emitted by different parts of the disc at different temperatures $T$: \begin{equation} L^{\rm BB}_{\nu} = L_{0} \int^{T_{\rm max}}_{0} B_{\nu}(T) \left( \frac{T}{T_{\rm max}}\right)^{-11/3} \frac{dT}{T_{\rm max}}, \end{equation} \noindent where $B_{\nu}(T)$ is the Planck function and $L_{0}$ is a normalization factor. The temperature profile of a steady-state, optically thick, geometrically thin accretion disc is \citep{Shakura1973}: \begin{equation} \label{lambda} T(r) = \left( \frac{3GM_{\rm BH} \dot{M}}{8\pi\sigma r^3} \right)^{1/4} \left( 1 - \sqrt{\frac{r_0}{r}}\right)^{1/4}, \end{equation} \noindent where $M_{\rm BH}$ is the mass of the compact object, $\dot{M}$ the gas accretion rate, $\sigma$ is the Stefan-Boltzmann constant and $r_{0}$ is the radius of the innermost stable circular orbit (ISCO), which we take to be that of a non-rotating BH. The maximum temperature $T_{\rm max}$ is achieved at a radius $r(T_{\rm max}) = \frac{49}{36}r_0 $. Hence, the SED depends both on $\lambda_{\rm Edd}$ and $M_{\rm BH}$. In fact, for a given luminosity, the peak of the SED is shifted towards higher energies for lower $M_{\rm BH}$ (see Figure \ref{figslim}). However, the assumption of a standard \textit{thin} disc model is valid only when the disc is geometrically thin, i.e. for luminosities below $\sim 30 \%$ of the Eddington luminosity. Above this value, the radiation pressure causes an inflation of the disc \citep{McClintock2006}. Optically thick discs with high accretion rates are better described by \textit{slim} accretion disc models \citep{Abramowicz1988, Sadowski2009, Sadowski2011}, in which the photon trapping effect plays an important role: photons produced in the innermost region of the disc are trapped within it, due to the large Thomson optical depth, and advected inward. The typical radius within which photons are trapped, $r_{\rm pt}$, can be obtained by imposing that the photon diffusion time scale is equal to the accretion time scale, so that \citep{Ohsuga_PT2002}: \begin{equation} r_{\rm pt} = \frac{3}{2} R_{s}(\dot{M}/\dot{M}_{\rm Edd,1}) \rm h, \end{equation} \noindent where $R_{s} = 2GM_{\rm BH}/c^2$ is the Schwarzschild radius, $\dot{M}_{\rm Edd,1}$ is the Eddington accretion rate and $\rm h=H/r$ is the ratio between the half disc-thickness $\rm H$ and the disc radius $\rm r$. Since $\rm h \approx 1$ in radiation pressure dominated regions, we assume $\rm h = 2/3$ so that $r_{\rm pt} = R_{s}(\dot{M}/\dot{M}_{\rm Edd,1})$. Photon trapping causes a cut-off of the emission at the highest temperatures and, thus, a shift of the spectrum towards lower energies. To capture this feature of super-critical, advection-dominated accretion flows, we assume that the radiative emission contributing to the spectrum is that emerging from $r>r_{\rm pt}$. Under this assumption, the difference between \textit{thin} and \textit{slim-like} discs appears for $L \gtrsim 0.3 L_{\rm Edd}$.
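For illustration, the primary emission model described above can be assembled in a few lines. The following sketch is not the code used in this work: the radial grid, the outer radius and the overall normalization are arbitrary choices, and the slim-disc case is mimicked simply by removing the emission from inside $r_{\rm pt}$, as stated in the text.

\begin{verbatim}
# Primary emission sketch: cut-off power-law corona with the
# Brightman et al. (2013) Gamma(lambda_Edd), plus a multicolour
# blackbody disc truncated at the photon-trapping radius r_pt.
import numpy as np

G, c, h, kB, sigSB = 6.674e-8, 2.998e10, 6.626e-27, 1.381e-16, 5.670e-5
sigT, m_p, Msun = 6.652e-25, 1.673e-24, 1.989e33

def gamma_x(lam_edd):
    """Photon index vs Eddington ratio (Brightman et al. 2013)."""
    return 0.32 * np.log10(lam_edd) + 2.27

def corona(nu, lam_edd, Ec_keV=300.0):
    E_keV = h * nu / 1.602e-9                   # erg -> keV
    return nu**(1.0 - gamma_x(lam_edd)) * np.exp(-E_keV / Ec_keV)

def disc_sed(nu, M_BH, Mdot, slim=False):
    """Unnormalized L_nu of a multicolour blackbody disc (cgs units)."""
    Rs = 2.0 * G * M_BH / c**2
    r0 = 3.0 * Rs                               # ISCO, non-rotating BH
    r_in = r0
    if slim:
        Mdot_Edd = 4.0 * np.pi * G * M_BH * m_p / (sigT * c)
        r_in = max(r0, Rs * Mdot / Mdot_Edd)    # photon-trapping radius
    r = np.logspace(np.log10(1.001 * r_in), np.log10(1e5 * Rs), 1500)
    T = (3.0 * G * M_BH * Mdot / (8.0 * np.pi * sigSB * r**3)
         * (1.0 - np.sqrt(r0 / r)))**0.25       # Shakura & Sunyaev
    x = h * nu[:, None] / (kB * np.clip(T, 1.0, None))[None, :]
    B = 2.0 * h * nu[:, None]**3 / c**2 / np.expm1(np.clip(x, 1e-10, 500.0))
    ann = 4.0 * np.pi**2 * r[None, :] * B       # 2 faces x 2*pi*r dr
    return 0.5 * ((ann[:, 1:] + ann[:, :-1]) * np.diff(r)).sum(axis=1)

nu = np.logspace(14, 19, 300)                   # Hz
L_thin = disc_sed(nu, 1e6 * Msun, 20 * 1.4e23)             # thin disc
L_slim = disc_sed(nu, 1e6 * Msun, 20 * 1.4e23, slim=True)  # trapped
\end{verbatim}

For the $10^6 \, M_\odot$, $\sim 20\,\dot{M}_{\rm Edd,1}$ example, $r_{\rm pt} \simeq 20\,R_s > r_0$, so the slim-like spectrum is shifted towards lower energies, qualitatively as in Figure \ref{figslim}.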
In Figure \ref{figslim} we show the thermal emission corresponding to a bolometric luminosity of $L_{\rm bol} = 10^{12} L_\odot$ and two BH masses, $M_{\rm BH} = 10^9 M_{\odot}$ (orange) and $M_{\rm BH} = 10^6 M_{\odot}$ (blue). We compare the classical \textit{thin} disc model (solid lines) to the \textit{slim} disc one (dashed line). If we consider \textit{thin} discs, for a given $L_{\rm bol}$, BHs with higher masses have a SED which peaks at lower energies. As a result of photon trapping, a comparable shift towards lower energies is obtained by a $\sim 10^6 \, M_{\odot}$ BH with a super-critical accretion disc, for which $r_{\rm pt} > r_0$. The relative amplitude of the spectrum in the UV and X-ray bands is usually quantified by the optical-to-X-ray spectral index $\alpha_{\rm OX}$, defined as $\alpha_{\rm OX} = -0.384 \log(L_{\rm 2 keV}/L_{2500{\AA}})$. Observations \citep{Steffen2006, Just2007, Young2009, Lusso2010, Lusso2016} suggest that $\alpha_{\rm OX}$ increases with $L_{\rm 2500{\AA}}$, implying that the higher the emission in the UV/optical band, the weaker the X-ray component per unit UV luminosity. In a recent study, based on a sample of AGNs with multiple X-ray observations at $0 \lesssim z \lesssim 5$, \citet{Lusso2016} found that $\log L_{\rm 2keV} = 0.638 \log L_{2500 {\AA} } + {7.074}$, which implies \begin{equation}\label{alpha2016} \alpha_{\rm OX,2016} = 0.14\log L_{\rm 2500 {\AA}} - 2.72. \end{equation} \noindent In what follows, we adopt this relation to quantify the relative contribution of the optical/UV and X-ray spectrum, and truncate the emission from the hot corona at energies below $\sim 3 k_{\rm B} T_{\rm max}$. \subsection{Absorbed spectrum}\label{abs} \begin{figure} \centering \includegraphics[width=8cm]{figures/cross} \caption{Photoelectric cross section as a function of energy for $Z = Z_\odot$.} \label{cross} \end{figure} The radiation produced by the accretion process can interact with the gas and dust in the immediate surroundings of the BH. For the purpose of this study, we consider only the absorption in the X-ray band. The two main attenuation processes are photoelectric absorption and Compton scattering of photons off free electrons. The effect of these physical processes is to attenuate the intrinsic flux, $F_{\nu}$, as: \begin{equation} F^{\rm obs}_{\nu} = F_{\nu}e^{-\tau_\nu}. \end{equation} \noindent At $h\nu \gtrsim 0.1$ keV and under the assumption of a fully-ionized H-He mixture, the optical depth $\tau_\nu$ can be written as $\tau_\nu = (1.2\sigma_T + \sigma_{ph})N_{H}$ \citep{Yaqoob1997}, where $N_{H}$ is the hydrogen column density and $\sigma_T$ and $\sigma_{ph}$ are the Thomson and the photoelectric cross sections, respectively. \citet{Morrison1983} computed the interstellar photoelectric absorption cross section $\sigma^{Z_{\odot} }_{ph}$ as a function of energy in the range [0.03-10]~keV, for solar metallicity $Z_{\odot}$\footnote{We have renormalized $\sigma_{\rm ph}$, which \citealt{Morrison1983} originally computed for $Z = 0.0263$, to a solar metallicity value of $Z_\odot = 0.013$ \citep{Asplund2009}.}. In our simulations, the gas metallicities of high-z BH host galaxies span a wide range of values, with $0 \lesssim Z \lesssim Z_{\odot}$.
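Both the $\alpha_{\rm OX}$ normalization and the attenuation step above reduce to a few lines. In the following minimal sketch the function names are ours, and the tabulated $\sigma_{\rm ph}$ (e.g. from \citealt{Morrison1983}, rescaled in metallicity) is assumed to be supplied by the caller:

\begin{verbatim}
import numpy as np

SIGMA_T = 6.652e-25          # Thomson cross section [cm^2]

def alpha_ox(logL2500):
    """alpha_OX from the Lusso et al. (2016) relation quoted above."""
    logL2keV = 0.638 * logL2500 + 7.074
    return -0.384 * (logL2keV - logL2500)   # ~ 0.14*logL2500 - 2.72

def attenuate(F_nu, E_keV, N_H, sigma_ph):
    """F_obs = F exp(-tau), tau = (1.2 sigma_T + sigma_ph(E)) N_H.

    sigma_ph : callable, photoelectric cross section per H atom [cm^2]
    at energy E [keV], including the metal contribution."""
    tau = (1.2 * SIGMA_T + sigma_ph(np.asarray(E_keV))) * N_H
    return F_nu * np.exp(-tau)
\end{verbatim}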
To account for the metallicity dependence of the absorbing material, we separate the photoelectric cross section into its components \begin{equation}\label{sigma} \sigma_{ph} = \sigma_{H} + \sigma_{He} + \sigma_{met}, \end{equation} \noindent where $\sigma_{H}$ and $\sigma_{He}$ represent the contributions of hydrogen and helium. The hydrogen ionization energy ($\sim 13.6 \rm \, eV$) and the helium second ionization energy ($\sim 54.4 \rm \, eV$) are much lower than the energies in the X-ray band ($\sim \rm keV$), hence $\sigma_{H}$ and $\sigma_{He}$ can be safely evaluated in the Born approximation. Following \citet{Shu1991}, the cross section in the Born approximation for a hydrogenic atom is \begin{equation} \sigma_{\rm X} = \frac{8 \pi}{3 \sqrt{3}} \frac{Z^4_{\rm X} m_e e^{10}}{c \hbar^3(\hbar \omega)}\sqrt{ \frac{48 Z_{\rm X} e^2}{2 a_Z \hbar \omega}} , \end{equation} \noindent where $Z_{\rm X}$ is the atomic number of the X-th element (1 for H, 2 for He), $m_e$ and $e$ are the electron mass and charge, $c$ is the speed of light, $\hbar$ is the reduced Planck constant and $a_Z = \hbar^2/(Z_{\rm X}m_e e^2)$. \begin{figure*} \includegraphics[width=5.6cm]{figures/absozsun} \includegraphics[width=5.6cm]{figures/abso01zsun} \includegraphics[width=5.6cm]{figures/abso001zsun} \caption{Primary (black solid line) and reprocessed emission (dashed lines) of accreting BHs for column densities $N_{\rm H} = (10^{23}$, $10^{24}$, $5 \times 10^{24}$) $\rm cm^{-2}$. Different panels refer to different metallicities: $Z = Z_\odot$ (left), $Z = 0.1Z_\odot$ (middle) and $Z = 0.01Z_\odot$ (right).} \label{abso} \end{figure*} Figure \ref{cross} shows the photoelectric cross section for $Z = Z_{\odot}$. For energies $\gtrsim 0.2 \rm \, keV$, $\sigma_{\rm ph}$ is dominated by metals, in particular C and N. The cross section presents several edges that correspond to the K-shell energies of different elements: in the evaluation of $\sigma_{\rm ph}$ we take into account that an element X contributes to the absorption only if the photon energy is greater than its K-shell energy, with the highest energy edge corresponding to Fe. The photoelectric cross section decreases with increasing energy, until the Thomson cross section $\sigma_{\rm T}$ becomes dominant (for $E \gtrsim 10$ keV at $Z = Z_\odot$). Thus, softer X-ray photons are expected to be more absorbed than harder ones. This feature is clearly visible in Figure \ref{abso}, where the intrinsic spectrum for $L_{\rm bol} = 10^{12} L_{\odot}$ and $M_{\rm BH} = 10^9 M_{\odot}$ (black line) is compared to the spectra attenuated by gas with $Z = Z_\odot$, $0.1 \, Z_\odot$ and $0.01 \, Z_\odot$ (from left to right, respectively) and different values of the hydrogen column density $N_{\rm H}$ (dashed lines), which have been computed consistently with the diffuse and cold gas density profiles (see Section \ref{sample}). The effect of metallicity is relevant only at lower energies, where the photoelectric cross section is dominant. As already discussed, in fact, at energies $E \gtrsim 10$ keV the Thomson cross section becomes dominant, removing the dependence of the absorption on metallicity.\\ Compton thick AGNs, which are usually characterized by $N_{\rm H} \gtrsim 1.5 \times 10^{24} \, \rm cm^{-2}$, are completely absorbed in the soft band. The emission peak moves to $\sim 20\rm\, keV$, and the corresponding flux is $\sim$ 2 orders of magnitude lower than in the intrinsic spectrum.
For $N_{\rm H} \lesssim 10^{25} \, \rm cm^{-2}$, the direct emission is visible at energies $E \gtrsim 10 \rm \, keV$, and these sources are labelled as \textit{transmission-dominated} AGNs. For even larger column densities ($N_{\rm H} > 10^{25} \, \rm cm^{-2}$) the direct X-ray emission is strongly affected by Compton scattering and fully obscured, and only the faint reflection component can be detected (\textit{reflection-dominated} AGNs). We note, however, that X-ray observations of $z \gtrsim 4$ quasars typically sample the rest-frame hard X-ray band.\\ The condensation of the absorbing material into grains reduces the value of $\sigma_{\rm ph}$. \citet{Morrison1983} estimated the importance of this effect, evaluating the photoelectric cross section in the case in which all the elements but H, He, Ne and Ar are depleted into grains, with the exception of O, for which the condensation efficiency is assumed to be 0.25. The variation in the photoelectric cross section is relatively modest, $\sim 11\%$ at $E \sim 0.3$ keV and $\sim 4$\% at 1 keV. Hence, hereafter we neglect this effect.\\ Although we restrict our analysis to the X-ray part of the emission spectrum, it is important to note that the absorbed radiation will be re-emitted at lower energies. \citet{Yue2013} find that for Compton-thick systems, secondary photons emitted by free-free, free-bound and two-photon processes can increase the luminosity by a factor of $\sim 10$ in the rest-frame $[3 - 10]$~eV band, which is redshifted into the near IR at $z=0$. As a result, most of the emitted energy is expected to be observed in the IR and soft X-ray bands \citep{Pacucci2015, Pacucci2016, Natarajan2016}. \section{The sample}\label{sample} In Section \ref{Sec1} we have introduced our emission model for accreting BHs. The physical inputs required to compute the spectrum are the BH mass, $M_{\rm BH}$, the bolometric luminosity, $L_{\rm bol}$, the Eddington accretion ratio, $\dot{M}/\dot{M}_{\rm Edd,1}$, the metallicity, $Z$, and the column density, $N_{\rm H}$. We adopt the semi-analytic model \textsc{GAMETE/QSOdust}, in the version described by P16, to simulate these properties for a sample of BH progenitors of $z \gtrsim 6$ SMBHs. In this section, we first summarize the main properties of the model and then describe the physical properties of the simulated sample. \subsection{Simulating SMBH progenitors with \textsc{GAMETE/QSOdust}} \begin{figure} \centering \includegraphics[width=8cm]{figures/Lbolmbhnh_CT.eps} \caption{Properties of BH progenitors extracted from 30 simulations at $z = 7, 8, 9$ and 10. Bolometric luminosities are shown as a function of BH mass (\textit{left panel}) and of the hydrogen column density in the host galaxy, $N_{\rm H}$ (\textit{right panel}). Cyan lines represent $L_{\rm Edd}(M_{\rm BH})$. The green vertical line represents the $N_{\rm H}$ corresponding to a Compton-thick system, while $f_{\rm CT}$ is the fraction of Compton-thick BHs present at that redshift.} \label{propertyBH} \end{figure} The code allows us to reconstruct several independent merger histories of a $10^{13} M_\odot$ DM halo, assumed to host a typical $z \sim 6$ SMBH, like J1148 (e.g. \citealt{Fan2004}). The time evolution of the mass of gas, stars, metals and dust in a two-phase interstellar medium (ISM) is self-consistently followed inside each progenitor galaxy. The hot diffuse gas, which we assume to fill each newly virialized DM halo, can gradually cool through processes that strongly depend on the temperature and chemical composition of the gas.
For DM halos with virial temperature $T_{\rm vir} < 10^4$~K, defined as \textit{minihalos}, we consider the contribution of $\rm H_2$, OI and CII cooling \citep{Valiante2016}, while for Ly$\alpha$-halos ($T_{\rm vir} \geq 10^4$~K) the main cooling path is represented by atomic transitions. In quiescent evolution, the gas settles into a rotationally-supported disc, which can be disrupted when a major merger occurs, forming a bulge structure. The hydrogen column density $N_{\rm H}$ has been computed taking into account the gas distribution in the diffuse and cold phases. We assume a spherically-symmetric Hernquist density profile for the gaseous bulge \citep{Hernquist1990}, \begin{equation} \rho_b(r) = \frac{M_b}{2 \pi}\frac{r_b}{r(r+r_b)^3}, \end{equation} \noindent where $M_b$ is the gas mass of the bulge, $r_b = R_{\rm eff}/1.8153$ is the scale radius \citep{Hernquist1990}, and the effective radius, $R_{\rm eff}$, has been computed as $\log(R_{\rm eff}/{\rm kpc}) = 0.56\log(M_b + M_b^\star) - 5.54$, where $M_b^\star$ is the stellar mass of the bulge \citep{Shen2003}. For the diffuse gas, we adopt an isothermal density profile (see Sections 2.1 and 2.2 in P16) and we do not consider the contribution of the galaxy disc to the absorbing column density.\\ We assume BH seeds to form with a constant mass of $100 \, M_\odot$ as remnants of Pop~III stars in halos with $Z \leq Z_{\rm cr} = 10^{-4} \, Z_\odot$. As a result of metal enrichment, BH seeds are planted in halos with a mass distribution peaking around $M_{\rm h} \sim 10^7 \, M_{\odot}$ at $z > 20$, below which no Pop~III stars are formed. The BH grows via gas accretion from the surrounding medium and through mergers with other BHs. Our prescription allows us to consider quiescent and enhanced accretion following merger-driven infall of cold gas, which loses angular momentum due to torque interactions between galaxies. We model the accretion rate as proportional to the cold gas mass in the bulge, $M_{\rm b}$, and inversely proportional to the bulge dynamical time-scale, $\tau_{\rm b}$: \begin{equation} \dot{M}_{\rm accr} = \frac{f_{\rm accr} M_{\rm b}}{\tau_{\rm b}}, \end{equation} \noindent where $f_{\rm accr} = \beta f(\mu)$, with $\beta = 0.03$ in the reference model and $f(\mu) = \max[1, 1+2.5(\mu - 0.1)]$, so that mergers with $\mu \leq 0.1$ do not trigger bursts of accretion. As discussed in Section \ref{section primary}, once the accretion rate becomes high, the standard \textit{thin} disc model is no longer valid. Therefore, the bolometric luminosity $L_{\rm bol}$ produced by the accretion process has been computed starting from the numerical solution of the relativistic slim accretion disc obtained by \citet{Sadowski2009}, adopting the fit presented in \citet{Madau2014}. This model predicts mildly super-Eddington luminosities even when the accretion rate is highly super-critical. The energy released by the AGN can couple with the interstellar gas. We consider energy-driven feedback, which drives powerful galactic-scale outflows, and SN-driven winds, computing the SN explosion rate for each galaxy according to the formation rate, age and initial mass function of its stellar population \citep{deBennassuti2014,Valiante2014}. Finally, in BH merging events, the newly formed BH can receive a large centre-of-mass recoil due to the net linear momentum carried away by asymmetric gravitational wave emission \citep{Campanelli2007,Baker2008}; we compute the \textit{kick} velocities following \citet{Tanaka2009}.
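As a sketch of how a bulge column density can be obtained from the Hernquist profile above, consider a radial sightline from $r_{\rm min}$ outwards; the hydrogen mass fraction and the integration limits below are our assumptions, not necessarily those of the model:

\begin{verbatim}
import numpy as np

m_H, X_H = 1.673e-24, 0.76        # H mass [g]; assumed H mass fraction
Msun, kpc = 1.989e33, 3.086e21

def N_H_bulge(M_b, R_eff, r_min, r_max=100.0 * kpc):
    """Radial column density [cm^-2] through a Hernquist gas bulge."""
    r_b = R_eff / 1.8153
    r = np.logspace(np.log10(r_min), np.log10(r_max), 4000)
    rho = M_b / (2.0 * np.pi) * r_b / (r * (r + r_b)**3)   # [g cm^-3]
    n_H = X_H * rho / m_H
    dr = np.diff(r)
    return 0.5 * ((n_H[1:] + n_H[:-1]) * dr).sum()

# e.g. a 10^9 Msun gas bulge with R_eff = 1 kpc, seen from r = 10 pc:
# N_H_bulge(1e9 * Msun, 1.0 * kpc, 0.01 * kpc) -> ~ few x 10^23 cm^-2
\end{verbatim}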
We refer the reader to P16 for a more detailed description of the model. \subsection{Physical properties of the sample} \begin{figure} \centering \includegraphics[width=8cm]{figures/Nhmedd.eps} \caption{Column density of the bulge and Eddington accretion ratio for each of the active BHs found at $z = 7, 8, 9, 10$. Azure (magenta) represents super- (sub-) critical accreting BHs, i.e. those for which $\dot{M}/\dot{M}_{\rm Edd} > 1$ ($\leq 1$).} \label{propertymedd} \end{figure} We run $N_r$ independent merger trees and reproduce all the observed properties of one of the best studied quasars, SDSS J1148+5152 (hereafter J1148) at $z=6.4$, which we consider as a prototype of luminous $z \gtrsim 6$ quasars. We choose $N_r = 30$ to match the statistics of the currently known sample of $z \gtrsim 6$ quasars with robust BH mass measurements and $M_{\rm BH} \gtrsim 10^9 M_\odot$ \citep{Fan2001,Fan2003,Fan2004,Fan2006}. Figure \ref{propertyBH} shows the bolometric luminosity as a function of the BH mass (left panel) and hydrogen column density (right panel) for \textit{active} BH progenitors (i.e. with $\lambda_{\rm Edd} \geq 5 \times 10^{-3}$) of SMBHs extracted from the simulations at $z = 7,8,9,10$. All BH progenitors have masses $M_{\rm BH} \gtrsim 10^{6} M_{\odot}$ and bolometric luminosities $L_{\rm bol} \gtrsim 10^{42}$ erg/s. As can be seen from the figure, luminosities never exceed $\sim$ a few $L_{\rm Edd}$ (cyan lines), even for super-critically accreting BHs. This is a result of the low radiative efficiencies of the \textit{slim} disc solution: only a small fraction of the viscosity-generated heat can propagate out, while the larger fraction is advected inward. In the right panel of the figure, we show the relation between the hydrogen column density $N_{\rm H}$ and the bolometric luminosity. At all redshifts, our sample is composed only of \textit{transmission-dominated} AGNs. The vertical lines indicate the column density above which the systems are classified as Compton-thick. The fraction of Compton-thick AGNs, $f_{\rm CT}$, is also shown. We find that $f_{\rm CT}$ increases with redshift, ranging from $\sim 0$ at $z = 7$ to $35\%$ at $z = 10$; considering the whole simulated sample at all redshifts, $f_{\rm CT} \sim 45 \%$. These numbers are consistent with the loose limits inferred from the analysis of the cosmic X-ray background (CXB) with AGN population synthesis models, which generally find $f_{\rm CT} = 5 - 50 \%$ \citep{Ueda2003, Gilli2007, Akylas2012}, and with indications of growing obscuration with redshift \citep{LaFranca2005, Treister2009, Brightman2012} and luminosity (\citealt{Vito2013}, see however \citealt{Buchner2015}). The environmental conditions in which these BHs grow play an important role in determining the accretion regime. Figure \ref{propertymedd} shows the Eddington accretion ratio $\dot{M}/\dot{M}_{\rm Edd}$, where $\dot{M}_{\rm Edd} = 16L_{\rm Edd}/c^{2}$, as a function of the hydrogen column density of the bulge, which provides the gas reservoir for BH accretion. We find a positive correlation of the ratio with $N_{\rm H,bulge}$, showing that, when $N_{\rm H,bulge} \gtrsim 10^{23} \rm \, cm^{-2}$, BHs accrete at super-critical rates. \begin{figure} \centering \includegraphics[width=8cm]{figures/Mbh_density} \caption{The mass function of BH progenitors at four different snapshots ($z$ = 10, 9, 8 and 7 from top to bottom). The black line shows the total, while the azure solid and magenta dotted lines indicate active BHs accreting at super- and sub-Eddington rates, respectively.
The fraction of active BHs at each redshift, $f_{\rm act}$, is also reported. The green solid line in the bottom panel represents the BH mass function inferred from observations by \citet{Willott2010} at $z = 6$.} \label{density} \end{figure} \begin{figure} \centering \includegraphics[width=8.3cm]{figures/flux_unabsorbed} \includegraphics[width=8.3cm]{figures/flux_absorbed} \caption{Flux distribution for each snapshot (black solid lines), divided into super- (azure) and sub- (magenta) Eddington accreting BH progenitors. We report both the \textit{unabsorbed} model (\textit{top panel}) and the \textit{absorbed} model (\textit{bottom panel}), for the soft (left panels) and hard (right panels) \textit{Chandra} bands. Vertical dashed green lines represent different \textit{Chandra} flux limits: CDF-S 4 Ms (long-dashed, \citealt{Xue2011}), $F_{\rm CDF-S} = 9.1 \times 10^{-18}$ ($5.5 \times 10^{-17}$) $\rm erg \, s^{-1}\,cm^{-2}$, and CDF-N 2 Ms (short-dashed, \citealt{Alexander2003}), $F_{\rm CDF-N} = 2.5 \times 10^{-17}$ ($1.4 \times 10^{-16}$) $\rm erg \, s^{-1}\,cm^{-2}$, in the soft (hard) band. In each panel, we also show the average number N of active progenitors with flux larger than the CDF 4 Ms flux limit.} \label{fluxes} \end{figure} In the current model we do not take into account possible anisotropies of the AGN structure, such as the presence of a cleared (dust- and gas-free) region from which the nucleus can be visible. For this reason we investigate two extreme scenarios: the first assumes that there is no significant absorption and that the observed X-ray emission is the intrinsic one (\textit{unabsorbed} case), while in the second we compute the absorption as explained in Section \ref{abs} (\textit{absorbed} case). The first important quantity that we can compute is the BH mass function $\Psi(M_{\rm BH})$ of the progenitors of $z \sim 6$ luminous quasars. Figure \ref{density} shows $\Psi(M_{\rm BH})$ (black line) at different redshifts. The contribution of super- (azure solid) and sub- (magenta dotted) Eddington accreting BHs is also shown. Here the lines represent averages over 30 merger tree simulations, and the comoving volume $V$ of the Universe in which the BHs are distributed is $1 \, \rm Gpc^{3}$, as the observed comoving number density of quasars at $z \sim 6$ is $n = 1 \rm \, Gpc^{-3}$ \citep{Fan2004}. In the bottom panel of Figure \ref{density}, we compare our results with the BH mass function inferred from observations of SMBHs by \citet{Willott2010} at $z=6$ (shown with the green solid line). As expected, our predictions are below the observed distribution. In fact, our calculations describe the mass functions of BH progenitors of $z = 6$ SMBHs, namely a sub-population of existing BHs. This comparison is meant to show that our model predictions do not exceed the observed BH mass function. At each redshift we consider the whole population of BH progenitors (active and inactive) along the simulated hierarchical merger histories (black solid histogram), with the exclusion of possible satellite BHs and kicked-out BHs. These are assumed to never settle in (or return to) the galaxy centre, remaining always inactive (i.e. they do not accrete gas) and not contributing to the assembly of the final SMBH (see P16 for details). The black solid histogram shows that the majority of BHs are temporarily non-accreting, due to the reduced gas content in the bulge. The fraction of active BHs is also reported in Figure \ref{density} for the 4 snapshots.
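A sketch of how such a mass function can be computed from the simulated progenitor masses is given below; this is a hedged illustration in which the variable names and the binning are ours, with the $1 \, \rm Gpc^3$ volume quoted above:

\begin{verbatim}
import numpy as np

def mass_function(masses_per_tree, bin_edges, volume_Gpc3=1.0):
    """Psi(M_BH) [Gpc^-3 dex^-1], averaged over the merger trees.

    masses_per_tree : list of arrays of BH masses [Msun], one per tree
    bin_edges       : mass bin edges [Msun]"""
    logbins = np.log10(bin_edges)
    counts = np.mean([np.histogram(np.log10(m), bins=logbins)[0]
                      for m in masses_per_tree], axis=0)
    return counts / (np.diff(logbins) * volume_Gpc3)
\end{verbatim}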
The active fraction increases by a factor $\sim 1.3$ from $z=10$ to $z=9$, $\sim 3.2$ from $z = 9$ to $z = 8$ and $\sim 2.8$ from $z = 8$ to $z = 7$. This is due to the increasing fraction of BHs that accrete at sub-Eddington rates (see also Fig.~4 in P16). While the progenitor mass function is relatively flat at $z=7$, a pronounced peak in the distribution becomes visible at higher redshifts, around $M_{\rm BH, peak}\sim 10^7 \, (2.5\times 10^6)$ M$_\odot$ at $z=8 \, (10)$. The mass density, particularly at the low mass end, is shifted towards more massive BHs at $z\leq 8$, as a consequence of BH growth due to mergers and gas accretion. Our simulations are constrained to reproduce the final BH mass of J1148 at $z_{0} = 6.4$, thus the total number of progenitors naturally decreases as an effect of merging (major and minor) and gravitational recoil processes, implying poorer statistics as the redshift approaches $z_0$. Finally, the decreasing trend in the number density of $M_{\rm BH}<M_{\rm BH, peak}$ BHs reflects the effects of chemical feedback. Efficient metal enrichment to $Z\geq Z_{\rm cr} = 10^{-4}\, Z_{\odot}$ inhibits the formation of Pop~III stars and BHs already at $z<20$. At lower redshifts the effects of dust and metal line cooling allow the gas to fragment more efficiently, inducing the formation of lower mass (Pop~II) stars \citep{Schneider2002, Schneider2003, Schneider2012a}. As BH seeds grow in mass, the number density at the low-mass end decreases with time. By $z\sim 7$ the population of $<10^{6}$ M$_\odot$ active progenitors has fully evolved into more massive objects. The number and redshift distribution of accreting BHs in the two different accretion regimes have been widely investigated and discussed in P16. The resulting active BH mass functions reflect these properties. Super-Eddington accreting BHs are the dominant component ($> 60\%$) down to $z\sim 10$, as indicated by the azure histogram in the upper panel of Figure \ref{density}. At lower $z$, super-critical accretion becomes progressively less frequent ($<24\%$), and sub-Eddington accretion dominates BH growth down to $z\sim 6-7$.\\ \section{Results and discussion}\label{results} In this section we analyse the X-ray luminosity of the BH sample introduced in the previous section, and we discuss the best observational strategies to detect these sources by critically assessing the main reasons which have, so far, limited their observability. \begin{figure*} \centering \includegraphics[width=8cm]{figures/lognlogs} \includegraphics[width=8cm]{figures/CXRBmean} \caption{{\it Left panel:} Number of active BH progenitors, per unit area of $0.03 \, \rm deg^2$, with a flux larger than F in the \textit{Chandra} soft band, as a function of F. Predictions for the \textit{unabsorbed} (solid violet) and \textit{absorbed} (dashed ochre) models are shown. Vertical green lines represent two different \textit{Chandra} flux limits: CDF-S 4 Ms (dotted lines) and CDF-N 2 Ms (dashed-dotted lines). The red triangle and blue square represent, respectively, the observations obtained by \citet{Giallongo2015} and the upper limit of \citet{Weigel2015}. {\it Right panel:} Cosmic X-ray Background in the soft band [0.5 - 2]~keV predicted by the absorbed and unabsorbed models. The solid lines show the average among 30 independent simulations and the shaded region is the 1-$\sigma$ scatter.
We also show the soft CXB measured by \citet{lehmer2012} in the 4 Ms CDF-S and the upper limit on $z > 7.5$ accreting BHs placed by \citet[see text]{cappelluti2012}.} \label{numberflux} \end{figure*} \paragraph*{Black hole occupation fraction.} \noindent The black hole occupation fraction $f_{\rm BH}$ represents the number fraction of galaxies seeded with a BH, regardless of whether the nuclear BH is active or not. This quantity, not to be confused with the \textit{AGN} fraction, is directly related to the seeding efficiency. In this work, we assume that a BH seed is planted once a burst of Pop~III stars occurs in a metal poor, newly virialized halo, as explained in Section \ref{sample}. As already mentioned above, in the model we account for the possibility that a galaxy may lose its central BH during a major merger with another galaxy, due to the large centre-of-mass recoil velocity resulting from the net-momentum-carrying gravitational wave emission produced by the merging BH pair. As a result of this effect, the occupation fraction depends not only on the seeding efficiency, but also on the merger histories of SMBHs. \citet*{Alexander2014} developed a model in which super-exponential accretion in dense star clusters is able to build a $\sim 10^4 \, M_{\odot}$ BH in $\sim 10^7$ yr, starting from light seeds. The subsequent growth of this BH, up to $\sim 10^9 \, M_{\odot}$, is driven by Eddington-limited accretion. They show that with this mechanism even a low occupation fraction of $f_{\rm BH} \sim 1-5\%$ can be enough to reproduce the observed distribution of $z > 6$ luminous quasars. However, although the local BH occupation fraction approaches unity, there are no strong constraints on the value of $f_{\rm BH}$ at high-$z$. In fact, the observed SMBH number density at $z = 0$ could be reproduced even if $f_{\rm BH} \sim 0.1$ at $z \sim 5$, as a result of the multiple mergers experienced by DM halos in the hierarchical formation history of local structures \citep{Menou2001}. By averaging over 30 different merger trees, we predict that $f_{\rm BH}$ increases with time, finding an occupation fraction of $f_{\rm BH} = 0.95,\, 0.84, \, 0.76,\,0.70$, at $z = 7, 8, 9, 10$, respectively\footnote{Considering all the simulated galaxies in our sample, at all redshifts, we find an occupation fraction of $f_{\rm BH} = 0.35$.}. Hence, more than $70 \%$ of the final SMBH progenitors host a BH in their centre at $z \leq 10$. Indeed, our simulated $f_{\rm BH}$ is higher than those predicted for average volumes of the Universe, as mentioned above, suggesting that a low occupation fraction is not the main limiting process for the X-ray detectability of BHs at $z > 6$. \begin{figure} \centering \includegraphics[width=9cm]{figures/Unobscured_testz.png} \includegraphics[width=9cm]{figures/Obscured_testz.png} \caption{Number of progenitors potentially observable in a survey with sensitivity $F_{\rm [0.5-2] keV}$ and probing an area A, for the \textit{unabsorbed} (top panel) and \textit{absorbed} (bottom panel) models. Black lines represent the values of $\log N(F, A) = -2, -1$ (dashed lines) and $\log N(F, A) = 0, 1, 2$ and 3 (solid lines). We also show the area/flux coverage achieved by current surveys and \textit{ATHENA}+.} \label{area} \end{figure} \paragraph*{Active fraction and obscuration.} We report the \textit{active} fraction $f_{\rm act}$ of SMBH progenitors, averaged over 30 simulations, in the labels of Figure \ref{density}.
As can be seen, $f_{\rm act}$ decreases with increasing redshift, from $f_{\rm act} = 37\%$ at $z = 7$ to $3\%$ at $z = 10$. On average, the total active fraction (at all redshifts) is $f_{\rm act} = 1.17 \%$. These values reflect the fact that BH growth is dominated by short, super-Eddington accretion episodes, particularly at high redshifts (P16), drastically reducing the fraction of active BHs, and thus the probability of observing them. A similar conclusion has been drawn by \citet{Page2001}, linking the observations of the local optical luminosity function of galaxies with the X-ray luminosity function of Seyfert 1 galaxies. They find an active BH occupation fraction of $f_{\rm act} \sim 1 \%$. Comparable values have also been reported by \citet{Haggard2010}, who combined \textit{Chandra} and SDSS data up to $z \sim 0.7$, and by \citet{Silverman2009} for the 10k catalogue of the zCOSMOS survey up to $z \sim 1$. While our predictions for $f_{\rm act}$ are consistent with the above studies, a larger fraction of active BHs is to be expected in models where SMBH growth at $z > 6$ is Eddington-limited ($\sim 40 - 50 \%$ between $z \sim 7 - 10$, \citealt{Valiante2016}). Figure \ref{fluxes} shows the total number of active progenitors as a function of flux in the \textit{Chandra} soft (0.5-2 keV) and hard (2-8 keV) bands. We also distinguish super- (sub-) Eddington accreting BHs. As a reference, we report the flux limits of the \textit{Chandra} Deep Field South 4 Ms, $\rm F_{CDF-S}= 9.1 \times 10^{-18}$ $\rm erg \, s^{-1}\,cm^{-2}$ (dotted line, \citealt{Xue2011}), and the \textit{Chandra} Deep Field North (CDF-N) 2 Ms, $\rm F_{CDF-N} = 2.5 \times 10^{-17}$ $\rm erg \, s^{-1}\,cm^{-2}$ (dot-dashed line, \citealt{Alexander2003}), showing for each panel and each band the average number N of active BHs with a flux larger than the limit of the CDF-S 4 Ms. In the upper panel we show the \textit{unabsorbed} model; the difference between the soft and hard X-ray bands reflects the intrinsic SED. Moreover, since the flux limit of \textit{Chandra} is deeper in the soft band, this energy range is to be preferred for the detection of high-z progenitors. The effect of an isotropic absorption on the flux is shown in the bottom panel of Figure \ref{fluxes}. It does not appear to be as severe as might be inferred from the large $N_{\rm H}$ shown in Figure \ref{propertymedd}. In fact, the soft (hard) \textit{Chandra} bands at $z = 7,8,9,10$ sample the rest frame energy bands $[4,16]\rm \, keV$, $[4.5,18]\rm \, keV$, $[5,20]\rm \, keV$, $[5.5,22]\rm \, keV$ ($[16,64]\rm \, keV$, $[18,72]\rm \, keV$, $[20,80]\rm \, keV$, $[22,88]\rm \, keV$), respectively. As discussed in Section \ref{abs}, in the range $[0.2-100]\rm \, keV$, the harder the photon energy, the lower the photoelectric absorption. As a result, the average number N of detectable BHs in the \textit{absorbed} model is close to that of the \textit{unabsorbed} model at redshift $z \sim 7 - 8$, while it becomes much lower at larger $z$, reaching $\rm N = 0$ in the hard band at $z = 10$. This is a consequence of the larger fractions of Compton-thick BHs, $f_{\rm CT}$, and, more generally, of the larger column densities. As already discussed, higher values of $N_{\rm H}$ correspond to super-Eddington accreting BHs. As a result, the shift towards lower fluxes in the \textit{absorbed} model mainly affects super-Eddington accreting BHs.
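The band mapping quoted above is simply $E_{\rm rest} = (1+z)\,E_{\rm obs}$; a one-line check (the function name is ours):

\begin{verbatim}
def rest_band(E_lo_keV, E_hi_keV, z):
    """Rest-frame energy band sampled by an observed band at redshift z."""
    return (1 + z) * E_lo_keV, (1 + z) * E_hi_keV

# rest_band(0.5, 2.0, 7)  -> (4.0, 16.0) keV, as quoted above
# rest_band(2.0, 8.0, 10) -> (22.0, 88.0) keV
\end{verbatim}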
In the left panel of Fig.~\ref{numberflux} we show the cumulative number of BHs per unit area in the \textit{unabsorbed} (solid line) and \textit{absorbed} (dashed line) models with a flux $> F$ in the soft X-ray band. We have assumed here an area of $\rm \hat{A} = 0.03 \, \rm deg^2$ and show the flux limits of the CDF-S 4 Ms and CDF-N 2 Ms as reference values\footnote{We assume BH progenitors to be distributed within a cube of 1 $\rm Gpc^3$, corresponding to an angular size of $A_{\rm box} \sim 390\times 390 \rm \, arcmin^{2}$ at $z \sim 7$ and $\sim 350 \times 350 \rm \, arcmin^2$ at $z \sim 10$.}. For comparison, we report the number of AGN candidates selected with the same effective area coverage ($\rm A_{\rm obs} \sim \hat{A}$) by \citet{Giallongo2015} with a flux threshold of $\rm F_{\rm \hat{X}} = 1.5 \times 10^{-17} erg \, s^{-1} \, cm^{-2}$ (red triangle). We also include the upper limit $\rm N < 1$ resulting from the analysis of the CDF-S by \citet{Weigel2015}. In the \textit{unabsorbed} (\textit{absorbed}) model we find $\rm N(>F_{CDF-S}) = 0.15$ $(0.12)$ and $\rm N(>F_{\rm \hat{X}}) = 0.13$ $(0.1)$. The effect of absorption decreases the number $\rm N$, even by a factor of 2 for lower flux limits ($\log F \lesssim -17$), but it is not the main limiting factor preventing the observation of BH progenitors. In fact, we find that $N < 1$ also in the \textit{unabsorbed} model, for both $F_{\rm CDF-S}$ and $F_{\rm \hat{X}}$. Our result is consistent with the non-detection reported by \citet{Weigel2015} and suggests that, if the AGN candidates reported by \citet{Giallongo2015} are at $z > 6$, they are likely not progenitors of $z \sim 6$ quasar SMBHs. If we rescale the relation in Figure \ref{numberflux} linearly with $f_{\rm act}$, for $f_{\rm act} = 1$ we would find an average number of observable active progenitors of $\rm N(>F_{CDF-S}) \sim 13$ $(10)$ and $\rm N(>F_{\rm \hat{X}}) \sim 11$ $(9)$. Thus, an active fraction of $f_{\rm act} < 10 \%$ is required in order to obtain a number of observed objects $N \lesssim 1$. Interesting constraints on the activity of an early BH population have recently come from the measurement of the cross-correlation signal between the fluctuations of the source-subtracted cosmic infrared background (CIB) maps at 3.6 and 4.5 micron on angular scales $> 20''$ and the unresolved CXB at [0.5 - 2]~keV by \citet{cappelluti2013}. The authors argue that the cross-power is of extragalactic origin, although it is not possible to determine whether the signal is produced by a single population of sources (accreting BHs) or by different populations in the same area. Indeed, theoretical models show that highly obscured accreting black holes with mass $[10^4 - 10^6]~M_\odot$ at $z > 13$ provide a natural explanation for the observed signal \citep{Yue2013, yue2014}, requiring a mass density of active BHs of $[2.7 - 4]\times10^{-5} \, M_\odot \, \rm Mpc^{-3}$ at $z \sim 13$ \citep{yue2016}. While a detailed calculation of the cross-correlation between the CXB and CIB is beyond the scope of the present analysis, in the right panel of Fig.~\ref{numberflux} we compare the CXB in the soft band predicted by our models with the upper limit of $3 \times10^{-13}/(1+z) \, \rm erg \, cm^{-2} \, s^{-1} \, deg^{-2}$ placed by \citet{cappelluti2012} on the contribution of early black holes at $z > 7.5$, under the assumption that they produce the observed large scale CIB excess fluctuations \citep{kashlinsky2012}.
For comparison, we also show the measured CXB in the soft band reported by \citet{lehmer2012} from the analysis of the 4 Ms CDF-S. The predictions of the absorbed and unabsorbed models are more than a factor of 10 below the upper limit by \citet{cappelluti2012}, showing that the cross-correlation signal cannot be reproduced by accreting SMBH progenitors only. \paragraph*{Best observational strategy.} In order to understand which survey maximizes the probability of observing faint progenitors of $z \sim 6$ quasars, we define the number of BHs expected to be observed in a survey with sensitivity F and probing an area A of the sky: \begin{equation} N(F,A) = N(>F) \frac{A}{A_{\rm box}}, \end{equation} \noindent where $N(>F)$ is the number of progenitors with flux $\geq F$. In Figure \ref{area} we show $N(F,A)$ for the \textit{unabsorbed} (top panel) and \textit{absorbed} (bottom panel) models, in the \textit{observed} soft band. We report the contours corresponding to $N(F,A) = 10^{-2},10^{-1}$ (black dashed lines) and $N(F,A) = 1,10,10^2$ and $10^3$ (black solid lines). For fluxes $\rm F_{[0.5-2]keV} \gtrsim 10^{-14} \, erg \, s^{-1}\,cm^{-2}$, we find $N(F,A) \lesssim 1$ for every possible area coverage. We also show the sensitivity curves in the soft band of current surveys: CDF-S in yellow, \textit{AEGIS} in green \citep{Laird2009}, \textit{COSMOS Legacy} in cyan \citep{Civano2016}, and XMM-LSS \citep{Gandhi2006} + XXL \citep{Pierre2016} in magenta. In white we show the predicted curve for \textit{ATHENA}+ with a $5"$ PSF and a multi-tiered survey strategy, for a total observing time of 25 Ms \citep[for details see][]{Aird2013}; note that a survey can observe the integrated number $N(F,A)$ over its curve. The difference between the \textit{unabsorbed} and the \textit{absorbed} models is almost negligible, reaching at most a factor of 2. In fact, the \textit{observed} soft band corresponds, for high-z progenitors, to rest-frame energies hard enough to be almost unobscured, despite the large $N_{\rm H}$ and Compton-thick fraction (see Section \ref{results}). The most sensitive survey performed to date, the CDF-S, exploring a solid angle of $ 465 \rm \, arcmin^2$, is observationally disadvantaged with respect to \textit{COSMOS Legacy}, which is less sensitive but covers a wider region of the sky. The latter survey, in fact, should observe at least one progenitor. Similarly, XMM-LSS+XXL, despite having an even lower sensitivity, represents the current survey combination that maximizes the probability of detecting SMBH progenitors. A major improvement will be obtained with \textit{ATHENA}+: according to our simulations, for a total observing time of 25 Ms more than 100 SMBH progenitors will be detected. The progenitors of the $M_{\rm BH} \sim 10^9 \, M_\odot$ high-$z$ quasars are luminous enough to be detected in the X-ray soft band of current surveys. The real limit to their observability is that these objects are extremely rare, as a result of their low \textit{active} fraction. None of the surveys performed so far probes a region of the sky large enough for their detection to be likely, limiting the potentially observable systems to a few. The above conclusion applies to a scenario where SMBHs at $z = 6$ grow by short super-Eddington accretion episodes onto $100 M_\odot$ BH seeds formed at $z > 20$ as remnants of Pop~III stars.
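In practice the expected counts scale linearly with the survey area; a minimal sketch of the relation above, normalized to the per-$0.03\,\rm deg^2$ counts of Fig.~\ref{numberflux} (the numerical example is ours and purely illustrative):

\begin{verbatim}
def n_expected(n_per_ref_area, survey_area_deg2, ref_area_deg2=0.03):
    """N(F, A): scale cumulative counts N(>F), given per reference
    area, to the survey area (valid while A is well below A_box)."""
    return n_per_ref_area * survey_area_deg2 / ref_area_deg2

# e.g. ~0.15 progenitors above the CDF-S limit per 0.03 deg^2 imply
# n_expected(0.15, 465.0 / 3600.0) ~ 0.6 in the 465 arcmin^2 of CDF-S
\end{verbatim}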
In \citet{Valiante2016} we have investigated the alternative scenario in which BH growth is Eddington-limited and starts from BH seeds whose properties are set by their birth environment. According to this scenario, the formation of a few heavy seeds with mass $\sim 10^5 M_\odot$ (between 3 and 30 in our reference model) enables the Eddington-limited growth of SMBHs at $z > 6$. In a forthcoming paper, we will explore the X-ray detectability of SMBH progenitors in this alternative scenario and make a detailed comparison with the results presented here. \section{Conclusions}\label{conclusions} The main aim of this work was to interpret the lack of detections of $z \gtrsim 6$ AGNs in the X-ray band. There are three likely explanations: \textit{i}) large gas obscuration, \textit{ii}) a low BH occupation fraction or \textit{iii}) a low \textit{active} fraction. We developed a model for the emission of accreting BHs, taking into account the super-critical accretion process, which can be very common in high-z, gas-rich systems. We compute the spectra of the active BHs simulated by P16 with an improved version of the cosmological semi-analytical code \textsc{GAMETE/QSOdust}. In P16, we investigated the importance of super-Eddington accretion in the early growth of $z \sim 6$ SMBHs. Here we model the emission spectrum of all the simulated SMBH progenitors at $z > 6$ and study their observability with current and future surveys. Hence the sample of BHs that we have investigated does not necessarily represent a fair sample of \textit{all} BHs at $z > 6$, but only the sub-sample of those which contribute to the early build-up of the observed number of $z \sim 6$ quasars with mass $M_{\rm BH} \gtrsim 10^9 \, M_\odot$. We find that: \begin{itemize} \item The mean occupation fraction, averaged over 30 independent merger tree realizations and over the whole evolution, is $f_{\rm BH} = 35 \%$. It increases with time, being $f_{\rm BH} = 0.95,\, 0.84, \, 0.76,\,0.70$ at $z = 7, 8, 9, 10$, suggesting that the occupation fraction is not the main limitation for the observability of $z > 6$ BHs. \item We find a mean Compton-thick fraction of $f_{\rm CT} \sim 45\%$. Absorption mostly affects the super-Eddington accreting BHs at $z > 10$, where the surrounding gas reaches large values of $N_{\rm H}$; \item Despite the large column densities, absorption does not significantly affect the \textit{observed} soft X-ray fluxes. In fact, at $z > 6$ the observed soft X-ray band samples the rest-frame hard energy band, where obscuration is less important. Absorption can reduce the number of observed progenitors at most by a factor of 2; \item The main limiting factor for the observation of faint progenitors is a very low \textit{active} fraction, whose mean value is $f_{\rm act} = 1.17 \%$. This is due to short, super-Eddington accretion episodes, particularly at high $z$. In fact, $f_{\rm act} = 3\%$ at $z = 10$ and grows to $f_{\rm act} = 37\%$ at $z = 7$ due to longer sub-Eddington accretion events. As a result, surveys with larger fields at shallower sensitivities maximize the probability of detection. Our simulations suggest that the probability of detecting at least 1 SMBH progenitor at $z > 6$ is larger in the \textit{COSMOS Legacy} survey than in the CDF-S. \end{itemize} Better selection strategies for SMBH progenitors at $z > 6$ will be possible using future multi-wavelength searches. Large area surveys in the X-ray band (e.g.
\textit{ATHENA+}) complemented with deep, high-sensitivity opt/IR observations (e.g. \textit{James Webb Space Telescope}) and radio detection may provide a powerful tool to study faint progenitors of $z \sim 6$ SMBHs. \section{Acknowledgements} We thank Massimo Dotti, Enrico Piconcelli and Luca Zappacosta for their insightful help. We also acknowledge valuable discussions and suggestions from Angela Bongiorno, Marcella Brusa, Nico Cappelluti, Andrea Comastri, Roberto Gilli, Elisabeta Lusso and Francesca Senatore. The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 306476.
\section{Introduction} {High-quality measurements of} stellar oscillation frequencies of {thousands} of solar-like stars are now available from the NASA space mission $Kepler$, and from the French satellite mission CoRoT (Convection Rotation and planetary Transits). In order to exploit these data for probing stellar interiors, accurate modelling of stellar oscillations is required. However, adiabatically computed frequencies are increasingly overestimated with increasing radial order (see dot-dashed curve in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}). These frequency residuals have become known as `surface effects' \citep[e.g.,][]{Brown84, Gough84, Balmforth92b, RosenthalEtal99, Houdek10, GrigahceneEtal12}. Semi-empirical corrections to adiabatically computed frequencies proposed by, e.g., \citet{KjeldsenEtal08}, \citet{BallGizon14}, \citet{SonoiEtal15}, \citet{BallEtal16} are purely descriptive and provide little physical insight. Here, we report on a self-consistent model computation, which reproduces the observed solar frequencies to within $\sim3\,\mu$Hz, and, for the first time, without the need for any ad-hoc functional corrections. It represents a purely physical explanation for the `surface effects' by considering (a) a state-of-the-art 3D--1D patched mean model, (b) nonadiabatic effects, (c) a consistent treatment of turbulent pressure in the mean and pulsation models, and (d) a depth-dependent modelling of the turbulent anisotropy in both the mean and oscillation calculations. Convection modifies pulsation properties of stars principally through three effects: \begin{itemize} \item[(i)]effects through the turbulent pressure term in the {hydrostatic equation} ({structural} effect), and its pulsational perturbation in the momentum equation (modal effect); \item[(ii)]opacity variations brought about by large convective temperature fluctuations, affecting the mean stratification; this {structural} effect is also known as `convective back-warming' \citep[e.g.,][]{TrampedachEtal13}; \item[(iii)]nonadiabatic effects, additional to the pulsationally perturbed radiative heat flux, through the perturbed convective heat flux (modal effects) in the thermal energy equation. \end{itemize} We follow \citeauthor{RosenthalEtal99}'s~(\citeyear{RosenthalEtal99}) idea of replacing the outer layers of a 1D solar envelope model by an averaged 3D simulation, and adopt the most advanced and accurate 3D -- 1D matching procedure available today \citep{TrampedachEtal14a, TrampedachEtal14b} for estimating {structural} effects on adiabatic solar frequencies. Furthermore, we use a 1D nonlocal, time-dependent convection model for estimating the modal effects of nonadiabaticity and convection dynamics. Additional modal effects can be associated with the advection of the oscillations by spatially varying turbulent flows in the limit of temporal stationarity \citep{Brown84, ZhugzhdaStix94, BhattacharyaEtal15}. This `advection picture' is related to the `dynamical picture' of including temporally varying turbulent convection fluctuations in the limit of spatially horizontal homogeneity. Because these two pictures describe basically the same effect but in two different limits, i.e. they are complementary, only one of them should be included. We do so by adopting the latter. \begin{figure} \includegraphics[width=\columnwidth]{./plot_frequ5_iii_19Jul16} \caption{Inertia-scaled frequency differences between MDI measurements (Sun) of acoustic modes with degree $l=20$--$23$ and model computations as functions of oscillation frequency. 
The scaling factor $Q_{nl}$ for a mode with radial order $n$ is obtained from taking ratios between the inertia of modes with $l=23$ and radial modes, interpolated to the $l=23$ frequencies \citep[e.g.,][]{AertsEtal10}. The dot-dashed curve shows the differences for the baseline model, `Sun\,-\,{A}' (cf. Section~\ref{sec:1Dbmodel}), reflecting the results for a standard solar model computation. The dashed curve plots the residuals for the patched model which includes turbulent pressure and convective back-warming in the mean model, i.e. `Sun\,-\,{B}' (cf. Section~\ref{sec:31Dmodel}). The solid curve illustrates the differences from the modal effects of nonadiabaticity and perturbation to the turbulent pressure, i.e. `{D\,-\,C}'\, (cf. Section~\ref{sec:1Dmodel}).} \label{fig:MDI-1D.a-1DPM.a-NL} \end{figure} \vspace{-5pt} \section{Model computations} \label{sec:models} We use stellar envelope models in which the total pressure $p=p_{\rm g}+p_{\rm t}$ satisfies the equation for hydrostatic support, \begin{equation} \frac{{\rm d} p}{{\rm d} m}=-\frac{1}{4\pi r^2}\frac{Gm}{r^2}\,, \label{eq:hydstat} \end{equation} where ${p_{\mathrm g}}$ is the gas pressure and ${p_{\mathrm t}}$ the turbulent pressure, ${p_{\mathrm t}}:=\overline{\rho ww}$, with $w$ being the vertical component of the convective velocity field $\bm{u} = (u,v,w)$, and an overbar indicates an ensemble average. The other symbols are mass $m$, radius $r$, mass density $\rho$, and gravitational constant $G$. \vspace{-3pt} \subsection{Adiabatic pulsations of mean models constructed with turbulent pressure} \label{sec:adcalturb} If turbulent pressure is included in the model's mean stratification, particular care must be given to frequency calculations that neglect the pulsational perturbation to the turbulent pressure. In an adiabatic treatment the relative gas pressure perturbation $\Ldel{p_{\mathrm g}}/{p_{\mathrm g}}$ is related to the relative density perturbation $\Ldel\rho/\rho$ by the linearized expression \begin{equation} \frac{\Ldel\rho}{\rho} =\frac{1}{\gamma_1}\frac{\Ldel{p_{\mathrm g}}}{{p_{\mathrm g}}} =\frac{1}{\gamma_1}\frac{p}{{p_{\mathrm g}}}\left(\frac{\Ldel p}{p}-\frac{\Ldel{p_{\mathrm t}}}{p}\right)\,, \label{eq:perturbed-adiab-eos} \end{equation} where $\Ldel X(m)$ {are perturbations following the motion, and} $\gamma_1:=(\partial\ln{p_{\mathrm g}}/\partial\ln\rho)_{s}$ is the first adiabatic exponent with $s$ being specific entropy. A standard adiabatic calculation typically neglects convection dynamics, i.e. the effect of the perturbation to the turbulent pressure, $\Ldel{p_{\mathrm t}}/p$, leading to the approximate linearized expression for an adiabatic change \begin{equation} \frac{\Ldel\rho}{\rho} \simeq\frac{1}{\gamma_1}\frac{p}{{p_{\mathrm g}}}\frac{\Ldel p}{p}\,. \label{eq:perturbed-adiab-eos2} \end{equation} Neglecting $\Ldel{p_{\mathrm t}}$ in the full expression~(\ref{eq:perturbed-adiab-eos}) for an adiabatic change is partially justified by full nonadiabatic pulsation calculations, in which the turbulent pressure and its pulsational perturbation, $\Ldel{p_{\mathrm t}}$, are consistently included. Such a pulsation computation, in which the pulsational perturbation to the convective heat flux and turbulent pressure is obtained from a time-dependent convection model \citep[e.g.,][]{HoudekDupret15}, shows that $\Ldel{p_{\mathrm t}}$ varies predominantly in quadrature with the perturbation $\Ldel{p_{\mathrm g}}$. 
This is illustrated in Fig.~\ref{fig:dpt-phase}, for the 1D solar envelope model computed according to Section~\ref{sec:1Dmodel}, where the phases $\varphi(\Ldel{p_{\mathrm t}})$ (dashed curve) and $\varphi(\Ldel{p_{\mathrm g}})$ (dot-dashed curve) of turbulent and gas pressure perturbations are plotted as a function of the total pressure $p$ for a particular radial mode. The solid curve is the norm $|\Ldel{p_{\mathrm t}}/p|$ of the relative turbulent pressure eigenfunction. In layers where $|\Ldel{p_{\mathrm t}}/p|$ is largest, the difference between $\varphi(\Ldel{p_{\mathrm t}})$ and $\varphi(\Ldel{p_{\mathrm g}})$ can be as large as $\sim 60\degr$, indicating that the turbulent pressure perturbation contributes predominantly to the imaginary part of the complex eigenfrequency, i.e. to the damping or driving of the pulsation modes. \begin{figure} \includegraphics[width=\columnwidth]{./plot_phase_pt_iii_19Jul16} \caption{Norm of the relative turbulent pressure eigenfunction $|\Ldel{p_{\mathrm t}}/p|$ (solid curve) and phases of $\Ldel{p_{\mathrm t}}$ (dashed curve) and gas pressure perturbations $\Ldel{p_{\mathrm g}}$ (dot-dashed curve){;} {the relative displacement eigenfunction is normalized to unity at the surface.} Results are shown for a radial mode with frequency $\nu\simeq 2947\,\mu$Hz, obtained with the solar envelope model `{D}' of Section~\ref{sec:1Dmodel}.} \label{fig:dpt-phase} \end{figure} Equation~(\ref{eq:perturbed-adiab-eos2}) describes consistently, in view of the ${p_{\mathrm t}}$-term in the hydrostatic equation~(\ref{eq:hydstat}), the approximation of neglecting $\Ldel{p_{\mathrm t}}$ in the adiabatic frequency calculations. Therefore, if turbulent pressure is included in the stellar equilibrium structure, the only modification to the adiabatic oscillation equations is the inclusion of the factor $p/{p_{\mathrm g}}$ in the expression for an adiabatic change~(\ref{eq:perturbed-adiab-eos2}) \citep[see also][]{RosenthalEtal99}. { Omitting this factor is inconsistent with neglecting $\Ldel{p_{\mathrm t}}$ in the adiabatic frequency calculations. } \begin{figure} \includegraphics[width=\columnwidth]{./pulsdrb_sunref_surfpaper_iii_19Jul16} \caption{Radial damping rates in units of cyclic frequency of model `{D}' (values are connected by solid lines) are compared with BiSON measurements of half the linewidths in the spectral peaks of the observed solar power spectrum (symbols with error bars; from \citealt{ChaplinEtal05}).} \label{fig:BiSON.HWHM-1D.eta} \end{figure} \subsection{Envelope models} The convection effects on the mean model structure are investigated by comparing envelope models, computed either with the standard mixing-length formulation or by adopting appropriately averaged 3D simulation results for the outer layers of the convective envelope, with solar frequencies measured by the MDI\footnote{Michelson Doppler Imager} instrument \citep{ScherrerEtal95} on the SOHO\footnote{SOlar and Heliospheric Observatory} spacecraft. The modal effects are estimated by comparing adiabatic and nonadiabatic frequencies from 1D envelope models constructed with a nonlocal, time-dependent formulation for the mean values of, and the pulsational perturbations to, the convective heat flux and turbulent pressure. The adopted models are \begin{description} \item[{\bf A}:] adiabatically computed oscillations of a 1D baseline model constructed with the standard mixing-length formulation \citep{BohmVitense58} for convection. 
\item[{\bf B}:] adiabatically computed oscillations of a patched model that was constructed by replacing the outer parts of the convection zone of the baseline model, `A', by averaged hydrodynamical simulation results. It therefore includes the turbulent pressure and the effect of convective back-warming in the mean model. The turbulent pressure perturbation, $\Ldel{p_{\mathrm t}}$, is omitted in the adiabatic oscillation calculations according to equation~(\ref{eq:perturbed-adiab-eos2}). \item[{\bf C}:] adiabatically computed oscillations of a 1D nonlocal mixing-length model including turbulent pressure ${p_{\mathrm t}}$ in the equation of hydrostatic support, but omitting convective back-warming and the effect of the pulsational Lagrangian perturbations to the turbulent pressure in the adiabatic oscillation calculations according to equation~(\ref{eq:perturbed-adiab-eos2}). \item[{\bf D}:] nonadiabatically computed oscillations of the same 1D nonlocal mean model used for {model `C'}, including the Lagrangian perturbations to turbulent pressure, $\Ldel{p_{\mathrm t}}$, and to the convective heat flux. \end{description} \begin{figure} \includegraphics[width=\columnwidth]{./plot_pt_14Jul16iii} \caption{ Comparison of the turbulent pressure over total pressure between the patched mean model `{B}' (dashed curve), for which the convection zone was modelled by averaged 3D simulation results, and the calibrated nonlocal mean model `{D}' (solid curve), as functions of the logarithmic total pressure. The dotted curve is the acoustic cutoff frequency \citep[e.g.,][]{AertsEtal10} indicating the region of mode propagation ($\log p\gtrsim 5.3$). } \label{fig:max-pt} \end{figure} \subsubsection{The 3D convective atmosphere simulation} \label{sec:3Dsim} The 3D simulation, {described by \citet{TrampedachEtal13}}, evolves the conservation equations of mass, momentum and energy on a regular grid, which is optimized in the vertical direction to capture the photospheric transition. The equation of state (EOS) is a custom calculation of \citeauthor{MihalasEtal88}'s~(\citeyear{MihalasEtal88}) EOS for the employed 15-element mixture, and the monochromatic opacities are described by \citet{TrampedachEtal14a}. Radiative transfer is solved explicitly with the hydrodynamics, and line-blanketing (non-greyness) is accounted for by a binning of the monochromatic opacities, as developed by \citet{Nordlund82}. The top and bottom boundaries are open and transmitting, minimizing their effect on the interior of the simulation. The constant entropy assigned to the inflows at the bottom is adjusted to obtain the \hbox{solar effective temperature $T_{\rm eff}$.} \subsubsection{Baseline model - {\rm `A'}} \label{sec:1Dbmodel} The baseline model is a 1D solar envelope model integrated from a Rosseland optical depth of $\tau=10^{-4}$ down to a depth of 5\%\,R$_\odot$, and using \citeauthor{BohmVitense58}'s~(\citeyear{BohmVitense58}) mixing-length formulation for convection. The model is computed with a code \citep{JCDFrandsen83} that is closely related to \citeauthor{JCD08}'s~(\citeyear{JCD08}) stellar evolution code ASTEC. The turbulent pressure is omitted in the hydrostatic equation~(\ref{eq:hydstat}). The 1D model adopts the same atomic physics as the 3D atmosphere simulation described above in Section~\ref{sec:3Dsim}. The 3D simulation also provides the temperature-optical-depth relation and the mixing length for the 1D baseline model \citep{TrampedachEtal14a, TrampedachEtal14b}. 
This is accomplished by matching the 1D baseline model to the 3D simulation at a common pressure sufficiently deep that the 3D convective fluctuations can be considered linear and far enough from the bottom of the 3D spatial domain that boundary effects are negligible. The total mass and luminosity are identical for the 1D baseline model and the 3D simulation, whereas $T_{\rm eff}$ and surface gravity are diluted in the latter case by the convective expansion of the 3D atmosphere, which also gives rise to the stratification part of the `surface effects' in the patched model of Section~\ref{sec:31Dmodel}. The limited extent of the envelope models restricts our mode selection to those that have lower turning points well inside the lower boundary. Choosing modes with degree $l=20$--$23$ fulfils this requirement, and also ensures that the modes are predominantly radial on the scale of the thin layer giving rise to the surface effects. \subsubsection{Patched model with ${p_{\mathrm t}}$ - {\rm `B'}} \label{sec:31Dmodel} Since the 1D baseline model is matched continuously to the 3D simulation (see Section~\ref{sec:1Dbmodel}), the two solutions can be combined into a single, patched, model for the adiabatic oscillation calculations. This, however, demands one more step: the 3D simulation is carried out in the plane-parallel approximation, and its constant gravitational acceleration introduces significant glitches in some quantities. We therefore apply a correction for sphericity, consistent with the \hbox{radius of the 1D model}. \subsubsection{Nonlocal models with ${p_{\mathrm t}}$ - {\rm `C \& D'}} \label{sec:1Dmodel} The 1D nonlocal model calculations with turbulent pressure are carried out essentially in the manner described by \citet[][see also \citealt{Balmforth92a}]{HoudekEtal99}. The convective heat flux and turbulent pressure are obtained from a nonlocal generalization of the mixing-length formulation \citep{Gough77a, Gough77b}. In this generalization three more parameters, $a, b$ and $c$, are introduced, which control the spatial coherence of the ensemble of eddies contributing to the total convective heat flux ($a$) and turbulent pressure ($c$), and the degree to which the turbulent fluxes are coupled to the local stratification ($b$). The effects of varying these nonlocal parameters on the solar structure and oscillation properties were discussed in detail by \citet{Balmforth92a}. The nonlocal parameter $c$ is calibrated such that the maximum value of the turbulent pressure, max(${p_{\mathrm t}}$), in the 1D nonlocal model agrees with the 3D simulation result (see Fig.~\ref{fig:max-pt}). The depth-dependence of the anisotropy $\Phi:=\bm{u}\cdot\bm{u}/{w^2}$ of the convective velocity field is adapted from the 3D simulations using an analytical function with the maximum 3D value in the atmospheric layers and the minimum 3D value in the deep interior of the simulations. The {remaining} nonlocal parameters $a$ and $b$ {cannot be easily obtained from the 3D simulations and} are therefore calibrated to achieve good agreement between calculated damping rates and measured solar linewidths (see Fig.~\ref{fig:BiSON.HWHM-1D.eta}). The mixing length was calibrated to the helioseismically determined convection-zone depth $d_{\rm cz}/{\text R}_\odot\simeq0.287$ \citep{JCD-DOG-MJT91}. Both the envelope and pulsation calculations assume the generalized Eddington approximation to radiative transfer \citep{UnnoSpiegel66}. 
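As a sketch of the anisotropy step described above (and not our actual reduction pipeline), the following Python fragment computes $\Phi$ from horizontally and temporally averaged squared velocity components and joins its atmospheric maximum to its deep-interior minimum with a smooth analytic function; the velocity profiles and the particular smooth-step form are placeholder assumptions for this illustration. \begin{verbatim}
import numpy as np

# Sketch: depth-dependent anisotropy Phi := <u.u>/<w^2> from averaged squared
# velocity components. uu2, vv2, ww2 are hypothetical placeholder profiles
# standing in for horizontal/temporal averages of a 3D simulation.
depth = np.linspace(0.0, 1.0, 200)        # arbitrary normalized depth
uu2 = 0.30 * np.exp(-3.0 * depth)         # horizontal components (placeholders)
vv2 = 0.30 * np.exp(-3.0 * depth)
ww2 = 0.20 + 0.15 * depth                 # vertical component (placeholder)
Phi = (uu2 + vv2 + ww2) / ww2             # Phi = u.u / w^2 >= 1

# Analytic fit: maximum 3D value in the atmosphere, minimum in the interior,
# joined by a smooth step (an assumed functional form for this illustration).
Phi_max, Phi_min = Phi.max(), Phi.min()
Phi_fit = Phi_min + (Phi_max - Phi_min) / (1.0 + np.exp(12.0 * (depth - 0.3)))
\end{verbatim}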
The abundances by mass of hydrogen and heavy elements are adopted from the patched model `{B}', i.e. $X=0.736945$ and $Z=0.018055$. The opacities are obtained from the OPAL tables \citep{IglesiasRogers96}, supplemented at low temperature by tables from \citet{Kurucz91}. The EOS includes a detailed treatment of the ionization of C, N, and O, and a treatment of the first ionization of the next seven most abundant elements \citep{JCD82}. The integration of stellar-structure equations starts at an optical depth of $\tau=10^{-4}$ and ends at a radius fraction $r/$R$_\odot=0.2$. The temperature gradient in the plane-parallel atmosphere is corrected by using a radially varying Eddington factor fitted to Model C of \citet{VernazzaEtal81}. The linear nonadiabatic pulsation calculations are carried out using the same nonlocal convection formulation with the assumption that all eddies in the cascade respond to the pulsation in phase with the dominant large eddies. A simple thermal outer boundary condition is adopted at the temperature minimum, while for the mechanical boundary condition the solutions are matched smoothly onto those of a plane-parallel isothermal atmosphere \citep[e.g.,][]{BalmforthEtal01}. At the base of the model envelope the conditions of adiabaticity and vanishing of the displacement eigenfunction are imposed. Only radial p modes are considered. \begin{figure} \includegraphics[width=\columnwidth]{./plot_frequ5_p2only_iii_14Jul16} \caption{Inertia-scaled frequency difference between MDI data (Sun) and model calculations. The solid curve includes the combined frequency corrections arising from {structural} effects (`{B}') and modal effects (`{D}'). The dot-dashed curve is the result for our baseline model `{A}', reflecting the result for a `standard' solar model computation.} \label{fig:MDI-1D[all].na} \end{figure} \section{Results and discussion} The adiabatic frequency corrections (Section~\ref{sec:adcalturb}) arising from modifications to the stratification of the mean model are obtained from an appropriately averaged 3D simulation for the outer convection layers (Section~\ref{sec:31Dmodel}). The frequency corrections associated with modal effects arising from nonadiabaticity, including both the perturbations to the radiation and convective heat flux, and from convection-dynamical effects of the perturbation to the turbulent pressure, are estimated from a 1D nonlocal, time-dependent convection model including turbulent pressure (Section~\ref{sec:1Dmodel}). \subsection{Adiabatic frequency corrections from modifications to the mean structure} \label{sec:mean_structure} Frequency differences between MDI data (Sun) and our baseline model, `Sun - {A}', are depicted in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL} by the dot-dashed curve, illustrating the well-known `surface effects' for a standard solar model with frequency residuals of up to $\sim20\,\mu$Hz. The effect on the adiabatic frequencies of adopting an averaged 3D simulation for the outer convection layers is illustrated by the dashed curve in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}. It shows the frequency difference between MDI data and the patched model, `Sun - {B}'. The patched model underestimates the frequencies by as much as $\sim10\,\mu$Hz. 
The change from overestimating the frequencies with the baseline model, `{A}' (dot-dashed curve), to underestimating the frequencies with the patched model, `{B}' (dashed curve), is mainly due to the effects of turbulent pressure ${p_{\mathrm t}}$ in the equation of hydrostatic support~(\ref{eq:hydstat}) and to opacity changes (convective back-warming) of the relatively large convective temperature fluctuations in the superadiabatic boundary layers. \subsection{Modal effects from nonadiabaticity and convection dynamics} \label{sec:model_effects} In addition to the {structural} changes, we also consider the modal effects of nonadiabaticity and pulsational perturbation to turbulent pressure $\Ldel{p_{\mathrm t}}$. We do this by using the 1D solar envelope model of Section~\ref{sec:1Dmodel}, which includes turbulent pressure, and which is calibrated to have the same max(${p_{\mathrm t}}$) as the 3D solar simulation (see Fig.~\ref{fig:max-pt}). To assess the modal effects we compute for this nonlocal envelope model nonadiabatic and adiabatic frequencies. The frequency difference between these two model computations, i.e. `{D\,-\,C}', is plotted in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL} with a solid curve, and illustrates the modal effects of nonadiabaticity and turbulent pressure perturbations $\Ldel{p_{\mathrm t}}$. These modal effects (solid curve in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}) produce frequency residuals that are similar in magnitude to the frequency residuals between the Sun and the patched model, `Sun\,-\,{B}' (dashed curve). This suggests that the underestimation of the adiabatic frequencies due to changes in the mean model, `{B}', is nearly compensated by the modal effects. The remaining overall frequency difference between the Sun and models that include both {structural} and modal effects, i.e. the difference between the dashed and solid curves in Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}, is illustrated in Fig.~\ref{fig:MDI-1D[all].na} by the solid curve, showing a maximum frequency difference of $\sim3\,\mu$Hz. Also depicted, for comparison, is the dot-dashed curve from Fig.~\ref{fig:MDI-1D.a-1DPM.a-NL}, which shows the frequency difference for the baseline model {`A'}, representing the result for a standard solar model calculation. We conclude that, if both {structural} and modal effects due to convection and nonadiabaticity are considered together, it is possible to reproduce the measured solar frequencies satisfactorily (solid curve in Fig.~\ref{fig:MDI-1D[all].na}) without the need of any ad-hoc correction functions. Moreover, the calibrated set of convection parameters in the 1D nonlocal model calculations reproduces the {turbulent-pressure profile of the 3D simulation in the relevant wave-propagating layers} (Fig.~\ref{fig:max-pt}), the correct depth of the convection zone, and solar linewidths {over the whole measured frequency range} (Fig.~\ref{fig:BiSON.HWHM-1D.eta}). Although we have not used the same equilibrium model for estimating the {structural} (`{B}') and the modal effects (`{D}'), we believe that the impact of this remaining inconsistency on the estimated modal effects is minute, because of the satisfactory reproduction of the ${p_{\mathrm t}}$ profile in the nonlocal equilibrium model `{D}' (see Fig.~\ref{fig:max-pt}). However, we do plan to address this in a future paper. \section*{Acknowledgements} We thank Douglas Gough for many inspiring discussions. RT acknowledges funding from NASA grant NNX15AB24G. 
Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant DNRF106).
\section{\label{sec:level1}Introduction} Supergravity branes have played a role of utmost importance in String Theory since they were discovered to be the macroscopic counterparts of many String Theory microscopic extended objects, during the second String Revolution \cite{Polchinski:1995mt}. However, strictly speaking, this correspondence is limited to the extremal cases, which have been thoroughly studied in the literature. Much less attention has been paid to non-extremal Supergravity branes (which are regular in general, in contrast to the extremal ones), since they do not obey first-order differential equations and their String Theory interpretation is less clear. In this note we are interested in further understanding the structure of general non-extremal Supergravity branes and their behaviour under \emph{electric-magnetic} duality. In reference \cite{deAntonioMartin:2012bi}, a generalization of the FGK-formalism \cite{Ferrara:1997tw} to an arbitrary number of space-time dimensions $d$ and worldvolume dimensions $(\text{p}+1)$ was presented. The $d$-dimensional class of theories considered in \cite{deAntonioMartin:2012bi} describes gravity coupled to a given number of scalars $\phi^{i}\, , i = 1,\dots,n_{\phi},$ and $(\text{p}+1)$-forms $A^{\Lambda}_{(\text{p}+1)}\, , \Lambda = 1,\dots,n_{A},$ and is given by the following two-derivative action \begin{eqnarray} \label{eq:daction} S &=& \int d^{d}x \sqrt{|\mathrm{g}|} \left\{ R + \mathcal{G}_{ij} (\phi)\partial_{\mu} \phi^{i} \partial^{\mu} \phi^{j}\right. \\ \notag &+&\left.4 \tfrac{(-1)^{\text{p}}}{(\text{p}+2)!} I_{\Lambda \Omega}(\phi) F_{(\text{p}+2)}^{\Lambda} \cdot F_{(\text{p}+2)}^{\Omega} \right\}\, , \end{eqnarray} \noindent where $F^{\Lambda}_{(\text{p}+2)} = (\text{p}+2)d A^{\Lambda}_{(\text{p}+1)}$ are the $(\text{p}+2)$-form field strengths and the scalar-dependent, negative-definite matrix $I_{\Lambda\Omega}\left(\phi\right)$ describes the couplings of scalars $\phi^{i}$ to the $(\text{p}+1)$-forms $A^{\Lambda}_{(\text{p}+1)}$. The generic space-time metric considered in \cite{deAntonioMartin:2012bi} was \begin{eqnarray} \label{eq:generalmetric1} ds_{(d)}^{2} &=& e^{\frac{2}{\text{p}+1}U} \left[ W^{\frac{\text{p}}{\text{p}+1}} dt^{2} -W^{-\frac{1}{\text{p}+1}}d\vec{z}^{\, 2}_{(\text{p})} \right]\\ \notag &-&e^{-\frac{2}{\tilde{\text{p}}+1}U} \gamma_{(\tilde{\text{p}}+3)}\, , \end{eqnarray} \begin{equation} \label{eq:backgroundtransversemetric} \gamma_{(\tilde{\text{p}}+3)} = \mathcal{X}^{\frac{2}{\tilde{\text{p}}+1}} \left[ \mathcal{X}^2 \frac{d\rho^2}{(\tilde{\text{p}}+1)^2} + d\Omega^{2}_{(\tilde{\text{p}}+2)} \right]\, , \end{equation} \noindent where $\mathcal{X}\equiv \left(\frac{ \omega/2}{\sinh{\left(\frac{\omega}{2} \rho\right)}} \right)$, $\vec{z}_{(\text{p})} \equiv \left( z^{1},\dots,z^{\text{p}}\right)$ are spatial worldvolume coordinates and $d=\text{p}+\tilde{\text{p}}+4$, so that $\tilde{\text{p}}$ is the number of spatial dimensions of the dual brane. $d\Omega^{2}_{(\tilde{\text{p}}+2)}$ stands for the round metric on the $(\tilde{\text{p}}+2)$-sphere of unit radius, and $\omega$ is a constant that corresponds to the \emph{non-extremality} parameter of the black-brane solution. In other words, the black-brane is extremal if and only if $\omega = 0$. 
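It is instructive to check numerically how $\omega$ deforms the transverse geometry; the short Python sketch below (purely illustrative, not part of the derivation) verifies that $\mathcal{X} \to 1/\rho$ in the extremal limit $\omega \to 0$. \begin{verbatim}
import numpy as np

# Sketch: the transverse metric function X(rho) = (omega/2)/sinh(omega*rho/2)
# reduces to 1/rho as omega -> 0, so omega smoothly deforms the extremal
# transverse geometry.
def X(rho, omega):
    if omega == 0.0:
        return 1.0 / rho
    return (omega / 2.0) / np.sinh(omega * rho / 2.0)

rho = 0.1
for omega in (0.0, 1e-3, 0.5, 1.0):
    print(omega, X(rho, omega))   # tends to 1/rho = 10 as omega -> 0
\end{verbatim}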
Assuming the space-time background (\ref{eq:generalmetric1}) and that all the fields of the theory depend exclusively on the radial coordinate $\rho$, the equations of motion of (\ref{eq:daction}) are equivalent to the following set of ordinary differential equations \cite{deAntonioMartin:2012bi} \begin{eqnarray} \label{eq:1} \ddot{U} +e^{2U}V_{\rm BB} & = & 0\, , \\ & & \nonumber \\ \label{eq:2} \ddot{\phi}^{i} +\Gamma_{jk}{}^{i}\dot{\phi}^{j}\dot{\phi}^{k} +\tfrac{d-2}{2(\tilde{\text{p}}+1)(\text{p}+1)}e^{2U}\partial^{i} V_{\rm BB} & = & 0\, , \\ & & \nonumber \\ \label{eq:hamiltonianconstraint} (\dot{U})^{2} +\tfrac{(\text{p}+1)(\tilde{\text{p}}+1)}{d-2} \mathcal{G}_{ij} \dot{\phi}^{i} \dot{\phi}^{j} +e^{2 U} V_{\rm BB} & = & c^{2}\, , \end{eqnarray} \noindent where $V_{\rm BB}$ stands for the so-called \emph{black-brane} potential \begin{equation}\label{VBB} V_{\rm BB}\left(\phi, q\right)\equiv 2\alpha^2\frac{2(\text{p}+1)(\tilde{\text{p}}+1)}{(d-2)} \left(I^{-1}\right)^{\Lambda\Omega} q_{\Lambda} q_{\Omega}\, , \end{equation} \noindent and $c^2$ is a real, positive semi-definite constant given by \begin{equation}\label{c} c^2 \equiv \frac{(\text{p}+1)(\tilde{\text{p}}+2)}{4(d-2)}\omega^2 - \frac{(\tilde{\text{p}}+1)\text{p}}{4(d-2)}\gamma^2\, , \end{equation} \noindent and $\gamma$ is another constant whose origin will be clear in a moment. Notice that the system of differential equations above only involves the metric factor $U$ and the scalar fields $\phi^{i}$, since the $(\text{p}+1)$-forms can be eliminated in terms of the corresponding charges $q_{\Lambda}\, , \Lambda = 1,\dots,n_{A},$ by explicitly integrating the Maxwell equations. \noindent Remarkably enough, it turns out that $W$ can also be explicitly integrated, yielding \begin{equation} W = e^{\gamma\rho}\, , \end{equation} \noindent where $\gamma$ is the (integration) constant which appears in (\ref{c}). In \cite{deAntonioMartin:2012bi} it was argued that in order to have a regular black-brane solution, we must have \footnote{In the ansatz at hand, the event horizon (if any) will correspond to $\rho\rightarrow +\infty$, whereas spatial infinity will be at $\rho \rightarrow 0^+$. In order for the worldvolume metric to be regular in the near-horizon limit, $e^U\propto e^{\frac{\omega \rho}{2}}$ and $W\sim e^{\omega \rho}$, which fixes $\gamma = \omega$.} $\gamma = \omega$ and therefore $c^2 = \frac{\omega^2}{4}$. To sum up, in reference \cite{deAntonioMartin:2012bi} it was found that the above ansatz corresponds to a black-brane solution (not necessarily regular) of the theories defined by the generic action (\ref{eq:daction}) if equations (\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:hamiltonianconstraint}) are satisfied. It can be seen that the FGK system of equations is completely fixed once we know the following data: the Riemannian metric $\mathcal{G}_{ij}$ of the non-linear sigma model, the number p of spatial dimensions of the brane and the matrix $I_{\Lambda\Omega}$ describing the couplings between the scalars and the (p+1)-forms. Actually, the FGK-system is invariant under the interchange \begin{equation} \label{eq:interchange} \mathrm{p}\leftrightarrow \tilde{\mathrm{p}}\, , \end{equation} \noindent which, however, does not leave the space-time metric invariant: the latter now represents the metric of a $\tilde{\mathrm{p}}$-brane. A $\tilde{\mathrm{p}}$-brane naturally couples to a $(\tilde{\mathrm{p}}+1)$-form, that is, to the magnetic duals of the electric (p+1)-forms $A^{\Lambda}_{(\mathrm{p}+1)}$. 
Therefore, in order to properly perform the interchange \eqref{eq:interchange} we also have to change the electric matrix of couplings, $I_{el}$, to the magnetic one, $I_{mag}$. Schematically the transformation is \begin{equation} \label{eq:interchangeII} \mathrm{p}\leftrightarrow \tilde{\mathrm{p}}\, , \qquad I_{el}\leftrightarrow I_{mag}\, . \end{equation} \noindent The only term in the FGK-system that depends on $I_{\Lambda\Omega}$ is the black-brane potential $V_{BB}$. Therefore, if \begin{equation} \label{eq:Vbbcondition} \left(I^{-1}\right)^{\Lambda\Omega}_{el} q_{\Lambda} q_{\Omega} = \left(I^{-1}\right)^{\Lambda\Omega}_{mag} q^{\prime}_{\Lambda} q^{\prime}_{\Omega}\, , \end{equation} \noindent where $q^{\prime}_{\Lambda} = A_{\Lambda}^{\Omega}q_{\Omega},\,\, A\in $Gl$(n_{A},\mathbb{R})$, then the FGK-system is invariant under the transformation \eqref{eq:interchangeII}, up to a redefinition of the charges, and therefore with the same solution of the FGK-system we can construct two space-time solutions, the electric-brane solution and the magnetic-brane solution. In order to see when condition \eqref{eq:Vbbcondition} holds, we have to change from electric variables $A^{\Lambda}_{(\mathrm{p}+1)}$ to the magnetic ones $\tilde{A}_{(\tilde{\mathrm{p}}+1)\Lambda}$ in the action \eqref{eq:daction}. The equations of motion and the Bianchi identities for the electric fields $A^{\Lambda}_{(\mathrm{p}+1)}$ are \begin{equation} d\left(I_{\Lambda\Omega}\ast F^{\Omega}_{(\mathrm{p}+2)}\right) = 0\, , \qquad dF^{\Lambda}_{(\mathrm{p}+2)} = 0\, . \end{equation} \noindent Now we define \begin{equation} \label{eq:GF} G_{(\tilde{\mathrm{p}}+2)\Lambda} = I_{\Lambda\Omega}\ast F^{\Omega}_{(\mathrm{p}+2)}\, , \end{equation} \noindent and thus the equations of motion for the electric vector fields can be written as a Bianchi identity for $G_{(\tilde{\mathrm{p}}+2)\Lambda}$ \begin{equation} dG_{(\tilde{\mathrm{p}}+2)\Lambda} = 0\Rightarrow G_{(\tilde{\mathrm{p}}+2)\Lambda} = d\tilde{A}_{(\tilde{\mathrm{p}}+1)\Lambda}\,\,\, \mathrm{locally} \, . \end{equation} \noindent Equation \eqref{eq:GF} can be inverted as follows \begin{equation} \label{eq:GFinverted} F^{\Lambda}_{(\mathrm{p}+2)} = (-1)^{(d-1)+(\mathrm{p}+2)(\tilde{\mathrm{p}}+2)}\left( I^{-1}\right)^{\Lambda\Omega} \ast G_{(\tilde{\mathrm{p}}+2)\Omega}\, . \end{equation} \noindent Substituting equation \eqref{eq:GFinverted} into the action \eqref{eq:daction}, we deduce that \begin{equation} \label{eq:LIelmag} I_{mag} = I^{-1}_{el}\, . \end{equation} \noindent Given equations \eqref{eq:LIelmag} and \eqref{eq:Vbbcondition}, we find that a sufficient condition for obtaining the same FGK-system for electric and magnetic branes is that there exists a matrix $A\in$ Gl$(n_{A},\mathbb{R})$ such that the following \emph{self-duality} condition holds \begin{equation} \label{eq:selduality} I^{-1} = A I A^{T}\, . \end{equation} \noindent Without invoking supersymmetry we can say little more beyond equation \eqref{eq:selduality}, since the couplings in the action \eqref{eq:daction} are in principle arbitrary aside from some regularity conditions. Supersymmetry, however, constrains the couplings and therefore it is easier to analyze when equation \eqref{eq:selduality} is satisfied. Supergravity non-linear sigma models are constrained by supersymmetry and related to the couplings of the (p+1)-forms and the scalars of the theory. 
Let us now consider the general situation of an extended ungauged Supergravity, where the scalar manifold is a homogeneous space of the form \begin{equation} \mathcal{M}_{S} = \frac{G}{H}\, , \end{equation} \noindent and the matrix $I$ of the couplings between the (p+1)-forms and the scalars is a coset representative, namely $I\in \frac{G}{H}$. The coset element $I$ must be taken in a particular representation, namely $I$ is in the representation $R(G)$ that acts on the charges of the corresponding electric p-forms of the theory. This is the standard situation in extended Supergravities in diverse dimensions. From the self-duality condition \eqref{eq:selduality} we are interested in coset representatives $I$ such that there exists a matrix $A\in Gl(n_{A},\mathbb{R})$ satisfying \begin{equation} \label{eq:seldualityII} I^{-1} = A I A^{T}\, . \end{equation} \noindent There is a sufficient condition on $G$ that implies the self-duality condition \eqref{eq:seldualityII}. Let us assume that the Lie group $G$ leaves invariant a bilinear form $\mathcal{B}\in V^{\ast}\otimes V^{\ast}$, where $V$ is the $n_{A}$-dimensional representation vector space of $G$, or in other words, $q_{\Lambda}\in V$. The condition of $G$ leaving invariant $\mathcal{B}$ can be rewritten as follows \begin{equation} \label{eq:RB} R^{T}\mathcal{B}R = \mathcal{B}\, ,\qquad R\in R(G)\, , \end{equation} \noindent where $R(G)$ is the corresponding representation of $G$ as automorphisms of $V$. Now, the self-duality condition does not have to be satisfied by an arbitrary element in $G$ but by an element in $G/H$ which, in the representation $R(G)$, must be symmetric in order to be an admissible $I$ \footnote{Even if it is non-symmetric, when contracting with $F^{\Lambda}_{(\mathrm{p}+2)}$ in \eqref{eq:daction} only the symmetric part survives.}. Assuming then that $R^{T} = R$ we can rewrite \eqref{eq:RB} as follows \begin{equation} \label{eq:RBII} R^{-1} = \mathcal{B}^{-1} R\mathcal{B} \, ,\qquad R\in R(G)\, , \end{equation} \noindent and therefore if \begin{equation} \label{eq:BTB1} \mathcal{B}^{T} = \mathcal{B}^{-1}\, , \end{equation} \noindent then equation \eqref{eq:seldualityII} is satisfied and the corresponding FGK model is self-dual, meaning that the system of differential equations to be solved for the electric p-brane and the corresponding magnetic $\tilde{\mathrm{p}}$-brane is exactly the same. There are several Supergravities where condition \eqref{eq:BTB1} holds. Just to name a few: Type-IIB Supergravity, where $G=$Sl$(2,\mathbb{R})$, $H = $SO$(2)$ so $\mathcal{B} = \text{antidiag}(1,-1)$; nine-dimensional $\mathcal{N}=2$ Supergravity, where $G = $Sl$(2,\mathbb{R})\times $O$(1,1)$ and $H= $O$(2)$, quotienting only the first factor and $\mathcal{B} = \text{antidiag}(1,-1)\times\text{diag}(1,-1)$; four-dimensional $\mathcal{N}=8$ Supergravity, where $G=$E$_{7(7)}$ acting on the charges in the {\bf 56} irrep, $H = $SU$(8)/\mathbb{Z}_2$ and $\mathcal{B} $ is the symplectic form in the ${\bf 56}$-dimensional vector space; four-dimensional $\mathcal{N}=6$ Supergravity, with $G=$SO$^*(12)$, $H = $U$(6)$ and $\mathcal{B}$ is the identity matrix, etc. Let us see how this works in a particular example, namely the $(p,q)$-black-strings and $(p,q)$-5-black branes of Type-IIB Supergravity. 
First, we will use the effective FGK variables to construct the non-extremal $(p,q)$-black-string, which is new in the literature, and then we will show how in the FGK framework this solution is actually the same as the non-extremal $(p,q)$-5-black-brane, also new. Before getting started, let us review the basic properties of the extremal $(p,q)$-string of Schwarz \cite{Schwarz:1995dk}. From the stringy perspective, an (extremal) $(p,q)$-string is a bound state of Type-IIB String Theory composed of $p$ \emph{D-strings} (\emph{D1s}), charged under the RR two-form $C_{(2)}$, and $q$ \emph{fundamental strings} (\emph{F1s}), with charge under the NS-NS two-form $B$. Type-IIB Supergravity is invariant under a global SL$(2,\mathbb{R})$ symmetry, so all the states of the theory are accommodated in multiplets of this group. In particular, any state can be generated from another one living in the same multiplet by applying an SL$(2,\mathbb{R})$ transformation. This is the case for the $D1$ and $F1$ solutions, which are related to each other via this IIB S-duality. Similarly, we can generate a $(p,q)$-string starting from one of them, and performing a general enough SL$(2,\mathbb{R})$ transformation. This was done for the first time by Schwarz \cite{Schwarz:1995dk}, who also gave the corresponding Supergravity version of the solution. In fact, from the Supergravity perspective, all these states correspond to extremal black strings charged under one or both two-forms. All these solutions are nevertheless singular, given that the corresponding black-string singularities are naked. As we will see, this behaviour is cured in the non-extremal case, and we will be able to construct a regular non-extremal $(p,q)$-black-string solution. The relevant truncated Type-IIB Supergravity Lagrangian is \begin{eqnarray} \label{eq:IIBactionEtruncated} S = \int \, d^{10} x \sqrt{|\mathrm{g}|}\,\left[R + \frac{1}{2}\frac{\partial_{\mu}\tau\partial^{\mu}\bar{\tau}}{\left(\Im{\rm m}\tau\right)^2} + \frac{1}{2\cdot 3!}\mathcal{H}^{T}\mathcal{M}^{-1} \mathcal{H} \right]\, \,\, \end{eqnarray} \noindent where $\mathcal{H}\equiv d\mathfrak{B}$, with $\mathfrak{B}^T\equiv \left(C_{(2)},B \right)$ and $\mathcal{M}\equiv \frac{1}{\Im{\rm m}\tau}\left( \begin{array}{cc} |\tau|^2 & \Re{\rm e}\tau \\ \Re{\rm e}\tau & 1 \end{array} \right)$ with $\Im{\rm m}\tau>0$ is the coset representative of the space SL$(2,\mathbb{R})/$SO$(2)$ parametrized by the axidilaton $\tau\equiv C_{(0)}+ie^{-\Phi}$. Since black strings in ten dimensions have $\text{p}=1$ and $\tilde{\text{p}}=5$, let us set \footnote{There should be no confusion between the p that denotes the number of spatial dimensions of a given brane and the $p$ in the $(p,q)$-strings, which corresponds to its charge under $C_{(2)}$.} \begin{equation}\label{pene} d=10\, , \quad \text{p}=1\, , \quad \tilde{\text{p}} = 5\, \end{equation} in the FGK effective action (\ref{eq:daction}). \noindent Now, the key point to notice is that the action (\ref{eq:IIBactionEtruncated}) is a particular case of (\ref{eq:daction}), obtained by taking $n_{\phi}=2\, , n_{A} = 2$ and making the following identifications \begin{equation} \label{eq:identificationfields} \phi^{1} = C_{(0)}\, , \, \phi^{2} =e^{-\Phi}\, , \, \mathcal{G}_{ij} =e^{2\Phi} \frac{\delta_{ij}}{2}\, , \, I(\phi) \equiv -\frac{1}{8}\mathcal{M}^{-1}\, , \end{equation} \noindent where $i,j =1,2$ and $\tau = C_{(0)}+i e^{-\Phi}$. 
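As a quick numerical sanity check (a sketch, not part of the derivation), one can verify the SL$(2,\mathbb{R})/$SO$(2)$ coset property $\mathcal{M}^{-1} = \eta^{T}\mathcal{M}\,\eta$, which underlies the electric-magnetic self-duality exploited below; the antisymmetric matrix $\eta$ is the one defined later in the text, and $\tau$ is an arbitrary point in the upper half plane. \begin{verbatim}
import numpy as np

# Sketch: check M^{-1} = eta^T M eta for the SL(2,R)/SO(2) coset
# representative M(tau), at an arbitrary point of the upper half plane.
tau = 0.37 + 1.42j
M = (1.0 / tau.imag) * np.array([[abs(tau)**2, tau.real],
                                 [tau.real,    1.0     ]])
eta = np.array([[ 0.0, 1.0],
                [-1.0, 0.0]])
print(np.allclose(np.linalg.inv(M), eta.T @ M @ eta))  # True; also det(M) = 1
\end{verbatim}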
We thus obtain that the black-brane potential for this truncation of Type-IIB Supergravity is given by \begin{equation} \label{eq:blackstringsV} -V_{\text{BB}}\left(\phi, q\right) = \mathcal{M}^{\Lambda\Omega} q_{\Lambda} q_{\Omega} = e^{\Phi}\left( \left|\tau\right|^2 p^2 +q^{2} + 2 pqC_{(0)}\right)\, , \end{equation} \noindent where $\Lambda, \Omega = 1,2$ and we have defined $\alpha^2 = \frac{1}{2^4\cdot 3}$ and $q_1\equiv p$, $q_2\equiv q$. Therefore, in order to obtain the black-string solutions of the theory (\ref{eq:IIBactionEtruncated}) we \emph{just} have to solve the system of ordinary differential equations given by (\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:hamiltonianconstraint}) assuming equations (\ref{pene}), (\ref{eq:identificationfields}) and (\ref{eq:blackstringsV}). Notice that $\mathcal{M}$ is positive definite and therefore $V_{\text{BB}}\left(\phi, q\right)$ in (\ref{eq:blackstringsV}) is negative definite. In reference \cite{deAntonioMartin:2012bi}, it was shown that for regular extremal black-brane solutions, the value $\phi_{H}$ of the scalars at the black-brane horizon obeys \begin{equation} \label{eq:BBattractorsIiI} \partial_{i} V_{\text{BB}}\left(\phi_{H}, q\right) = 0\, ,\qquad i = 1,\dots, n_{\phi}\, . \end{equation} The solutions $\phi_{H}$ of equation (\ref{eq:BBattractorsIiI}) are the so-called black-brane attractors, and generalize to black-brane solutions the popular concept of black-hole attractor. Notice that equation (\ref{eq:BBattractorsIiI}) completely fixes the value of the scalars at the horizon in terms of the charges, as long as there are no \emph{flat directions}. Taking the black-brane potential as in (\ref{eq:blackstringsV}), one easily finds that (\ref{eq:BBattractorsIiI}) has no solutions for the $(p,q)$-black-string system, meaning that there does not exist any extremal regular black-string solution of Type-IIB Supergravity with non-trivial scalars. The most general extremal solution of this kind was constructed by Schwarz in \cite{Schwarz:1995dk}. It is given, in standard coordinates, by \begin{eqnarray} \label{eq:pqs} &ds_{E}^2 = H^{-\frac{3}{4}} \left[ dt^2 - dz^2\right] - H^{\frac{1}{4}}d\vec{x}^2\, , \\ \notag &\mathfrak{B}_{tz}=\mathfrak{a}\left(H^{-1}-1 \right)\, , \, \mathcal{M} = \mathfrak{a} \mathfrak{a}^T H^{-\frac{1}{2}} + \mathfrak{b}\mathfrak{b}^T H^{\frac{1}{2}}\, ,\, \end{eqnarray} where \begin{equation} \displaystyle H = 1 + \frac{h}{r^6}\, , \, \end{equation} $r^2 \equiv \vec{x}^2$ and $\mathfrak{a}^T = (a_{1}, a_{2})$ and $\mathfrak{b}^T = (b_{1}, b_{2})$ are two constant vectors to be expressed in terms of the physical parameters of the solution and subject to the constraint $\mathfrak{a}^T\eta \mathfrak{b}=a_1 b_2-a_2 b_1=1$. The relation between $\mathcal{M}$ and $H$ can be inverted to obtain the expression for the axidilaton, which reads \begin{equation} \tau = \frac{a_{1} a_{2} + b_{1} b_{2} H}{a^2_{2} + b^2_{2} H} + \frac{i \sqrt{H}}{a^2_{2} + b^2_{2} H}\, . \end{equation} It is not difficult to recover the $D1$ and $F1$ solutions from the $(p,q)$-black-string one by setting $C_{(0)}=0$ and, respectively, $q=0$ or $p=0$. The standard coordinates can be related to the FGK ones through the change $r=\rho^{-\frac{1}{6}}$. 
It is straightforward to check that equations (\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:hamiltonianconstraint}) with $c=0$ are satisfied by Schwarz's $(p,q)$-black-string (\ref{eq:pqs}) \footnote{In particular, the relation between $U(\rho)$ and $H(r)$ is given by $H(r)^{-\frac{3}{4}}=e^{U(\rho)}$.}. We find that the singular extremal $(p,q)$-black-string can be generalized to a regular non-extremal solution, given by \begin{widetext} \begin{eqnarray} \label{eq:susystringmetric14} ds_{E}^2& = &H^{-\frac{3}{4}} \left[W dt^2 - dz^2\right] - H^{\frac{1}{4}} \left[W^{-1}dr^2+r^2 d\Omega^2_{(7)} \right]\, ,\\ \notag \mathfrak{B}_{tz}&=&\pm \mathfrak{a}\left( H^{-1}-1\right)\, , \, \,\,\, \tau=\displaystyle \frac{a_{1} a_{2} + b_{1} b_{2} H}{a^2_{2} + b^2_{2} H} + \frac{i \sqrt{H}}{a^2_{2} + b^2_{2} H}\, ,\\ \notag \displaystyle H& =& 1 + \frac{h}{r^6}\, ,\,\, W=1+\frac{2c}{r^6}\, , \,\, h=c + \frac{2}{\sqrt{3}}\sqrt{|{V_{\rm BB}}_{\infty}|+\frac{3c^2}{4}}\, , \end{eqnarray} \begin{eqnarray} \notag a_1&=& \frac{\left(q\,C_{(0)\infty}+p|\tau_{\infty}|^2\right)e^{\Phi_{\infty}}}{\sqrt{|{V_{\rm BB}}_{\infty}|}}\, ,\, \,b_1=-\frac{q\, }{\sqrt{|{V_{\rm BB}}_{\infty}|}}\, ,\\ \notag a_2&=& \frac{\left(q+p\,C_{(0)\infty}\right)e^{\Phi_{\infty}}}{\sqrt{|{V_{\rm BB}}_{\infty}|}}\, ,\, \, \, b_2=\frac{p\,}{\sqrt{|{V_{\rm BB}}_{\infty}|}}\, , \\ \notag {V_{\rm BB}}_{\infty}&\equiv& -e^{\Phi_{\infty} }\left(q^2+2pqC_{(0)\infty}+p^2|\tau_{\infty}|^2\right)\, , \end{eqnarray} \end{widetext} \noindent where we have expressed all the parameters of the solution in terms of the corresponding physical quantities (charges $\mathfrak{q}$ and asymptotic values of the axion and dilaton). The FGK variables in which this solution was obtained are related to the standard ones by the change of variables \begin{eqnarray} r^6=\frac{2c}{e^{2c\rho}-1}\, ,\,\,\, H(r)^{-3/4}=e^{U(\rho)}e^{-c\rho}\, . \end{eqnarray} It can be easily seen that the general non-extremal solution we have found reduces to all the known solutions, namely, the non-extremal $D1$-brane by taking $C_{(0)} = 0,\, q = 0$; the non-extremal $F1$-string by setting $C_{(0)} = 0,\, p = 0$; and Schwarz's extremal $(p,q)$-string by taking the $c\rightarrow 0$ limit. This non-extremal $(p,q)$-black-string possesses the same metric as the non-extremal $D1$ and $F1$, and an axidilaton with both real and imaginary parts having the same expression as in Schwarz's extremal $(p,q)$-string (\ref{eq:pqs}) (although everything depends now also on the non-extremality parameter $c=\omega/2$). As we explained before, the FGK equations (\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:hamiltonianconstraint}) are blind to electric-magnetic duality for a broad class of bosonic actions. That is indeed the case for the action (\ref{eq:IIBactionEtruncated}). Indeed, all the equations of motion of the FGK-formalism coming from (\ref{eq:IIBactionEtruncated}) are invariant under the interchange $\mathrm{p}\leftrightarrow \tilde{\mathrm{p}}\, , \,\, I_{el}\leftrightarrow I_{mag}$. The only subtlety appears in the black-brane potential. 
Since $\mathcal{M}^{-1} = \eta^{T}\,\mathcal{M}\,\eta\,$, this goes from \begin{equation} \label{eq:blackstringsVi} -V_{BB}^{(C_{(2)},B)}=\mathfrak{q}^T \mathcal{M} \mathfrak{q} = e^{\Phi}\left( \left|\tau\right|^2 p^2 +q^{2} + 2 pqC_{(0)}\right)\, , \end{equation} \noindent in the \emph{electric} version of the action, to \begin{equation} \label{eq:blackstringsViii} -V_{BB}^{(C_{(6)},B^{(6)})}=\mathfrak{q}_5^T \mathcal{M} \mathfrak{q}_5= e^{\Phi}\left( \left|\tau\right|^2 p_5^2 +q_5^{2} + 2 p_5q_5C_{(0)}\right)\, , \end{equation} in the \emph{magnetic} one, provided that we define the charges $\mathfrak{q}_5$ as \begin{equation} \mathfrak{q}_5=(p_5,q_5)^T\equiv \eta \mathfrak{q} =(q,-p)^T\, , \,\,\, \eta = \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right)\, . \end{equation} Hence, in the effective FGK variables, pairs consisting of a black string and a $5$-black-brane solving the equations of motion of the corresponding ten-dimensional action appear as a single solution. This corresponds in general to a black string of charges $(p,q)$ under $(C_{(2)},B)$ and a $5$-black-brane with charges $(q,-p)$ under $(C_{(6)},B^{(6)})$. Also, the fact that both black-brane potentials are equivalent implies that no regular extremal $5$-black-brane objects exist. The known $5$-brane solutions of Type-IIB Supergravity correspond to the non-extremal \emph{D5-brane}, the non-extremal \emph{S5} and the analogue of Schwarz's extremal black-string, the \emph{$(p,q)$-5-brane} of Lu and Roy \cite{Lu:1998vh}. Using the very same solution of the FGK system (\ref{eq:susystringmetric14}) it is straightforward to construct the non-extremal \emph{$(p,q)$-5-brane}, which can be easily seen to reduce to the known cases just mentioned. As we have explained, there is a black-brane attractor mechanism at work for extremal black-branes $(\omega=0)$, which fixes the scalars at the horizon as the critical points $\phi_{H}$ of the black-brane potential. Indeed, assuming regularity of the scalars at the horizon as well as a regular Riemannian scalar metric, the value of the scalars at the horizon $\phi_{H}$ for an extremal black-brane solution satisfies (\ref{eq:BBattractorsIiI}). We will now use the FGK-formalism for black-branes to prove the existence of a universal \footnote{In the sense that it will have the same expression for any theory of the form (\ref{eq:daction}).} black-brane solution with constant scalars, and a universal near-horizon behaviour, if condition (\ref{eq:BBattractorsIiI}) is satisfied. In this case, however, such a condition will appear as a constraint from imposing the scalars to be constant (often referred to as \emph{double-extremality}) and not from requiring the non-extremality parameter $c$ to vanish. Indeed, for constant scalars, the FGK system of equations reduces to \begin{eqnarray} \label{eq:1c} \ddot{U} +e^{2U}V_{\rm BB} & = & 0\, , \\ \label{eq:2c} \partial_{i} V_{\rm BB} & = & 0\, , \\ \label{eq:hamiltonianconstraintc} (\dot{U})^{2}+e^{2 U} V_{\rm BB} & = & c^{2}\, . \end{eqnarray} \noindent Note that equations (\ref{eq:1c}), (\ref{eq:2c}) and (\ref{eq:hamiltonianconstraintc}) do not depend on the number p of spatial dimensions of the brane. Notice also that $V_{\rm BB}(q)$ will now be a constant constructed from the product of the constant $n_A\times n_A$ kinetic matrix $\left(I^{-1}\right)^{\Lambda\Omega}$ and the charge vectors $q_{\Lambda}$, see (\ref{VBB}). 
Thus, a double-extremal black brane will in general be charged under the $n_A$ $(\text{p}+1)$-forms $A_{(\text{p}+1)}^{\Lambda}$ present in the theory. Equation (\ref{eq:2c}) can be automatically solved if the black-brane potential has at least one critical point, something that must be analyzed on a case-by-case basis and that we will assume henceforth. Equation (\ref{eq:1c}) is the derivative of equation (\ref{eq:hamiltonianconstraintc}), and thus we are left with a single equation. This was to be expected, since there is only one variable left to be integrated, namely $U$. Equation (\ref{eq:hamiltonianconstraintc}) can be explicitly integrated and the solution is given by \begin{equation} \label{eq:nonextremaluniversal} e^{-2U} =\frac{|V_{\rm BB}| \sinh^2\left(c\rho + s\right)}{c^2} \, , \end{equation} \noindent where $s$ is an integration constant. Normalizing the metric to obtain Minkowski space-time at spatial infinity fixes $s$ to be given by \begin{equation} \label{eq:s} s = \mathrm{arcsinh}\left(\frac{c}{\sqrt{|V_{\rm BB}|}}\right)\, . \end{equation} \noindent Therefore, inserting equation (\ref{eq:nonextremaluniversal}) into the general metric (\ref{eq:generalmetric1}) we obtain a complete $(p_1,p_2,$ $...,p_{n_A})$-p-black-brane solution with constant scalars which solves the theory (\ref{eq:daction}). The metric factor $e^{-2U}$ is well defined for $\rho\in [0,+\infty )$ and therefore the solution contains a horizon at $\rho\rightarrow +\infty$ and is regular. Taking the extremal limit $c\to 0$ we obtain \begin{equation} \label{eq:extremaluniversal} e^{-2U} = \left(1+\sqrt{|V_{\rm BB}|}\rho\right)^2\, , \end{equation} \noindent which corresponds to a regular extremal universal black-brane solution. We can now obtain the near-horizon geometry of the extremal solution simply by taking the limit $\rho\to+\infty$ in the general extremal metric where now $U$ is given by equation (\ref{eq:extremaluniversal}). Making the change of coordinates $\rho = r^{\text{p}+1} $ and relabeling $\vec{z}$ and $t$ we can rewrite the final result as follows \begin{eqnarray} \label{eq:nearhorizonextremal2} &&\hspace{-0.2cm}\lim_{\rho\to\infty} ds_{(d)}^{2} =\\ \notag &&\hspace{-0.2cm} |V_{\rm BB}|^{\frac{1}{\tilde{\text{p}}+1}}\left[\frac{(\text{p}+1)^2}{(\tilde{\text{p}}+1)^2}\, \frac{1}{r^2}\left[dt^{2}-d\vec{z}^{\, 2}_{(\text{p})} - dr^2\right] + d\Omega^{2}_{(\tilde{\text{p}}+2)}\right]\, , \end{eqnarray} \noindent which corresponds to the space AdS$_{(2+\text{p})}\times S^{\tilde{\text{p}}+2}$. Notice that the near-horizon geometry (\ref{eq:nearhorizonextremal2}) is itself a solution of the equations of motion, and corresponds again to a universal solution with constant scalars. Let us remind the reader that in order for either the universal black-brane solution or the near-horizon solution to exist, the only requirement is that the $n_{\phi}$ scalars present in the theory can be consistently chosen to be constant. This is equivalent to requiring the black-brane potential to have a critical point. A simple case in which we can easily construct the double-extremal solution corresponds to $\mathcal{N}= 2$, $d=5$ supergravity coupled to one vector multiplet. A model of this theory is completely determined by specifying a completely symmetric tensor $C_{IJK}$ (see, e.g. \cite{deAntonioMartin:2012bi}, for details), which in this case reads $C_{011}=1/3$. 
The black-brane potential of the model reads \begin{equation}\label{vv} -V_{\rm BB}=\frac{1}{3}\left[ \left(p^0\right)^2e^{-2\sqrt{\frac{2}{3}}\phi}+2\left(p^1 \right)^2e^{\sqrt{\frac{2}{3}}\phi}\right]\, , \end{equation} where $\phi$ is the only scalar of the theory, and $p^0$, $p^1$ are the charges under the 2-forms $B_{0\mu\nu}$ and $B_{1\mu\nu}$ dual to the graviphoton and the 1-form of the vector multiplet, respectively \cite{deAntonioMartin:2012bi}. Now, (\ref{vv}) has a critical point at \begin{equation}\label{phii} \phi_h=\sqrt{\frac{2}{3}}\log \left(\left|\frac{p^0}{p^1}\right| \right)\, , \end{equation} at which \begin{equation}\label{vvv} -V_{\rm BB}(\phi_h,p)=\left[|p^0|(p^1)^2\right]^{2/3}\, . \end{equation} Therefore, the double-extremal black string of this model is given by \begin{equation} e^{-2U} =\frac{\left[|p^0|(p^1)^2\right]^{2/3} \sinh^2\left(c\rho + s\right)}{c^2} \, , \end{equation} with \begin{equation} s = \mathrm{arcsinh}\left(\frac{c}{\left[|p^0|(p^1)^2\right]^{1/3} }\right) \, . \end{equation} \noindent \textbf{Acknowledgements.} This work has been supported in part by the Spanish Ministry of Science and Education grant FPA2012-35043-C02 (-01 \& -02), the Centro de Excelencia Severo Ochoa Program grant SEV-2012-0249, the Comunidad de Madrid grant HEPHACOS S2009ESP-1473 and the Spanish Consolider-Ingenio 2010 program CPAN CSD2007-00042 and EU-COST action MP1210 “The String Theory Universe”. The work was further supported by the JAE-predoc grant JAEPre 2011 00452 (PB) and the ERC Starting Independent Researcher Grant 259133, ObservableString (CSS). Research at IFT is supported by the Spanish MINECO's Centro de Excelencia Severo Ochoa Programme under grant SEV-2012-0249. TO wishes to thank M.M. Fernández for her permanent support. \bibliographystyle{JHEP}
\section{Introduction} Collective behaviours often emerge in systems constituted by individuals capable of motion and of interaction with their environment (active particles) \cite{bechinger2016active}. While the ensuing complex behaviours seem to pose a daunting task for our understanding, their defining characteristics can be captured by simple models, where complex collective behaviours emerge even if each active particle follows very simple rules, senses only its immediate surroundings, and directly interacts only with nearby particles, without having any knowledge of an overall plan. In particular, it has been found that systems of interacting active particles give rise to robust and universal emergent behaviours occurring at many different length and time scales, with classical examples ranging from swarms of bacteria and synthetic microswimmers to schools of fish, flocks of birds and human crowds \cite{vicsek2012collective,gautrais2012deciphering}. The first model for collective motion was introduced to describe the swarm behaviour of animals at the macroscale. In 1987, Reynolds introduced the Boids model to simulate the aggregate motion of flocks of birds, herds of land animals, or schools of fish within computer graphics applications \cite{reynolds1987flocks}. Then, in 1995, Vicsek and co-authors introduced the Vicsek model as a special case \cite{vicsek1995novel}, where a swarm is modeled by a collection of active particles moving with constant speed and tending to align with the average direction of motion of the particles in their local neighborhood. Later, several additional models have been introduced to capture the properties of collective behaviours \cite{chate2008modeling, barberis2016large, mijalkov2016engineering, matsui2017noise, cambui2017finite}. Several systems featuring complex collective behaviours have also been realised experimentally. Motile bacteria have been shown to form vortices and other spatial patterns \cite{czirok1996formation}. Artificial active particles have been shown to cluster and form crystal-like structures \cite{palacci2013living, theurkauff2012dynamic, ginot2015nonequilibrium}. Beyond its intrinsic scientific interest, a deep understanding of collective behaviours can contribute to applications in, e.g., swarm robotics, autonomous vehicles and high-accuracy cancer treatment \cite{wang2012nano, brambilla2013swarm, bechinger2016active}. In fact, the models employed to describe collective behaviours have also been fruitfully exploited in order to build artificial systems with robust behaviours arising from interactions between very simple constituent agents \cite{palacci2013living, rubenstein2014programmable, werfel2014designing}. Here, we introduce a novel simple model with short-range aligning interactions between the particles and we study it numerically as a function of the level of orientational noise. First, we study systems consisting only of active particles and we find that there is a transition from a gaseous state at high noise levels to the emergence of metastable clusters at low noise levels. Then, we also introduce passive particles to model the presence of obstacles in the environment and we find a transition towards the emergence of a network of metastable channels, along which the active particles can move, as the noise level is decreased. \section{Active Systems} We consider a system of $N$ active particles moving continuously in a square arena with periodic boundary conditions.
The particles are hard spheres with radius $R$ and move with velocity ${\bf v}_n$, where $n=1,...,N$ indicates the particle number. The speed of the particles is assumed constant, i.e. $|{\bf v}_n| \equiv v$. We will perform Brownian dynamics simulations of these particles \cite{volpe2014simulation} where their positions, ${\bf x}_n$, and directions, $\theta_n$, are updated at each time step $t$ according to \begin{equation} \label{eq:update} \left\{ \begin{array}{rcl} {\bf x}_n(t+1) &=& {\bf x}_n(t) + {\bf v}_n(t+1) \\ \theta_n(t+1) &=& \theta_n(t) + T_n + \xi \end{array} \right. \end{equation} where $\xi$ is a uniformly distributed white-noise term in the interval $[-\eta/2, \eta/2]$ and $T_n$ is a torque term that will be described in the following paragraph. When the volume-exclusion condition is violated so that two particles partially overlap, the particles are separated by moving each one half the overlap distance along their centre-to-centre axis. \begin{figure} \centering \includegraphics[width=\textwidth]{figure1.eps} \caption{{\bf Particle interactions.} (a) Torque exerted on the black particle by the red particle: the black arrows denote the direction of motion of the black particle, and the red arrows denote the direction and magnitude of the torque exerted by the red particle. (b)-(i) Particle behaviours for simple configurations in absence of noise ($\eta=0$). The grey (black) circles denote the initial (final) position of the particles. The arrows represent the initial direction of motion of the particles. (b) Two particles oriented towards each other come together and form a 2-particle cluster. (c) Two particles oriented in opposite directions start at contact and move away from each other. (d) Two particles oriented in the same direction continue moving in that direction along a straight line. (e) Two particles with opposite directions turn until they align and move away from each other along a straight line. (f) Two particles turn towards each other until contact and form a 2-particle cluster. (g) Three particles initially oriented towards a single point move to form a 3-particle cluster. (h) Similar to (g) but with different initial velocities. (i) In a 4-particle cluster, one particle oriented at an angle of $90^\circ$ away from the centre moves away, while the remaining three particles form a 3-particle cluster.} \label{fig1} \end{figure} The particles exert a torque on each other (Figure~\ref{fig1}a) so that the torque exerted on particle $n$ by all other particles is \begin{equation} \label{eq:torque} T_n = T_0 \sum_{i \neq n} \frac{\hat{\bf v}_n \cdot \hat{\bf r}_{ni}}{r_{ni}^2}\; \hat{\bf v}_n \times \hat{\bf r}_{ni} \cdot \hat{\bf e}_z, \end{equation} where $T_0$ is a prefactor related to the strength of the interaction, $\hat{\bf v}_n$ is the unit vector representing the direction of motion of particle $n$, $\hat{\bf r}_{ni}$ is the unit vector representing the direction from particle $n$ to particle $i$, $r_{ni}$ is the distance between particle $n$ and particle $i$, and $\hat{\bf e}_z$ is the unit vector in the direction perpendicular to the plane where the particles move. Figure~\ref{fig1}a illustrates the effect of this torque: a moving particle (black circle) tends to turn towards another particle (red circle) when the latter is in front, and tends to turn away when the other particle is behind. In the following, we will always set $R=1$, $v=0.05$, and $T_0=1$.
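For concreteness, one update of Equations~\eqref{eq:update} and \eqref{eq:torque} can be condensed into a short numerical sketch. The following \texttt{Python}/\texttt{numpy} implementation is ours and purely illustrative; the arena side \texttt{L}, the default noise level, and the single relaxation pass used to resolve overlaps are our own choices:
\begin{verbatim}
import numpy as np

def step(x, theta, v=0.05, R=1.0, T0=1.0, eta=0.2*np.pi, L=50.0):
    """One Brownian-dynamics update: aligning torque (Eq. 2), uniform
    angular noise in [-eta/2, eta/2], constant speed, periodic box."""
    N = len(theta)
    vhat = np.column_stack((np.cos(theta), np.sin(theta)))
    T = np.zeros(N)
    for n in range(N):
        r = x - x[n]
        r -= L*np.round(r/L)                 # minimum-image convention
        d = np.hypot(r[:, 0], r[:, 1])
        d[n] = np.inf                        # exclude self-interaction
        rhat = r/d[:, None]
        dot = rhat @ vhat[n]                 # \hat v_n . \hat r_ni
        crs = vhat[n, 0]*rhat[:, 1] - vhat[n, 1]*rhat[:, 0]
        T[n] = T0*np.sum(dot*crs/d**2)       # Eq. (2)
    theta = theta + T + np.random.uniform(-eta/2, eta/2, N)
    x = x + v*np.column_stack((np.cos(theta), np.sin(theta)))
    # volume exclusion: separate overlapping pairs by half the overlap
    for n in range(N):
        r = x - x[n]
        r -= L*np.round(r/L)
        d = np.hypot(r[:, 0], r[:, 1])
        d[n] = np.inf
        for i in np.where(d < 2*R)[0]:
            push = 0.5*(2*R - d[i])*r[i]/d[i]
            x[i] += push
            x[n] -= push
    return x % L, theta
\end{verbatim}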
Despite the simplicity of this interaction, it can lead to complex behaviours, including the formation of metastable clusters. Figures~\ref{fig1}b-i show the deterministic motion of particles arranged in simple configurations in the absence of noise ($\eta=0$). Two particles oriented towards each other move along the straight line joining their centres until they come in contact and form a 2-particle cluster (Figure~\ref{fig1}b). If they are oriented away from each other, they move radially away from each other (Figure~\ref{fig1}c). If they are oriented in the same direction, they translate along that direction (Figure~\ref{fig1}d). Two particles with parallel and opposite directions move away from each other (Figure~\ref{fig1}e) or form a 2-particle cluster (Figure~\ref{fig1}f) depending on their initial position. Three particles oriented towards each other form a 3-particle cluster (Figures~\ref{fig1}g-h). In a 4-particle cluster, when one particle gets oriented at an angle of $90^\circ$ away from the centre, it moves away, while the remaining three particles form a 3-particle cluster (Figure~\ref{fig1}i). \begin{figure} \centering \includegraphics[width=\textwidth]{figure2.eps} \caption{{\bf Clustering behaviour.} Steady-state behaviour for a system of $N=100$ active particles for different noise levels: (a) $\eta = 2\pi$, (b) $\eta = 0.2 \pi$ and (c) $\eta = 0.02 \pi$. Particles that belong to a cluster are colour-coded based on the size of the cluster: 2 (red), 3 (orange), 4 (light green), 5 (green), 6 (dark green) and 7 (blue). See also Movie 1 in the supplementary materials.} \label{fig2} \end{figure} Figure~\ref{fig2} shows some snapshots of the steady-state behaviour of the system for different intensities of noise (see also Supplementary Movie 1). For high noise intensity ($\eta=2\pi$, Figure~\ref{fig2}a), the directions of the particles are randomised at each time step and therefore clusters form only rarely, typically comprise only two particles and have a very short lifetime. In this regime the particles interact only when at contact through excluded-volume interactions and are therefore in a gaseous phase. At lower noise levels ($\eta=0.2\pi$, Figure~\ref{fig2}b), persistent clusters start to form, while several free particles are still present. At even lower noise levels ($\eta=0.02\pi$, Figure~\ref{fig2}c), larger clusters form, while almost all particles belong to a cluster. \begin{figure} \centering \includegraphics[width=\textwidth]{figure3.eps} \caption{{\bf Clustering transition.} (a) Clustering coefficient (black solid line) as a function of noise ($\eta$) corresponding to the mean value obtained from 10 simulations of a system of $N=100$ active particles for $10^4$ time steps; the shaded area represents one standard deviation around the mean. The vertical orange dashed line represents the characteristic noise level $\eta_{\rm c} = 0.25$ for our parameters (Equation~\eqref{eq:etac}), at which the maximum torque exerted on a particle by another at contact balances the noise. The purple dashed line represents the average clustering coefficient when $T_0 = 0$. (b) Cluster size distribution as a function of $\eta$. The colour code corresponds to the sizes of the clusters shown as insets on the right.} \label{fig3} \end{figure} The transition from a gaseous phase to a clustering phase can be seen clearly in Figure~\ref{fig3}a, where the clustering coefficient (defined as the fraction of particles belonging to a cluster) is shown as a function of $\eta$.
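The clustering coefficient can be extracted from a configuration by taking clusters to be the connected components of the particle contact graph. A minimal sketch (ours; the contact tolerance of $2.05R$ is an arbitrary choice):
\begin{verbatim}
import numpy as np

def clustering_coefficient(x, R=1.0, L=50.0):
    """Fraction of particles belonging to a cluster (component of the
    contact graph with at least two particles), periodic boundaries."""
    N = len(x)
    r = x[:, None, :] - x[None, :, :]
    r -= L*np.round(r/L)
    d = np.sqrt((r**2).sum(-1))
    adj = (d < 2.05*R) & ~np.eye(N, dtype=bool)
    label = -np.ones(N, dtype=int)
    for seed in range(N):                 # label components by search
        if label[seed] >= 0:
            continue
        stack, label[seed] = [seed], seed
        while stack:
            i = stack.pop()
            for j in np.where(adj[i] & (label < 0))[0]:
                label[j] = seed
                stack.append(j)
    sizes = np.bincount(label, minlength=N)
    return np.mean(sizes[label] >= 2)
\end{verbatim}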
This transition is related to the balance between the noise level in the system ($\eta$) and the aligning torque between the particles ($T_0$). The maximum noise is $\eta_{\rm max} = \eta/2$. The maximum torque occurs when the particles are at contact and, from Equation~\eqref{eq:torque}, it is $T_{\rm max} = T_0/(8R^2)$. We can expect this transition to occur when $\eta_{\rm max} \approx T_{\rm max}$, which happens at the characteristic noise level \begin{equation}\label{eq:etac} \eta_{\rm c} = {T_0 \over 4R^2}, \end{equation} which is $\eta_{\rm c} = 0.25$ for the parameters of our simulations. This value is represented by the vertical orange dashed line in Figure~\ref{fig3}a and indeed it characterises well the transition towards the clustered state. The purple dashed line in Figure~\ref{fig3}a represents the average clustering coefficient for the case of particles interacting only through steric interactions ($T_0=0$), which lies significantly below the curve with aligning interactions ($T_0=1$). Figure~\ref{fig3}b shows the relative abundance of the various cluster sizes as a function of $\eta$. As we have already noted when discussing Figure~\ref{fig2}, as the noise level decreases, at the beginning only small clusters appear, starting with 2-particle clusters, followed by 3-particle clusters and then 4-particle clusters. At even lower noise levels, larger clusters become significantly more frequent and, as a consequence, the frequency of smaller clusters decreases. \begin{figure} \centering \includegraphics[width=\textwidth]{figure4.eps} \caption{{\bf Cluster lifetimes.} Distribution of the lifetime of clusters of different sizes (shown as insets) for $\eta = 0.03\pi$.} \label{fig4} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figure5.eps} \caption{{\bf Cluster transitions.} The most common transitions between clusters of different sizes are represented by the arrows. The most stable clusters are those of size 2, 3, 6 and 7; in particular, they appear to be stable in the absence of external perturbations, i.e. collisions with other particles. Two-way transitions can often be observed between clusters of sizes 3-4 and 7-8, as shown by the double arrows between them.} \label{fig5} \end{figure} Figures~\ref{fig4} and \ref{fig5} explore the evolution of the clusters as a function of time. Clusters tend to form and grow by collision with single particles and other clusters. These collisions tend to restructure the clusters, sometimes leading to a break-up into smaller clusters (especially for clusters with more than 6 particles), as can be seen in Supplementary Movie 1. Figure~\ref{fig4} shows the lifetime of the clusters, which is defined as the number of time steps a cluster exists with a given number of particles until it either loses some of its particles or acquires extra ones. Smaller clusters (sizes 2, 3 and 4) typically have longer lifetimes, while larger clusters tend to have much shorter lifetimes. Figure~\ref{fig5} shows the most common transitions between clusters. As already observed regarding their lifetimes, the most stable clusters are the ones with 2, 3 and 4 particles. The 2-particle clusters tend to acquire one extra particle and become 3-particle clusters, and only extremely rarely dissolve into two single particles. The 3-particle clusters tend to acquire one extra particle to form 4-particle clusters or to collide with other clusters to form larger clusters (typically, 5-particle or 6-particle clusters).
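Statistics such as those of Figures~\ref{fig4} and \ref{fig5} can be gathered with a simple bookkeeping scheme. In the sketch below (ours and purely illustrative) a cluster is represented by the frozenset of its particle indices, and its lifetime ends as soon as its membership changes:
\begin{verbatim}
from collections import defaultdict

def track_lifetimes(frames):
    """frames: iterable over time of lists of clusters, each cluster a
    frozenset of particle indices (e.g. from the component labels above).
    Returns a dict mapping cluster size to the observed lifetimes."""
    alive = {}                    # cluster -> steps survived so far
    hist = defaultdict(list)      # size -> lifetimes
    for clusters in frames:
        current = set(clusters)
        for c in list(alive):
            if c not in current:  # membership changed: record lifetime
                hist[len(c)].append(alive.pop(c))
        for c in current:
            alive[c] = alive.get(c, 0) + 1
    for c, t in alive.items():    # flush clusters alive at the end
        hist[len(c)].append(t)
    return hist
\end{verbatim}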
The 4-particle clusters tend to decay by losing one of their particles, becoming 3-particle clusters. Larger clusters tend to decay by breaking down into smaller clusters. \section{Active-Passive Mixed Systems} It is interesting to explore the mutual influence between active and passive particles in hybrid systems because several natural and artificial systems feature both kinds of particles. In fact, such mixtures have already been explored in several works both experimentally \cite{wu2000particle, koumakis2013targeted, kummel2015formation, pincce2016disorder, argun2016non} and theoretically \cite{gregoire2001active, schwarz2012phase, stenhammar2015activity}. We will therefore extend the model presented in the previous section by also including $M$ passive particles, which interact with all other (active and passive) particles through volume exclusion and are subject to translational diffusion. We model this diffusion by adding Gaussian white noise to their positions at each time step and set the standard deviation of this noise to $\sigma = 0.1 v$ so that during each time step their displacement is much smaller than that of the active particles. The motion of the active particles is still updated using Eq.~\eqref{eq:update}, with the only difference that now the torque is given by \begin{equation} T_n = T_0 \sum_{i \neq n} \frac{\hat{\bf v}_n \cdot \hat{\bf r}_{ni}}{r_{ni}^2} \; \hat{\bf v}_n \times \hat{\bf r}_{ni} \cdot \hat{\bf e}_z - T_0 \sum_{m} \frac{\hat{\bf v}_n \cdot \hat{\bf r}_{nm}}{r_{nm}^2} \; \hat{\bf v}_n \times \hat{\bf r}_{nm} \cdot \hat{\bf e}_z , \end{equation} where $m = 1,...,M$ is the index of the passive particles and ${\bf r}_{nm}$ is the relative position vector between particle $n$ and obstacle $m$. The sign of the torque due to the passive particles is negative so that the active particles tend to avoid the passive ones. This kind of behaviour is observed, for example, in the motion of people across a stationary crowd \cite{helbing2005self} and in the motion of microswimmers in the presence of obstacles \cite{takagi2014hydrodynamic, spagnolie2015geometric}. At low concentrations of passive particles ($M < N$) and at low packing fractions, the overall qualitative behaviour of the active particles is unaffected, so that the active particles still cluster as described in the previous section. \begin{figure} \centering \includegraphics[width=\textwidth]{figure6.eps} \caption{{\bf Metastable channel formation.} Simulation of a system of 20 active particles (red circles) and 900 passive particles (grey circles) for different levels of noise: (a) $\eta = 2\pi$, (b) $\eta = \pi$, (c) $\eta = 0.5 \pi$ and (d) $\eta = 0.03\pi$. From left to right, the plots correspond to time steps $t = 25\,000$, $50\,000$, $75\,000$ and $100\,000$. The blue shades represent the trails left by the passage of the active particles over the preceding $25\,000$ time steps. In (d), thanks to the low noise level, metastable channels are opened and are stabilized by the active particles. See also Movie 2 in the supplementary materials.} \label{fig6} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figure7.eps} \caption{{\bf Mean square displacement (MSD) of active particles in the presence of passive particles.} MSD of active particles in the presence of a background of passive particles in the conditions shown in Figure~\ref{fig6} as a function of the noise level ($\eta$).
The MSDs feature a transition from ballistic motion at small times ($\mathrm{MSD}(\tau) \propto \tau^2$ for small $\tau$) to diffusive motion at long times ($\mathrm{MSD}(\tau) \propto \tau$ for large $\tau$). The crossover time between the two regimes increases as the noise level decreases.} \label{fig7} \end{figure} More interesting phenomena occur at high packing fractions and high numbers of passive particles compared to the active ones ($M \gg N$). The behaviour of one such system is shown in Figure~\ref{fig6} and Supplementary Movie 2 for various noise levels. At high noise levels ($\eta=2\pi$, Figure~\ref{fig6}a) the motion of the active particles is significantly hindered by the presence of the passive particles and is essentially diffusive, as shown by the mean square displacement (MSD), which has a slope of 1 at all times (green line in Figure~\ref{fig7}). The active particles compress the passive ones, creating some voids within the background of passive particles, within which they are effectively confined. As a consequence, the active particles have few chances of encountering each other and forming clusters. This behaviour is similar to that observed in experiments with microswimmers in a bath of passive particles, where even the presence of very few active particles was found to help the crystallisation of the system \cite{kummel2015formation}. Decreasing the noise level to $\eta=\pi$ (Figure~\ref{fig6}b), the active particles are able to move more, but remain confined within depleted regions created in the background of passive particles. Even though they can perform some straight runs, they quickly get blocked by the passive particles and their movement quickly becomes diffusive. This is reflected by the MSD shown by the purple line in Figure~\ref{fig7}, which after a brief superdiffusive stage at short times (${\rm MSD}(\tau) \propto \tau^2$ for small $\tau$) quickly becomes diffusive (${\rm MSD}(\tau) \propto \tau$ for large $\tau$). Also in this case, the passive particles present in the background prevent encounters between active particles and the formation of clusters. A similar behaviour takes place at even lower noise levels ($\eta=0.5\pi$, Figure~\ref{fig6}c). Again, the motion is directed only for short time scales and quickly becomes diffusive (see the corresponding MSD plotted by the yellow line in Figure~\ref{fig7}). However, one can now observe the formation of some proto-channels, which are highlighted by the shaded blue areas representing the trails left by the passage of the active particles over the $25\,000$ time steps preceding the one represented in the figure. These proto-channels permit the active particles to occasionally encounter each other and form some 2-particle clusters. Decreasing the noise level further to $\eta = 0.03 \pi$ (Figure~\ref{fig6}d) leads to the formation of fully-fledged channels, whose presence is clearly shown by the blue shaded areas. These are open areas free of passive particles where the active particles can propagate unhindered. Comparing the various panels in Figure~\ref{fig6}d, which correspond to different times and represent trails from non-overlapping time frames, one can see that these channels are quite stable over time. The reason for this is that, once a channel is opened by some active particles, additional active particles use it, leading to its dynamic stabilisation.
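The MSD curves of Figure~\ref{fig7} can be reproduced from the trajectories with a standard time-averaged estimator. A minimal sketch (ours), assuming unwrapped positions, i.e. accumulated displacements rather than coordinates wrapped into the periodic box:
\begin{verbatim}
import numpy as np

def msd(x_traj, lags):
    """Time-averaged MSD; x_traj has shape (T, N, 2), unwrapped.
    On log-log axes the slope is ~2 (ballistic) at small lags and ~1
    (diffusive) at large lags; the crossover grows as eta decreases."""
    return np.array([np.mean(np.sum((x_traj[l:] - x_traj[:-l])**2, axis=-1))
                     for l in lags])

lags = np.unique(np.logspace(0, 4, 30).astype(int))  # 1 ... 10^4 steps
\end{verbatim}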
This transition towards the formation of channels is driven by the increase of the characteristic length of the directed runs of the active particles in the background of passive particles. As can be seen from the blue line in Figure~\ref{fig7}, the MSD is now ballistic over a longer time range. Once the channels are open, the particles can encounter each other and form some small (2-particle and 3-particle) clusters. \section{Conclusions and Outlook} We have introduced a novel simple model for interacting active particles that leads to the formation of metastable clusters and, in the presence of a background of passive particles, to the opening of metastable channels. Thanks to its simplicity and to the fact that it relies only on knowledge of the positions of the surrounding particles, this model can be easily implemented in artificial systems based on robots equipped with sensors. Furthermore, by changing the level of noise in real time, it would be possible to switch between different behaviours, e.g. between clustering and dispersion. \section{Acknowledgements} This work was partially supported by the ERC Starting Grant ComplexSwimmers (grant number 677511). \section*{References}
\section*{Introduction} \label{sec:Introduction} Let $X$ be a \linedef{cubic fourfold}, i.e., a smooth cubic hypersurface $X \subset \mathbb{P}^5$ over the complex numbers. Determining the rationality of $X$ is a classical question in algebraic geometry. Some classes of rational cubic fourfolds have been described by Fano \cite{fano}, Tregub \cite{tregub}, \cite{tregub:new}, and Beauville--Donagi \cite{beauville-donagi}. In particular, \linedef{pfaffian} cubic fourfolds, defined by pfaffians of skew-symmetric $6\times 6$ matrices of linear forms, are rational. Equivalently, a cubic fourfold is pfaffian if and only if it contains a quintic del Pezzo surface, see \cite[Prop.~9.1(a)]{beauville:determinantal}. Hassett \cite{hassett:special-cubics} describes, via lattice theory, divisors $\mathcal{C}_d$ in the moduli space $\mathcal{C}$ of cubic fourfolds. In particular, $\mathcal{C}_{14}$ is the closure of the locus of pfaffian cubic fourfolds and $\mathcal{C}_8$ is the locus of cubic fourfolds containing a plane. Hassett \cite{hassett:rational_cubic} identifies countably many divisors of $\mathcal{C}_8$ consisting of rational cubic fourfolds with trivial \emph{Clifford invariant}. Nevertheless, it is expected that the general cubic fourfold (and the general cubic fourfold containing a plane) is nonrational. At present, however, not a single cubic fourfold is provably nonrational. \smallskip In this work, we study rational cubic fourfolds in $\mathcal{C}_8\cap\mathcal{C}_{14}$ with nontrivial Clifford invariant, hence not contained in the divisors of $\mathcal{C}_8$ described by Hassett. Let $A(X)$ be the lattice of algebraic 2-cycles on $X$ up to rational equivalence and $d_X$ the discriminant of the intersection form on $A(X)$. Our main result is a complete description of the irreducible components of $\mathcal{C}_8 \cap \mathcal{C}_{14}$. \begin{thmintro} \label{thm:main} There are five irreducible components of $\mathcal{C}_8 \cap \mathcal{C}_{14}$, indexed by the discriminant $d_X \in \{21,29,32,36,37\}$ of a general member. The Clifford invariant of a general cubic fourfold $X$ in $\mathcal{C}_8 \cap \mathcal{C}_{14}$ is trivial if and only if $d_X$ is odd. The pfaffian locus is dense in the $d_X=32$ component. \end{thmintro} In particular, the general cubic fourfold in the $d_X = 32$ component of $\mathcal{C}_8 \cap \mathcal{C}_{14}$ is rational and has nontrivial Clifford invariant, thus answering a question of Hassett~\cite[Rem.~4.3]{hassett:rational_cubic}. We also provide a geometric description of this component:\ its general member has a \emph{tangent conic} to the sextic degeneration curve of the associated quadric surface bundle (see Proposition~\ref{prop:tau0}). At the same time, we also answer a question of E.\ Macr\`\i\ and P.\ Stellari, as these cubic fourfolds also provide a nontrivial corroboration of Kuznetsov's derived categorical conjecture on the rationality of cubic fourfolds containing a plane. \smallskip Kuznetsov \cite{kuznetsov:cubic_fourfolds} establishes a semiorthogonal decomposition of the bounded derived category $$ \mathrm{D}^{\mathrm{b}}(X) = \langle \cat{A}_X, \sheaf{O}_X, \sheaf{O}_X(1),\sheaf{O}_X(2) \rangle. $$ The category $\cat{A}_X$ has the remarkable property of being a 2-Calabi--Yau category, essentially a noncommutative deformation of the derived category of a K3 surface. 
Based on evidence from known cases as well as general categorical considerations, Kuznetsov conjectures that the category $\cat{A}_X$ contains all the information about the rationality of $X$. \begin{conjecture*}[Kuznetsov] A complex cubic fourfold $X$ is rational if and only if there exists a K3 surface $S$ and an equivalence $\cat{A}_X \cong \mathrm{D}^{\mathrm{b}}(S)$. \end{conjecture*} If $X$ contains a plane, a further geometric description of $\cat{A}_X$ is available. Indeed, $X$ is birational to the total space of a quadric surface bundle $\wt{X} \to \mathbb{P}^2$ by projecting from the plane. We assume that the degeneration divisor is a smooth sextic curve $D \subset \mathbb{P}^2$. The discriminant double cover $S \to \mathbb{P}^2$ branched along $D$ is then a K3 surface of degree 2 and the even Clifford algebra gives rise to a Brauer class $\beta \in {\rm Br}(S)$, called the \linedef{Clifford invariant} of $X$. Via mutations, Kuznetsov \cite[Thm.~4.3]{kuznetsov:cubic_fourfolds} establishes an equivalence $\cat{A}_X \cong \mathrm{D}^{\mathrm{b}}(S,\beta)$ with the bounded derived category of $\beta$-twisted sheaves on $S$. By classical results in the theory of quadratic forms (see \cite[Thm.~2.24]{ABB:fibrations}), $\beta$ is trivial if and only if the quadric surface bundle $\wt{X} \to \mathbb{P}^2$ has a rational section. In particular, if $\beta \in {\rm Br}(S)$ is trivial then $X$ is rational and Kuznetsov's conjecture is verified. This should be understood as the \emph{trivial case} of Kuznetsov's conjecture for cubic fourfolds containing a plane. \begin{conjecture*}[Kuznetsov ``containing a plane''] Let $X$ be a smooth complex cubic fourfold containing a plane, $S$ the associated K3 surface of degree 2, and $\beta \in {\rm Br}(S)$ the Clifford invariant. Then $X$ is rational if and only if there exists a K3 surface $S'$ and an equivalence $\mathrm{D}^{\mathrm{b}}(S,\beta) \cong \mathrm{D}^{\mathrm{b}}(S')$. \end{conjecture*} To date, this variant of Kuznetsov's conjecture is only known to hold in the trivial case (where $\beta$ is trivial and $S = S'$). E.~Macr\`\i\ and P.~Stellari asked if there was a class of smooth rational cubic fourfolds containing a plane that verifies this variant of Kuznetsov's conjecture in a nontrivial way, i.e., where $\beta$ is not trivial and there exists a different K3 surface $S'$ and an equivalence $\mathrm{D}^{\mathrm{b}}(S,\beta) \cong \mathrm{D}^{\mathrm{b}}(S')$. The existence of such fourfolds is not \emph{a priori} clear:\ while a general cubic fourfold containing a plane has nontrivial Clifford invariant, the existence of \emph{rational} such fourfolds was only intimated in the literature. \begin{thmintro} \label{thm:2} Let $X$ be a general member of the $d_X=32$ component of $\mathcal{C}_8 \cap \mathcal{C}_{14}$, i.e., a smooth pfaffian cubic fourfold $X$ containing a plane with nontrivial Clifford invariant $\beta \in {\rm Br}(S)$. There exists a K3 surface $S'$ of degree 14 and a nontrivial twisted derived equivalence $\mathrm{D}^{\mathrm{b}}(S,\beta) \cong \mathrm{D}^{\mathrm{b}}(S')$. \end{thmintro} The outline of this paper is as follows. In \S\ref{sec:lattices}, we study Hodge theoretic and geometric conditions for the nontriviality of the Clifford invariant (see Propositions~\ref{prop:nontriviality} and \ref{prop:conic}). In \S\ref{sec:sanity_check}, we analyze the irreducible components of $\mathcal{C}_8 \cap \mathcal{C}_{14}$, proving the first two statements of Theorem~\ref{thm:main}.
We also answer a question of F.\ Charles on cubic fourfolds in $\mathcal{C}_8 \cap \mathcal{C}_{14}$ with trivial Clifford invariant (see Theorem~\ref{theorem:comps} and Proposition~\ref{oddity}). Throughout, we use the work of Looijenga~\cite{looijenga}, Laza~\cite{laza}, and Mayanskiy~\cite{lattices_mayanskiy} on the realizability of lattices of algebraic cycles on a cubic fourfold. In \S\ref{sec:hpd}, we recall some elements of the theory of homological projective duality and prove Theorem~\ref{thm:2}. Finally, in \S\ref{sec:example}, we prove the final statement of Theorem~\ref{thm:main}, that the pfaffian locus is dense in the $d_X=32$ component of $\mathcal{C}_8 \cap \mathcal{C}_{14}$, by exhibiting an explicit point in the intersection. For the verification, we are aided by \texttt{Magma}~\cite{magma}, adapting some of the computational techniques developed in \cite{hassett_varilly:K3}. Throughout, we are guided by Hassett \cite[Rem.~4.3]{hassett:rational_cubic}, who suggests that rational cubic fourfolds containing a plane with nontrivial Clifford invariant ought to lie in $\mathcal{C}_{8} \cap \mathcal{C}_{14}$. While the locus of pfaffian cubic fourfolds is dense in $\mathcal{C}_{14}$, it is not true that the locus of pfaffians containing a plane is dense in $\mathcal{C}_{8} \cap \mathcal{C}_{14}$. In Theorem~\ref{thm:main}, we find a suitable component affirming Hassett's suggestion. \medskip {\small\noindent{\bf Acknowledgments.} Much of this work has been developed during visits of the authors at the Max Planck Institut f\"ur Mathematik in Bonn, Universit\"at Duisburg--Essen, Universit\'e Rennes 1, ETH Z\"urich, and Rice University. The hospitality of each institute is warmly acknowledged. The first and fourth authors are partially supported by NSF grant MSPRF DMS-0903039 and DMS-1103659, respectively. The second author was partially supported by the SFB/TR 45 `Periods, moduli spaces, and arithmetic of algebraic varieties'. The authors would specifically like to thank N.\ Addington, F.\ Charles, J.-L.\ Colliot-Th\'el\`ene, B.\ Hassett, R.\ Laza, M.-A.\ Knus, E.\ Macr\`\i, R.\ Parimala, and V.\ Suresh for many helpful discussions.} \section{Nontriviality criteria for Clifford invariants} \label{sec:lattices} In this section, by means of straightforward lattice-theoretic calculations, we describe a class of cubic fourfolds containing a plane with nontrivial Clifford invariant. If $(H,\inner{})$ is a $\mathbb{Z}$-lattice and $A \subset H$, then the orthogonal complement $A^{\perp} = \{v\in H\,:\,\inner{(v,A)}=0\}$ is a \linedef{saturated} sublattice (i.e., $A^{\perp} = A^{\perp}\otimes_{\mathbb{Z}}\mathbb{Q} \cap H$) and is thus a \linedef{primitive} sublattice (i.e., $H/A^{\perp}$ is torsion free). Denote by $d(H,\inner{})\in\mathbb{Z}$ the \linedef{discriminant}, i.e., the determinant of the Gram matrix. Let $X$ be a smooth cubic fourfold over $\mathbb{C}$. The integral Hodge conjecture holds for $X$ (by \cite{murre}, \cite{zucker}, cf.\ \cite[Thm.~18]{voisin:aspects}) and we denote by $A(X) = H^4(X,\mathbb{Z}) \cap H^{2,2}(X)$ the lattice of integral middle Hodge classes, which are all algebraic. Now suppose that $X$ contains a plane $P$ and let $\pi : \wt{X} \to \mathbb{P}^2$ be the quadric surface bundle defined by blowing up and projecting away from $P$. Let $\sheaf{C}_0$ be the even Clifford algebra associated to $\pi$, cf.\ \cite{kuznetsov:quadrics} or \cite[\S2]{ABB:fibrations}.
Throughout, we always assume that $\pi$ has \linedef{simple degeneration}, i.e., the fibers of $\pi$ have at most isolated singularities. This is equivalent to the condition that $X$ doesn't contain another plane intersecting $P$; see \cite[Lemme~2]{voisin}. This implies that the degeneration divisor $D \subset \mathbb{P}^2$ is a smooth sextic curve, the \linedef{discriminant cover} $f : S \to \mathbb{P}^2$ branched along $D$ is a smooth K3 surface of degree 2, and that $\sheaf{C}_0$ defines an Azumaya quaternion algebra over $S$, cf.\ \cite[Prop.~3.13]{kuznetsov:quadrics}. We refer to the Brauer class $\beta \in {\rm Br}(S)[2]$ of $\sheaf{C}_0$ as the \linedef{Clifford invariant} of $X$. Let $h \in H^2(X,\mathbb{Z})$ be the hyperplane class associated to the embedding $X \subset \mathbb{P}^5$. The \linedef{transcendental} lattice $T(X)$, the \linedef{nonspecial cohomology} lattice $K$, and the \linedef{primitive cohomology} lattice $H^4(X,\mathbb{Z})_0$ are the orthogonal complements (with respect to the cup product polarization $\inner{}_X$) of $A(X)$, $\lattice{h^2, P}$, and $\lattice{h^2}$ inside $H^4(X,\mathbb{Z})$, respectively. Thus $T(X) \subset K \subset H^4(X,\mathbb{Z})_0$. We have that $T(X)=K$ for a very general cubic fourfold, cf.\ the proof of \cite[Prop.~2]{voisin}. There are natural polarized Hodge structures on $T(X)$, $K$, and $H^4(X,\mathbb{Z})_0$ given by restriction from $H^4(X,\mathbb{Z})$. Similarly, let $S$ be a smooth integral projective surface over $\mathbb{C}$ and ${\rm NS}(S) = H^2(S,\mathbb{Z}) \cap H^{1,1}(S)$ its N\'eron--Severi lattice. Let $h_1 \in {\rm NS}(S)$ be a fixed anisotropic class. The \linedef{transcendental} lattice $T(S)$ and the \linedef{primitive cohomology} $H^2(S,\mathbb{Z})_0$ are the orthogonal complements (with respect to the cup product polarization $\inner{}_S$) of ${\rm NS}(S)$ and $\lattice{h_1}$ inside $H^2(S,\mathbb{Z})$, respectively. If $f : S \to \mathbb{P}^2$ is a double cover, then we take $h_1$ to be the class of $f^* \sheaf{O}_{\mathbb{P}^2}(1)$. Let $F(X)$ be the Fano variety of lines in $X$ and $W \subset F(X)$ the divisor consisting of lines meeting $P$. Then $W$ is identified with the relative Hilbert scheme of lines in the fibers of $\pi$. Its Stein factorization $W \mor{p} S \mor{f} \mathbb{P}^2$ displays $W$ as a smooth conic bundle over the discriminant cover. Then the Abel--Jacobi map $$ \Phi : H^4(X,\mathbb{Z}) \to H^2(W,\mathbb{Z}) $$ becomes an isomorphism of $\mathbb{Q}$-Hodge structures $\Phi : H^4(X,\mathbb{Q}) \to H^2(W,\mathbb{Q})(-1)$; see \cite[Prop.~1]{voisin}. Finally, $p : W \to S$ is a smooth conic bundle and there is an injective (see \cite[Lemma~7.28]{voisin:Hodge_I}) morphism of Hodge structures $p^* : H^2(S,\mathbb{Z}) \to H^2(W,\mathbb{Z})$. We recall a result of Voisin \cite[Prop.~2]{voisin}. \begin{prop} \label{prop:voisin-2} Let $X$ be a smooth cubic fourfold containing a plane. Then $\Phi (K) \subset p{}^* H^2(S,\mathbb{Z})_0(-1)$ is a polarized Hodge substructure of index 2. \end{prop} \begin{proof} That $\Phi(K) \subset p{}^* H^2(S,\mathbb{Z})_0$ is an inclusion of index 2 is proved in \cite[Prop.~2]{voisin}. We now verify that the inclusion respects the Hodge filtrations. The Hodge filtration of $\Phi(K) \otimes_{\mathbb{Z}} \mathbb{C}$ is that induced from $H^2(W,\mathbb{C})(-1)$ since $\Phi$ is an isomorphism of $\mathbb{Q}$-Hodge structures. On the other hand, since $p : W \to S$ is a smooth conic bundle, $R^1 p_* \mathbb{C} = 0$. 
Hence $p{}^* : H^2(S,\mathbb{C}) \to H^2(W,\mathbb{C})$ is injective by the Leray spectral sequence and $p{}^* H^{p,2-p}(S) = p{}^* H^2(S,\mathbb{C}) \cap H^{p,2-p}(W)$. Thus the Hodge filtration of $p{}^* H^2(S,\mathbb{C})(-1)$ is induced from $H^2(W,\mathbb{C})(-1)$, and similarly for primitive cohomology. In particular, the inclusion $\Phi (K) \subset p{}^* H^2(S,\mathbb{Z})_0(-1)$ is a morphism of Hodge structures. Finally, by \cite[Prop.~2]{voisin}, we have that $\inner{}_X(x, y) = -\inner{}_S(\Phi(x), \Phi(y))$ for $x,y \in K$, and thus the inclusion also preserves the polarizations. \end{proof} By abuse of notation (of which we are already guilty), for $x \in K$, we will consider $\Phi(x)$ as an element of $p{}^* H^2(S,\mathbb{Z})_0(-1)$ without explicitly mentioning so. \begin{cor} \label{claim} Let $X$ be a smooth cubic fourfold containing a plane. Then $\Phi(T(X)) \subset p{}^* T(S)(-1)$ is a sublattice of index $\epsilon$ dividing 2. In particular, ${\operatorname{rk}}\, A(X) = {\operatorname{rk}}\, {\rm NS}(S) + 1$ and $d(A(X)) = 2^{2(\epsilon-1)} d({\rm NS}(S))$. \end{cor} \begin{proof} By the saturation property, $T(X)$ and $T(S)$ coincide with the orthogonal complement of $A(X) \cap K$ in $K$ and ${\rm NS}(S) \cap H^2(S,\mathbb{Z})_0$ in $H^2(S,\mathbb{Z})_0$, respectively. Now, for $x \in T(X)$ and $a \in {\rm NS}(S)_0$, we have $$ \inner{}_S(\Phi(x),a) = -\frac{1}{2} \Phi(x) . g . p{}^* a = -\frac{1}{2} \inner{}_X(x,{}^t\Phi(g . p{}^* a)) = 0 $$ by \cite[Lemme~3]{voisin} and the fact that ${}^t\Phi(g . p{}^* a) \in A(X)$ (here, $g \in H^2(W,\mathbb{Z})$ is the pullback of the hyperplane class from the canonical grassmannian embedding), which follows since ${}^t\Phi : H^4(W,\mathbb{Z}) \cong H_2(W,\mathbb{Z}) \to H_4(X,\mathbb{Z}) \cong H^4(X,\mathbb{Z})$ preserves the Hodge structure by the same argument as in the proof of Proposition~\ref{prop:voisin-2}. Therefore $\Phi(T(X)) \subset p{}^* T(S)(-1)$. Since $T(X) \subset K$ and $T(S)(-1) \subset H^2(S,\mathbb{Z})_0(-1)$ are saturated (hence primitive) sublattices, an application of the snake lemma shows that $p{}^* T(S)(-1)/\Phi(T(X)) \subset p{}^* H^2(S,\mathbb{Z})_0/\Phi(K) \cong \mathbb{Z}/2\mathbb{Z}$, hence the index of $\Phi(T(X))$ in $p{}^* T(S)(-1)$ divides 2. We now verify the final claims. We have ${\operatorname{rk}}\, K = {\operatorname{rk}}\, H^4(X,\mathbb{Z}) - 2 = {\operatorname{rk}}\, T(X) + {\operatorname{rk}}\, A(X) - 2$ and ${\operatorname{rk}}\, H^2(S,\mathbb{Z})_0 = {\operatorname{rk}}\, H^2(S,\mathbb{Z}) - 1 = {\operatorname{rk}}\, T(S) + {\operatorname{rk}}\, {\rm NS}(S) - 1$ (since $P$, $h^2$, and $h_1$ are anisotropic vectors, respectively), while ${\operatorname{rk}}\, K = {\operatorname{rk}}\, H^2(S,\mathbb{Z})_0$ and ${\operatorname{rk}}\, T(X) = {\operatorname{rk}}\, T(S)$ by Proposition \ref{prop:voisin-2} and the above, respectively. The claim concerning the discriminant then follows by standard lattice theory. \end{proof} Let $Q \in A(X)$ be the class of a fiber of the quadric surface bundle $\pi : \wt{X} \to \mathbb{P}^2$. Then $P + Q = h^2$, see \cite[\S1]{voisin}. \begin{prop} \label{prop:nontriviality} Let $X$ be a smooth cubic fourfold containing a plane $P$. If $A(X)$ has rank 3 and even discriminant (e.g., if the associated K3 surface $S$ of degree 2 has Picard rank 2 and even N\'eron--Severi discriminant) then the Clifford invariant $\beta \in {\rm Br}(S)$ of $X$ is nontrivial.
\end{prop} \begin{proof} The Clifford invariant $\beta \in {\rm Br}(S)$ associated to the quadric surface bundle $\pi : \wt{X} \to \mathbb{P}^2$ is trivial if and only if $\pi$ has a rational section; see \cite[Thm.~6.3]{knus_parimala_sridharan:rank_4} or \cite[2~Thm.~14.1,~Lemma~14.2]{scharlau:book}. Such a section exists if and only if there exists an algebraic cycle $R \in A(X)$ such that $R.Q=1$; see \cite[Thm.~3.1]{hassett:rational_cubic} or \cite[Prop.~4.7]{kuznetsov:cubic_fourfolds}. Suppose that such a cycle $R$ exists and consider the sublattice $\lattice{h^2, Q, R} \subset A(X)$. Its intersection form has Gram matrix \begin{equation} \label{eq:beta_trivial} \begin{array}{cccc} & h^2 & Q & R\\ h^2 & 3 & 2 & x\\ Q & 2 & 4 & 1 \\ R & x & 1 & y \end{array} \end{equation} for some $x,y \in \mathbb{Z}$. The determinant of this matrix is always congruent to 5 modulo 8, so this lattice cannot be a finite index sublattice of $A(X)$, which has even discriminant by hypothesis. Hence no such 2-cycle $R$ exists and thus $\beta$ is nontrivial. Finally, if the associated K3 surface $S$ of degree 2 has Picard rank 2 and even N\'eron--Severi discriminant, then $A(X)$ has rank 3 and even discriminant by Corollary~\ref{claim}. \end{proof} We now provide an explicit geometric condition for the nontriviality of the Clifford invariant, which will be necessary in \S\ref{sec:example}. We say that a cubic fourfold $X$ containing a plane has a \linedef{tangent conic} if there exists a conic $C \subset \mathbb{P}^2$ everywhere tangent to the discriminant curve $D\subset \mathbb{P}^2$ of the associated quadric surface bundle. \begin{prop} \label{prop:conic} Let $X$ be a smooth cubic fourfold containing a plane. Let $S$ be the associated K3 surface of degree 2 and $\beta \in {\rm Br}(S)$ the Clifford invariant. If $X$ has a tangent conic and $S$ has Picard rank 2 then $\beta$ is nontrivial. \end{prop} \begin{proof} Consider the pullback of the cycle class of $C$ to $S$ via the discriminant double cover $f : S\to \mathbb{P}^2$. Then $f^* C$ has two components $C_1$ and $C_2$. The sublattice of the N\'eron--Severi lattice of $S$ generated by $h_1 = f{}^*\sheaf{O}_{\mathbb{P}^2}(1) = (C_1 + C_2)/2$ and $C_1$ has intersection form with Gram matrix $$ \begin{array}{ccc} & h_1 & C_1\\ h_1 & 2 & 2\\ C_1 & 2 & -2 \end{array} $$ having determinant $-8$. As $S$ has Picard rank 2, the entire N\'eron--Severi lattice is in fact generated by $h_1$ and $C_1$ (see \cite[\S2]{elsenhal-janhel:k3} for further details) and we can apply Proposition~\ref{prop:nontriviality} to conclude the nontriviality of the Clifford invariant. \end{proof} \begin{remark} Kuznetsov's conjecture implies that the general cubic fourfold containing a plane (i.e., the associated K3 surface $S$ of degree 2 has Picard rank 1) is not rational. Indeed, in this case there exists no K3 surface $S'$ with $\cat{A}_X \cong \mathrm{D}^{\mathrm{b}}(S')$; see \cite[Prop.~4.8]{kuznetsov:cubic_fourfolds}. Therefore, for any rational cubic fourfold containing a plane, $S$ should have Picard rank at least 2. \end{remark} \section{The Clifford invariant on $\mathcal{C}_8 \cap\mathcal{C}_{14}$} \label{sec:sanity_check} In this section, we first prove that $\mathcal{C}_8 \cap \mathcal{C}_{14}$ has five irreducible components and we describe each of them in lattice theoretic terms.
We then completely analyze the (non)triviality of the Clifford invariant of the general cubic fourfold (i.e., such that $A(X)$ has rank 3) in each irreducible component. One of the components corresponds to cubic fourfolds containing a plane and having a tangent conic (i.e., those considered in Proposition~\ref{prop:conic}), where we already know the nontriviality of the Clifford invariant. Another component corresponds to cubic fourfolds containing two disjoint planes, where we already know the triviality of the Clifford invariant. There are two further components of $\mathcal{C}_8 \cap \mathcal{C}_{14}$ whose general elements have trivial Clifford invariant (see Proposition~\ref{oddity}), answering a question of F.\ Charles. \medskip We recall that a cubic fourfold $X$ is in $\mathcal{C}_8$ or $\mathcal{C}_{14}$ if and only if $A(X)$ has a primitive sublattice $K_8=\lattice{h^2,P}$ or $K_{14}=\lattice{h^2,T}$ having Gram matrix $$ \begin{array}{ccc} & h^2 & P \\ h^2 & 3 & 1 \\ P & 1 & 3 \end{array} \qquad \text{or} \qquad \begin{array}{ccc} & h^2 & T \\ h^2 & 3 & 4 \\ T & 4 & 10 \end{array} $$ respectively. This follows from the definition of $\mathcal{C}_d$, together with the fact that for any $d \not\equiv 0 \bmod 9$ there is a unique lattice (up to isomorphism) of rank 2 that represents 3 and has discriminant $d$. Thus a cubic fourfold $X$ in $\mathcal{C}_8 \cap \mathcal{C}_{14}$ has a sublattice $\lattice{h^2, P, T} \subset A(X)$ with Gram matrix \begin{equation} \label{eq:hypothetical} \begin{array}{cccc} & h^2 & P & T\\ h^2 & 3 & 1 & 4\\ P & 1 & 3 & \tau \\ T & 4 & \tau & 10 \end{array} \end{equation} for some $\tau \in \mathbb{Z}$ depending on $X$. There may be \emph{a priori} restrictions on the possible values of $\tau$. Denote by $A_\tau$ the lattice of rank 3 whose bilinear form has Gram matrix \eqref{eq:hypothetical}. We will write $\mathcal{C}_\tau = \mathcal{C}_{A_\tau} \subset \mathcal{C}$ for the locus of smooth cubic fourfolds such that there is a primitive embedding $A_\tau \subset A(X)$ of lattices preserving $h^2$. If nonempty, each $\mathcal{C}_\tau$ is a subvariety of codimension 2 by a variant of the proof of \cite[Thm.~3.1.2]{hassett:special-cubics}. We will use the work of Laza~\cite{laza}, Looijenga~\cite{looijenga}, and Mayanskiy~\cite[Thm.~6.1,~Rem.~6.3]{lattices_mayanskiy} to classify exactly which values of $\tau$ are supported by cubic fourfolds. \begin{theorem} \label{theorem:comps} The irreducible components of $\mathcal{C}_8 \cap \mathcal{C}_{14}$ are the subvarieties $\mathcal{C}_\tau$ for $\tau \in \{-1,0,1,2,3\}$. Moreover, the general cubic fourfold $X$ in $\mathcal{C}_\tau$ satisfies $A(X)\cong A_\tau$. \end{theorem} \begin{proof} By construction, $\mathcal{C}_8 \cap \mathcal{C}_{14}$ is the union of $\mathcal{C}_\tau$ for all $\tau \in \mathbb{Z}$. First we decide for which values of $\tau$ the locus $\mathcal{C}_\tau$ can be nonempty. If $X$ is a smooth cubic fourfold, then $A(X)$ is positive definite by the Riemann bilinear relations. Hence, to be realized as a sublattice of some $A(X)$, the lattice $A_\tau$ must be positive definite, which, by Sylvester's criterion, is equivalent to $A_\tau$ having positive discriminant. As $d(A_\tau)=-3\tau^2+8\tau+32$, the only values of $\tau$ giving a positive discriminant are $\{-2,-1,0,1,2,3,4\}$.
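Both determinant computations used in this section are immediate to reproduce. The following \texttt{sympy} sketch (ours; \texttt{Rmat} denotes the Gram matrix \eqref{eq:beta_trivial} from the proof of Proposition~\ref{prop:nontriviality}) recovers $d(A_\tau)$, the values of $\tau$ with positive discriminant, and the congruence modulo 8:
\begin{verbatim}
import sympy as sp

tau, x, y = sp.symbols('tau x y', integer=True)

# Gram matrix (eq:hypothetical) and its discriminant d(A_tau)
A = sp.Matrix([[3, 1, 4], [1, 3, tau], [4, tau, 10]])
print(sp.expand(A.det()))                    # -3*tau**2 + 8*tau + 32
print([t for t in range(-10, 11) if A.det().subs(tau, t) > 0])
# [-2, -1, 0, 1, 2, 3, 4]

# Gram matrix (eq:beta_trivial): determinant is 5 mod 8 for integer x, y
Rmat = sp.Matrix([[3, 2, x], [2, 4, 1], [x, 1, y]])
print(sp.expand(Rmat.det()))                 # -4*x**2 + 4*x + 8*y - 3
print({int(Rmat.det().subs({x: a, y: b})) % 8
       for a in range(-5, 6) for b in range(-5, 6)})      # {5}
\end{verbatim}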
Then, we prove that $\mathcal{C}_\tau$ is empty for $\tau=-2,4$ by demonstrating \emph{roots} (i.e., primitive vectors of norm 2) in $A_{\tau,0} = \lattice{h^2}^{\perp}$ (see \cite[\S4~Prop.~1]{voisin}, \cite[\S2]{looijenga}, or \cite[Def.~2.16]{laza} for details on roots). Indeed, the vectors $(1, -3, 0)$ and $(0, -4, 1)$ form a basis for $A_{\tau,0} \subset A_\tau$; for $\tau=-2$, we find short roots $(-2,2,1)$ and $(2,-10,1)$; for $\tau=4$, we find short roots $\pm(1,1,-1)$. Hence $\mathcal{C}_\tau$ is possibly nonempty only for $\tau \in \{-1,0,1,2,3\}$. The corresponding discriminants $d(A_\tau)$ are $\{21,32,37,36,29\}$. For the remaining values of $\tau$, we prove that $\mathcal{C}_\tau$ is nonempty. To this end, we verify conditions 1)--6) of \cite[Thm.~6.1]{lattices_mayanskiy}, proving that $A_\tau = A(X)$ for some cubic fourfold $X$. Condition 1) is true by definition. For condition 2), letting $v = (x,-3x-4y,y) \in A_{\tau,0}$ we see that \begin{equation} \label{eq:roots} \inner{(v,v)}= 2\bigl(12 x^2 + (36 - 3 \tau) x y + (29 - 4 \tau) y^2\bigr) \end{equation} is even. For condition 5), letting $w = (x,y,z) \in A_\tau$, we compute that \begin{equation} \label{eq:alt_form} \inner{(h^2,w)}^2 - \inner{(w,w)} = 2\bigl( 3 x^2 - y^2 + z^2 + 2 x y + 8 x z + (4-\tau) y z \bigr) \end{equation} is even. For conditions 3)--4), given each of the five values of $\tau$, we use standard Diophantine techniques to prove the nonexistence of short and long roots of \eqref{eq:roots}. Finally, for condition 6), let $q_{K_\tau} : A^*_\tau/A_\tau \to \mathbb{Q}/2\mathbb{Z}$ be the discriminant form of \eqref{eq:alt_form}, restricted to the discriminant group $A^*_\tau/A_\tau$ of the lattice $A_\tau$. Appealing to Nikulin~\cite[Cor.~1.10.2]{nikulin}, it suffices to check that the signature satisfies ${\operatorname{sgn}}(q_{K_\tau}) \equiv 0 \bmod 8$; cf.\ \cite[Rem.~6.3]{lattices_mayanskiy}. Employing the notation of \cite[Prop.~1.8.1]{nikulin}, we compute the finite quadratic form $q_{K_\tau}$ in each case: \begin{equation} \label{table} \renewcommand{\arraystretch}{1.2} \begin{array}{|c||c|c|c|c|c|} \hline \tau & -1 & 0 & 1 & 2 & 3 \\\hline d(A_\tau) & 21 & 32 & 37 & 36 & 29 \\\hline A^*_\tau/A_\tau & \mathbb{Z}/3\mathbb{Z} \times \mathbb{Z}/7\mathbb{Z} & \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/16\mathbb{Z} & \mathbb{Z}/37\mathbb{Z} & \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/9\mathbb{Z} & \mathbb{Z}/29\mathbb{Z} \\\hline q_{K_\tau} & q_1^3(3)\oplus q_1^7(7) & q_3^2(2)\oplus q_1^2(2^4) & q_\theta^{37}(37) & q_3^2(2)\oplus q_1^2(2)\oplus q_1^3(3^2) & q_\theta^{29}(29)\\\hline \end{array} \end{equation} where $\theta$ represents a nonsquare class modulo the respective odd prime. In each case of \eqref{table}, we verify the signature condition using the formulas in \cite[Prop.~1.11.2]{nikulin}. Finally, for the five values of $\tau$, we prove that $\mathcal{C}_\tau$ is irreducible. As the rank of $A(X)$ is an upper-semicontinuous function on $\mathcal{C}$, the general cubic fourfold $X$ in $\mathcal{C}_8 \cap \mathcal{C}_{14}$ has $A(X)$ of rank 3 (by the argument above), of which $A_\tau$ is a finite index sublattice for some $\tau$. Each proper finite overlattice $B$ of $A_\tau$, such that $B$ (along with its sublattices $K_8$ and $K_{14}$) is primitively embedded into $H^4(X,\mathbb{Z})$, will give rise to an irreducible component of $\mathcal{C}_\tau$. We will prove that no such proper finite overlattices exist. 
For $\tau \in \{-1,1,3\}$, the discriminant of $A_\tau$ (namely $21$, $37$, and $29$, respectively) is squarefree, so there are no proper finite overlattices. In the cases $\tau=0,2$, we note that $B_0 = \lattice{h^2}^{\perp} \subset B$ is a proper finite overlattice of the binary lattice $A_{\tau,0}$ (as $\lattice{h^2} \subset B$ is assumed primitive). We then directly compute that each such $B_0$ has \emph{long roots} (i.e., vectors of norm 6 whose pairing with any other vector is divisible by 3). Therefore, no such proper finite overlattices exist. \end{proof} We now address the question of the (non)triviality of the Clifford invariant. \begin{prop}\label{oddity} Let $X$ be a general cubic fourfold in $\mathcal{C}_{8}\cap \mathcal{C}_{14}$ (so that $A(X)$ has rank 3). The Clifford invariant is trivial if and only if $\tau$ is odd. \end{prop} \begin{proof} If $\tau$ is odd then, as in the proof of Proposition~\ref{prop:tau0}, $(P+T).Q = -\tau$ is odd, hence the Clifford invariant $\beta \in {\rm Br}(S)$ is trivial by an application of the criteria in \cite[Thm.~3.1]{hassett:rational_cubic} or \cite[Prop.~4.7]{kuznetsov:cubic_fourfolds} (cf.\ the proof of Proposition~\ref{prop:nontriviality}). If $\tau$ is even, then $A_\tau=A(X)$ has rank 3 and even discriminant, hence $\beta$ is nontrivial by Proposition~\ref{prop:nontriviality}. \end{proof} For $\tau=-1$, the component $\mathcal{C}_{\tau}$ consists of cubic fourfolds containing two disjoint planes (see \cite[4.1.3]{hassett:special-cubics}). We now give a geometric description of the general member of $\mathcal{C}_\tau$ for $\tau=0$. \begin{prop} \label{prop:tau0} Let $X$ be a smooth cubic fourfold containing a plane $P$ and having a tangent conic such that $A(X)$ has rank 3. Then $X$ is in the component $\mathcal{C}_{\tau}$ for $\tau=0$. \end{prop} \begin{proof} Since $X$ has a tangent conic and $A(X)$ has rank 3, $A(X)$ has discriminant $8$ or $32$ and $X$ has nontrivial Clifford invariant by Corollary~\ref{claim} and Proposition~\ref{prop:conic}. As the sublattice $\lattice{h^2,P} \subset A(X)$ is primitive, we can choose a class $T \in A(X)$ such that $\lattice{h^2,P,T} \subset A(X)$ has discriminant 32. Adjusting $T$ by a multiple of $P$, we can assume that $h^2.T=4$. Write $\tau = P.T$. Adjusting $T$ by multiples of $h^2-3P$ keeps $h^2.T=4$ and adjusts $\tau$ by multiples of 8. The discriminant being 32, we are left with two possible choices ($\tau=0,4$) for the Gram matrix of $\lattice{h^2,P,T}$ up to isomorphism: $$ \begin{array}{cccc} & h^2 & P & T\\ h^2 & 3 & 1 & 4\\ P & 1 & 3 & 0 \\ T & 4 & 0 & 10 \end{array} \qquad \begin{array}{cccc} & h^2 & P & T\\ h^2 & 3 & 1 & 4\\ P & 1 & 3 & 4 \\ T & 4 & 4 & 12 \end{array} $$ In these cases, we compute that $K \cap \lattice{h^2,P,T}$ (i.e., the orthogonal complement of $\lattice{h^2,P}$ in $\lattice{h^2,P,T}$) is generated by $3h^2-P-2T$ and $h^2+P-T$ and has discriminant $16$ and $5$, respectively. Let $S$ be the associated K3 surface of degree 2. We calculate that ${\rm NS}(S) \cap H^2(S,\mathbb{Z})_0$ (i.e., the orthogonal complement of $\lattice{h_1}$ in ${\rm NS}(S)$) is generated by $h_1-C_1$ and has discriminant $-4$ (see Proposition~\ref{prop:conic} for definitions). Arguing as in the proof of Corollary~\ref{claim}, there is a lattice inclusion $\Phi(K \cap \lattice{h^2,P,T}) \subset {\rm NS}(S) \cap H^2(S,\mathbb{Z})_0(-1)$ having index dividing 2, which rules out the second case above by comparing discriminants.
\end{proof} In Proposition~\ref{oddity} we isolate three classes of smooth cubic fourfolds $X \in \mathcal{C}_8 \cap \mathcal{C}_{14}$ with \emph{trivial} Clifford invariant. In particular, such cubic fourfolds are rational and verify Kuznetsov's conjecture; see \cite[Prop.~4.7]{kuznetsov:cubic_fourfolds}. While the component $\mathcal{C}_\tau$ for $\tau=-1$ is in the complement of the pfaffian locus (see \cite[Prop.~1b]{tregub:new}), we expect that the pfaffian locus is dense in the other four components. \section{The twisted derived equivalence} \label{sec:hpd} Homological projective duality (HPD) can be used to obtain a significant semiorthogonal decomposition of the derived category of a pfaffian cubic fourfold. As the universal pfaffian variety is singular, a noncommutative resolution of singularities is required to establish HPD in this case. A \emph{noncommutative resolution of singularities} of a scheme $Y$ is a coherent $\sheaf{O}_Y$-algebra $\sheaf{R}$ with finite homological dimension that is generically a matrix algebra (these properties translate to ``smoothness'' and ``birational to $Y$'' from the categorical language). We refer to \cite{kuznetsov:hpd_general} for details on HPD. \begin{theorem}[\cite{kuznetsov:hpd_for_grassmann}] \label{hpd} Let $W$ be a $\mathbb{C}$-vector space of dimension 6 and $Y \subset \mathbb{P}(\wedge^2 W^{\vee})$ the universal pfaffian cubic hypersurface. There exists a noncommutative resolution of singularities $(Y,\sheaf{R})$ that is HP dual to the grassmannian $\mathrm{Gr}(2,W)$. In particular, the bounded derived category of a smooth pfaffian cubic fourfold $X$ admits a semiorthogonal decomposition $$ \mathrm{D}^{\mathrm{b}}(X)= \langle\mathrm{D}^{\mathrm{b}}(S'), \sheaf{O}_X, \sheaf{O}_X(1), \sheaf{O}_X(2) \rangle, $$ where $S'$ is a smooth K3 surface of degree 14. In particular, $\cat{A}_X \cong \mathrm{D}^{\mathrm{b}}(S')$. \end{theorem} \begin{proof} The relevant noncommutative resolution of singularities $\sheaf{R}$ of $Y$ is constructed in \cite{kuznetsov:lefschetz}. The HP duality is established in \cite[Thm.~1]{kuznetsov:hpd_for_grassmann}. The semiorthogonal decomposition is constructed as follows. Any pfaffian cubic fourfold $X$ is an intersection of $Y \subset \mathbb{P}(\wedge^2 W^{\vee})=\mathbb{P}^{14}$ with a linear subspace $\mathbb{P}^5 \subset \mathbb{P}^{14}$. If $X$ is smooth, then $\sheaf{R}|_X$ is Morita equivalent to $\sheaf{O}_X$. Via classical projective duality, $Y \subset \mathbb{P}^{14}$ corresponds to $\mathbb{G}(2,W) \subset \check{\mathbb{P}}^{14}$ while $\mathbb{P}^5 \subset \mathbb{P}^{14}$ corresponds to a linear subspace $\mathbb{P}^8 \subset \check{\mathbb{P}}^{14}$. The intersection of $\mathbb{G}(2,W)$ and $\mathbb{P}^{8}$ inside $\check{\mathbb{P}}^{14}$ is a K3 surface $S'$ of degree 14. Kuznetsov \cite[Thm.~2]{kuznetsov:hpd_for_grassmann} describes a semiorthogonal decomposition $$ \mathrm{D}^{\mathrm{b}}(X)= \langle \sheaf{O}_X(-3), \sheaf{O}_X(-2), \sheaf{O}_X(-1), \mathrm{D}^{\mathrm{b}}(S') \rangle. $$ To obtain the desired semiorthogonal decomposition and the equivalence $\cat{A}_X \cong \mathrm{D}^{\mathrm{b}}(S')$, we act on $\mathrm{D}^{\mathrm{b}}(X)$ by the autoequivalence $- \otimes \sheaf{O}_X(3)$, then mutate the image of $\mathrm{D}^{\mathrm{b}}(S')$ to the left with respect to its left orthogonal complement; see \cite{bondal-repr}. 
This displays the left orthogonal complement of $\langle \sheaf{O}_X, \sheaf{O}_X(1), \sheaf{O}_X(2) \rangle$, which is $\cat{A}_X$ by definition, as a category equivalent to $\mathrm{D}^{\mathrm{b}}(S')$. \end{proof} Finally, assuming the result in \S\ref{sec:example}, we can give a proof of Theorem~\ref{thm:2}. \begin{proof}[Proof of Theorem~\ref{thm:2}] Let $X$ be a smooth complex pfaffian cubic fourfold containing a plane, $S$ the associated K3 surface of degree 2, $\beta \in {\rm Br}(S)$ the Clifford invariant, and $S'$ the K3 surface of degree 14 arising from Theorem~\ref{hpd} via projective duality. Then by \cite[Thm.~4.3]{kuznetsov:cubic_fourfolds} and Theorem~\ref{hpd}, the category $\cat{A}_X$ is equivalent to both $\mathrm{D}^{\mathrm{b}}(S,\beta)$ and $\mathrm{D}^{\mathrm{b}}(S')$. The cubic fourfold $X$ is rational, being pfaffian (see \cite[Prop.~5ii]{beauville-donagi}, \cite{tregub}, and \cite[Prop.~9.2a]{beauville:determinantal}). The existence of such cubic fourfolds with $\beta$ nontrivial is guaranteed by Theorem~\ref{main}. Thus there is a twisted derived equivalence $\mathrm{D}^{\mathrm{b}}(S,\beta)\cong \mathrm{D}^{\mathrm{b}}(S')$ between K3 surfaces of degree 2 and 14. \end{proof} \begin{remark} \label{rem:no_twist} By \cite[Rem.~7.10]{huybrechts-stellari}, given any K3 surface $S$ and any nontrivial $\beta \in {\rm Br}(S)$, there is \emph{no} equivalence between $\mathrm{D}^{\mathrm{b}}(S,\beta)$ and $\mathrm{D}^{\mathrm{b}}(S)$. Thus any $X$ as in Theorem~\ref{thm:2} validates Kuznetsov's conjecture on the rationality of cubic fourfolds containing a plane, but not via the K3 surface $S$. \end{remark} \section{A pfaffian containing a plane} \label{sec:example} In this section, we exhibit a smooth pfaffian cubic fourfold $X$ containing a plane and having a tangent conic such that $A(X)$ has rank 3. By Propositions~\ref{prop:nontriviality} and \ref{prop:tau0}, $X$ has nontrivial Clifford invariant and is in the $\tau=0$ component of $\mathcal{C}_8 \cap \mathcal{C}_{14}$. In particular, this proves that the pfaffian locus nontrivially intersects, and hence is dense in (since it is open in $\mathcal{C}_{14}$), the component $\mathcal{C}_\tau$ with $\tau=0$. \begin{theorem} \label{main} Let A be the $6\times 6$ antisymmetric matrix $$ \left( \begin{array}{cccccc} 0 & y + u & x + y + u & u & z & y + u + v\\ & 0 & x + y + z & x + z + u + w & y + z + u + v + w & x + y + z + u + v + w\\ & & 0 & x + y + u + w & x + y + u + v + w & x + y + z + v + w\\ & & & 0 & x + u + v + w & x + u + w\\ & & & & 0 & z + u + w\\ & & & & & 0 \end{array} \right) $$ of linear forms in $\mathbb{Q}[x,y,z,u,v,w]$ and let $X \subset \mathbb{P}^5$ be the cubic fourfold defined by the vanishing of the pfaffian of $A$: \begin{align*} &(x - 4y - z)u^2 + (-x - 3y)uv + (x - 3y)uw + (x - 2y - z)vw - 2yv^2 + xw^2\\ ~& + (2x^2 + xz - 4y^2 + 2z^2)u + (x^2 - xy - 3y^2 + yz - z^2)v + (2x^2 + xy + 3xz - 3y^2 + yz)w \\ ~& + x^3 + x^2y + 2x^2z - xy^2 + xz^2 - y^3 + yz^2 - z^3. \end{align*} Then: \begin{enumerate} \item $X$ is smooth, rational, and contains the plane $P = \{ x=y=z=0\}$.
\item The degeneration divisor $D \subset \mathbb{P}^2$ of the associated quadric surface bundle $\pi : \wt{X} \to \mathbb{P}^2$ is the sextic curve given by the vanishing of: \begin{align*} d = x^6 & + 6x^5y + 12x^5z + x^4y^2 + 22x^4yz + 28x^3y^3 - 38x^3y^2z + 46x^3yz^2 + 4x^3z^3 \\ & + 24x^2y^4 - 4x^2y^3z - 37x^2y^2z^2 -36x^2yz^3 - 4x^2z^4 + 48xy^4z - 24xy^3z^2 \\ & + 34xy^2z^3 + 4xyz^4 + 20y^5z + 20y^4z^2 - 8y^3z^3 - 11y^2z^4 - 4yz^5. \end{align*} This curve is smooth; in particular, $\pi$ has simple degeneration and the discriminant cover is a smooth K3 surface $S$ of degree 2. \item The conic $C \subset \mathbb{P}^2$ defined by the vanishing of $x^2+yz$ is tangent to the degeneration divisor $D$ at six points (five of which are distinct). \item The K3 surface $S$ has (geometric) Picard rank $2$. \end{enumerate} In particular, the Clifford invariant of $X$ is geometrically nontrivial. \end{theorem} \begin{proof} Verifying the smoothness of $X$ and $D$ is a straightforward application of the jacobian criterion, while the inclusion $P \subset X$ is checked by inspecting the expression for $\mathrm{pf}(A)$; every monomial is divisible by $x$, $y$, or $z$. Rationality comes from being a pfaffian cubic fourfold; see \cite{tregub}. The smoothness of $D$ implies that $\pi$ has simple degeneration; see \cite[Rem.~7.1]{hassett_varilly:K3} or \cite[Rem.~2.6]{ABB:fibrations}. This establishes parts \textit{a)} and \textit{b)}. For part \textit{c)}, note that we can write the equation for the degeneration divisor as $d = (x^2+yz)f + g^2$, where \begin{align*} f = {} & x^4 + 6x^3y + 12x^3z + x^2y^2 + 21x^2yz - 25x^2z^2 + 28xy^3 \\ & - 24xy^2z + 34xyz^2 + 4xz^3 + 20y^4 - 5y^3z - 8y^2z^2 - 11yz^3 - 4z^4,\\ g = {} & 2xy^2 + 5y^2z - 5x^2z. \end{align*} Hence the conic $C \subset \mathbb{P}^2$ defined by $x^2+yz$ is tangent to $D$ along the zero-dimensional scheme of length 6 given by the intersection of $C$ and the vanishing of $g$. For part \textit{d)}, the surface $S$ is the smooth sextic in $\mathbb{P}(1,1,1,3) = \Proj\mathbb{Q}[x,y,z,w]$ given by \[ w^2 = d(x,y,z), \] which is the double cover of $\mathbb{P}^2$ branched along the degeneration divisor $D$. In these coordinates, the discriminant cover $f : S \to \mathbb{P}^2$ is simply the restriction to $S$ of the projection $\mathbb{P}(1,1,1,3) \dasharrow \mathbb{P}^2$ away from the point $(0:0:0:1)$. Let $C \subset \mathbb{P}^2$ be the conic from part \textit{c)}. As discussed in Proposition \ref{prop:conic}, the curve $f^*C$ consists of two $({-2})$-curves $C_1$ and $C_2$. These curves generate a sublattice of ${\rm NS}(S)$ of rank $2$. Hence $\rho(\overline{S})\geq \rho(S) \geq 2$, where $\overline{S}=S \times_\mathbb{Q} \mathbb{C}$. We show next that $\rho(\overline{S}) \leq 2$. Write $S_p$ for the reduction mod $p$ of $S$ and $\overline{S}_p = S_p \times_{\mathbb{F}_p} \overline{\mathbb{F}}_p$. Let $\ell \neq 3$ be a prime and write $\phi(t)$ for the characteristic polynomial of the action of absolute Frobenius on $H^2_{\textrm{\'et}}(\overline{S}_3,\mathbb{Q}_\ell)$. Then $\rho(\overline{S}_3)$ is bounded above by the number of roots of $\phi(t)$ that are of the form $3\zeta$, where $\zeta$ is a root of unity \cite[Prop.~2.3]{van_luijk}. Combining the Lefschetz trace formula with Newton's identities and the functional equation that $\phi(t)$ satisfies, it is possible to calculate $\phi(t)$ from knowledge of $\#S(\mathbb{F}_{3^n})$ for $1 \leq n \leq 11$; see \cite{van_luijk} for details.
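The point counts themselves only require evaluating the sextic: a fiber of the double cover $w^2 = d(x,y,z)$ over a point of $\mathbb{P}^2(\mathbb{F}_q)$ has two, one, or zero rational points according to whether $d$ is a nonzero square, zero, or a nonsquare there. The following Python sketch (our illustration, not the authors' code; the counts over $\mathbb{F}_{3^n}$ for large $n$ require genuine finite-field arithmetic as in {\tt Magma} or {\tt Sage}) counts $\#S(\mathbb{F}_3)$ this way by brute force.
\begin{verbatim}
# Brute-force count of #S(F_p) for the double cover w^2 = d(x,y,z).
# Illustrative only: extensions F_{3^n} need real finite-field arithmetic.
p = 3

def d(x, y, z):
    # The sextic from the theorem, reduced mod p.
    return (x**6 + 6*x**5*y + 12*x**5*z + x**4*y**2 + 22*x**4*y*z
            + 28*x**3*y**3 - 38*x**3*y**2*z + 46*x**3*y*z**2 + 4*x**3*z**3
            + 24*x**2*y**4 - 4*x**2*y**3*z - 37*x**2*y**2*z**2
            - 36*x**2*y*z**3 - 4*x**2*z**4 + 48*x*y**4*z - 24*x*y**3*z**2
            + 34*x*y**2*z**3 + 4*x*y*z**4 + 20*y**5*z + 20*y**4*z**2
            - 8*y**3*z**3 - 11*y**2*z**4 - 4*y*z**5) % p

nonzero_squares = {(t * t) % p for t in range(1, p)}

def points_P2(p):  # representatives (1:y:z), (0:1:z), (0:0:1)
    for y in range(p):
        for z in range(p):
            yield (1, y, z)
    for z in range(p):
        yield (0, 1, z)
    yield (0, 0, 1)

count = 0
for P in points_P2(p):
    v = d(*P)
    count += 1 if v == 0 else (2 if v in nonzero_squares else 0)
print("#S(F_%d) = %d" % (p, count))
\end{verbatim}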
Let $\widetilde{\phi}(t) = 3^{-22}\phi(3t)$, so that the number of roots of $\widetilde{\phi}(t)$ that are roots of unity gives an upper bound for $\rho(\overline{S}_3)$. Using {\tt Magma}, we compute \[ \widetilde{\phi}(t) = \frac{1}{3}(t - 1)^2(3t^{20} + t^{19} + t^{17} + t^{16} + 2t^{15} + 3t^{14} + t^{12} + 3t^{11} + 2t^{10} + 3t^9 + t^8 + 3t^6 + 2t^5 + t^4 + t^3 + t + 3). \] The roots of the degree $20$ factor of $\widetilde{\phi}(t)$ are not integral, and hence they are not roots of unity. We conclude that $\rho(\overline{S}_3) \leq 2$. By \cite{van_luijk}, we have $\rho(\overline{S}) \leq \rho(\overline{S}_3)$, so $\rho(\overline{S}) \leq 2$. It follows that $S$ (and $\overline{S}$) has Picard rank 2. This concludes the proof of part \textit{d)}. Finally, the nontriviality of the Clifford invariant follows from Propositions~\ref{prop:nontriviality} and~\ref{prop:conic}. \end{proof} A satisfying feature of Theorem~\ref{main} is that we can write out a representative of the Clifford invariant of $X$ explicitly, as a quaternion algebra over the function field of the K3 surface $S$. We first prove a handy lemma, of independent interest for its arithmetic applications (see e.g., \cite{hassett_varilly:K3_hasse,hassett_varilly:K3}). \begin{lemma} \label{lem:handy} Let $K$ be a field of characteristic $\neq 2$ and $q$ a nondegenerate quadratic form of rank 4 over $K$ with discriminant extension $L/K$. For $1 \leq r \leq 4$ denote by $m_r$ the leading principal $r \times r$ minor (i.e., the determinant of the upper-left $r \times r$ submatrix) of the symmetric Gram matrix of $q$. Then the class $\beta \in {\rm Br}(L)$ of the even Clifford algebra of $q$ is the quaternion algebra $(-m_2, -m_1 m_3)$. \end{lemma} \begin{proof} On $n\times n$ matrices $M$ over $K$, symmetric gaussian elimination is the following operation: $$ M= \begin{pmatrix} a & v^t \\ v & A \end{pmatrix} \mapsto \begin{pmatrix} a & 0 \\ 0 & A - a^{-1} vv^t \end{pmatrix}, $$ where $a \in K^\times$, $v \in K^{n-1}$ is a column vector, and $A$ is an $(n-1)\times(n-1)$ matrix over $K$. Then $m_1=a$ and the element in the first row and column of $A - a^{-1} vv^t$ is precisely $m_2/m_1$. By induction, $M$ can be diagonalized, using symmetric gaussian elimination, to the matrix $$ \text{diag}(m_1, m_2/m_1, \dotsc, m_{n}/m_{n-1}). $$ For $q$ of rank 4 with symmetric Gram matrix $M$, we have $$ q = \quadform{m_1} \otimes \quadform{1,m_2,m_1 m_2 m_3, m_1 m_3 m_4}, $$ so that over $L = K(\sqrt{m_4})$, we have $q\otimes_K L = \quadform{m_1} \otimes \quadform{1,m_2,m_1 m_3, m_1 m_2 m_3}$, which is similar to the norm form of the quaternion $L$-algebra with symbol $(-m_2,-m_1 m_3)$. Thus the even Clifford algebra of $q$ is Brauer equivalent to $(-m_2,-m_1 m_3)$ over $L$. \end{proof} \begin{prop} The Clifford invariant of the fourfold $X$ of Theorem~\ref{main} is represented by the unramified quaternion algebra $(b,ac)$ over the function field of the associated K3 surface $S$, where $$ a = x - 4y - z, \quad b = x^2 + 14xy - 23y^2 - 8yz, $$ and $$ c = 3x^3 + 2x^2y - 4x^2z + 8xyz + 3xz^2 - 16y^3 - 11y^2z - 8yz^2 - z^3.
$$ \end{prop} \begin{proof} The symmetric Gram matrix of the quadratic form $(\sheaf{O}_{\mathbb{P}^2}^3\oplus \sheaf{O}_{\mathbb{P}^2}(-1),q,\sheaf{O}_{\mathbb{P}^2}(1))$ of rank 4 over $\mathbb{P}^2$ associated to the quadric bundle $\pi : \widetilde{X} \to \mathbb{P}^2$ is $$ \left( \begin{array}{cccc} 2(x - 4y - z) & -x - 3y & x - 3y & 2x^2 + xz - 4y^2 + 2z^2 \\ & 2(-2y) & x - 2y - z & x^2 - xy - 3y^2 + yz - z^2\\ & & 2x & 2x^2 + xy + 3xz - 3y^2 + yz \\ & & & 2(x^3 + x^2y + 2x^2z -xy^2 + xz^2 - y^3 + yz^2 - z^3) \end{array} \right) $$ see \cite[\S4.2]{hassett_varilly:K3} or \cite[\S4]{kuznetsov:cubic_fourfolds}. Since $S$ is regular, ${\rm Br}(S) \to {\rm Br}(k(S))$ is injective; see \cite{auslander_goldman} or \cite[Cor.~1.10]{grothendieck:Brauer_II}. By functoriality of the Clifford algebra, the generic fiber $\beta \otimes_{S} k(S) \in {\rm Br}(k(S))$ is represented by the even Clifford algebra of the generic fiber $q\otimes_{\mathbb{P}^2} {k(\mathbb{P}^2)}$. Thus we can perform our calculations in the function field $k(S)$. In the notation of Lemma~\ref{lem:handy}, we have $m_1=2a$, $m_2=-b$, and $m_3=-2c$, and the formulas follow immediately. \end{proof} \begin{remark} Contrary to the situation in \cite{hassett_varilly:K3}, the transcendental Brauer class $\beta \in {\rm Br}(S)$ is \emph{constant} when evaluated on $S(\mathbb{Q})$; this suggests that arithmetic invariants do not suffice to witness the nontriviality of $\beta$. Indeed, using elimination theory, we find that the odd primes $p$ of bad reduction of $S$ are 5, 23, 263, 509, 1117, 6691, 3342589, 197362715625311, and 4027093318108984867401313726363. For each odd prime $p$ of bad reduction, we compute that the singular locus of $\overline{S}_p$ consists of a single ordinary double point. Thus by \cite[Prop.~4.1,~Lemma~4.2]{hassett_varilly:K3_hasse}, the local invariant map associated to $\beta$ is constant on $S(\mathbb{Q}_p)$, for all odd primes $p$ of bad reduction. By an adaptation of \cite[Lemma~4.4]{hassett_varilly:K3_hasse}, the local invariant map is also constant for odd primes of good reduction. At the real place, we prove that $S(\mathbb{R})$ is connected, hence the local invariant map is constant. To this end, recall that the set of real points of a smooth hypersurface of even degree in $\mathbb{P}^2(\mathbb{R})$ consists of a disjoint union of \emph{ovals} (i.e., topological circles, each of whose complement is homeomorphic to a union of a disk and a M\"obius band, in the language of real algebraic geometry). In particular, $\mathbb{P}^2(\mathbb{R}) \smallsetminus D(\mathbb{R})$ has a unique nonorientable connected component $R$. By graphing an affine chart of $D(\mathbb{R})$, we find that the point $(1:0:0)$ is contained in $R$. We compute that the map projecting from $(1:0:0)$ has four real critical values, hence $D(\mathbb{R})$ consists of two ovals. These ovals are not nested, as can be seen by inspecting the graph of $D(\mathbb{R})$ in an affine chart. The Gram matrix of the quadratic form, specialized at $(1:0:0)$, has positive determinant, hence by local constancy, the equation for $D$ is positive over the entire component $R$ and negative over the interiors of the two ovals (since $D$ is smooth). 
\begin{remark} Contrary to the situation in \cite{hassett_varilly:K3}, the transcendental Brauer class $\beta \in {\rm Br}(S)$ is \emph{constant} when evaluated on $S(\mathbb{Q})$; this suggests that arithmetic invariants do not suffice to witness the nontriviality of $\beta$. Indeed, using elimination theory, we find that the odd primes $p$ of bad reduction of $S$ are 5, 23, 263, 509, 1117, 6691, 3342589, 197362715625311, and 4027093318108984867401313726363. For each odd prime $p$ of bad reduction, we compute that the singular locus of $\overline{S}_p$ consists of a single ordinary double point. Thus by \cite[Prop.~4.1,~Lemma~4.2]{hassett_varilly:K3_hasse}, the local invariant map associated to $\beta$ is constant on $S(\mathbb{Q}_p)$ for all odd primes $p$ of bad reduction. By an adaptation of \cite[Lemma~4.4]{hassett_varilly:K3_hasse}, the local invariant map is also constant for odd primes of good reduction. At the real place, we prove that $S(\mathbb{R})$ is connected, hence the local invariant map is constant there as well. To this end, recall that the set of real points of a smooth plane curve of even degree in $\mathbb{P}^2(\mathbb{R})$ consists of a disjoint union of \emph{ovals} (i.e., topological circles, each of which has complement homeomorphic to the union of a disk and a M\"obius band), in the language of real algebraic geometry. In particular, $\mathbb{P}^2(\mathbb{R}) \smallsetminus D(\mathbb{R})$ has a unique nonorientable connected component $R$. By graphing an affine chart of $D(\mathbb{R})$, we find that the point $(1:0:0)$ is contained in $R$. We compute that the map projecting from $(1:0:0)$ has four real critical values, hence $D(\mathbb{R})$ consists of two ovals. These ovals are not nested, as can be seen by inspecting the graph of $D(\mathbb{R})$ in an affine chart. The Gram matrix of the quadratic form, specialized at $(1:0:0)$, has positive determinant; hence, by local constancy, the equation for $D$ is positive over the entire component $R$ and negative over the interiors of the two ovals (since $D$ is smooth). In particular, the map $f : S(\mathbb{R}) \to \mathbb{P}^2(\mathbb{R})$ has empty fibers over the interiors of the two ovals and nonempty fibers over $R \subset \mathbb{P}^2(\mathbb{R})$, where it restricts to a nonsplit unramified cover of degree 2, which must be the orientation double cover of $R$ since $S(\mathbb{R})$ is orientable (the K\"ahler form on $S$ defines an orientation). In particular, $S(\mathbb{R})$ is connected. This shows that $\beta$ is constant on $S(\mathbb{Q})$. We believe that the local invariant map is also constant at the prime $2$, though this must be checked with a brute-force computation. \end{remark}
\section{Introduction} The resonant tunneling of electromagnetic waves through different types of optical barriers is a fast developing area of optical physics. This effect was first considered for photonic crystals,\cite{Yablonovitch2,photonic} where forbidden band-gaps in the electromagnetic spectrum form optical barriers. Macroscopic defects embedded in the photonic crystal give rise to local photon modes,\cite{Yablonovitch1,Joannopoulos,Smith,Figotin,Sakoda} which induce the resonant transmission of electromagnetic waves through the band-gaps. A different type of photonic band gap arises in polar dielectrics, where a strong resonance interaction between the electromagnetic field and dipole active internal excitations of a dielectric brings about a gap between different branches of polaritons. Recently it was suggested that regular microscopic impurities embedded in such a dielectric give rise to local polariton states,\cite{Deych1,Deych2,Podolsky} in which a photon is coupled to an intrinsic excitation of a crystal, and both these components are localized in the vicinity of the defect.\cite{footnote} The main peculiarity of the local polaritons is that their electromagnetic component is bound by a {\em microscopic} defect whose size is many orders of magnitude smaller than the wavelengths of the respective photons. Another important property of these states is the absence of a threshold for their appearance even in 3-{\em D} isotropic systems, while for all other known local states the ``strength'' of a defect must exceed a certain critical value before the state splits off the continuous spectrum. The reason for this peculiar behavior is a strong van Hove singularity in the polariton density of states in the long wave region, caused by a negative effective mass of the polariton-forming excitations of a crystal. The feasibility of resonant electromagnetic tunneling induced by local polaritons, however, is not self-evident. The idea of a microscopic defect affecting the propagation of waves with macroscopic wavelengths seems to contradict common wisdom. Besides, it was found that the energy of the electromagnetic component of local polaritons is very small compared to the energy of its crystal counterpart. The existence of the tunneling effect was first demonstrated numerically in Ref. \onlinecite{Deych2}, where a 1-{\em D} chain of dipoles interacting with a scalar field imitating transverse electromagnetic waves was considered. It was found that a single defect embedded in such a chain results in near $100\%$ transmission at the frequency of the local polaritons through a relatively short chain ($50$ atoms). The frequency profile of the transmission was found to be strongly asymmetric, in contrast to the case of electron tunneling.\cite{electrontunneling} In most cases (at least for a small concentration of the transmitting centers) one-dimensional models give a reliable description of tunneling processes, because by virtue of tunneling, a wave propagates along a chain of resonance centers, for which a 1-{\em D} topology has the highest probability of occurrence.\cite{Lifshitz} In our situation, it is also important that the local polariton states (transmitting centers) occur without a threshold in 3-{\em D} systems as well as in 1-{\em D} systems. This ensures that the transmission resonances found in Ref. \onlinecite{Deych2} are not artifacts of the one-dimensional nature of the model, and justifies a further development of the model.
In the present paper we pursue this development in two interconnected directions. First, we present an exact analytical solution of the transmission problem through the chain with a single defect. This solution explains the unusual asymmetric shape of the transmission profile found in the numerical calculations\cite{Deych2} and provides insight into the phenomenon under consideration. Second, we carry out numerical Monte-Carlo simulations of the electromagnetic transmission through a macroscopically long chain with a finite concentration of defects, and study the development of a defect-induced electromagnetic pass band within the polariton band-gap. The analytical solution of the single-defect model allows us to suggest a physical interpretation for some of the peculiarities of the transmission found in the numerical simulations. As a by-product of our numerical work we present a new algorithm used for the computation of the transmission. This algorithm is based upon a blend of the transfer-matrix approach with ideas of the invariant-embedding method,\cite{IEM} and proves to be extremely stable even deep inside the band-gap, where traditional methods fail. Though we consider a one-dimensional model, the results obtained are suggestive for the experimental observation of the predicted effects. In practice, the damping of the electromagnetic waves is experimentally more restrictive than the topology of the system. We discuss the effects due to damping and come to the conclusion that the effects under discussion can be observed in regular ionic crystals in the region of their phonon-polariton band-gaps. The paper is organized as follows. The Introduction is followed by an analytical solution of the transmission problem in the single-impurity situation. The next section presents the results of Monte-Carlo computer simulations. The algorithm used in the numerical calculations is derived and discussed in the Appendix. The paper concludes with a discussion of the results. \section{Description of the model and analytical solution of a single-defect problem} \subsubsection{The model} Our system consists of a chain of atoms interacting with each other and with a scalar ``electromagnetic'' field. Atoms are represented by their dipole moments $P_{n}$, where the index $n$ denotes the position of an atom in the chain. The dynamics of the atoms is described within the tight-binding approximation with interaction between nearest neighbors only, \begin{equation} (\Omega _{n}^{2}-\omega ^{2})P_{n}+\Phi (P_{n+1}+P_{n-1})=\alpha E(x_{n}), \label{dipoles} \end{equation} where $\Phi $ is a parameter of the interaction, and $\Omega _{n}^{2}$ represents the site energy. Impurities in the model differ from host atoms in this parameter only, so \begin{equation} \Omega _{n}^{2}=\Omega _{0}^{2}c_{n}+\Omega _{1}^{2}(1-c_{n}), \label{site_energy} \end{equation} where $\Omega _{0}^{2}$ is the site energy of a host atom, $\Omega _{1}^{2}$ describes an impurity, and $c_{n}$ is a random variable taking values $1$ and $0$ with probabilities $1-p$ and $p$, respectively. The parameter $p$, therefore, sets the concentration of the impurities in our system. This choice of the dynamical equation corresponds to exciton-like polarization waves. Phonon-like waves can be presented in a form similar to Eq. (\ref{dipoles}) with $\Omega _{n}^{2}=\Omega _{0}^{2}+(1-c_{n})(1-M_{def}/M_{host})\omega ^{2}$, where $M_{def}$ and $M_{host}$ are the masses of defect and host atoms, respectively.
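For concreteness, the random site-energy sequence of Eq. (\ref{site_energy}) is straightforward to generate numerically. The following Python sketch (ours; the parameter names are illustrative) builds both the exciton-like and the phonon-like variants.
\begin{verbatim}
import numpy as np

def site_energies(N, p, Omega0_sq, Omega1_sq, rng=None):
    """Eq. (site_energy): c_n = 1 (host) with probability 1 - p,
    c_n = 0 (impurity) with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    host = rng.random(N) > p
    return np.where(host, Omega0_sq, Omega1_sq)

def site_energies_phonon(N, p, Omega0_sq, mass_ratio, omega, rng=None):
    """Phonon-like variant: Omega_n^2 = Omega_0^2
    + (1 - c_n)(1 - M_def/M_host) * omega^2."""
    rng = np.random.default_rng() if rng is None else rng
    c = (rng.random(N) > p).astype(float)
    return Omega0_sq + (1.0 - c) * (1.0 - mass_ratio) * omega**2
\end{verbatim}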
Polaritons in the system arise as collective excitations of dipoles (polarization waves) coupled to the electromagnetic wave, $E(x_{n})$, by means of a coupling parameter $\alpha $. The electromagnetic subsystem is described by the following equation of motion \begin{equation} \frac{\omega ^{2}}{c^{2}}E(x)+\frac{d^{2}E}{dx^{2}}=-4\pi \frac{\omega ^{2}}{c^{2}}\sum_{n}P_{n}\delta (na-x), \label{Maxwell} \end{equation} where the right-hand side is the polarization density caused by the atomic dipole moments, and $c$ is the speed of light in vacuum. The coordinate $x$ in Eq. (\ref{Maxwell}) is along the chain, with an interatomic distance $a$. Eqs. (\ref{dipoles}) and (\ref{Maxwell}) present a {\it microscopic} description of the transverse electromagnetic waves propagating along the chain in the sense that they do not make use of the concept of the dielectric permeability, and take into account all modes of the field, including those with wave numbers outside of the first Brillouin zone. This approach enables us to address several general questions. A local state is usually composed of states with all possible values of the wave number $k$. States with large $k$ cannot be considered within a macroscopic dielectric function theory, and attempts to do so lead to divergent integrals that need to be renormalized.\cite{Rupasov} In our approach, all expressions are well defined, so we can check whether the contribution from large $k$ is important, and whether the long wave approximation gives reliable results. Calculation of the integrals appearing in the 3-$D$ theory requires detailed knowledge of the spectrum of excitations of a crystal throughout the entire Brillouin zone. This makes analytical consideration practically unfeasible. In our 1-$D$ model, we can carry out the calculations analytically (in the single-impurity case) and examine the influence of different factors (and approximations) upon the frequency of a local state and the transmission coefficient. Using caution, the results obtained can be used to assess approximations in 3-$D$ cases. \subsubsection{A single impurity problem} The equation for the frequency of the local polariton state in the 1-$D$ chain has a form similar to that derived in Ref.~[\onlinecite{Deych1}], \begin{equation} 1=\Delta \Omega ^{2}G(0), \label{eigen_freq} \end{equation} where $\Delta \Omega ^{2}=\Omega _{1}^{2}-\Omega _{0}^{2}$ is the defect-induced change of the site energy, and where, in contrast to the 3-$D$ case, the polariton Green's function $G(n-n_{0})$ responsible for the mechanical excitation of the system can be obtained in explicit form: \begin{equation} G(n-n_{0})=\sum_{k}\frac{\cos (ak)-\cos (\frac{a\omega }{c})}{\left[ \omega ^{2}-\Omega _{0}^{2}-2\Phi \cos \left( ka\right) \right] \left[ \cos (ak)-\cos (\frac{a\omega }{c})\right] -\frac{2\pi \alpha \omega }{c}\sin (\frac{a\omega }{c})}\exp \left[ ik\left( n-n_{0}\right) a\right] . \label{Greenfunction} \end{equation} If one neglects the term responsible for the coupling to the electromagnetic field, the Green's function $G(n-n_{0})$ reduces to that of the pure atomic system. This fact reflects the nature of the defect in our model: it only disturbs the mechanical (not related to the interaction with the field) properties of the system. A solution of Eq. (\ref{eigen_freq}) can be real-valued only if it falls into the gap between the upper and lower polariton branches. This gap exists if the parameter $\Phi $ in the dispersion equation of the polariton wave is positive, and the effective mass of the excitations in the long wave limit is, therefore, negative.
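Equation (\ref{eigen_freq}) is also easy to examine numerically: evaluate $G(0)$ as an average over the Brillouin zone and scan the gap for the frequency at which $\Delta \Omega ^{2}G(0)=1$. The sketch below follows Eq. (\ref{Greenfunction}) literally; the $1/N$ normalization of the lattice sum and all parameter values (units with $a=\Omega _{0}=1$) are our illustrative assumptions.
\begin{verbatim}
import numpy as np

# Illustrative parameters in units with a = Omega_0 = 1 (assumed).
a, c = 1.0, 3.0e2
Omega0_sq, Phi, alpha = 1.0, 0.05, 0.02
DeltaOmega_sq = 0.3       # defect detuning Omega_1^2 - Omega_0^2

def G0(omega, nk=20001):
    """G(0) of Eq. (Greenfunction) as a Brillouin-zone average."""
    k = np.linspace(-np.pi / a, np.pi / a, nk, endpoint=False)
    num = np.cos(k * a) - np.cos(a * omega / c)
    den = (omega**2 - Omega0_sq - 2 * Phi * np.cos(k * a)) * num \
          - 2 * np.pi * alpha * omega / c * np.sin(a * omega / c)
    return np.mean(num / den)

# Scan inside the polariton gap for the root of Eq. (eigen_freq).
freqs = np.linspace(1.06, 1.14, 400)
vals = np.array([DeltaOmega_sq * G0(w) - 1.0 for w in freqs])
roots = freqs[np.where(np.diff(np.sign(vals)))[0]]
print("local polariton state near omega =", roots)
\end{verbatim}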
The diagonal element, $G(0)$, of the Green's function (\ref{Greenfunction}) can be calculated exactly. The dispersion equation (\ref{eigen_freq}) then takes the following form: \begin{equation} 1=\Delta \Omega ^{2}\frac{1}{2\Phi D(\omega )}\left[ \frac{\cos (\frac{a\omega }{c})-Q_{2}(\omega )}{\sqrt{Q_{2}^{2}(\omega )-1}}-\frac{\cos (\frac{a\omega }{c})-Q_{1}(\omega )}{\sqrt{Q_{1}^{2}(\omega )-1}}\right] , \label{eigen_freq_int} \end{equation} where $Q_{1,2}(\omega )$, \begin{eqnarray} Q_{1,2}(\omega ) &=&\frac{1}{2}\left[ \cos (\frac{a\omega }{c})+\frac{\omega ^{2}-\Omega _{0}^{2}}{2\Phi }\right] \pm \frac{1}{2}D(\omega ), \label{determinant} \\ D(\omega ) &=&\sqrt{\left[ \cos (\frac{a\omega }{c})-\frac{\omega ^{2}-\Omega _{0}^{2}}{2\Phi }\right] ^{2}-\frac{4\pi \alpha \omega }{\Phi c}\sin (\frac{a\omega }{c})}, \end{eqnarray} give the poles of the integrand in Eq. (\ref{Greenfunction}). The bottom of the polariton gap is determined by the condition $D(\omega )=0$, yielding in the long wave limit, $a\omega /c\ll 1$, for the corresponding frequency, $\omega _{l}$, \begin{equation} \omega _{l}^{2}\simeq \tilde{\Omega}_{0}^{2}-2\tilde{\Omega}_{0}^{2}d\frac{\sqrt{\Phi }a}{c}, \end{equation} where we introduce the parameters $d^{2}=4\pi \alpha /a$ and $\tilde{\Omega}_{0}^{2}=\Omega _{0}^{2}+2\Phi $, and take into account that the bandwidth of the polarization waves, $\Phi $, obeys the inequality $\sqrt{\Phi }a/c\ll 1$. The last term in this expression is the correction to the bottom of the polariton gap due to the interaction with the transverse electromagnetic field. Usually this correction is small, but it has an important theoretical and, in the case of strong enough spatial dispersion and oscillator strength, practical significance.\cite{Deych1} Because of this correction the polariton gap starts at the frequency where the determinant $D(\omega )$ becomes imaginary, while the functions $Q_{1,2}(\omega )$ are still less than $1$. This leads to the divergence of the right-hand side of Eq. (\ref{eigen_freq_int}) as $\omega $ approaches $\omega _{l}$, and, hence, to the absence of a threshold for the solution of this equation. This divergence is not a 1-{\em D} effect, since the same behavior is also found in the 3-{\em D} isotropic model.\cite{Deych1,Podolsky} The asymptotic form of Eq. (\ref{eigen_freq_int}) as $\omega \longrightarrow \omega _{l}$ in the 1-{\em D} case reads \begin{equation} \sqrt{\omega ^{2}-\omega _{l}^{2}}\sim \frac{\Delta \Omega ^{2}}{\sqrt{\Phi }}, \end{equation} and differs from the 3-{\em D} case by the factor $(d\omega _{l}a)/(c\sqrt{\Phi })$. The upper boundary of the gap, $\omega _{up}$, is determined by the condition $Q_{1}(\omega )=1$, leading to \begin{equation} \omega _{up}^{2}=\tilde{\Omega}_{0}^{2}+d^{2}. \label{wup} \end{equation} Eq. (\ref{eigen_freq_int}) also has a singularity as $\omega \longrightarrow \omega _{up}$, but this singularity is exclusively caused by the 1-{\em D} nature of the system. We will discuss local states that are not too close to the upper boundary in order to avoid manifestations of purely 1-{\em D} effects. For frequencies deeper inside the gap, Eq.
(\ref{eigen_freq}) can be simplified in the approximation of small spatial dispersion, $\sqrt{\Phi }a/c\ll 1$, to yield \begin{equation} \omega ^{2}=\tilde{\Omega}_{1}^{2}-\Delta \Omega ^{2}\left[ 1-\sqrt{\frac{\omega ^{2}-\tilde{\Omega}_{0}^{2}}{\omega ^{2}-\tilde{\Omega}_{0}^{2}+4\Phi }}\right] -d^{2}\frac{\omega a}{2c}\frac{\Delta \Omega ^{2}}{\sqrt{\left( \omega ^{2}-\tilde{\Omega}_{0}^{2}\right) \left( \tilde{\Omega}_{0}^{2}+d^{2}-\omega ^{2}\right) }}, \label{nodispersion} \end{equation} where $\tilde{\Omega}_{1}^{2}=\Omega _{1}^{2}+2\Phi $ is the fundamental ($k=0$) frequency of a chain composed of impurity atoms only. The two other terms in Eq. (\ref{nodispersion}) are corrections to this frequency due to the spatial dispersion and the interaction with the electromagnetic field, respectively. One can see that both corrections have the same sign and shift the local frequency into the region between $\tilde{\Omega}_{0}^{2}$ and $\tilde{\Omega}_{1}^{2}$. As we will see below, this fact is significant for the transport properties of the chain. Transmission through the system can be considered in the framework of the transfer-matrix approach. This method was adapted to the particular case of the system under consideration in Ref. \onlinecite{Deych2}. The state of the system is described by the vector $v_{n}$ with components ($P_{n}$, $P_{n+1}$, $E_{n}$, $E_{n}^{\prime }/k_{\omega }$), where $k_{\omega }=\omega /c$ is the vacuum wave number, which obeys the following difference equation: \begin{equation} v_{n+1}=T_{n}v_{n}. \label{EP} \end{equation} The transfer matrix $T_{n}$ describes the propagation of the vector between adjacent sites: \begin{equation} T_{n}=\left( \begin{array}{cccc} 0 & 1 & 0 & 0 \\ -1 & -\dfrac{\Omega _{n}^{2}-\omega ^{2}}{\Phi } & \dfrac{\alpha }{\Phi }\cos ka & \dfrac{\alpha }{\Phi }\sin ka \\ 0 & 0 & \cos ka & \sin ka \\ 0 & -4\pi k & -\sin ka & \cos ka \end{array} \right) , \label{T} \end{equation} where $k\equiv k_{\omega }$. Analytical calculation of the transmission coefficient in the situation considered is not feasible even in the case of a single impurity, because the algebra is too cumbersome. The problem, however, can be simplified considerably if one neglects the spatial dispersion of the polarization waves. In this case the $T$-matrix can be reduced to a $2\times 2$ matrix of the following form: \begin{equation} \tau _{n}=\left( \begin{array}{cc} \cos ka & \sin ka \\ -\sin ka+\beta _{n}\cos ka & \cos ka+\beta _{n}\sin ka \end{array} \right) , \label{T reduced} \end{equation} where the parameter \begin{equation} \beta _{n}=\frac{4\pi \alpha \omega }{c\left( \omega ^{2}-\Omega _{n}^{2}\right) } \nonumber \end{equation} represents the polarizability of the $n$-th atom due to its vibrational motion. In this case the complex transmission coefficient, $t$, can easily be expressed in terms of the elements of the resulting transfer matrix, $T^{(N)}=\prod_{n=1}^{N}\tau _{n}$: \begin{equation} t=\frac{2}{\left( T_{11}^{(N)}+T_{22}^{(N)}\right) -i\left( T_{12}^{(N)}-T_{21}^{(N)}\right) }e^{-ikL}. \label{transmcoeff} \end{equation} The problem is therefore reduced to the calculation of $T^{(N)}$. In the case of a single impurity, the product of the transfer matrices can be written in the following form: \begin{equation} T^{(N)}={\tau }^{N-n_{0}}\times \tau _{def}\times {\tau }^{n_{0}-1}, \label{iproduct} \end{equation} where the matrix $\tau _{def}$ describes the impurity atom with $\Omega _{n}=\Omega _{1}$, located at site $n_{0}$.
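The dispersionless $2\times 2$ description is immediately computable. The sketch below multiplies out Eq. (\ref{iproduct}) and evaluates $|t|^{2}$ from Eq. (\ref{transmcoeff}); the parameter values are again our illustrative choices (units with $a=\Omega _{0}=1$), selected so that $\Omega _{1}$ lies inside the gap $\Omega _{0}<\omega <\sqrt{\Omega _{0}^{2}+d^{2}}$. Scanning $\omega $ just below $\Omega _{1}$ reproduces the sharp asymmetric resonance discussed next.
\begin{verbatim}
import numpy as np

a, c, alpha = 1.0, 3.0e2, 0.02          # units with a = Omega_0 = 1
Omega0_sq, Omega1_sq = 1.0, 1.2          # host and defect site energies
N, n0 = 2000, 1000                       # chain length, defect site

def tau(omega, Omega_sq):
    """Reduced transfer matrix, Eq. (T reduced)."""
    ka = omega * a / c
    beta = 4 * np.pi * alpha * omega / (c * (omega**2 - Omega_sq))
    ck, sk = np.cos(ka), np.sin(ka)
    return np.array([[ck, sk], [-sk + beta * ck, ck + beta * sk]])

def T2(omega):
    """|t|^2 from Eq. (transmcoeff) with T^(N) of Eq. (iproduct)."""
    Th, Td = tau(omega, Omega0_sq), tau(omega, Omega1_sq)
    T = np.linalg.matrix_power(Th, N - n0) @ Td \
        @ np.linalg.matrix_power(Th, n0 - 1)
    t = 2.0 / ((T[0, 0] + T[1, 1]) - 1j * (T[0, 1] - T[1, 0]))
    return abs(t)**2

w = np.linspace(1.0944, 1.0954, 4000)    # window just below Omega_1
trans = [T2(x) for x in w]
print("peak |t|^2 = %.3f at omega = %.6f"
      % (max(trans), w[int(np.argmax(trans))]))
\end{verbatim}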
The matrix product in Eq. (\ref{iproduct}) is conveniently calculated in the basis where the matrix $\tau $ is diagonal. After some cumbersome algebra, one obtains for the complex transmission coefficient: \begin{equation} t=\frac{2e^{ikL}\exp \left( -\kappa L\right) }{\left[ 1-\frac{i}{\sqrt{R}}\left( 2-\beta \cot ka\right) \right] \left( 1+\varepsilon \right) +2i\exp \left( -\kappa L\right) \Gamma \cosh \left[ \kappa a\left( N-2n_{0}+1\right) \right] }, \label{transmission} \end{equation} where $R=\beta ^{2}+4\beta \cot (ak)-4$, $\Gamma =\varepsilon \beta /(\sin (ka)\sqrt{R})$, $\kappa $ is the imaginary wave number of the evanescent electromagnetic excitations, which determines the inverse localization length of the local state, and $\varepsilon =(\beta _{def}-\beta )/(2\sqrt{R})$. The last parameter describes the difference between the host atoms and the impurity, and is equal to \begin{equation} \varepsilon =\frac{2\pi \alpha }{c\sqrt{R}}\omega \frac{\left( \Omega _{1}^{2}-\Omega _{0}^{2}\right) }{\left( \omega ^{2}-\Omega _{0}^{2}\right) \left( \omega ^{2}-\Omega _{1}^{2}\right) }. \label{epsilon} \end{equation} We have also neglected here a contribution from the second eigenvalue of the transfer matrix, which is proportional to $\exp (-2\kappa L)$ and is exponentially small for sufficiently long chains. For $\varepsilon =0$, Eq. (\ref{transmission}) gives the transmission coefficient, $t_{0}$, of the pure system, \begin{equation} t_{0}=\frac{2e^{ikL}\exp \left( -\kappa L\right) }{1-\frac{i}{\sqrt{R}}\left( 2-\beta \cot ka\right) }, \label{tpure} \end{equation} exhibiting a regular exponential decay. At the lower boundary of the polariton gap, $\Omega _{0}$, the parameters $\beta $ and $\kappa $ diverge, leading to vanishing transmission at the gap edge regardless of the length of the chain. It is instructive to rewrite Eq. (\ref{transmission}) in terms of $t_{0}$: \begin{equation} t=\frac{t_{0}}{\left( 1+\varepsilon \right) +i\exp \left( -ikL\right) \Gamma t_{0}\cosh \left[ \kappa a\left( N-2n_{0}+1\right) \right] }. \label{transmission1} \end{equation} This expression describes the resonance tunneling of the electromagnetic waves through the chain with the defect. The resonance occurs when \begin{equation} 1+\varepsilon =0; \label{res_freq} \end{equation} the transmission in this case becomes independent of the system size. Substituting the definition of the parameter $\varepsilon $ given by Eq. (\ref{epsilon}) into Eq. (\ref{res_freq}), one arrives at an equation identical to Eq. (\ref{nodispersion}) for the frequency of the local polariton state, with the parameter of the spatial dispersion, $\Phi$, set to zero. The transmission takes its maximum value when the defect is placed in the middle of the chain, $N-2n_{0}+1=0$, and in this case \begin{equation} |t_{\max }|^{2}=\frac{1}{\Gamma ^{2}}\leq 1. \label{tmax} \end{equation} The width of the resonance is proportional to $\Gamma t_{0}$ and decreases exponentially with an increase of the system's size. In the long wave limit, $ak\ll 1$, Eq. (\ref{tmax}) can be rewritten in the following form: \begin{equation} |t_{\max }|^{2}=1-\left( 1-2\frac{\omega _{r}^{2}-\Omega _{0}^{2}}{d^{2}}\right) ^{2}, \label{tmaxlongwave} \end{equation} where $\omega _{r}$ is the resonance frequency satisfying Eq. (\ref{res_freq}). It is interesting to note that the transmission coefficient becomes exactly equal to one if the resonance frequency happens to occur exactly at the center of the polariton gap. This fact has a simple physical meaning.
For $\omega _{r}^{2}=\Omega _{0}^{2}+d^{2}/2$ the inverse localization length $\kappa $ becomes equal to the wave number $\omega _{r}/c$ of the incoming radiation. Owing to this fact, the field and its derivative inside the chain exactly match the field and the derivative of the incoming field, as though the optical properties of the chain were identical to those of vacuum. Consequently, the field propagates through the chain without reflection. Having solved the transmission problem, we can find the magnitude of the field inside the chain in terms of the incident amplitude $E_{in}$ at the resonance frequency. The spatial distribution of the field in the local polariton state has the form $E=E_{d}\exp \left( -|n-n_{0}|\kappa a\right) $. Matching this expression with the outgoing field, equal to $E_{in}t\exp (ikL)$, one obtains for the field amplitude at the defect atom, $E_{d}$, \begin{equation} E_{d}=E_{in}t\exp (-ikL)\exp \left[ (N-n_{0})\kappa a\right] . \label{defect_field} \end{equation} For $|t|$ of the order of one at the resonance, this expression describes a drastic exponential enhancement of the incident amplitude at the defect site due to the resonance tunneling. Equations (\ref{transmission1}) and (\ref{tmax}) demonstrate that the resonance tunneling via local polariton states is remarkably different from other types of resonance tunneling phenomena, such as electron tunneling via an impurity state\cite{electrontunneling} or through a double barrier. The most important fact is that the frequency profile of the resonance does not have the typical symmetric Lorentzian shape. At $\omega =\Omega _{1}$ the parameter $\varepsilon $ diverges, causing the transmission to vanish. At the same time, the resonance frequency $\omega _{r}$ is very close to $\Omega _{1}$, as follows from Eq. (\ref{nodispersion}). This results in a strongly asymmetric frequency dependence of the transmission, which is skewed toward lower frequencies. The transmission vanishes precisely at two frequencies: at the low frequency band edge $\Omega _{0}$ and at the frequency $\Omega _{1}$ associated with the vibrational motion of the defect atom. At the same time, the behavior of the transmission coefficient in the vicinities of these two frequencies is essentially different: at the band edge it behaves as $(\omega ^{2}-\Omega _{0}^{2})^{2}\exp \left( -1/\sqrt{\omega ^{2}-\Omega _{0}^{2}}\right) $, while at the defect frequency the transmission goes to zero as $(\omega ^{2}-\Omega _{1}^{2})^{2}$. These facts can be used to predict several effects that should occur as the concentration of the defects increases. First, with an increase of the concentration of the impurities the frequency $\Omega _{1}$ eventually becomes the boundary of a new polariton gap, which is fully formed when all the original host atoms are replaced by defect atoms. One can then conclude that the zero of the transmission at $\Omega _{1}$, instead of being washed out by the disorder, actually becomes more singular. More precisely, one should expect that the frequency dependence of the transmission in the vicinity of $\Omega _{1}$ will exhibit a crossover from the simple power-law decrease to the behavior with the exponential singularity associated with a band edge.
Second, if one takes into account factors such as spatial dispersion or damping, which prevent the transmission from vanishing exactly, one should expect that the above-mentioned crossover to the more singular behavior would manifest itself as a substantial decrease of the transmission in the vicinity of $\Omega _{1}$ with an increase of the concentration. Numerical calculations discussed in the next section of the paper show that this effect does take place even at rather small concentrations of the defects. Resonance tunneling is very sensitive to the presence of relaxation, which can be accounted for phenomenologically by adding $2i\gamma \omega $ to the denominator of the polarizability $\beta $, where $\gamma $ is an effective relaxation parameter. This makes the parameter $\varepsilon $ complex valued, leading to two important consequences. First, the resonance condition becomes ${\rm Re}(\varepsilon )=-1$, and it can be fulfilled only if the relaxation is small enough. Second, the imaginary part of $\varepsilon $ prevents the exponential factor $t_{0}$ in Eq. (\ref{transmission1}) from canceling out at the resonance. This restricts the length of the system in which the resonance can occur and limits the enhancement of the field at the defect. These restrictions, though, are not specific to the system under consideration; they affect the experimental manifestation of any type of resonant tunneling phenomenon. Since we are only concerned with the frequency region in the vicinity of $\Omega _{1}$, the real, $\varepsilon _{1}$, and imaginary, $\varepsilon _{2}$, parts of $\varepsilon $ can be approximately found as \begin{equation} \varepsilon _{1}\simeq d^{2}\frac{\Omega _{1}a}{2c}\sqrt{\frac{\Delta \Omega ^{2}}{d^{2}-\Delta \Omega ^{2}}}\frac{\omega ^{2}-\Omega _{1}^{2}}{\left( \omega ^{2}-\Omega _{1}^{2}\right) ^{2}+4\gamma ^{2}\omega ^{2}}, \label{Re_eps} \end{equation} \begin{equation} \varepsilon _{2}\simeq \frac{2\gamma \omega }{\omega ^{2}-\Omega _{1}^{2}}\varepsilon _{1}. \label{Im_eps} \end{equation} It follows from Eq. (\ref{Re_eps}) that the resonance occurs only if $(4\gamma c)/(ad^{2})<1$. This inequality has a simple physical meaning: it ensures that the distance between the resonance frequency, $\omega _{r}$, and $\Omega _{1}$, where the transmission goes to zero, is greater than the relaxation parameter, $\gamma $. This is a rather strict condition that can only be satisfied for high frequency oscillations with a large oscillator strength in crystals with a large interatomic spacing, $a$. The spatial dispersion, however, makes the conditions for the resonant tunneling much less restrictive. In order to estimate the effect of the dissipation in the presence of the spatial dispersion, one can rely upon Eq. (\ref{transmission1}), assuming that the dispersion only modifies the parameter $\varepsilon $ but does not affect the general expression for the transmission. This assumption is justified by the numerical results of Ref. \onlinecite{Deych2} and of the present paper, which show that the transmission properties in the presence of the spatial dispersion do not differ significantly from the analytical calculations performed for the chain of noninteracting dipoles. According to Eq. (\ref{nodispersion}), the inter-atomic interaction moves the resonance frequency further away from $\Omega _{1}$, reducing the influence of the damping and leading to a weaker inequality: $(\gamma \Omega _{1})/\Phi <1$.
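The two damping criteria are trivial to evaluate, and Eqs. (\ref{Re_eps})-(\ref{Im_eps}) give the damped $\varepsilon $ in closed form near $\Omega _{1}$. A minimal sketch (our illustrative parameter values in units with $a=\Omega _{0}=1$; of the two mathematical roots of ${\rm Re}(\varepsilon )=-1$, the physical resonance is the one farther from $\Omega _{1}$):
\begin{verbatim}
import numpy as np

a, c, Phi = 1.0, 3.0e2, 0.05
d_sq, DeltaOmega_sq = 4 * np.pi * 0.02 / a, 0.2
Omega1 = np.sqrt(1.0 + DeltaOmega_sq)
gamma = 1.0e-5                          # relaxation parameter

print("resonance survives (no dispersion):", 4 * gamma * c / (a * d_sq) < 1)
print("resonance survives (dispersion)  :", gamma * Omega1 / Phi < 1)

def eps1(omega):
    """Re(epsilon) near Omega_1, Eq. (Re_eps)."""
    den = (omega**2 - Omega1**2)**2 + 4 * gamma**2 * omega**2
    return (d_sq * Omega1 * a / (2 * c)
            * np.sqrt(DeltaOmega_sq / (d_sq - DeltaOmega_sq))
            * (omega**2 - Omega1**2) / den)

w = np.linspace(0.99 * Omega1, Omega1 * (1 - 1e-9), 200000)
v = eps1(w) + 1.0
print("Re(eps) = -1 near omega =", w[np.where(np.diff(np.sign(v)))[0]])
\end{verbatim}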
The latter inequality can be easily fulfilled, even for phonons with a relatively small negative spatial dispersion. For the imaginary part $\varepsilon _{2}$ at the resonance one obtains from Eq. (\ref{Im_eps}) the following estimate: \begin{equation} \varepsilon _{2}\sim \min [(4\gamma c)/(ad^{2}),(\gamma \Omega _{1})/\Phi ]. \end{equation} The requirement that $\varepsilon _{2}$ be much smaller than $t_{0}$ leads to the following restriction on the length of the system: $L\ll (1/\kappa )\left| \ln \varepsilon _{2}\right| $, with $\varepsilon _{2}$ given above. The maximum value of the field at the defect site, attainable for the defect located in the center of the chain, is then found as $|E_{d}|\sim |E_{in}||t|/\sqrt{\varepsilon _{2}}$. \section{One-dimensional dipole chain with finite concentration of impurities} In this section we present the results of numerical Monte-Carlo simulations of the transport properties of the system under consideration in the case of randomly distributed identical defects. If spatial dispersion is taken into account, the regular Maxwell boundary conditions must be complemented by additional boundary conditions regulating the behavior of the polarization $P$ at the ends of the chain. In our previous paper\cite{Deych2} we calculated the transmission for two types of boundary conditions: $P_{0}=P_{N}=0$, which corresponds to fixed ends of the chain, and $P_{0}=P_{1}$, $P_{N-1}=P_{N}$, which corresponds to relaxed ends. We reported in Ref. \onlinecite{Deych2} that the transmission is very sensitive to the boundary conditions, with fixed ends being much more favorable for the resonance. Our present numerical results, obtained with an improved numerical procedure, and the analytical calculations do not confirm this dependence of the resonant tunneling upon the boundary conditions. In the case of a single impurity we find that for both types of boundary conditions the transmission demonstrates a sharp resonance similar to that found in Ref. \onlinecite{Deych2} for fixed ends. Similarly, for a finite concentration of impurities we did not find any considerable differences in the transmission between the two types of boundary conditions. We conclude that the actual form of the boundary conditions is not significant for the resonant tunneling. The transfer-matrix equation, Eq. (\ref{EP}), along with the definition of the transfer matrix, Eq. (\ref{T}), and the boundary conditions, chosen in the form of fixed terminal points, provides the basis for our computations. However, it turns out that a straightforward use of Eq. (\ref{EP}) in the gap region is not possible because of underflow errors arising when one pair of eigenvalues of the transfer matrix becomes exponentially greater than the other. In order to overcome this problem we developed a new computational approach based upon a blend of the transfer-matrix method with the ideas of invariant embedding. The central element of the method is a $4\times 4$ matrix $S(N)$ that depends upon the system size $N$. The complex transmission coefficient, $t$, is expressed in terms of the elements of this matrix as \begin{equation} t=2\exp (-ikL)(S_{11}+S_{12}). \label{tfromS} \end{equation} The matrix $S(N)$ is determined by the following nonlinear recursion: \begin{equation} S(N+1)=T_{N}\times \Xi (N)\times S(N), \label{S} \end{equation} where the matrix $\Xi (N)$ is given by \begin{equation} \Xi (N)=\{I-S(N)\times H\times \left[ I-T(N)\right] \}^{-1}. \label{F} \end{equation} The initial condition to Eq. (\ref{S}) is given by \begin{equation} S(0)=(G+H)^{-1}, \label{S_init} \end{equation} where the matrices $G$ and $H$ are specified by the boundary conditions. The derivation of Eqs. (\ref{tfromS})-(\ref{S_init}) and a more detailed discussion of the method are given in the Appendix to the paper. Tests of the algorithm based upon the recursion formula (\ref{S}) show that the method provides accurate results for transmission coefficients as small as $10^{-15}$.
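In code, the recursion is only a few lines; the sketch below is our schematic transcription of Eqs. (\ref{tfromS})-(\ref{S_init}). The boundary-condition matrices $G$ and $H$ (given explicitly in the Appendix) enter as inputs, and the last line uses the $1$-based index notation $S_{11}$, $S_{12}$ of Eq. (\ref{tfromS}). The practical point is that the update keeps every entry of $S$ bounded, avoiding the underflow that kills the bare product of transfer matrices deep inside the gap.
\begin{verbatim}
import numpy as np

def transmission_S(T_list, G, H, k, L):
    """Invariant-embedding recursion:
      S(0)   = (G + H)^(-1),                 Eq. (S_init)
      Xi(N)  = [I - S(N) H (I - T_N)]^(-1),  Eq. (F)
      S(N+1) = T_N Xi(N) S(N),               Eq. (S)
      t      = 2 exp(-ikL) (S_11 + S_12).    Eq. (tfromS)
    T_list holds the 4x4 matrices T_n of Eq. (T); G, H encode the
    boundary conditions (placeholders here; see the Appendix)."""
    I = np.eye(4, dtype=complex)
    S = np.linalg.inv(G + H)
    for T in T_list:
        Xi = np.linalg.inv(I - S @ H @ (I - T))
        S = T @ Xi @ S
    return 2.0 * np.exp(-1j * k * L) * (S[0, 0] + S[0, 1])
\end{verbatim}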
In our simulations we fix the concentration of the defects and randomly distribute them among the host atoms. The total number of atoms in the chain is also fixed; the results presented below are obtained for a chain consisting of $1000$ atoms. For the chosen defect frequency, $\Omega _{1}\simeq 1.354\Omega _{0}$, the localization length of the local polariton state, $l_{ind}$, is approximately equal to $150$ interatomic distances. The transmission coefficient is found to be extremely sensitive to the particular arrangement of defects in a realization, exhibiting strong fluctuations from one realization to another. Therefore, in order to reveal the general features of the transmission independent of the particular positions of the defects, we average the transmission over $1000$ different realizations. We have also calculated the averaged Lyapunov exponent (the inverse of the localization length, $l_{chain}$, characterizing transport through the entire chain) to verify that the averaged transmission reveals reliable information about the transport properties of the system. The results of the computations are presented in the figures below. Figs. 1-3 show the evolution of the transmission with an increase of the concentration of the impurities. In Fig. 1 one can see the change of the transport properties at small concentrations, up to 1\%. The curve labeled (1) shows, essentially, the single-impurity behavior averaged over random positions of the defect. With an increase of the concentration there is a greater probability for two (or more) defects to form a cluster, resulting in the splitting of a single resonance frequency into two or more frequencies. The double-peak structure of the curves (2) and (3) reflects these cluster effects. With a further increase of the concentration the average cluster size grows, leading to multiple resonances with the distances between adjacent resonance frequencies too small to be distinguished. Curve (5) in Fig. 1 reflects this transformation, which marks a transition between individual tunneling resonances and the defect-induced band. The concentration in this transition region is such that the average distance between the defects is equal to the localization length of the individual local states, $l_{ind}$. The collective localization length at the frequency of the transmission peak, $l_{chain}^{\max }$, becomes equal to the length of the chain at approximately the same concentration, which suggests a simple relationship between the two lengths: $l_{chain}^{\max }=cl_{ind}$, where $c$ stands for the concentration. The numerical results presented in Fig. 4 clearly demonstrate this linear concentration dependence of $l_{chain}^{\max }$ at small concentrations. For larger concentrations one can see from Figs. 2 and 3 that the peak of the transmission coefficient develops into a broad structure. This marks the further development of the defect pass band. Curves in Fig.
2 show the transmission coefficient at intermediate concentrations, where the localization length, $l_{chain}$, exceeds the length of the system only in a small frequency region around the maximum of the transmission, while Fig. 3 presents a well-developed pass band with a multipeak structure resulting from geometrical resonances at the boundaries of the system. These figures reveal an important feature of the defect polariton band: its right edge does not move with an increase of the concentration. The frequency of this boundary is exactly equal to the defect frequency, $\Omega _{1}$ (which is normalized by $\Omega _{0}$ in the figures), and the entire band develops to the left of $\Omega _{1}$, in complete agreement with the arguments based upon the analytical solution of the single-impurity problem. Moreover, the magnitude of the transmission in the vicinity of $\Omega _{1}$ decreases with an increase of the concentration, also in agreement with our remarks at the end of the previous section. Fig. 5 presents the inverse localization length, $l_{chain}^{-1}$, normalized by the length of the chain, for three different concentrations. It can be seen that $l_{chain}^{-1}(\Omega _{1})$ grows significantly with an increase of the concentration, reaching a value of approximately $17/L$ at a concentration as small as $3\%$. Such a small localization length corresponds to a transmission of the order of $10^{-17}$, which is practically zero in our computations. A further increase of the concentration does not change the minimum localization length. These results present an interesting example of the defects building up a boundary of a new forbidden gap. This figure also shows the development of the pass band to the left of $\Omega _{1}$, presented above in Figs. 1-3, but on a larger scale. We cannot distinguish here the details of the frequency dependence, but the transition from the single-resonance behavior to the pass band, marked by the significant flattening of the curve, is clear. Finally, Fig. 6 presents the concentration dependence of the semiwidth, $\delta _{\omega }$, of the defect band. The semiwidth is defined as the difference between the frequency of the maximum transmission and the right edge of the band. One can see that all the points form a smooth line with no indication of a change of the dependence at the transition between the different transport regimes. Attempts to fit this curve showed that it is excellently described by the power law $\delta _{\omega }\propto c^{\nu }$ with $\nu \simeq 0.8$ in the entire concentration range studied. The reason for this behavior, and why it is insensitive to the change of the character of the transport, requires further study. \section{Conclusion} In this paper, we considered one-dimensional resonance tunneling of scalar ``electromagnetic waves'' through an optical barrier caused by a polariton gap. The tunneling is mediated by local polariton states arising due to defect atoms embedded in an otherwise ideal periodic chain. We also numerically studied how a defect-induced propagating band emerges from these resonances when the concentration of defects increases. It is important to emphasize the difference between the situation considered in our paper and other types of tunneling phenomena discussed in the literature. The tunneling of electromagnetic waves through photonic crystals and electron tunneling, despite all the differences between these phenomena, share one common feature.
In both cases, the resonance occurs due to defects that have dimensions comparable with the wavelengths of the respective excitations (electrons interact with atomic impurities, and long-wave electromagnetic waves interact with macroscopic distortions of the photonic crystals). In our case the wavelength of the propagating excitations is many orders of magnitude greater than the dimensions of the atomic defects responsible for the resonance. The physical reason for such an unusual behavior lies in the nature of local polaritons. These states are formed owing to the presence of internal polariton-forming excitations. The spatial extent of these states is much larger than the geometrical dimensions of the atomic defects and is comparable to the wavelength of the incident radiation. We presented an exact analytical solution of the problem of tunneling of electromagnetic waves through a chain of noninteracting atoms with a single defect. This solution provides insight into the nature of the phenomenon under consideration and allows one to obtain an explicit expression for the magnitude of the electromagnetic field at the defect site. The expression derived demonstrates that the field is strongly enhanced at the resonance, with its magnitude growing exponentially with an increase of the length of the system. This effect is an electromagnetic analogue of the charge accumulation in the case of electron tunneling, where it is known to cause interesting nonlinear phenomena.\cite{Penley,Azbel,Goldman-1,Goldman-2,Goldman-3} The analytical solution of the single-defect problem allowed us to make predictions regarding the transport properties of the system with multiple randomly located defects. The most interesting of these is that the dynamical frequency of the defects, $\Omega _{1}$, sets the high-frequency boundary of the defect-induced pass band, which does not move with increasing concentration of defects. Numerical Monte-Carlo simulations confirmed this prediction and showed that the direct interaction between the atoms (spatial dispersion) does not affect the resonance tunneling considerably, though it adds new interesting features to it. One of them is the behavior of the transmission in the vicinity of $\Omega _{1}$. In the absence of the spatial dispersion, the transmission at this point is exactly equal to zero, and it remains small when the interaction is taken into account. The interesting fact revealed by the numerical analysis is that the transmission at $\Omega _{1}$ decreases with an increase in the concentration of the defects and approaches zero at concentrations as small as $3\%$. This fact can be understood in light of the transfer-matrix approach: if the frequency $\Omega _{1}$ corresponds to an eigenvalue of the defect's transfer matrix that differs significantly from one, the transmission diminishes strongly each time the wave encounters a defect site, regardless of the order in which the defects are arranged. The numerical results also demonstrated a transition between two transport regimes: one associated with resonance tunneling, and the other occurring when the resonances spatially overlap and a pass band of extended states emerges. The transition occurs when the average distance between the defects becomes equal to the localization length of the single local state. At the same time, the collective localization length at the peak transmission frequency, characterizing the transport properties of the entire chain, becomes equal to the total length of the system.
This result suggests a linear dependence of the collective localization length upon concentration, which we directly confirmed for small concentrations. The numerical results also showed that the width of the resonance, which develops into a pass band with an increase in concentration, does not manifest any transformation when the character of the transport changes. The concentration dependence of the width was found to be extremely well described by a power law with an exponent approximately equal to $0.8$. The nature of this behavior awaits an explanation.
\section{Introduction} \label{sect:intro} Radially pulsating stars like Cepheids, RR Lyrae stars, or Miras are important distance indicators in the local universe. The presence of such a variable star in an eclipsing binary system offers a unique opportunity to derive fundamental astrophysical parameters of the pulsating component with few model assumptions. Furthermore, eclipsing binaries provide a very good means to independently calibrate distance determination methods based on pulsating stars, by comparison with the distance obtained from the binary star analysis. Until now only a few classical Cepheids have been identified in eclipsing binaries (Pietrzy{\'n}ski et al. 2010, 2011) and one system with a pseudo RR Lyrae component has been reported (Pietrzy{\'n}ski et al. 2012); there are also candidates for such systems that await confirmation. The typical eclipsing binary star model consists of fixed-size stars. To account for pulsations one usually: 1) modifies an existing modeling tool, 2) develops a new computer code, or 3) removes pulsations from the light and radial velocity curves and solves them with an ordinary eclipsing binary star model. The first approach was used by Wilson \& van Hamme (2010) in the case of the well-known Wilson-Devinney code (Wilson \& Devinney 1971; hereafter WD code), but only a phenomenological model was reported. The second approach was employed by the MACHO project for eclipsing binary Cepheids (Alcock et al. 2002, Lepischak et al. 2004); however, the code was restricted to light curve analysis only, and a rather simplistic treatment of stellar surfaces was used (e.g. no proximity and reflection effects were accounted for). The third method seems to be the most common and is used in the case of non-radial pulsators like $\delta$-Scuti stars (e.g. Southworth et al. 2011) or $\gamma$-Doradus stars (e.g. Maceroni et al. 2013). The drawback of such an approach is that during eclipses pulsations can be removed only approximately from the light curves, which produces some systematic residuals in the solution. A way to partly overcome this difficulty is to employ an iterative light curve solution with the amplitude of the pulsations scaled according to the relative light contribution of the pulsating star during eclipses. This method was applied by Pietrzy{\'n}ski et al. (2010, 2011). To deal fully with changes in the eclipse geometry caused by pulsations, a novel approach is needed in which both spectroscopic and photometric data are treated consistently. We present here a new method of modeling eclipsing binaries with radially pulsating components. Instead of developing a new code we decided to use a well-known and thoroughly tested computer model called JKTEBOP (Popper \& Etzel 1981, Southworth 2004, 2007) as the core of our method. A Python-based wrapper that we prepared generates binary light curves with pulsations taken into account, using the original JKTEBOP code without any modifications. A similar methodology was proposed by Riazi \& Abedi (2006) in the case of the WD code, but only for illustrative purposes. The method was applied to the case of the eclipsing binary Cepheid OGLE-LMC-CEP-0227 (Pietrzy{\'n}ski et al. 2010, Soszy{\'n}ski et al. 2008) in the Large Magellanic Cloud. One of the main reasons to develop our approach was to directly determine the projection factor ($p$-factor) for the Cepheid from the eclipsing binary analysis. The $p$-factor is defined as the conversion factor between the observed pulsation radial velocities and the velocity of the pulsating star's photosphere.
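In practice the $p$-factor enters the modeling when the disentangled pulsational radial velocity curve is converted into the photospheric radius variation, $\Delta R(t)=-p\int RV_{p}(t^{\prime })\,dt^{\prime }$, where $RV_{p}$ denotes the pulsational component of the radial velocity. A minimal Python sketch of this step (the sign convention and the trapezoidal integration are our illustrative choices, not a prescription from the JKTEBOP wrapper):
\begin{verbatim}
import numpy as np

def radius_displacement(t, rv_p, p_factor):
    """Delta R(t) = -p * integral of RV_p dt (trapezoidal rule).
    t in days, rv_p in km/s; returns Delta R in solar radii."""
    KM_PER_DAY_IN_RSUN = 86400.0 / 6.957e5
    areas = 0.5 * (rv_p[1:] + rv_p[:-1]) * np.diff(t)
    dR = -p_factor * np.concatenate(([0.0], np.cumsum(areas)))
    return dR * KM_PER_DAY_IN_RSUN
\end{verbatim}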
The $p$-factor plays a crucial role in Baade-Wesselink (Baade 1926, Wesselink 1946) type methods applied to pulsating stars like Cepheids. Its exact value and its functional dependence on e.g. the pulsation period are currently actively debated -- see Section~\ref{sub:factor} in this paper for references. In our opinion, the presented method allows for a robust determination of the $p$-factor for pulsating components of detached eclipsing binary systems. Three groups were involved in the preparation of this manuscript, namely the Araucaria project (data, software, analysis), the Carnegie Hubble Project (CHP; data) and the OGLE project (data). The research was based on observations obtained for ESO Programmes 086.D-0103(A), 085.D-0398(A), 084.D-0640(A,B) and CNTAC time allocation 2010B-059. \section{Data} \label{sect:data} Before starting the analysis we had to make sure that we had good enough data to obtain reliable results. When the discovery of the object was announced by Soszy{\'n}ski et al. (2008), the eclipses were only sparsely covered by the photometry and this analysis would not have been possible. Since then, as a result of a dedicated observing program, a strong emphasis has been put on measuring the brightness changes during the eclipses. \begin{table} \caption{OGLE V-band photometry sample (the full version is available on-line). The errors are scaled to match the condition that the reduced $\chi^2$ is equal to 1.} \centering \begin{tabular}{c|c|c} \hline $HJD - 2450000$~d & V [mag] & error [mag] \\ \hline 3001.64990 & 14.984 & 0.008 \\ 3026.74985 & 15.484 & 0.008 \\ 3331.74155 & 15.023 & 0.008 \\ 3341.74543 & 15.690 & 0.008 \\ 3355.73941 & 15.347 & 0.008 \\ 3359.66848 & 15.316 & 0.008 \\ 3365.64874 & 15.419 & 0.008 \\ ... & ... & ... \\ \hline \label{tab:phot_v} \end{tabular} \end{table} \begin{table} \caption{OGLE I-band photometry sample (the full version is available on-line). The errors are scaled to match the condition that the reduced $\chi^2$ is equal to 1.} \centering \begin{tabular}{c|c|c} \hline $HJD - 2450000$~d & I [mag] & error [mag] \\ \hline 2166.83748 & 14.353 & 0.007 \\ 2172.88623 & 14.561 & 0.007 \\ 2189.84343 & 14.365 & 0.007 \\ 2212.79165 & 14.377 & 0.007 \\ 2217.77657 & 14.492 & 0.007 \\ 2223.79686 & 14.345 & 0.007 \\ 2226.77167 & 14.303 & 0.007 \\ ... & ... & ... \\ \hline \label{tab:phot_i} \end{tabular} \end{table} \begin{table} \caption{Spitzer 3.6$\mu$m photometry sample (the full version is available on-line). The errors are scaled to match the condition that the reduced $\chi^2$ is equal to 1.} \centering \begin{tabular}{c|c|c} \hline $HJD - 2450000$~d & 3.6$\mu$m [mag] & error [mag] \\ \hline 5813.54149 & 13.229 & 0.007 \\ 5813.99999 & 13.264 & 0.007 \\ 5814.57068 & 13.299 & 0.007 \\ 5814.90875 & 13.262 & 0.007 \\ 5815.55775 & 13.197 & 0.007 \\ 5816.05859 & 13.225 & 0.007 \\ 5816.58069 & 13.239 & 0.007 \\ ... & ... & ... \\ \hline \label{tab:phot_36} \end{tabular} \end{table} In total we acquired 1045 measurements in the I-band and 317 in the V-band, collected with the Warsaw telescope by the OGLE project (Udalski 2003, Soszy{\'n}ski et al. 2012) and during the time granted to the Araucaria project by the CNTAC. The auxiliary K-band data (only outside eclipses) were acquired by the Araucaria group using the SOFI instrument attached to the NTT telescope at La Silla Observatory, which allowed us to use the V-K color variation to calculate the effective temperature as a function of the pulsation phase.
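The error scaling quoted in the captions of Tables \ref{tab:phot_v}-\ref{tab:phot_36} is the usual renormalization: after a preliminary fit, the photometric uncertainties are multiplied by a common factor chosen so that the reduced $\chi ^{2}$ of the fit equals unity. A minimal sketch of that step (assuming residuals from some previous model fit):
\begin{verbatim}
import numpy as np

def rescale_errors(residuals, errors, n_free_params):
    """Scale uncertainties so the reduced chi^2 of the fit equals 1."""
    dof = residuals.size - n_free_params
    chi2_red = np.sum((residuals / errors)**2) / dof
    return errors * np.sqrt(chi2_red)
\end{verbatim}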
We have also acquired $3.6\,\mu$m and $4.5\,\mu$m photometry from the Spitzer Space Telescope (114 points); the observations and data reduction were provided by the CHP team. Because in the near-infrared the stellar limb darkening is low and the amplitude of the pulsations is small, these observations put an important constraint on the geometry of the system. Fig.~\ref{fig:obs_all} presents all the photometric data used in our analysis. The {\it Spitzer} data were collected for two consecutive eclipses, i.e. for one primary and one secondary eclipse, and for one pulsation cycle outside the eclipses to obtain the unaffected pulsational light curve. In the analysis we used only the $3.6\,\mu$m photometry, because the out-of-eclipse observations in the $4.5\,\mu$m band were at this point too scarce, and of too low a signal-to-noise ratio, to obtain a correct representation of the pulsations that could subsequently be used in the modeling. We plan to complement these data in the future, however. All the photometry used in this paper is provided in Tables \ref{tab:phot_v}-\ref{tab:phot_36} and in electronic form at: \centerline{ http://araucaria.astrouw.edu.pl/p/cep227 } \centerline{} \begin{table*} \caption{Radial velocity measurements of CEP-0227. HARPS spectra are marked with $^{\textstyle a}$ and UVES spectra are marked with $^{\textstyle b}$; unmarked entries are MIKE spectra.} \begin{tabular}{@{}l|c|c|c|l|c|c|c|l|c|c|c@{}} \hline HJD & $RV_1$ & $RV_2$ & $RV_p$ & HJD & $RV_1$ & $RV_2$ & $RV_p$ & HJD & $RV_1$ & $RV_2$ & $RV_p$ \\ -2450000 d & (km/s) & (km/s) & (km/s) & -2450000 d & (km/s) & (km/s) & (km/s) & -2450000 d & (km/s) & (km/s) & (km/s) \\\hline 4810.76470$^{\textstyle a}$ & 288.44 & 223.94 & 3.40 & 5148.72671 & 292.06 & 219.71 & 3.50 & 5457.87258 & 293.69 & 220.22 & 10.72 \\ 4811.58524$^{\textstyle a}$ & 288.94 & 223.57 & 15.08 & 5149.70121 & 291.62 & 219.80 & 17.26 & 5459.73087$^{\textstyle b}$ & 292.05 & 219.63 & -2.22 \\ 4854.60858 & 285.72 & 225.99 & -26.84 & 5149.81700$^{\textstyle b}$ & 292.10 & 220.04 & 18.69 & 5459.82242 & 292.77 & 220.81 & 0.19 \\ 4854.76634 & 287.31 & 226.63 & -23.86 & 5150.66019$^{\textstyle b}$ & 291.49 & 219.98 & -26.65 & 5459.87734 & 292.35 & 220.74 & 0.84 \\ 4855.62840 & 286.57 & 226.81 & -7.73 & 5150.81070$^{\textstyle b}$ & 290.88 & 220.03 & -26.82 & 5459.88346$^{\textstyle b}$ & 291.38 & 219.94 & -0.02 \\ 4882.54381 & 270.06 & 243.88 & -2.23 & 5151.62274$^{\textstyle b}$ & 291.74 & 220.89 & -12.04 & 5460.86428$^{\textstyle b}$ & 291.75 & 220.58 & 14.85 \\ 5129.67036 & 292.48 & 220.78 & 3.26 & 5151.72685$^{\textstyle b}$ & 291.48 & 220.84 & -9.99 & 5461.71818$^{\textstyle b}$ & 291.33 & 220.31 & 1.93 \\ 5129.68627 & 292.43 & 220.70 & 3.42 & 5152.62706$^{\textstyle b}$ & 291.15 & 220.70 & 5.38 & 5461.84896$^{\textstyle b}$ & 291.24 & 220.97 & -17.29 \\ 5129.77828 & 292.57 & 220.69 & 4.90 & 5152.71553$^{\textstyle b}$ & 290.87 & 221.07 & 6.94 & 5462.74956$^{\textstyle b}$ & 291.19 & 220.87 & -17.54 \\ 5129.79450 & 292.60 & 220.68 & 5.20 & 5155.60958 & 290.48 & 222.21 & -7.87 & 5463.73892$^{\textstyle b}$ & 290.50 & 221.10 & 1.27 \\ 5129.85284 & 292.72 & 220.89 & 6.39 & 5155.71877 & 290.45 & 222.26 & -5.61 & 5463.87652$^{\textstyle b}$ & 290.72 & 221.30 & 3.56 \\ 5130.67281 & 292.89 & 220.30 & 17.69 & 5155.71895 & 290.02 & 222.31 & -6.03 & 5464.73657$^{\textstyle b}$ & 290.47 & 221.45 & 15.64 \\ 5130.68789 & 292.62 & 220.38 & 17.69 & 5155.84086 & 290.82 & 222.69 & -3.35 & 5464.87024$^{\textstyle b}$ & 290.30 & 221.44 & 17.73 \\ 5131.67796 & 293.56 & 220.37 & -25.20 & 5167.78617$^{\textstyle b}$ &
285.18 & 227.84 & 5.46 & 5465.70513$^{\textstyle b}$ & 290.12 & 221.67 & -22.52 \\ 5131.69389 & 293.32 & 220.18 & -25.51 & 5169.59889$^{\textstyle b}$ & 284.39 & 228.61 & -24.86 & 5465.81925$^{\textstyle b}$ & 289.79 & 221.63 & -26.61 \\ 5131.78765 & 292.96 & 220.14 & -25.60 & 5174.84169$^{\textstyle b}$ & 280.30 & 231.55 & -4.09 & 5466.70871$^{\textstyle b}$ & 289.60 & 222.23 & -14.11 \\ 5131.80356 & 292.42 & 220.19 & -26.06 & 5185.53522$^{\textstyle a}$ & 273.67 & 237.70 & -18.20 & 5466.81183$^{\textstyle b}$ & 289.54 & 222.59 & -11.69 \\ 5131.85265 & 293.03 & 220.25 & -25.19 & 5185.65259$^{\textstyle a}$ & 274.36 & 239.12 & -15.18 & 5467.75339$^{\textstyle b}$ & 289.34 & 222.65 & 4.92 \\ 5131.86398 & 292.36 & 220.00 & -25.80 & 5185.79925$^{\textstyle a}$ & 273.48 & 238.98 & -12.38 & 5468.69682$^{\textstyle b}$ & 288.61 & 222.95 & 17.88 \\ 5132.67972 & 292.99 & 220.00 & -10.91 & 5187.53794$^{\textstyle a}$ & 273.26 & 239.88 & 16.16 & 5468.85045$^{\textstyle b}$ & 290.13 & 222.85 & 20.08 \\ 5132.69356 & 292.98 & 220.06 & -10.62 & 5187.68699$^{\textstyle a}$ & 271.54 & 239.89 & 16.97 & 5469.68392$^{\textstyle a}$ & 289.74 & 224.34 & -25.09 \\ 5132.69358 & 293.54 & 220.02 & -10.05 & 5187.80119$^{\textstyle a}$ & 272.91 & 240.05 & 19.16 & 5477.79364$^{\textstyle a}$ & 283.78 & 227.69 & -19.80 \\ 5132.76078 & 293.15 & 220.21 & -9.00 & 5251.52915 & 238.30 & 273.28 & 8.94 & 5477.87033$^{\textstyle a}$ & 284.65 & 227.85 & -18.25 \\ 5132.77322 & 293.25 & 220.18 & -8.64 & 5251.52920 & 238.60 & 273.91 & 9.24 & 5478.87548$^{\textstyle a}$ & 283.67 & 228.16 & 0.45 \\ 5132.85291 & 293.04 & 220.14 & -7.12 & 5251.59065 & 238.78 & 274.38 & 10.58 & 5479.78887$^{\textstyle a}$ & 283.59 & 228.98 & 14.66 \\ 5132.86601 & 293.05 & 220.40 & -6.84 & 5251.64129 & 238.57 & 274.67 & 11.27 & 5479.87635$^{\textstyle a}$ & 284.03 & 228.66 & 15.73 \\ 5132.87916 & 292.97 & 220.19 & -6.64 & 5251.64131 & 238.84 & 274.60 & 11.54 & 5502.84368$^{\textstyle a}$ & 269.47 & 243.35 & 18.43 \\ 5141.77686$^{\textstyle b}$ & 293.27 & 219.22 & 14.23 & 5251.69466 & 238.54 & 274.81 & 12.14 & 5559.58184$^{\textstyle b}$ & 239.06 & 272.86 & 14.74 \\ 5144.62452$^{\textstyle a}$ & 292.97 & 219.09 & -1.41 & 5251.69469 & 238.96 & 274.91 & 12.56 & 5560.80361$^{\textstyle b}$ & 238.73 & 274.02 & -26.25 \\ 5144.68958$^{\textstyle a}$ & 293.58 & 219.16 & 0.46 & 5251.74836 & 238.41 & 275.08 & 12.86 & 5561.81170$^{\textstyle b}$ & 237.77 & 273.55 & -10.43 \\ 5144.76237$^{\textstyle a}$ & 293.90 & 219.11 & 2.15 & 5272.50251 & 232.83 & 280.22 & -23.32 & 5582.55697$^{\textstyle b}$ & 232.90 & 280.11 & 18.03 \\ 5145.64671$^{\textstyle a}$ & 292.38 & 219.63 & 13.99 & 5272.55269 & 232.64 & 280.70 & -22.32 & 5583.56084$^{\textstyle b}$ & 231.99 & 280.22 & -26.60 \\ 5145.72154$^{\textstyle a}$ & 294.21 & 219.98 & 16.28 & 5272.63451 & 231.82 & 280.75 & -21.27 & 5584.64151$^{\textstyle b}$ & 232.31 & 280.73 & -8.55 \\ 5146.61150$^{\textstyle a}$ & 291.77 & 220.04 & -7.11 & 5272.63454 & 232.47 & 281.15 & -20.61 & 5588.59591$^{\textstyle b}$ & 231.20 & 281.25 & -5.65 \\ 5146.67262$^{\textstyle a}$ & 292.15 & 220.02 & -15.48 & 5272.68201 & 233.08 & 281.10 & -19.19 & 5590.53541 & 231.57 & 282.04 & 22.62 \\ 5146.73621$^{\textstyle a}$ & 292.75 & 219.22 & -21.43 & 5272.68214 & 233.53 & 281.42 & -18.74 & 5590.58281 & 230.76 & 281.71 & 22.36 \\ 5146.79951$^{\textstyle a}$ & 293.61 & 219.38 & -24.13 & 5431.78359$^{\textstyle a}$ & 290.07 & 223.10 & -25.14 & 5590.63837 & 230.56 & 282.11 & 21.27 \\ 5147.55995$^{\textstyle b}$ & 294.12 & 220.78 & -16.46 & 
5431.82587$^{\textstyle a}$ & 288.39 & 223.70 & -26.60 & 5590.63846 & 230.56 & 281.51 & 21.26 \\ 5147.69997$^{\textstyle b}$ & 293.16 & 219.99 & -14.64 & 5431.88740$^{\textstyle a}$ & 290.85 & 222.81 & -23.66 & 5590.68403 & 231.89 & 282.54 & 20.08 \\ 5147.83139$^{\textstyle b}$ & 292.56 & 219.89 & -12.01 & 5457.75868 & 291.65 & 220.75 & 19.98 & 5590.72419 & 232.35 & 283.02 & 16.82 \\ 5148.63334$^{\textstyle b}$ & 292.19 & 219.97 & 2.27 & 5457.81626 & 292.94 & 220.54 & 16.88 & 5598.66755$^{\textstyle b}$ & 230.19 & 282.81 & -25.09 \\\hline \label{tab:vel} \end{tabular} \end{table*} The photometry alone, however, is not sufficient to obtain the absolute values of some important parameters, such as the masses or the physical scale of the system. Using the MIKE spectrograph at the 6.5-m Magellan Clay telescope at Las Campanas Observatory in Chile, the HARPS spectrograph attached to the 3.6-m telescope at La Silla Observatory and the UVES spectrograph on the VLT at Paranal Observatory, we obtained 116 high-resolution spectra (49 MIKE + 27 HARPS + 40 UVES) -- 76 more than reported in Pietrzy{\'n}ski et al. (2010). All the observations were performed by the Araucaria project. Using these data we also confirmed OGLE-LMC-CEP-0227 (hereafter CEP-0227) to be a classical fundamental-mode Cepheid in a well-detached, double-lined eclipsing system. The object turned out to have near-perfect properties for deriving the masses of its two components with very high accuracy. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{lc_cover.eps}} \\ \caption{All photometric data collected for OGLE-LMC-CEP-0227. {\it Upper panel:} OGLE V-band data, {\it middle panel:} OGLE I-band data, {\it lower panel:} Spitzer $3.6 \mu m$ data. Note the difference in the eclipse coverage after the detection of the system at HJD $\approx$ 2454900. \label{fig:obs_all}} \end{center} \end{figure} Radial velocities (RVs) of both components were measured using the RaveSpan application (Pilecki et al. 2012). We have used the Broadening Function formalism (Rucinski 1992, 1999) with templates matching the stars in the temperature-gravity plane. The templates were theoretical spectra taken from the library of Coelho et al. (2005). For deriving the radial velocities we analyzed the spectra in the range 4125 to 6800 \AA. The typical formal errors of the derived velocities are $\sim300$ m/s. In the case of the Cepheid component the orbital motion had to be extracted from the original radial velocities by subtracting the pulsational RVs. We assumed a mutual Keplerian motion of both components with a constant orbital period, that the stars are point-like sources (i.e. no proximity effects, such as stellar oblateness, were incorporated at this stage of the analysis), and that the pulsational radial velocity curve can be represented by a Fourier series. We fitted simultaneously the orbital period $P$, the eccentricity $e$, the periastron longitude $\omega$, both velocity semi-amplitudes $K_1$ and $K_2$, the systemic velocities of both stars $\gamma_1$ and $\gamma_2$, and a number of Fourier series coefficients (depending on the series order). In Table~\ref{tab:vel} the orbital radial velocities of both components, $RV_1$ and $RV_2$, together with the pulsational radial velocities of the Cepheid $RV_p$, are presented. The original Cepheid radial velocities are simply $RV=RV_1+RV_p$. The analysis confirmed the presence of the K-term effect (Nardetto et al.
2008 and references therein) that affects Cepheid-type stars: the Cepheid systemic velocity is blue-shifted with respect to the companion systemic velocity by 0.59 km/s. \section{Method} \label{sect:method} \subsection{Light curve synthesis} \label{sub:lit} As far as we know, there is no generally available software that allows one to model eclipsing binary stars with pulsating components in a fully consistent, physical way. We therefore first developed a scheme, and then a software application implementing it, that allows standard modeling tools, such as the WD or JKTEBOP codes, to model this kind of system. The trick is to generate multiple eclipsing light curves for different phases of the pulsating component, with the parameters of the pulsating star held fixed for every single generated light curve. This way we obtain a two-dimensional light curve that depends on both the pulsational and the orbital phase. Then, for every observation point, both the orbital and the pulsation phase are calculated and used to obtain the corresponding brightness from the 2D grid. A bilinear interpolation is used to calculate the brightness between the grid points, which improves the efficiency and accuracy of the method. \begin{enumerate} \item Generation of the 2-dimensional light curve. For N uniformly spaced phases ($N=100$ in our approach) of the pulsation cycle we calculate the full eclipsing model using the JKTEBOP code. Each generated light curve consists of $M=2000$ points uniformly covering the orbital cycle. This number is a compromise between the accuracy of modeling the shape of the minima and the numerical efficiency of the code. In calculating the N models we take into account the following pulsation phase {\it dependent} parameters: the fractional radius of the primary $r_1$, the surface brightness ratio of the components $j_{21}$ and the brightness of the system in a given band (the light scale factor expressed in magnitudes). The following pulsation phase {\it independent} parameters are kept fixed: the fractional radius of the secondary $r_2$, the eccentricity $e$, the periastron longitude $\omega$, the orbital inclination $i$, the mass ratio $q=m_2/m_1$ and the reference time of the primary minimum $T_0$. The fractional radii are the physical radii, $R_1$ and $R_2$, divided by the semi-major axis $a$. The pulsation period is kept constant and the pulsation phase is calculated according to the following ephemeris: \begin{equation} \label{pul:efe} T_{\rm max}\,({\rm HJD}) = 2454896.285 + 3.797086\times E, \end{equation} where $T_{\rm max}$ refers to the moment of the Cepheid's maximum optical brightness. The reflection and proximity effects are taken into account internally by the JKTEBOP code, but in the case of CEP-0227 they are of minor importance (being on the order of 0.001 mag). As a result we obtain a two-dimensional grid of magnitudes $m = m(\phi_{\rm orb}, \phi_{\rm puls})$ for each photometric band ($V$, $I_C$ and Spitzer $3.6 \mu$m) we use. The radius of the pulsating component changes during the pulsation cycle. To account for this effect we use the disentangled pulsational radial velocities of the Cepheid and the projected semi-major axis of the system $a\sin{i}$ -- see details in Section~\ref{sub:radius}. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{puls.eps}} \\ \caption{Out-of-eclipse light curves of CEP-0227 folded with the ephemeris given in equation~(\ref{pul:efe}).
The overplotted 9th (V and I) and 6th (3.6$\mu$m) order Fourier fits (solid lines) are used to calculate the light scale factor, expressed in magnitudes, for a given pulsation phase, which is an input parameter of JKTEBOP. \label{fig:puls}} \end{center} \end{figure} The light scale factor is calculated from the out-of-eclipse light curve for a given pulsation phase. To parameterize the pulsations we use 9th order Fourier series for the V and I bands and a 6th order one for the Spitzer data. Next, this fit is used to set the light scale factor for all N pulsation phases in every band. These out-of-eclipse light curves and Fourier fits are presented in Fig.~\ref{fig:puls}. The method used to calculate the actual surface brightness ratio of the components is presented in Section~\ref{sub:surf}. The limb darkening coefficients were treated with special attention, and we worked out a methodology to treat them in a consistent way within the model; for details see Section~\ref{sub:limbmet} of this paper. Reflection coefficients were calculated from the model geometry. The gravitational brightening was set to 0.32, a value typical for the convective envelope of a late-type star. \item Creation of the 1D light curve from the 2D one. Once all purely eclipsing models (without pulsations) are calculated for a set of different pulsation phases (from 0.0 to 1.0), we are ready to calculate a 1D light curve that exhibits pulsations. For this we need the specific epochs of observation, because the pulsational and eclipsing variabilities are independent. Thus for each measurement we calculate both phases and take from the 2D grid the interpolated value that best represents the actual brightness of the system in a given photometric band. This allows us to obtain a direct eclipsing binary light curve model with a pulsating component, like the ones in Figs~\ref{fig:pri_ecl} and \ref{fig:sec_ecl}. Note that those figures were simplified to present the idea more clearly: in reality we do not use brightness values from where the diagonal lines cross the light curve; we use the brightness values for the exact (calculated) pulsation phases. A small correction to the pulsation phase, due to the light time travel effect, is also applied at this point, during the selection of the best model from the grid (see Section~\ref{sub:ltte} for details). \end{enumerate} \begin{figure} \begin{center} \resizebox{0.92\linewidth}{!}{\includegraphics{primary_eclipse.eps}} \\ \caption{Generation of a 1D light curve (lower panel) from the 2D grid of light curves. {\it Upper panel:} a small subset (just 10) of fixed-radius eclipsing models for different phases of the pulsating component, centered on the primary minimum. The size and shape of the eclipses change as we move along the Y-axis. Small numbers on the left side give the brightness of the system at maximum. The pulsation maxima are marked with open circles. Diagonal lines mark the propagation of the pulsation phase as we move through the orbital phase -- the pulsation period is many times shorter than the orbital one. {\it Lower panel:} the resulting light curve when the pulsating component is obscured by its companion. \label{fig:pri_ecl}} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{0.92\linewidth}{!}{\includegraphics{secondary_eclipse.eps}} \\ \caption{Same plot as in Fig.~\ref{fig:pri_ecl} but for the secondary eclipse. {\it Lower panel:} the resulting light curve when the pulsating component passes in front of its companion.
\label{fig:sec_ecl}} \end{center} \end{figure} In the way described above we obtain a light curve model for a given set of orbital and stellar parameters. In order to find the set of parameters giving the best fit to the observations we decided to employ the Markov chain Monte Carlo (MCMC) approach (Press et al. 2007). To fully explore the parameter space we allow for the change of the following parameters: the fractional radius of the pulsating component at (pulsational) phase 0.0, the fractional radius of the second component, the orbital inclination, the orbital period, the reference time of the primary minimum, the eccentricity-related parameters ($e\cos\omega$, $e\sin\omega$), the component surface brightness ratios in all the bands used at (pulsational) phase 0.0, the $p$-factor and the third light $l_3$. \subsection{Markov chain Monte Carlo} \label{sub:mcmc} We decided to use the Monte Carlo method as it allows us to realistically estimate the errors of the parameters. Specifically, we have used the Metropolis-Hastings algorithm (Hastings 1970), one of the random-walk MCMC methods, which has the advantage over non-random-walk MC sampling of being in general independent of the starting point (unless the chain is started close to another deep local minimum) and of sampling the $\chi^2$ plane with greater density where the $\chi^2$ values are lower. As the acceptance function we use the normal distribution function. The method was also modified by the incorporation of simulated annealing (Press et al. 2007): the probability of jumping away from the $\chi^2$ minimum decreases as the number of calculated models increases. We would like to emphasize here that we fit all the light curves simultaneously, i.e. geometry-related parameters like the radii, the orbital inclination, the $p$-factor, etc., are common to all bands. The observations are weighted by their observational errors, whose modal values are 0.009, 0.007 and 0.008 mag in $V$, $I_C$ and 3.6$\mu$m, respectively. At the initial stage all errors were scaled to match the condition that for every single light curve the reduced $\chi^2$ is equal to unity. To obtain a well-sampled $\chi^2$ plane for 12 fitted parameters we need about 50 000 models to be calculated. While 10 000 models give a good estimate of the best solution, at least five times more are needed to reliably estimate the errors. \subsection{Radius change} \label{sub:radius} The absolute radius change of the pulsating component can be found directly by integrating the pulsational radial velocity curve: \begin{equation} \Delta R_1 (t,p) = B \!\int \!\!p\,(v_r(t) - v_{s})\, {\rm d}t = p D(t), \label{eqn:radchange} \end{equation} where $p$ is the projection factor, $v_r$ is the radial velocity, $B$ is a unit conversion factor and $v_s$ is the Cepheid systemic velocity with respect to the system barycenter. If we choose units of solar radii for length, km s$^{-1}$ for velocity and days for time, we get $B=0.12422$. The systemic velocity $v_s$ is selected to give a zero net effect of the radius change after a pulsation cycle, i.e. we require the star to always have the same radius at a given phase. Fig.~\ref{fig:pulsrad} shows the pulsational radial velocity curve used in the analysis and the resulting radius changes. In general the projection factor can be phase dependent, but as shown by Nardetto et al.
(2004) the impact of this dependence is weak ($0.2\%$ on the distance determination), and in our analysis we keep it constant for a given model (it is, however, not fixed in the MCMC analysis). It is convenient to separate the time-independent $p$-factor from the parameter-independent $D(t)$ term, as the latter can be calculated once for the whole analysis. Let us denote the Cepheid fractional radius and its absolute radius correction at pulsation phase 0.0 by $r_{1,0}$ and $\Delta R_{1,0}$; then the fractional radius of the pulsating component at any time can be found from the relation: \begin{equation} r_1(t,p) = r_{1,0} + \frac{\Delta R_1(t,p) - \Delta R_{1,0}}{a}, \label{eqn:radius} \end{equation} where $a$ is the semi-major axis of the system obtained from the orbital solution -- see Section~\ref{sub:orbsol}. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{pulsation.eps}} \\ \caption{Pulsational radial velocities of the Cepheid OGLE-LMC-CEP-0227 (crosses) with the overplotted 12th order Fourier series fit. The dashed straight line corresponds to the Cepheid's systemic velocity of $-0.59$ km/s with respect to the barycenter of the system. The three continuous lines correspond to the Cepheid's radius changes, with respect to the mean radius, for projection factor values of 1.0, 1.2 and 1.4. \label{fig:pulsrad}} \end{center} \end{figure} \subsection{Surface brightness ratio} \label{sub:surf} The dimensionless surface brightness ratio of the components, which influences the depth of the eclipses, is one of the fitted parameters within the JKTEBOP code. This quantity changes during the pulsation cycle because the effective temperature of the Cepheid is not constant and, consequently, neither is the mean flux emitted per unit surface area of the star. To calculate the surface brightness ratio at a given moment and in a given band we use the corresponding out-of-eclipse pulsation light curve and the fractional radius changes given by equation~\ref{eqn:radius}. Let us denote the Cepheid flux, its surface brightness and the total apparent brightness of the system measured in a given band at pulsation phase 0.0 by $F_{1,0}$, $j_{1,0}$ and $m_0$, respectively. Their current values during pulsations are $F_{1}(t)$, $j_{1}(t)$ and $m(t)$. The radius of the secondary component $r_{2}$ and its surface brightness $j_{2}$ are constant during the pulsation cycle. Let us now define the surface brightness ratio at pulsation phase 0.0 and its current value by: \begin{eqnarray} j_{21,0}&=&\frac{j_{2}}{j_{1,0}} \nonumber \\ j_{21}(t)&=&\frac{j_{2}}{j_{1}(t)} \label{eqn:surfdef} \end{eqnarray} In general some amount of third light, from an optical blend or an additional physical stellar companion, may be present in the system. Although it does not physically affect the surface brightness ratio of the components, it affects the way we derive this quantity from the out-of-eclipse magnitudes. Let us assume that the third-light flux in a given band is constant, $F_{3}=\rm const$. Then we can define the third light $l_3$, which is one of the input parameters of the JKTEBOP code, by: \begin{eqnarray} l_{3,0}&=&\frac{F_{3}}{F_{1,0}+F_2+F_3} \nonumber \\ l_{3}(t)&=&\frac{F_{3}}{F_{1}(t)+F_2+F_3}, \label{eqn:thirdlight} \end{eqnarray} where $F_2$ is the flux from the companion and $l_{3,0}$ is the third light at phase 0.0. Note that although we assume a constant $F_3$, the third light $l_3$ changes during the pulsation cycle because its contribution to the total light changes.
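In practice, the radius variation entering these relations, equation~(\ref{eqn:radchange}), reduces to a simple quadrature of the fitted pulsational RV curve. A minimal Python sketch (illustrative only; the function names and the trapezoidal rule are our own choices, not the actual pipeline):
\begin{verbatim}
import numpy as np

B = 0.12422  # R_sun per (km/s * day): one day at 1 km/s in solar radii

def radius_change(t, v_r, v_s, p):
    """Delta R_1(t, p) = p * D(t): cumulative trapezoidal integral of
    B*(v_r - v_s).  t in days, velocities in km/s, result in R_sun.
    The RV sign convention follows the paper; v_s is chosen so that
    Delta R_1 returns to the same value after each pulsation cycle."""
    f = B * (v_r - v_s)
    D = np.concatenate(([0.0],
        np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
    return p * D

def fractional_radius(t, v_r, v_s, p, r1_0, dR1_0, a):
    """r_1(t, p): r1_0 and dR1_0 are the values at pulsation
    phase 0.0, a is the semi-major axis in R_sun."""
    return r1_0 + (radius_change(t, v_r, v_s, p) - dR1_0) / a
\end{verbatim}
Since $D(t)$ does not depend on the fitted parameters, it can be computed once and merely rescaled by the trial $p$-factor during the MCMC sampling.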
By the Pogson equation we can link the instantaneous and reference apparent brightness $m(t)$ and $m_0$: \begin{equation} m(t) - m_0 = -2.5 \log \frac{F_1(t)+F_2+F_3}{F_{1,0}+F_2+F_3} \label{eqn:pogson} \end{equation} From equation~(\ref{eqn:thirdlight}) we derive $F_3$: \begin{equation} F_3 = \frac{l_{3,0}(F_{1,0}+F_2)}{1-l_{3,0}} \label{eqn:flux3} \end{equation} The fluxes from both components, $F_1$ and $F_2$, are proportional to the product of their projected surface area and surface brightness: \begin{eqnarray} F_1(t)&\sim& r_1^2(t)\,j_1(t) \nonumber \\ F_{1,0}&\sim& r_{1,0}^2\,j_{1,0} \nonumber \\ F_2&\sim& r_2^2\,j_2 \label{eqn:fluxes} \end{eqnarray} where the dependence of $r_1$ on $p$ is omitted, as for any given model the $p$-factor is constant across the pulsation cycle. Inserting equations~(\ref{eqn:flux3}) and (\ref{eqn:fluxes}) into equation~(\ref{eqn:pogson}), and after some algebraic manipulation with the help of equation~(\ref{eqn:surfdef}), we obtain a pulsation phase dependent relation for the surface brightness ratio: \begin{equation} j_{21}(t) = \frac{r_1^2(t)\,j_{21,0}}{(r_{1,0}^2+r_2^2\,j_{21,0})\left(\frac{\textstyle A(t)}{\textstyle 1-l_{3,0}} - l_{3,0}\right) - r_2^2\,j_{21,0}} \label{eqn:surfbrit} \end{equation} where $A(t)=10^{0.4(m_0-m(t))}$. Solving equations~(\ref{eqn:thirdlight}) and~(\ref{eqn:pogson}) for $l_3(t)$ we obtain an expression for the phase dependence of the third light parameter: \begin{equation} l_3(t) = \frac{l_{3,0}}{{\textstyle A(t)}} \label{eqn:3t} \end{equation} \subsection{Limb darkening methodology} \label{sub:limbmet} The limb darkening (LD) of a stellar surface affects the determination of stellar radii in eclipsing binary light curve analysis. Instead of fitting the LD coefficients in all three photometric bands independently, we decided to link them through the atmospheric parameters, i.e. the effective temperature $T_{eff}$, the gravity $\log g$ and the metallicity $[$Fe/H$]$, utilizing theoretical LD predictions. We have tested two sets of stellar limb darkening tables, published by Van Hamme (1993) and Claret \& Bloemen (2011), and two limb darkening laws, namely a linear and a logarithmic one (Klinglesmith \& Sobieski 1970). As the Van Hamme tables lack data for the Spitzer $3.6\mu$m band, equivalent Johnson $L$ band coefficients were used instead. Because the atmospheric parameters of Cepheids change during the pulsation cycle, we expected the limb darkening coefficients to change over time as well. To account for this effect we need to know how both parameters vary with pulsation phase. The instantaneous surface gravity of the Cepheid is calculated from: $$ \log g (t) = 4.438 + \log{m_1} - 2 \log(r_1(t)\,a), $$ where $m_1$ is the mass of the Cepheid, $a$ is the orbital semi-major axis and the instantaneous stellar radius is calculated from equation~\ref{eqn:radius}. The masses of both components are adopted from the solution obtained with the WD code. The effective temperature of the Cepheid can be inferred from color indices like ($V\!-\!I$) or ($V\!-\!K$); in practice, temperature calibrations based on ($V\!-\!K$) are much more reliable. In order to obtain the intrinsic ($V\!-\!K$) colors of the Cepheid during the pulsation cycle we have to remove the light contribution of the accompanying red giant in the $V$ and $K$ bands, and the same is needed to estimate the amount of interstellar reddening in the direction of CEP-0227. Once the intrinsic ($V\!-\!K$) index is obtained, the effective temperature is estimated using various calibrations.
Details of this procedure are described in a separate paper (Gieren et al., in preparation). In Fig.~\ref{fig:temp} we present how the temperature of the Cepheid changes over one pulsation period. A metallicity of $[$Fe/H$]=-0.5$ was assumed for both components. It is slightly higher than the $[$Fe/H$]\sim-0.65$ derived by Marconi et al. (2013), but the resulting change in the LD coefficients is insignificant. In the case of the second component the effective temperature and gravity are constant, thus the limb darkening coefficients do not need any special treatment. For the secondary component we set a constant effective temperature $T_{eff,2} = 5120$ K and gravity $\log g_2 = 1.71$. During our analysis it appeared that the limb darkening coefficients calculated for a constant, average effective temperature gave better results than those for a variable one. We therefore tested this option thoroughly and eventually adopted it as our main method. Note that this does not mean that the Cepheid temperature is constant, nor that the LD coefficients are; it only means that their dependence on $T_{eff}$ may be different from the one assumed here. Indeed, Marengo et al. (2003), based on theoretical considerations, found significant variability of the limb darkening between pulsation phases $\phi=0.6$ and 0.7, coinciding with a shock wave passing through the photosphere. For most of the pulsation period, however, the LD coefficients were found to change only a little. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{VK_temp.eps}} \\ \caption{ Dependence of the Cepheid effective temperature on the pulsation phase (thick solid curve) with 1-$\sigma$ uncertainties (dashed lines). The temperature variation may be used to calculate limb darkening coefficients for a given phase. The horizontal line represents the mean effective temperature ($6050$ K) of the star. \label{fig:temp}} \end{center} \end{figure} \subsection{Light time travel effect} \label{sub:ltte} A pulsating Cepheid is a kind of cosmic clock. When such a star orbits another star, we observe this clock running fast while the Cepheid approaches us and running slow while it recedes. This well-known light time travel effect is no doubt present in the binary CEP-0227; the only question is whether the photometry we collected is precise enough to make the effect detectable. The instantaneous distance $\rho$ of the Cepheid from the system barycenter is given by: \begin{equation} \rho = \frac{a_1\,(1-e^2)}{1+e\cos\nu}, \label{eqn:kepler1} \end{equation} where $a_1$ is the semi-major axis of the Cepheid orbit, $e$ is the eccentricity and $\nu$ is the true anomaly. The projection of $\rho$ onto the line of sight passing through the system barycenter equals: \begin{equation} d = \rho\,\cos(\omega+\nu-\pi/2)\sin i \label{eqn:kepler2} \end{equation} where $\omega$ is the periastron longitude and $i$ is the orbital inclination. The value of $d$ tells us how much closer to or farther from us the Cepheid is with respect to the system barycenter. The time light needs to travel this distance is the retardation of the pulsation signal. In other words, the observed pulsation phase $\phi_p^{\prime}$ differs from the pulsation phase $\phi_p$ computed for the constant mean pulsation period $P_p$, and they are related as follows: \begin{equation} \label{eqn:light} \phi_p^{\prime} =\phi_p - \frac{d}{cP_p}, \end{equation} where $c$ is the speed of light.
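For illustration, equations~(\ref{eqn:kepler1})--(\ref{eqn:light}) amount to only a few lines of code; the sketch below (our own Python, with hypothetical names) reproduces the $\sim$0.0014 retardation quoted in the next paragraph for $a_1 \approx 195\,R_\odot$ and $P_p = 3.797$~d:
\begin{verbatim}
import numpy as np

C_KM_S  = 299792.458   # speed of light [km/s]
DAY_S   = 86400.0      # seconds per day
RSUN_KM = 6.957e5      # solar radius [km]

def ltte_phase_shift(nu, a1, e, omega, inc, P_p):
    """Retardation d/(c*P_p), to be subtracted from the pulsation
    phase computed with the constant mean period P_p [days].
    nu: true anomaly [rad]; a1: Cepheid orbital semi-major axis
    [R_sun]; omega, inc: periastron longitude and inclination [rad]."""
    rho = a1 * (1.0 - e**2) / (1.0 + e * np.cos(nu))
    d = rho * np.cos(omega + nu - np.pi / 2.0) * np.sin(inc)
    return d * RSUN_KM / (C_KM_S * DAY_S) / P_p
\end{verbatim}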
Although the retardation of the pulsation signal in the primary minimum is just $\sim0.0014$ of the pulsation period, during the steepest part of the pulsation light curve (between phases 0.85 and 1.0) it translates into a 0.003 mag shift in the $I$ band and a 0.005 mag shift in the $V$ band. Such shifts are comparable to the precision of the OGLE-IV photometry and, being a systematic effect, can affect our solution. Indeed, after implementing the effect in our code we detected it at the 3.5-$\sigma$ significance level. The implementation consists of applying a correction to the calculated pulsation phase while the best model is being taken from the 2D light curve grid. \section{Results} \label{sect:results} As the initial parameters for our analysis we used the results from our previous study of this system (Pietrzy{\'n}ski et al. 2010). Because we have gathered a large amount of new data and applied a more sophisticated and direct approach, we expect the results to be more reliable and accurate. Some effects neglected before were also taken into account this time. First we obtained a new orbital solution, which was then used as the basis for the subsequent analysis of the photometry using the method described above. \subsection{Orbital solution} \label{sub:orbsol} \begin{table} \caption{Orbital solution for CEP-0227. In RaveSpan the stars are treated as point-like sources; $T_0$ is $HJD - 2450000$~d, and $a \sin i$ is calculated with the rest-frame orbital period $P=309.404$~d.} \begin{tabular}{@{}l|c|cc@{}} \hline Parameter & RaveSpan & \multicolumn{2}{c}{WD} \\ \multicolumn{2}{c}{} & Solution 1 & Solution 2 \\\hline $\gamma$ (km/s) & 256.61 $\pm$ 0.04 &256.48 $\pm$ 0.11 & 256.46 $\pm$ 0.09 \\ $T_0$ (d) & &4818.94 $\pm$ 0.28 & 4820.88 $\pm$ 0.42 \\ $a \sin i$ ($R_\odot$)& 384.24 $\pm$ 0.67 &389.26 $\pm$ 0.44 & 388.89 $\pm$ 0.77\\ $q=M_2/M_1$ & 0.993 $\pm$ 0.002 &0.993 $\pm$ 0.003& 0.994 $\pm$ 0.003\\ $e$ & 0.163 $\pm$ 0.002 &0.166 (fixed) & 0.161 $\pm$ 0.003\\ $\omega$ (deg) & 343.0 $\pm$ 1.4 &342.0 (fixed) & 344.5 $\pm$ 1.8\\ $K_1$ (km/s) & 31.72 $\pm$ 0.06 & 32.14 $\pm$ 0.05 & 32.11 $\pm$ 0.07\\ $K_2$ (km/s) & 31.94 $\pm$ 0.06 & 32.38 $\pm$ 0.05 & 32.31 $\pm$ 0.06\\ rms$_1$ (km/s) & 0.54 & 0.56 & 0.55\\ rms$_2$ (km/s) & 0.48 & 0.48 & 0.44\\\hline \label{tab:spec} \end{tabular} \end{table} \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{cep0227rv.eps}} \\ \caption{Orbital solution for CEP-0227 (solid lines). Measured radial velocities of the Cepheid with the pulsations removed (filled circles) and of its constant companion (open circles) are presented. Model residuals are shown in the upper panel. \label{fig:specsol}} \end{center} \end{figure} We analyzed the disentangled orbital radial velocities of both components with the Wilson-Devinney code (Wilson \& Devinney 1971, Wilson 1979, Van Hamme \& Wilson 2007) to derive the projected semi-major axis of the system $a\sin{i}$ and the mass ratio $q$. The reason for using the WD code is to account for non-Keplerian corrections originating from the stars' oblateness. These corrections are relatively small ($\sim 0.4$ km/s) but result in an $a\sin{i}$ that differs by $3\sigma$ from the purely Keplerian solution (RaveSpan). The adjusted parameters were: the semi-major axis $a$, the systemic velocity $\gamma$, the mass ratio $q$ and the phase shift $\Delta\phi$. The remaining spectroscopic parameters were kept constant during the fitting, and their values were adopted from the photometric solution -- see Section~\ref{sub:photo}.
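As a quick consistency check, the semi-amplitudes in Table~\ref{tab:spec} can be recovered from the Keplerian relation given just below; a minimal sketch (illustrative Python of our own, using the Solution 1 values):
\begin{verbatim}
import numpy as np

def semi_amplitudes(a_sin_i, P, q, e):
    """K1 and K2 [km/s] from a*sin(i) [R_sun], P [days],
    q = M2/M1 and the eccentricity e."""
    K2 = 50.579 * a_sin_i / (P * (1.0 + q) * np.sqrt(1.0 - e**2))
    return q * K2, K2

# WD Solution 1: a sin i = 389.26 R_sun, P = 309.404 d,
# q = 0.993, e = 0.166
K1, K2 = semi_amplitudes(389.26, 309.404, 0.993, 0.166)
# -> K1 ~ 32.15, K2 ~ 32.38 km/s, matching the tabulated
# values to within rounding of the inputs
\end{verbatim}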
The velocity semi-amplitudes were calculated according to: \begin{eqnarray} K_2 [{\rm km/s}]&=& 50.579\frac{a\sin i[R_\odot]}{P[d]\,(1+q)\sqrt{1-e^2}} \\ K_1 [{\rm km/s}]&=&q\,K_2 \label{eqn:semiamp} \end{eqnarray} The results of the preliminary fitting with RaveSpan and of the final fitting with the WD code are summarized in Table~\ref{tab:spec}. We also performed another run of the WD code, adjusting the eccentricity $e$ and the periastron longitude $\omega$, to check the consistency of the photometric and spectroscopic solutions (Solution 2). The resulting $q$ and $a$ are essentially the same as for the run in which $e$ and $\omega$ were kept fixed. For the later analysis we adopted the results from Solution~1, with the error on the semi-major axis taken from Solution~2. \subsection{Limb darkening} From the preliminary analysis we learned that using the Van Hamme tables and the logarithmic law results in smaller residuals than any other combination of tables and laws, so they were selected for the later analysis. Also, as described in Section~\ref{sub:limbmet}, for the pulsating primary component two scenarios were considered: 1) limb darkening coefficients dependent on $T_{eff}$ and $\log g$ (which change over the pulsation phase), 2) limb darkening coefficients calculated for a constant $T_{eff}$ and a variable $\log g$. The latter approach gave significantly better results in terms of $\chi^2$ values, so we decided to use it to obtain the final solution. Having set this, we then varied $T_{eff}$ in order to find out how the $\chi^2$ of the best solution depends on the limb darkening. Surprisingly, we had to lower the Cepheid temperature (which corresponds to larger LD coefficients) to as low as $T_{LD,1} = 3700$ K. The improvement in $\chi^2$ was considerable and significant at the 6-$\sigma$ level -- see Fig.~\ref{fig:t1chi2p}. The minimum lies well within $1\sigma$ of the lower boundary of the tables used, which is 3500 K, but it appears that decreasing the temperature further (i.e. increasing the LD coefficients) would not improve the fit. The final scaling factor for the temperature is $a_1 = 3700 K / 6050 K \approx 0.61$. We also tried to find a better solution by varying the LD coefficients of the secondary component. In this case we had to lower the temperature used to evaluate the limb darkening coefficients only moderately, to $T_{LD,2} = 4480$ K (scaling factor $a_2 = 4480 K / 5120 K \approx 0.88$), and the improvement in the obtained $\chi^2$ was much smaller -- see Fig.~\ref{fig:t2chi2p}. In fact, the solution obtained for the LD coefficients corresponding to the effective star temperature $T_{2}=5120$ K was only a little more than 1-$\sigma$ inferior to the best one. \subsection{Projection factor} \label{sub:factor} During the last decade there has been substantial discussion about the proper projection factor ($p$-factor) to apply to observed Cepheid radial velocities to determine pulsational velocities. The issue came up when Gieren et al. (2005) tried to determine direct distances to Magellanic Cloud Cepheids by applying the near-infrared surface-brightness method to LMC Cepheids and found a non-physical period dependence of the derived distances. To correct for this they inferred, as the most likely explanation, a stronger variation of the $p$-factor with period than had previously been assumed. This period effect has recently been observationally confirmed by Storm et al. (2011), who applied the surface-brightness method to a much larger sample of Cepheids.
Based in large part on these new data, Groenewegen (2013) and Ngeow et al. (2012) confirmed the stronger period dependence of the $p$-factor. Recent theoretical studies (e.g. Nardetto et al. 2009, Neilson et al. 2012), however, do not predict a strong period dependence of the $p$-factor, and they find significantly smaller values of the $p$-factor for short-period Cepheids than inferred from the surface-brightness method studies. CEP-0227 provides a unique opportunity to directly measure the $p$-factor for a short-period Cepheid. In our case the pulsations of the Cepheid alter the shape of the light curve not only because its flux changes over the pulsation period, but also because its radius does. This manifests itself during the eclipses: the beginning and the end of an eclipse may be shifted in time, and the visible area of the eclipsed stellar disk depends on the phase of the pulsating component. For any given moment of the eclipse this area is a function of the stellar radii and the orbital inclination. As we know the area function from the light curve solution, we can directly calculate the Cepheid radius and trace its changes for the given, constant orbital parameters. Because the amplitude of the radius change scales with the projection factor (i.e. the larger the $p$-factor, the more pronounced the radius change), by measuring these changes we can directly constrain its value. A conversion from the relative radii (used in the light curve analysis) to the absolute radii (used in the derivation of the $p$-factor) is done using the orbital solution previously obtained from the analysis of the radial velocities. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{t1_chi2_pfac.eps}} \\ \caption{ The dependence of the projection factor (filled circles) and of the $\chi^2$ minimum value (solid line) on the temperature scaling factor $T_{LD,1}/6050$ for the pulsating component. The $p$-factor and the temperature scaling factor values for the best fit are marked with dashed lines. The y-axis span for the $p$-factor roughly corresponds to its estimated error (0.03). \label{fig:t1chi2p}} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{t2_chi2_pfac.eps}} \\ \caption{ Same plot as in Fig.~\ref{fig:t1chi2p} but for the temperature scaling factor $T_{LD,2}/5120$ for the companion star. \label{fig:t2chi2p}} \end{center} \end{figure} \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{pfac.eps}} \\ \caption{ The $p$-factor versus $\chi^2$ plane obtained with the Monte Carlo simulation; 185,000 models are shown. The 1-, 2- and 3-$\sigma$ levels are marked with solid horizontal lines and the zero level with a dashed one. Each point represents one calculated model. The lowest $\chi^2$ value was obtained for $p=1.206$, while the central value is $p=1.208$. The estimated 1-$\sigma$ error is about $0.028$. \label{fig:pfac}} \end{center} \end{figure} The value of the projection factor which best fits our data is $p=1.206 \pm 0.028$ (see Fig.~\ref{fig:pfac}); the quoted uncertainty is the statistical error estimated from the Monte Carlo simulations. This value differs from the one predicted by the empirical calibration of Storm et al. (2011) for a Cepheid with a period of 3.8 days, which is $p=1.442$. We have tested a $p$-factor of 1.442, but the $\chi^2$ value was higher by more than 80, i.e. the solution was about $9\sigma$ away from the best one.
However, such a low value of the projection factor is in good agreement with the theoretical calibrations of Nardetto et al. (2009) and Neilson et al. (2012). The mean wavelength of the spectral region we used to derive the radial velocities roughly corresponds to the effective wavelength of the Johnson V band. For our Cepheid these two calibrations predict $p=1.26 \pm 0.03$ and $p=1.23 \pm 0.02$, respectively. It is important to note that the $p$-factor value does not depend much on the limb darkening coefficients used -- see Figs \ref{fig:t1chi2p} and \ref{fig:t2chi2p}. All the $p$-factors found for the different LD coefficient sets lie within the range 1.18-1.22, inside the 1-$\sigma$ boundary. Because of this weak dependence we use the value and errors derived for the best set of LD coefficients as the final values. Another important point is the complete independence of our approach from any assumption about the distance to OGLE-LMC-CEP-0227. In fact, our photometric analysis is almost entirely done using only the relative radii of the stars, which do not scale with distance. The conversion from the radial velocities to the pulsational ones is also distance independent. To estimate the systematic uncertainty we compared all the determinations of the projection factor among all the investigated models (including those with different limb darkening coefficients, with the third light neglected, etc.). This tells us how sensitive the determined value of the $p$-factor is to different model assumptions. In all cases the resulting $p$-factor lies within the range 1.17 to 1.25. Thus, we assume a systematic error of 0.04. \subsection{Third light} \label{sub:third} The presence of third light was investigated in our analysis; we allowed for its independent presence in each of the photometric bands. Initially, the most suspect band was the {\it Spitzer} $3.6\mu$m band, because some Galactic Cepheids have been reported to show a near-infrared excess (e.g. Kervella et al. 2006, M{\'e}rand et al. 2007), which is usually understood as a result of ongoing mass loss. The solutions found, however, were consistent with no third light contribution in the {\it Spitzer} band, and in the V-band as well (being of the order of $0.1\%$). It turned out, however, that some significant third light was present in the $I$-band light curve ($l_3=0.015$, i.e. $1.5\%$ of the total flux). The detection of third light only in the $I$-band is a bit surprising. It may indicate the presence of an unaccounted-for faint red blend in the OGLE photometry, or some minor problems with the absolute calibration of the OGLE or {\it Spitzer} photometry. In fact, Udalski et al. (2008) reported that the uncertainty of the absolute calibration of the OGLE photometry can reach 0.02 mag. Taking the I-band third light into account results in a considerably smaller $\chi^2$ value, with the detection at about the 6-$\sigma$ level. A significant (more than 3-$\sigma$) difference in the obtained parameters between the models with and without the third light was found only in the case of the V-band surface brightness ratio. For the inclination, the Cepheid radius and the $3.6\mu$m-band brightness ratio the difference is between 2 and 3 $\sigma$, and for the rest of the parameters the results are very consistent between the solutions. \subsection{Photometric parameters} \label{sub:photo} The photometric parameters for our best solution, with the third light in the $I$-band taken into account, are summarized in Table~\ref{tab:photpar}.
The light curve solution for all three photometric bands is presented in Figs~\ref{fig:vmodel}--\ref{fig:modelzoom}. The model usually predicts the brightness of the system during eclipses well; however, some small systematic residuals are still present. The amplitude of the pulsations during the primary eclipse is smaller because a significant part of the Cepheid disk is covered at this stage and thus relatively more light comes from the constant component. During the secondary (shallower) eclipse the Cepheid transits across the companion's disk and the observed amplitude of the pulsations grows larger. In the near-infrared the pulsations become much less prominent and so they affect the shape of the eclipses less. Also, the surface brightness ratio of the components $j_{21}$ changes considerably from the optical to the near-infrared. Fig.~\ref{fig:surf} presents the dependence of $j_{21}$ on the pulsation phase and the photometric band for our best model. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{surf.eps}} \\ \caption{ Surface brightness ratio of the components $j_{21}$ as a function of the pulsation phase for the three bands, from top to bottom: Spitzer 3.6$\mu$m, $I_C$, $V$. \label{fig:surf}} \end{center} \end{figure} \begin{table} \caption{Photometric parameters of CEP-0227 from the Monte Carlo simulations. Values marked with $^{\textstyle a}$ correspond to pulsation phase 0.0; $T_0$ is $HJD - 2450000$~d. Limb darkening coefficients for the logarithmic law are presented -- all of them were adjusted simultaneously using a single parameter (see text for details). $L_{21}$ is the light ratio of the components in each photometric band.} \begin{tabular}{@{}l|c|c|l@{}} \hline Parameter & Mean value & Best fitted value & Error \\\hline Adjusted & & & \\ $P_{obs}$ (d) & - & 309.6690 & 0.0017 \\ $T_0$ (d) & - & 4895.908 & 0.005\\ $r_1$ & 0.08957 & 0.08532$^{\,\textstyle a}$ & 0.00025 \\ $r_2$ & - & 0.11503 & 0.00025 \\ $j_{21}(V)$& 0.4566 & 0.2296$^{\,\textstyle a}$& 0.0015 \\ $j_{21}(I_C)$& 0.5791 & 0.3881$^{\,\textstyle a}$& 0.0015 \\ $j_{21}(3.6)$& 0.8206 & 0.7146$^{\,\textstyle a}$ & 0.0045 \\ $i$ ($^\circ$) & - & 86.833 & 0.016 \\ $e$ & - & 0.1659 & 0.0006 \\ $\omega$ ($^\circ$)& - & 342.0 & 0.6\\ $p$-factor & - & 1.206 & 0.030\\ $l_{3,V}$ & -& 0.000 & 0.002 \\ $l_{3,I}$ & 0.018 & 0.015$^{\,\textstyle a}$ & 0.002 \\ $l_{3,3.6}$ & -& 0.000 & 0.002 \\ $u_{1,V}$& &\multicolumn{2}{l}{0.805 \, $-0.166$ } \\ $u_{1,I}$ & &\multicolumn{2}{l}{0.648 \,\,\,\, 0.129} \\ $u_{1,3.6}$ & &\multicolumn{2}{l}{0.375 \,\,\,\, 0.218} \\ Derived quantities& & & \\ $L_{21}(V)$ & 0.7504 & 0.4174$^{\,\textstyle a}$ & \\ $L_{21}(I_C)$ & 0.9539& 0.7054$^{\,\textstyle a}$ &\\ $L_{21}(3.6)$ & 1.357 & 1.299$^{\,\textstyle a}$ & \\\hline \label{tab:photpar} \end{tabular} \end{table} \begin{figure*} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{lcmodel_VJ_5484_5725.eps}} \\ \resizebox{\linewidth}{!}{\includegraphics{lcmodel_VJ_5794_6026.eps}} \\ \caption{V-band model for selected eclipses. Observations are marked by small black circles. \label{fig:vmodel}} \end{center} \end{figure*} \begin{figure*} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{lcmodel_IC_5484_5725.eps}} \\ \resizebox{\linewidth}{!}{\includegraphics{lcmodel_IC_5794_6035.eps}} \\ \caption{I-band model of selected eclipses. \label{fig:imodel}} \end{center} \end{figure*} \begin{figure*} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{lcmodel_LJ_5808_6028.eps}} \\ \caption{Spitzer 3.6 $\mu$m-band model.
\label{fig:lmodel}} \end{center} \end{figure*} \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{lcmodel_IC_5503_5527.eps}} \\ \resizebox{\linewidth}{!}{\includegraphics{lcmodel_LJ_5812_5836.eps}} \\ \caption{Spitzer 3.6 $\mu$m and I-band models shown for two different primary eclipses. \label{fig:modelzoom}} \end{center} \end{figure} Most of the parameters fitted in our approach are independent and do not exhibit significant correlations, though some correlations do exist. We were concerned about how the projection factor correlates with the other photometric parameters, but in this case we only detected a weak correlation with the surface brightness ratios. In Fig.~\ref{fig:corr} the correlation with the $I$-band surface brightness ratio is presented; this correlation is the main source of the statistical uncertainty in the determined $p$-factor value. The strongest correlation among the parameters in our solution was found between the orbital inclination $i$ and the sum of the radii $r_{1}+r_{2}$ (same figure), and it is the prime source of the uncertainty in the absolute radii. The correlations between the inclination and the third light, as well as between the eccentricity and the sum of the fractional radii, are also presented. \begin{figure*} \begin{center} \resizebox{0.49\linewidth}{!}{\includegraphics{pfac-sbratIC.eps}} \resizebox{0.49\linewidth}{!}{\includegraphics{inc-rsum.eps}} \resizebox{0.49\linewidth}{!}{\includegraphics{inc-l3IC.eps}} \resizebox{0.49\linewidth}{!}{\includegraphics{ecc-rsum.eps}} \caption{Correlations between the $p$-factor and the I-band surface brightness ratio, the inclination and the sum of the fractional stellar radii, the inclination and the third light in the I-band, and the eccentricity and the sum of the radii. The best models for a given pair of parameters are shown, and the $\chi^2$ values are coded with color (higher values are darker). Solid lines represent the 1-, 2- and 3-$\sigma$ levels for the two-parameter error estimation. \label{fig:corr}} \end{center} \end{figure*} \subsection{Absolute dimensions} \begin{table} \caption{Physical properties of CEP-0227. The spectral type, radius, gravity ($\log g$), temperature, luminosity ($\log L$) and the observed magnitudes are mean values over the pulsation period. The orbital period is a rest-frame value.} \begin{tabular}{@{}l|c|c@{}} \hline Parameter & Primary (Cepheid) & Secondary \\\hline spectral type & F7 Ib & G4 II \\ mass ($M_\odot$) & 4.165 $\pm$ 0.032 & 4.134 $\pm$ 0.037 \\ radius ($R_\odot$) & 34.92 $\pm$ 0.34 & 44.85 $\pm$ 0.29 \\ $\log g$ (cgs) & 1.971 $\pm$ 0.011 & 1.751 $\pm$ 0.010 \\ temperature (K) & 6050 $\pm$ 160 & 5120 $\pm$ 130 \\ $\log L$ ($L_\odot$) & 3.158 $\pm$ 0.049 & 3.097 $\pm$ 0.047 \\ $V$ (mag) &15.932 & 16.244\\ $I$ (mag) &15.178 & 15.229\\ $K$ (mag) &14.221 & 13.903\\ $v\sin i$ (km/s) & - & 11.1 $\pm$ 1.2 \\ orbital period (days) & \multicolumn{2}{c}{309.404 $\pm$ 0.002 } \\ semimajor axis ($R_\odot$) & \multicolumn{2}{c}{389.86 $\pm$ 0.77} \\\hline \label{tab:abs} \end{tabular} \end{table} Table~\ref{tab:abs} presents the physical parameters of both components, as well as some orbital parameters. The spectral type is estimated from the effective temperature scale given in Table~1 of Alonso et al. (1999). The luminosity class is taken from Ginestet et al. (2000). The surface temperatures of the components were calculated from the dereddened ($V\!-\!K$) colors (Gieren et al., in preparation).
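The tabulated gravities and luminosities follow directly from the masses, radii and temperatures; a short sketch of this sanity check (the solar effective temperature of 5772 K is our assumed value, so the recovered $\log L$ agrees with the tabulated one only to within its quoted uncertainty):
\begin{verbatim}
import numpy as np

# mean values for the Cepheid from the physical properties table
M1, R1, T1 = 4.165, 34.92, 6050.0
T_SUN = 5772.0          # assumed solar effective temperature [K]

log_g = 4.438 + np.log10(M1) - 2.0 * np.log10(R1)
# -> 1.971 (cgs), matching the tabulated value

log_L = 2.0 * np.log10(R1) + 4.0 * np.log10(T1 / T_SUN)
# -> ~3.17, consistent with the tabulated 3.158 +/- 0.049
\end{verbatim}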
The effective temperature of the primary was independently derived by Marconi et al. (2013) as $T_1 = 6100$ K, in good agreement with our estimate. The total error of the absolute radii determination contains the statistical uncertainties from the relative radii and from the semi-major axis determination. Additionally, we add to the error budget a systematic uncertainty that comes from the small systematic residuals still present in our photometric solution. During eclipses the magnitude of these residuals reaches 0.01 mag, which translates into a $0.9\%$ uncertainty in the flux and a $0.45\%$ uncertainty in the radii. One must note, however, that similar systematic residuals are also present {\em outside} the eclipses, and as such they may be attributed to some defects of the photometry rather than to the model itself. The final error was derived as the sum of all the partial errors in quadrature. The Cepheid is largest at pulsation phase $\phi=0.40$, reaching $R_{max}=36.44 \, R_\odot$, and smallest at pulsation phase $\phi=0.92$, shrinking to $R_{min}=32.43 \, R_\odot$. \begin{figure} \begin{center} \resizebox{\linewidth}{!}{\includegraphics{eclipse.eps}} \\ \caption{System configuration close to the secondary mid-eclipse. The~Cepheid is passing in front of the red giant companion. The line width of the stellar edges represents the 1-$\sigma$ formal error in the determination of the radii. The Cepheid radius is a mean value over time, and dotted lines represent its minimum and maximum radii. Changes in the radial amplitude of the star corresponding to the $p$-factor error are approximately equal to the radius error. The distance between the stars at this phase is about 355 $R_\odot$. \label{fig:eclipse}} \end{center} \end{figure} The rotation is derived from the Broadening Function calculated for all the spectra in which the components are well separated. The profile is wide, and the instrumental broadening seems to be of secondary importance. We assumed that the rotation axes are perpendicular to the orbital plane. The derived value of the projected rotational velocity of the secondary, $v_2\sin{i}=11.1$ km/s, is consistent with its pseudosynchronous rotation velocity of 10.4 km/s. The rotational velocity of the primary is strongly affected by the atmospheric turbulence originating from the pulsations, so we do not determine this parameter here. It is worth mentioning that the new radius, mass and luminosity estimates agree within one sigma with recent pulsational and evolutionary prescriptions. \section{Conclusions} \label{sect:concl} The presented method proved to be a good tool for the analysis of eclipsing binaries with radially pulsating components. It allows for a consistent treatment of the photometric and spectroscopic data -- in calculating the radius change of the pulsating component we make use of both. As a result, very precise measurements of the physical parameters of a Cepheid variable and its companion were obtained. We fully confirmed the findings of Pietrzy{\'n}ski et al. (2010), especially the reported mass and radius values of the classical Cepheid OGLE-LMC-CEP-0227. Our masses for both components and the radius of the secondary are well within the 1-$\sigma$ error bars given by Pietrzy{\'n}ski et al. (2010). A slight difference occurs for the Cepheid mean radius, which is about $1.7\sigma$ larger in our solution.
We do not consider this to be significant, because we have analyzed here a much larger set of observations, while the previous analysis was based on an approximate removal of the pulsations from the light curve. Our mean radius is in perfect agreement with the Cepheid period-radius relation of Gieren et al. (1999) and marginally consistent with the Groenewegen (2013) calibration. The present analysis of the {\it Spitzer} data excludes additional third light in the near-infrared larger than $\sim0.2\%$. Because the level of the third light detected in the I-band is also low, we conclude that there is no significant K-band excess in this system either. The observed disk of the Cepheid seems to be heavily darkened, especially in the optical region, where the corresponding linear LD coefficient is $u_V\approx 0.9$. This is at odds with the limb darkening coefficient predicted for a static atmosphere at temperature $T=6050$ K, gravity $\log{g}=1.97$ and metallicity $[$Fe/H$]=-0.5$, namely $u_V=0.56$ (Van Hamme 1993). Such strong limb darkening may arise from the high degree of turbulence in the pulsating atmosphere of the Cepheid and from the presence of very deep convective cells, more typical of a late K-type giant. To our knowledge, the method we have used for deriving the projection factor is the first of its kind reported in the literature. It is also only the second time, in the case of short-period Cepheids, that an individual value has been precisely determined, after the interferometric measurements for $\delta$ Cep (M{\'e}rand et al. 2005). Our value of the projection factor, $p=1.21 \pm 0.03$, is close to the $p$-factor determined by M{\'e}rand et al. (2005), $p=1.27 \pm 0.06$. Marconi et al. (2013), based on hydrodynamic models of OGLE-LMC-CEP-0227, derived a $p$-factor of $p=1.20 \pm 0.08$, in good agreement with our empirical determination. However, their models were fitted to pulsation light curves of the Cepheid which had been freed from the companion's light contribution according to our photometric light curve solution; thus, their value of the $p$-factor is not fully independent. There are two substantial advantages of our method in comparison with the approach presented by M{\'e}rand et al. (2005), making it less prone to systematics. First, our projection factor is distance independent. Second, there is only a weak dependence on the limb darkening assumptions; in fact, the limb darkening coefficients are fitted simultaneously with, but independently of, the $p$-factor. Let us emphasize here that in deriving interferometric angular diameters one needs to convert uniform-disk diameters $\theta_{UD}$ into limb-darkened ones $\theta_{LD}$. For $\delta$ Cep the conversion was done using theoretical limb darkening tables for ordinary (non-pulsating) stars. However, in view of the peculiar limb darkening we have found for CEP-0227, such a procedure may be called into question. Of course there are some other sources of possible systematics in our solution. First of all, there is the question of whether JKTEBOP can adequately represent the surfaces of giant stars. A comparison with the Wilson-Devinney code (Graczyk et al. 2012), which is still the most elaborate program for the analysis of eclipsing binaries, suggests that for well-detached binaries (such as CEP-0227) the solutions returned by both codes are very similar.
If any systematics connected with the use of JKTEBOP exist, they are most probably shared by other computer tools for modeling eclipsing binaries. Some systematics may also arise from the assumptions of constant limb darkening and a constant projection factor during the whole pulsation cycle. The validation of both assumptions is currently under way and will be presented in another paper. Accounting for the light travel time effect was important in the analysis. For our object it barely affected the derived parameters, but the overall fit was significantly better, removing some systematic residuals. For some other objects, like OGLE-LMC-CEP-1812, we expect the effect to have an even greater impact on the solution. In summary, we conclude that the presented method allowed us not only to improve the precision of the determination of the intrinsic and structural parameters of the binary system, and of the pulsating component in particular, but also to measure some other characteristics like the limb darkening and the $p$-factor. It has great potential for application to other binary systems with radially pulsating components. \section*{Acknowledgements} \label{sect:acknow} We gratefully acknowledge financial support for this work from the Polish National Science Center grant MAESTRO 2012/06/A/ST9/00269 and the TEAM subsidy from the Foundation for Polish Science (FNP). Support from the BASAL Centro de Astrof{\'i}sica y Tecnolog{\'i}as Afines (CATA) PFB-06/2007 is also acknowledged. AG acknowledges support from FONDECYT grant 3130361. RS is supported by the Polish NSC grant UMO-2011/01/M/ST9/05914. This work is based (in part) on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA. The OGLE project has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 246678 to AU. We would like to thank the support staff at the ESO Paranal and La Silla observatories and at the Las Campanas Observatory for their help in obtaining the observations, and the rest of the OGLE team for their contribution in acquiring the data for the object. This research has made use of NASA's Astrophysics Data System Service.
% Macros used in the text below (retained from the original preamble):
\usepackage{amsmath}
\newcommand{\seq}[1]{\begin{equation} \begin{split} #1 \end{split} \end{equation}}
\newcommand{\sea}[1]{\begin{subequations}\begin{align} #1 \end{align}\end{subequations}}
\newcommand{\seal}[2]{\begin{subequations}\label{#1} \begin{align} #2 \end{align}\end{subequations}}
\newcommand{\LN}[2]{\log | z_{#1}-z_{#2} |}
\newcommand{\zz}[2]{z_{#1}-z_{#2}}
\newcommand{\zbzb}[2]{\bar{z}_{#1}-\bar{z}_{#2}}
\newcommand{\aqk}[1]{ \alpha' q k_{#1}}
\newcommand{\qk}[1]{ q k_{#1}}
\newcommand{\teq}[1]{\theta_{#1}\epsilon_{#1} q}
\newcommand{\temu}[1]{(\theta_{#1}\epsilon_{#1}^\mu)}
\newcommand{\tenu}[1]{(\theta_{#1}\epsilon_{#1}^\nu)}
\newcommand{\tbeq}[1]{\bar{\theta_{#1}}\bar{\epsilon}_{#1} q}
\newcommand{\tbemu}[1]{\bar{\theta_{#1}}\bar{\epsilon}_{#1}^\mu}
\newcommand{\tbenu}[1]{\bar{\theta_{#1}}\bar{\epsilon}_{#1}^\nu}

\begin{document}
\begin{titlepage}
\hfill \hbox{NORDITA-2016-025}
\vskip 1.5cm
\begin{center}
{\Large \bf Subsubleading soft theorems of gravitons and dilatons in the bosonic string}
\vskip 1.0cm
{\large Paolo Di Vecchia$^{a,b}$, Raffaele Marotta$^{c}$, Matin Mojaza$^{b}$} \\[0.7cm]
{\it $^a$ The Niels Bohr Institute, University of Copenhagen,\\ Blegdamsvej 17, DK-2100 Copenhagen \O , Denmark}\\[.2cm]
{\it $^b$ NORDITA, KTH Royal Institute of Technology and Stockholm University, \\ Roslagstullsbacken 23, SE-10691 Stockholm, Sweden}\\[.2cm]
{\it $^c$ Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, \\ Complesso Universitario di Monte S. Angelo ed. 6, via Cintia, 80126, Napoli, Italy}
\end{center}
\begin{abstract}
Starting from the amplitude with an arbitrary number of massless closed states of the bosonic string, we compute the soft limit when one of the states becomes soft to subsubleading order in the soft momentum expansion, and we show that, when the soft state is a graviton or a dilaton, the full string amplitude can be expressed as a soft theorem through subsubleading order. It turns out that there are string corrections to the field theoretical limit in the case of a soft graviton, while for a soft dilaton the string corrections vanish. We then show that the new soft theorems, including the string corrections, can be simply obtained from the exchange diagrams where the soft state is attached to the other external states through the three-point string vertex of three massless states. In the soft limit, the propagator of the exchanged state is divergent, and at tree level these are the only divergent contributions to the full amplitude. However, they do not form a gauge invariant subset and must be supplemented with extra non-singular terms. The requirement of gauge invariance then fixes the complete amplitude through subsubleading order in the soft expansion, reproducing exactly what one gets from the explicit calculation in string theory. From this it is seen that the string corrections at subsubleading order arise as a consequence of the three-point amplitude having string corrections in the bosonic string. When specialized to a soft dilaton, it remarkably turns out that the string corrections vanish and that the non-singular piece of the subsubleading term of the dilaton soft theorem is the generator of space-time special conformal transformations.
\end{abstract} \end{titlepage} \tableofcontents \section{Introduction and summary of results} \label{intro} Tremendous progress is happening in understanding the soft factorizing behavior of scattering amplitudes and their relation to underlying, sometimes hidden, symmetries. Most remarkable, perhaps, are the suggestions that the soft behavior of the gravity and Yang-Mills S-matrices in particular is related to asymptotic symmetries in general relativity and in gauge theories~\cite{asymp}. Also remarkable are the similarities pointed out very recently between the soft behavior of the gravity/string dilaton and the Ward identities of scale and special conformal transformations~\cite{Boels:2015pta,DiVecchia:2015jaq}. New uses of soft theorems are also being discovered in the more modern field of amplitudes~\cite{Cheung:2014dqa,Luo:2015tat}. Soft theorems, however, have a long history, and go back to the seminal works in the 1950s on low-energy photon scattering~\cite{Low} and in the 1960s on soft-graviton scattering~\cite{Weinberg}, when they were realized to be important consequences of gauge invariance. Discussions of the generic subleading behavior of soft gluon and graviton scattering were recently taken up in~\cite{GenericSubStart}, and have, since the suggested relations to asymptotic symmetries~\cite{asymp}, received enormous attention, not only in gravity and Yang-Mills theory~\cite{SoftGravityYangMills,BDDN,BianchiR2phi}, but also in their extensions in supersymmetric theories~\cite{softsusy}, and in string theory~\cite{softstring,DiVecchia:2015oba}. Double-soft theorems are also receiving increasing interest~\cite{DoubleSoft}, due to their potential to uncover hidden symmetries of the S-matrix (see e.g. Ref.~\cite{ArkaniHamed:2008gz} for a discussion of Adler's zeroes and the pion double-soft theorem). Soft theorems in string theory were first discussed in the 1970s by Ademollo et al.~\cite{Ademollo:1975pf} and by Shapiro~\cite{Shapiro:1975cz} for tree diagram scattering amplitudes involving massless particles only, and with particular emphasis on the string dilaton as the soft state (see also Refs.~\cite{YoneyaHata} for this study in string field theory). In a recent work~\cite{DiVecchia:2015oba}, we revived this line of study by computing the soft behavior, up to subsubleading order, of a soft massless closed-string state scattering on external tachyons. It turns out that this amplitude is determined by the same gauge invariance that also determines the soft-graviton behavior up to subsubleading order in field theory, derived in Ref.~\cite{BDDN}. Furthermore, we computed the leading soft behavior of the antisymmetric Kalb-Ramond tensor in the scattering on other massless closed string states. At the same time we rederived the known results involving instead a soft graviton or dilaton, and showed, by invoking a slight generalization of the analysis done in Ref.~\cite{BDDN}, that the leading soft behavior of both is again determined by field theory gauge invariance. The aim of this work is to extend our previous analysis in the bosonic string to subsubleading order for the case of a soft graviton or dilaton scattering on other closed massless states. At this order, string corrections to the corresponding field theory soft theorems are expected to appear for the first time~\cite{BianchiR2phi}, and indeed this is what we find.
Their presence is also expected in the heterotic string, and it is due to interaction terms of the type $\phi R^2$ which appear, at order $\alpha'$, in the effective actions of such string theories~\cite{Metsaev:1987zx}. String corrections to the graviton soft theorem have also been computed in \cite{1512.00803} in the case of four-point bosonic string amplitudes. We have extended this analysis to $(n+1)$-point amplitudes with a soft dilaton or graviton and $n$ massless hard particles, finding that only the graviton soft operator is modified by $\alpha'$-corrections. The lack of string corrections in the soft behavior of the dilaton could be a signal that the dilaton soft theorem is a consequence of some Ward identity, as occurs for the Nambu-Goldstone boson of spontaneously broken conformal invariance~\cite{DiVecchia:2015jaq}. The similarities between these two particles, both called dilaton, indeed deserve further study. Let us summarize our primary results before going through the calculational details. In Ref.~\cite{DiVecchia:2015jaq} it has been shown that the field theory amplitude for a soft graviton or a soft dilaton of soft momentum $q$, with $n$ other hard gravitons and/or dilatons, can be written in the following factorized form: \begin{align} M_{n+1}(q; k_i) \equiv \epsilon_{\mu \nu}^S M^{\mu \nu} (k_i; q) &= \kappa_D \left ( \hat{S}_q^{(-1)}+\hat{S}_q^{(0)}+\hat{S}_q^{(1)} \right ) M_n(k_i) + {\cal O}(q^2) \, , \label{softbe} \end{align} where $\kappa_D$ is related to the gravitational constant in $D$ space-time dimensions, the superscript of each $\hat{S}_q^{(m)}$ indicates the order $m$ in $q$ of each term, and $M_n$ is the amplitude without the soft particle. $\epsilon_{\mu \nu}^S$ is the polarization of either the graviton or the dilaton, which is symmetric under the exchange of $\mu$ and $\nu$. In Ref.~\cite{DiVecchia:2015oba}, the above soft theorem was shown to hold to subleading order also in the framework of the bosonic string, including the Kalb-Ramond antisymmetric field both in the role of the soft state and as hard states. In the cases of a soft graviton or dilaton, the first two terms are given by~\cite{DiVecchia:2015oba}: \begin{align} \hat{S}_q^{(-1)} &= \epsilon_{\mu \nu}^S \sum_{i=1}^n \frac{k_i^\mu k_i^\nu}{k_i \cdot q} \ , \quad \hat{S}_q^{(0)} = \epsilon_{\mu \nu}^S\left ( - \frac{i q_\rho}{2} \right ) \sum_{i=1}^n \frac{k_i^\mu J_i^{\nu \rho} + k_i^\nu J_i^{\mu \rho}}{k_i \cdot q} \, , \label{leasublea} \end{align} where \begin{eqnarray} J_i^{\mu \nu} = L_i^{\mu \nu} + \mathcal{S}_i^{\mu \nu} \, , \quad \mathcal{S}_i^{\mu \nu} = S_i^{\mu \nu} + {\bar{S}}^{\mu \nu}_i \ , \label{JLS1} \end{eqnarray} \begin{align} L_i^{\mu\nu} =i\left( k_i^\mu\frac{\partial }{\partial k_{i\nu}} -k_i^\nu \frac{\partial }{\partial k_{i\mu}}\right) , \ S_i^{\mu\nu}=i\left( \epsilon_i^\mu\frac{\partial }{\partial \epsilon_{i\nu}} - \epsilon_i^\nu\frac{\partial }{\partial \epsilon_{i\mu}}\right) , \ {\bar{S}}^{\mu\nu}_i=i\left( {\bar{\epsilon}}_i^\mu\frac{\partial }{\partial {\bar{\epsilon}}_{i\nu}} -{\bar{\epsilon}}_i^\nu\frac{\partial }{\partial {\bar{\epsilon}}_{i\mu}}\right) . \label{LandS} \end{align} while the third term was computed in the field theory limit in Ref.~\cite{DiVecchia:2015jaq}. The method used is an extension of the one of Ref.~\cite{BDDN}, and the soft behavior in Eq.~(\ref{softbe}) is shown to be a direct consequence of the gauge invariance conditions \begin{eqnarray} q_\mu M^{\mu \nu} (k_i; q) =q_\nu M^{\mu \nu} (k_i; q) =0 \, .
\label{gaugeinvcond} \end{eqnarray} In this paper we extend the previous method to include string corrections, and we check the final result by performing a direct calculation of the subsubleading term in the soft expansion of the bosonic string amplitude involving an arbitrary number of massless closed strings. This calculation is performed by extending the technique developed in Ref.~\cite{DiVecchia:2015oba} for the computation of the subleading term. As a result we obtain the following subsubleading term: \seq{ S_q^{(1)} = &- \frac{\epsilon_{\mu \nu}^S}{2} \sum_{i=1}^n \left [ \frac{q_\rho J_i^{\mu \rho} q_\sigma J_i^{\nu \sigma}}{k_i \cdot q} + \left ( \frac{k_i^\mu q^\nu}{k_i \cdot q} q^\sigma + q^\mu \eta^{\nu \sigma}- \eta^{\mu \nu} q^\sigma \right ) \frac{\partial}{\partial k_{i}^\sigma} \right . \\ &-\left ( \frac{q_\rho q_\sigma \eta_{\mu \nu} - q_\sigma q_\nu \eta_{\rho \mu} - q_\rho q_\mu \eta_{\sigma \nu}}{ k_i\cdot q} \right ) \Pi_i^{\rho \sigma} \\ &\left . - \alpha' \left (q_\sigma k_{i\nu} \eta_{\rho \mu}+q_\rho k_{i\mu } \eta_{\sigma \nu} - \eta_{\rho\mu}\eta_{\sigma \nu} (k_i \cdot q) - q_\rho q_\sigma \frac{k_{i\mu}k_{i\nu}}{k_i \cdot q} \right ) \Pi_i^{\rho \sigma} \right ] , \label{generalsubsub1} } where \begin{align} \Pi_i^{\rho \sigma} = \epsilon_i^\rho\frac{\partial }{\partial \epsilon_{i\sigma}} + {\bar{\epsilon}}_i^\rho\frac{\partial }{\partial {\bar{\epsilon}}_{i\sigma}} \, . \label{sumpoloperator1} \end{align} Only the symmetric part $\Pi_i^{\{\rho, \sigma\}} = \frac{\Pi_i^{\rho \sigma} + \Pi_i^{\sigma \rho}}{2}$ contributes in the previous expression, because the polarization tensor $\epsilon^S_{\mu \nu}$ is symmetric in the indices $\mu$ and $\nu$. The first two lines of Eq.~(\ref{generalsubsub1}) agree with the expression already presented in Ref.~\cite{DiVecchia:2015jaq}, while the third line gives the string corrections. By choosing the polarization of the graviton, from Eqs.~(\ref{leasublea}) and (\ref{generalsubsub1}) we get the soft theorem for a graviton: \seq{ M_{n+1}^{\rm graviton} = &\, \kappa_D \, \epsilon_{\mu \nu}^{g} \sum_{i=1}^n \left[ \frac{k_i^\mu k_i^\nu - i q_\rho k_i^\mu J_{i}^{\nu \rho} - \frac{1}{2} q_\rho J_i^{\mu \rho} q_\sigma J_i^{\nu \sigma}}{k_i q} \right. \\ & \left . - \frac{\alpha'}{2} \left (q_\sigma k_{i\nu} \eta_{\rho \mu}+q_\rho k_{i\mu } \eta_{\sigma \nu} - \eta_{\rho\mu}\eta_{\sigma \nu} (k_i \cdot q) - q_\rho q_\sigma \frac{k_{i\mu}k_{i\nu}}{k_i \cdot q} \right ) \Pi_i^{\{\rho, \sigma\}} \right] M_n \, , \label{gravisoftfina} } while, by choosing the polarization tensor of the dilaton, $\epsilon^{S}_{\mu\nu} \to \frac{1}{\sqrt{D-2}}\left( \eta_{\mu \nu} - q_\mu {\bar{q}}_\nu - q_\nu {\bar{q}}_\mu\right)$, we get the soft theorem for a dilaton: \seq{ M_{n+1}^{\rm dilaton} = &\, \frac{\kappa_D}{\sqrt{D-2}} \left[ 2 - \sum_{i=1}^n k_{i\mu} \frac{\partial}{\partial k_{i\mu}} \right . \\ & \left. + \frac{1}{2} \sum_{i=1}^n \left( q^\rho {\hat{K}}_{i \rho} + \frac{q^\rho q^\sigma}{k_i q} \left( \mathcal{S}_{i, \rho \mu}\eta^{\mu \nu} \mathcal{S}_{i \nu \sigma} + D \Pi_{i,\{\rho, \sigma\}} \right) \right) \right] M_n \, , \label{dilasoftfin} } where \begin{eqnarray} && {\hat{K}}_{i\mu} = 2 \left[ \frac{1}{2} k_{i \mu} \frac{\partial^2}{\partial k_{i\nu} \partial k_i^\nu} -k_{i}^{\rho} \frac{\partial^2}{\partial k_i^\mu \partial k_{i}^{\rho}} + i \mathcal{S}_{i,\rho \mu} \frac{\partial}{\partial k_i^\rho} \right] \, .
\label{hatDhatKmu1} \end{eqnarray} Remarkably, these operators are nothing but the generators of space-time special conformal transformations acting in momentum space. As recently shown in Ref.~\cite{DiVecchia:2015jaq}, these operators also control the soft behavior of the Nambu-Goldstone bosons of spontaneously broken conformal invariance, and an interesting application of this recently appeared in Ref.~\cite{Luo:2015tat}. It would be interesting to understand the physical reason why these generators appear in the soft limit of the string dilaton. Notice furthermore that the string corrections vanish completely for a soft dilaton. The paper is organized as follows. In Sect.~\ref{stringdilaton} we write the amplitude with an arbitrary number of massless states of the closed bosonic string and we perform the limit in which one of them (a dilaton or a graviton) becomes soft. In Sect.~\ref{softdilasoftgravi} we derive the explicit form of the soft behavior for a dilaton and for a graviton and we give a physical interpretation of the various terms that appear. In Sect.~\ref{gaugeinvariance1} we derive the string corrections to the soft theorem through gauge invariance, starting from the string corrections to the three-point amplitude of massless closed string states. Finally, details of the calculations presented in Sect.~\ref{stringdilaton} are given in two Appendices. \section{Amplitude of one soft and $n$ massless closed strings} \label{stringdilaton} In this section we consider the amplitude with $n+1$ massless closed string states and we study its behavior in the limit in which one of the massless states becomes soft. We start by summarizing the results presented in Ref.~\cite{DiVecchia:2015oba}, where more details may be found. The amplitude involving $n+1$ massless closed string states can be written as \begin{align} M_{n+1} \sim & \int \frac{\prod_{i=1}^n d^2z_i\,d^2 z}{dV_{abc}} \int d \theta \prod_{i=1}^n d\theta_i ~ \langle 0| e^{i( \theta \epsilon^\mu_q \partial_{z}+ \sqrt{\frac{\alpha'}{2}} q^\mu)X_\mu(z)}~\prod_{i=1}^ne^{i( \theta_i \epsilon_i^{\mu_i} \partial_{z_i}+ \sqrt{\frac{\alpha'}{2}} k^{\mu_i}_i)X_{\mu_i}(z_i)}|0 \rangle \nonumber\\ &\times \int d {\bar{\theta}} \prod_{i=1}^{n} d {\bar{\theta}}_i \langle 0| e^{i( \bar{\theta} \bar{\epsilon}^\mu_q \partial_{\bar{z}}+ \sqrt{\frac{\alpha'}{2}} q^\mu)X_\mu(\bar{z})}~\prod_{i=1}^ne^{i( \bar{\theta}_i \bar{\epsilon}_i^{\nu_i} \partial_{\bar{z}_i}+ \sqrt{\frac{\alpha'}{2}} k^{\nu_i}_i)X_{\nu_i}(\bar{z}_i)} |0 \rangle \ , \label{amplitheta} \end{align} where $\theta, \bar{\theta}, \theta_i, \bar{\theta}_i$ are Grassmann variables, and we use the definition \mbox{$\epsilon_{i \, \mu \nu} \equiv \epsilon_{i \, \mu} {\bar{\epsilon}}_{i \, \nu}$} for the polarization tensor. We consider the soft string to be the one with momentum $q$ and polarization $\epsilon_{q, \mu \nu}$.
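For readers less familiar with this compact notation, recall the elementary Berezin integration rules that are used repeatedly below (standard facts about Grassmann variables): \begin{align} \int d\theta_i = 0 \, , \qquad \int d\theta_i \, \theta_i = 1 \, , \qquad \theta_i^2 = 0 \, . \end{align} Expanding the exponentials in Eq.~\eqref{amplitheta} and integrating over $\theta_i$ and $\bar{\theta}_i$ thus simply selects the terms linear in each Grassmann variable; this is how the polarization-dependent prefactor of each vertex operator is packaged into a single exponential.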
After using the contraction $\langle X^\mu (z) X^{\nu} (w) \rangle = - \eta^{\mu \nu} \log (z-w)$ and performing the integration over the Grassmann variables $\theta$ and ${\bar{\theta}}$, the expression reduces to a form which can formally be written in two parts: \begin{eqnarray} M_{n+1} = M_n * S \ , \label{MMS} \end{eqnarray} where by $*$ a convolution of integrals is understood, and where \seq{ S \equiv &\, \kappa_D \int \frac{d^2 z}{2 \pi} \,\, \sum_{i=1}^{n} \left(\theta_i \frac{ (\epsilon_q \epsilon_i)}{(z-z_i)^2} + \sqrt{\frac{\alpha'}{2}} \frac{(\epsilon_q k_i)}{z-z_i} \right) \sum_{j=1}^{n} \left({\bar{\theta}}_j \frac{ ({\bar{\epsilon}}_q {\bar{\epsilon}}_j)}{({\bar{z}}- {\bar{z}}_j)^2} + \sqrt{\frac{\alpha'}{2}} \frac{({\bar{\epsilon}}_q k_j)}{{\bar{z}}-{\bar{z}}_j} \right) \\ & \times \exp \left[ - \sqrt{\frac{\alpha'}{2}} \sum_{i=1}^{n} \theta_i \frac{(\epsilon_i q) }{z-z_i} \right] \exp \left[ - \sqrt{\frac{\alpha'}{2}} \sum_{i=1}^{n} {\bar{\theta}}_i \frac{({\bar{\epsilon}}_i q) }{{\bar{z}}-{\bar{z}}_i} \right]\prod_{i=1}^{n} |z- z_i|^{\alpha' q k_i} \, , \label{last3lines} } is the part describing the soft particle, and \seq{ M_n = &\, \frac{8\pi}{\alpha'}\left (\frac{\kappa_D}{2\pi}\right )^{n-2} \int \frac{\prod_{i=1}^n d^2z_i }{dV_{abc}} \int \left[\prod_{i=1}^n d\theta_i \prod_{i=1}^{n} d {\bar{\theta}}_i \right] \prod_{i<j} |z_i - z_j |^{\alpha' k_i k_j} \\ & \times \exp \left[ -\sum_{i<j} \frac{\theta_i \theta_j}{(z_i - z_j)^2} (\epsilon_i \epsilon_j) + \sqrt{\frac{\alpha'}{2}} \sum_{i \neq j} \frac{ \theta_i (\epsilon_i k_j) }{z_i - z_j} \right] \\ & \times \exp \left[- \sum_{i<j} \frac{{\bar{\theta}}_i {\bar{\theta}}_j}{({\bar{z}}_i - {\bar{z}}_j)^2} ({\bar{\epsilon}}_i {\bar{\epsilon}}_j) + \sqrt{\frac{\alpha'}{2}} \sum_{i \neq j} \frac{ {\bar{\theta}}_i ({\bar{\epsilon}}_i k_j) }{{\bar{z}}_i - {\bar{z}}_j} \right] , \label{nonsoftonly} } is the amplitude of $n$ massless states without the soft particle. We eventually want to find a soft operator $\hat{S}$ such that \mbox{$\hat{S}M_n = M_n \ast S$} through order $q^1$. This can be done by expanding $S$ for small $q$ and keeping terms in the integrand up to order $q^2$, since higher orders of the integrand cannot yield terms of order $q^1$ after integration.
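The reason terms of order $q^2$ in the integrand must be kept is that the $z$-integration can raise the order by one inverse power of $q$: the Koba-Nielsen factor regulates the singularity at $z \to z_i$ and produces a pole in $q k_i$. Schematically (up to a convention-dependent normalization of the measure, and keeping only the leading behavior of the region $z \to z_i$), \begin{align} \int \frac{d^2 z}{2\pi} \, \frac{|z-z_i|^{\alpha' q k_i}}{|z-z_i|^2} \; \propto \; \frac{1}{\alpha' \, q k_i} \, , \end{align} so an integrand of order $q^m$ can contribute to the amplitude at order $q^{m-1}$, but not lower.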
It is useful then to divide $S$ into three parts: \begin{eqnarray} S = \kappa_D \left ( S_1 + S_2 + S_3 \right ) + {\cal O}(q^2) \ , \label{SSi} \end{eqnarray} defined by: \seq{ S_{1} = &\, \frac{\alpha'}{2} \int \frac{ d^2 z}{2\pi}\, \sum_{i=1}^{n}\frac{(\epsilon_q k_i)}{z-z_i}\sum_{j=1}^{n} \frac{({\bar{\epsilon}}_q k_j)}{{\bar{z}}-{\bar{z}}_j} \prod_{i=1}^{n} |z- z_i|^{\alpha' q k_i} \\ & \times \Bigg\{ 1 - \sqrt{\frac{\alpha'}{2}} \sum_{k=1}^{n} \Bigg( \theta_k \frac{(\epsilon_k q) }{z-z_k} + {\bar{\theta}}_k \frac{({\bar{\epsilon}}_k q) }{{\bar{z}}-{\bar{z}}_k} \Bigg) + \frac{1}{2}\left( \frac{\alpha'}{2}\right ) \\ & \times\Bigg[ \left( \sum_{h=1}^{n} \theta_h \frac{(\epsilon_h q) }{z-z_h} \right)^2 + \left( \sum_{h=1}^{n} {\bar{\theta}}_h \frac{({\bar{\epsilon}}_h q) }{{\bar{z}}-{\bar{z}}_h} \right)^2 + 2\left( \sum_{h=1}^{n} \theta_h \frac{(\epsilon_h q) }{z-z_h} \right) \left( \sum_{h=1}^{n} {\bar{\theta}}_h \frac{({\bar{\epsilon}}_h q) }{{\bar{z}}-{\bar{z}}_h} \right) \Bigg] \Bigg\} \, , \label{S1} } \seq{ S_2 = &\, \int \frac{ d^2 z}{2\pi} \sum_{i=1}^{n} \left(\theta_i \frac{ (\epsilon_q \epsilon_i)}{(z-z_i)^2}\right) \sum_{j=1}^{n} \left({\bar{\theta}}_j \frac{ ({\bar{\epsilon}}_q {\bar{\epsilon}}_j)}{({\bar{z}}- {\bar{z}}_j)^2} \right)\prod_{\ell=1}^{n} |z- z_{\ell}|^{\alpha' q k_{\ell}} \\ & \times \Bigg\{ 1 - \sqrt{\frac{\alpha'}{2}} \sum_{k=1}^{n} \Bigg( \theta_k \frac{(\epsilon_k q)}{z-z_k} + {\bar{\theta}}_k \frac{({\bar{\epsilon}}_k q) }{{\bar{z}}-{\bar{z}}_k} \Bigg) + \frac{1}{2}\left( \frac{\alpha'}{2}\right ) \\ & \times \Bigg[ \left( \sum_{h=1}^{n} \theta_h \frac{(\epsilon_h q) }{z-z_h} \right)^2 + \left( \sum_{h=1}^{n} {\bar{\theta}}_h \frac{({\bar{\epsilon}}_h q) }{{\bar{z}}-{\bar{z}}_h} \right)^2 + 2\left( \sum_{h=1}^{n} \theta_h \frac{(\epsilon_h q) }{z-z_h} \right) \left( \sum_{h=1}^{n} {\bar{\theta}}_h \frac{({\bar{\epsilon}}_h q) }{{\bar{z}}-{\bar{z}}_h} \right) \Bigg] \Bigg\}\, , \label{S2} } \seq{ S_3 = &\, \sqrt{\frac{\alpha'}{2}}\int \frac{ d^2 z}{2\pi} \sum_{i=1}^{n} \sum_{j=1}^{n} \left[ \left( \frac{ \theta_i(\epsilon_q \epsilon_i)}{(z-z_i)^2} \right) \left(\frac{({\bar{\epsilon}}_q k_j)}{{\bar{z}}-{\bar{z}}_j} \right) + \left( \frac{ {\bar{\theta}}_i({\bar{\epsilon}}_q {\bar{\epsilon}}_i)}{({\bar{z}}- {\bar{z}}_i)^2} \right) \left(\frac{(\epsilon_q k_j)}{z-z_j} \right) \right] \prod_{\ell=1}^{n} |z- z_{\ell}|^{\alpha' q k_{\ell}} \\ & \times \Bigg\{ 1 - \left( \frac{\sqrt{2\alpha'}}{2} \right) \sum_{k=1}^{n} \Big( \theta_k \frac{(\epsilon_k q)}{z-z_k} + {\bar{\theta}}_k \frac{({\bar{\epsilon}}_k q) }{{\bar{z}}-{\bar{z}}_k} \Big) + \frac{1}{2}\left( \frac{\alpha'}{2}\right ) \\ & \times \Bigg[ \left( \sum_{h=1}^{n} \theta_h \frac{(\epsilon_h q) }{z-z_h} \right)^2 + \left( \sum_{h=1}^{n} {\bar{\theta}}_h \frac{({\bar{\epsilon}}_h q) }{{\bar{z}}-{\bar{z}}_h} \right)^2 + 2\left( \sum_{h=1}^{n} \theta_h \frac{(\epsilon_h q) }{z-z_h} \right) \left( \sum_{h=1}^{n} {\bar{\theta}}_h \frac{({\bar{\epsilon}}_h q) }{{\bar{z}}-{\bar{z}}_h} \right) \Bigg] \Bigg\}\, . \label{S_3} } These terms provide all contributions to the order $q^1$. They can be further split into $S_i^{(a)}$, $a=0,1,2$, with the index $a$ labelling the order of the expansion in $q$ of the integrand, modulo the factor $|z-z_l|^{\aqk{l}}$, which has to be integrated.
The integrals involved are all of the form: \begin{align} I_{i_1 i_2 \ldots}^{j_1 j_2 \ldots} = \int \frac{d^2 z}{2 \pi} \frac{\prod_{l = 1}^n |z-z_l|^{\alpha' k_l q}}{ (z-z_{i_1})(z-z_{i_2}) \cdots (\bar{z}-\bar{z}_{j_1}) (\bar{z}-\bar{z}_{j_2}) \cdots } \ . \label{GeneralIntegralintext} \end{align} Each of the integrals involved has to be computed through order $q^{1-a}$, which we denote by ${I^{(1-a)}}_{i_1 i_2 \ldots}^{j_1 j_2 \ldots}$. Using this notation, each term can be compactly expressed as: \seal{Sl1}{ S_1^{(0)} &= \frac{\alpha'}{2} \sum_{i=1}^n \left [ (\epsilon_qk_i)(\bar{\epsilon}_qk_i ) {I^{(1)}}_i^i + \sum_{j\neq i}^n (\epsilon_qk_i)(\bar{\epsilon}_qk_j ) {I^{(1)}}_i^j \right ], \label{S1su0giu} \\ S_1^{(1)} &= -\left(\frac{\alpha'}{2}\right)^{\frac{3}{2}} \sum_{i,j,l=1}^n (\epsilon_qk_i)(\bar{\epsilon}_qk_j )(\theta_l\epsilon_lq){I^{(0)}}^j_{il}+\text{c.c.} \, , \\ S_1^{(2)} &=\frac{1}{2} \left(\frac{\alpha'}{2}\right)^2\sum_{i,j,l=1}^n (\epsilon_qk_i)(\bar{\epsilon}_qk_j) (\theta_l\epsilon_lq)\left[ \sum_{m\neq l}^n (\theta_m\epsilon_mq) {I^{(-1)}}^j_{ilm} + \sum_{m=1}^n (\bar{\theta}_m\bar{\epsilon}_mq) {I^{(-1)}}^{jm}_{il}\right] +\text{c.c.} \, , } with $\text{c.c.}$ denoting the complex conjugate of the expressions. Similarly: \seal{Sl2}{ S_2^{(0)} =&\, \sum_{i,j=1}^n(\epsilon_q\theta_i\epsilon_i)(\bar{\epsilon}_q\bar{\theta}_j\bar{\epsilon}_j){I^{(1)}}^{jj}_{ii} \, , \\ S_2^{(1)} =&\, -\sqrt{\frac{\alpha'}{2}} \sum_{i,j=1}^n \sum_{l\neq i=1}^n (\epsilon_q\theta_i\epsilon_i)(\bar{\epsilon}_q\bar{\theta}_j\bar{\epsilon}_j)(\theta_l\epsilon_lq){I^{(0)}}^{jj}_{iil}+\text{c.c.}\, ,\\ S_2^{(2)}=&\, \frac{\alpha'}{4}\sum_{i,j=1}^n (\epsilon_q\theta_i\epsilon_i)(\bar{\epsilon}_q\bar{\theta}_j\bar{\epsilon}_j)\left[ \sum_{l\neq m\neq i=1}^n (\theta_l\epsilon_lq)(\theta_m\epsilon_mq) {I^{(-1)}}^{jj}_{iilm} \right. \nonumber \\ &\left. +\sum_{l\neq i=1}^n\sum_{m\neq j=1}^n(\theta_l\epsilon_lq)(\bar{\theta}_m\bar{\epsilon}_mq) {I^{(-1)}}^{jjm}_{iil}\right]+\text{c.c.} \, , } where we note that according to Eq.~(\ref{A88}) the first part of $S_2^{(2)}$ involving $I_{iilm}^{jj}$ does not contribute to the order $q$ for any $i,j$ and $l\neq m\neq i$. Finally: \seal{Sl3}{ S_3^{(0)}=&\, \sqrt{\frac{\alpha'}{2}}\sum_{i,j=1}^n (\epsilon_q\theta_i\epsilon_i)(\bar{\epsilon}_qk_j){I^{(1)}}^j_{ii}+\text{c.c.} \, , \\ S_3^{(1)}=&\,-\frac{\alpha'}{2} \sum_{i,j=1}^n (\epsilon_q\theta_i\epsilon_i)(\bar{\epsilon}_qk_j)\left[\sum_{l\neq i=1}^n (\theta_l\epsilon_lq){I^{(0)}}^j_{iil} +\sum_{l =1}^n(\bar{\theta}_j\bar{\epsilon}_jq){I^{(0)}}^{jl}_{ii}\right]+\text{c.c.}\, , \\ S_3^{(2)} =&\, \frac{1}{2}\left(\frac{\alpha'}{2}\right)^{\frac{3}{2}}\sum_{i,j=1}^n (\epsilon_q\theta_i\epsilon_i)(\bar{\epsilon}_qk_j) \left[\sum_{l\neq m\neq i=1}^n (\theta_l\epsilon_lq)(\theta_m\epsilon_mq){I^{(-1)}}^j_{iilm}\right.\nonumber\\ & \left. + \sum_{l \neq m = 1}^n (\tbeq{l})(\tbeq{m} ){I^{(-1)}}_{ii}^{jlm} + \sum_{l\neq i=1}^n\sum_{m\neq j=1}^n (\theta_l\epsilon_lq)(\bar{\theta}_m\bar{\epsilon}_mq) {I^{(-1)}}^{jm}_{iil}\right]+\text{c.c.} \, , } where we note that by inspection of Eqs.~\eqref{Iiimii}-\eqref{Iiimjn} the second part of $S_3^{(2)}$ involving $I_{ii}^{jlm}$ does not contribute to the order $q$ for any $j, l$, and $m\neq l$.
As a word of warning, notice that the definitions of the $S_i$ are not the same as in Ref.~\cite{DiVecchia:2015oba}, but one can identify $S_1^{(0)}$, $S_1^{(1)}$, $S_2^{(0)} + S_2^{(1)}$, and $S_3^{(0)} + S_3^{(1)}$ with, respectively, $S_1, S_2, S_4$ and $S_3$ of Ref.~\cite{DiVecchia:2015oba}. In App.~\ref{Results} we provide the computational details as well as the explicit results for all the integrals involved. In particular, we show in the appendix that all the integrals are linear combinations of a subset of six of them. The coefficients of these linear combinations are complex functions with poles when two Koba-Nielsen variables coincide. We now report the results. The first term of $S_1$, i.e. $S_1^{(0)}$, is the part equivalent to the amplitude of a soft massless string scattering on $n$ tachyons, and this was already computed to the order $q^1$ in Ref.~\cite{DiVecchia:2015oba}, reading: \begin{align} S_1^{(0)}=&\, \epsilon_{q}^{S\mu \nu} \Bigg \{ \sum_{i=1}^n k_{i\mu}k_{i\nu} \Bigg[\frac{(\alpha')^2}{2} \sum_{j \neq i} (k_j q) \log^2 |z_i - z_j| \nonumber \\ & + \frac{1}{k_i q} \left( 1 +\alpha' \sum_{j \neq i} (k_j q) \log |z_i - z_j| + \frac{(\alpha')^2}{2} \sum_{j \neq i} \sum_{k \neq i} (k_j q) (k_k q) \log|z_i -z_j| \log |z_i - z_k| \right) \Bigg] \nonumber \\ & - \alpha'\sum_{i \neq j}^n k_{i\mu}k_{j\nu} \Bigg[ \log|z_i-z_j|-\frac{\alpha' }{2}\sum_{m\neq i,j}(q k_m)\log|{z}_m-{z}_j|\log|z_i-z_m| \nonumber\\ &+ \frac{\alpha' }{2}\sum_{m\neq j} (q k_m)\log|{z}_m-{z}_j|\log|z_i-z_j| + \frac{\alpha' }{2}\sum_{m\neq i}(q k_m)\log|{z}_i-{z}_j|\log|z_i-z_m| \Bigg] \Bigg \} \nonumber \\ & + \epsilon_{q}^{B\mu\nu} \sum_{i\neq j\neq m}^n k_{i\mu}k_{j\nu} \left (\frac{\alpha'}{2}\right )^2 (q k_m) \Bigg [{\rm Li}_2\left( \frac{\bar{z}_i-\bar{z}_m}{\bar{z}_i-\bar{z}_j}\right)-{\rm Li}_2\left(\frac{z_i-z_m}{z_i-z_j }\right) \nonumber\\ & + \log\frac{|z_i-z_j|}{|z_i-z_m|}\log\left(\frac{z_m-z_j}{\bar{z}_m-\bar{z}_j} \frac{\bar{z}_i-\bar{z}_j}{z_i-z_j}\right) \Bigg ] + {\cal O}(q^2) \ , \label{S10} \end{align} where \begin{align} \epsilon_{q}^{S\mu \nu} = \frac{ \epsilon_q^\mu \bar{\epsilon}_q^\nu + \epsilon_q^\nu \bar{\epsilon}_q^\mu}{2} \ , \ \epsilon_{q }^{B\mu \nu} = \frac{ \epsilon_q^\mu \bar{\epsilon}_q^\nu - \epsilon_q^\nu \bar{\epsilon}_q^\mu}{2} \ . \label{poldecomposition} \end{align} For the next terms, only the parts up to order $q^0$ were derived previously.
For completeness we express the full result, together with the new terms of order $q$: \begin{align} S_1^{(1)} = & - \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \sqrt{\frac{\alpha'}{2}} \sum_{i \neq j} \Bigg [ \frac{\theta_i \epsilon_i q}{z_i - z_j} \Bigg ( \frac{ k_j^\mu k_i^\nu}{ q k_i} - \frac{k_j^\mu k_j^\nu}{ q k_j} +\alpha' (k_j^\mu k_i^\nu -k_j^\mu k_j^\nu )\log | z_i - z_j | \nonumber \\ & + {\frac{\alpha'}{2} k_i^\mu k_i^\nu\frac{ q k_j }{ q k_i}} -\frac{\alpha'}{2} k_i^\mu k_j^\nu + \alpha' k_j^\mu k_i^\nu \sum_{l\neq i} \frac{ q k_l}{q k_i} \log | z_i - z_l | - \alpha' k_j^\mu k_j^\nu \sum_{l\neq j} \frac{ q k_l}{q k_j} \log | z_j - z_l | \Bigg ) \nonumber \\ & + \alpha' k_i^\mu k_j^\nu \sum_{l\neq ij} \theta_l \epsilon_l q \frac{\log |z_j-z_l| - \log |z_i-z_j|}{{z}_i - {z}_l} \Bigg ] + \text{c.c.} \, , \end{align} \begin{align} S_1^{(2)} = & \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \frac{\alpha'}{2} \sum_{i\neq j}^n \Bigg [ \frac{k_j^\mu k_i^\nu - k_i^\mu k_i^\nu}{qk_i} \frac{(\theta_i \epsilon_i q )(\theta_j \epsilon_j q)}{(z_i-z_j)^2} + \frac{k_i^\mu k_j^\nu}{2} \sum_{l\neq i,j}^n \frac{(\teq{l})(\tbeq{l})}{qk_l (\zz{i}{l})(\zbzb{j}{l})} \qquad \qquad \nonumber \\ & + \left (k_i^\mu k_i^\nu (\tbeq{j}) + k_i^\mu k_j^\nu (\tbeq{i}) \right ) \frac{ \teq{j}}{2|z_i-z_j|^2}\left(\frac{ 1}{\qk{i}} + \frac{1}{\qk{j}} \right ) \nonumber \\ & + \frac{\frac{k_i^\mu k_i^\nu}{2} (\theta_j \epsilon_j q) + k_j^\mu k_i^\nu (\theta_i \epsilon_i q)}{qk_i (\zz{i}{j})} \sum_{l\neq i,j} \left ( \frac{\teq{l}}{(z_i-z_l)} + \frac{\tbeq{l}}{(\zbzb{i}{l})}\right ) \Bigg] + \text{c.c.} \, , \label{l11}\end{align} \seal{l12}{ S_2^{(0)} =&\, \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \frac{\alpha'}{2} \sum_{i \neq j}^n \frac{(\theta_i \epsilon_i^\mu)}{|\zz{i}{j}|^2} \Bigg \{(\bar{\theta}_i \bar{\epsilon}_i^\nu)qk_j \Bigg (1 +\frac{1}{2}\sum_{l\neq i} \frac{\qk{l}}{\qk{i}} \left [ \frac{\zbzb{i}{j}}{\zbzb{i}{l}}+ \frac{\zz{i}{j}}{\zz{i}{l}} \right ] \Bigg ) \nonumber \\ & -(\bar{\theta}_j \bar{\epsilon}_j^\nu) \sum_{l \neq i,j} qk_l \frac{(\zbzb{i}{l})(\zz{j}{l})}{(\zz{i}{l})(\zbzb{j}{l})} \Bigg\} \, , \\[5mm] S_2^{(1)} =&\, \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \sqrt{\frac{\alpha'}{2}}\sum_{i\neq j}^n \Bigg[ \frac{ \left (\temu{i} (\teq{j}) - \temu{j} (\teq{i}) \right )(\tbenu{i} ) }{|\zz{i}{j}|^2 (\zz{i}{j})} \left(1+ \sum_{l\neq i} \frac{\qk{l}}{\qk{i}} \frac{\zbzb{i}{j}}{\zbzb{i}{l}} \right ) \nonumber \\ &+ \frac{\temu{i}(\tbenu{j})}{\zbzb{i}{j}} \sum_{l\neq i,j}^n \frac{(\theta_l \epsilon_l q)(\zbzb{i}{l})}{(\zz{i}{l})^2(\zbzb{j}{l}) }\Bigg ] + \text{c.c.} \, ,\\[5mm] S_2^{(2)} =&\, \epsilon_{q\mu} \bar{\epsilon}_{q\nu} \sum_{i \neq j }\frac{(\theta_i \epsilon_i^\mu)(\theta_j \epsilon_j q)}{2(\zz{i}{j})^2} \Bigg [ \frac{1}{\qk{i}}\sum_{l\neq i}\Bigg( \frac{ (\bar{\theta}_i \bar{\epsilon}_i^\nu) (\bar{\theta}_l \bar{\epsilon}_l q)- (\bar{\theta}_l\bar{\epsilon}_l^\nu) (\bar{\theta}_i\bar{\epsilon}_i q)} { (\zbzb{i}{l})^2} \nonumber \\ & + \frac{1}{\qk{j}}\sum_{l\neq j} \frac{ (\bar{\theta}_l\bar{\epsilon}_l^\nu) (\bar{\theta}_j\bar{\epsilon}_j q)-(\bar{\theta}_j\bar{\epsilon}_j^\nu) (\bar{\theta}_l\bar{\epsilon}_l q)}{(\zbzb{l}{j})^2} \Bigg) \Bigg] + \text{c.c.} \, , } \seal{I13}{ S_3^{(0)} = &\, \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \sqrt{\frac{\alpha'}{2}} \sum_{i\neq j}^n \alpha'q_\rho \Bigg [ \frac{ k_i^\nu k_j^\rho - k_j^\nu k_i^\rho}{\zz{i}{j}}\temu{i} \left( \frac{1}{\aqk{i}} +\frac{1}{2} \right) \nonumber \\ & + \sum_{l\neq i} \frac{k_i^\nu k_l^\rho-k_l^\nu k_i^\rho}{\zz{i}{j}} \left (\temu{j} + \temu{i}
\frac{\qk{j}}{\qk{i}} \right ) \LN{i}{l} \Bigg ]+\text{c.c.} \, , \\[5mm] S_3^{(1)} = &\, \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \sum_{i\neq j}^n \temu{i}\Bigg\{ \frac{ (\theta_j \epsilon_j q)}{(z_i - z_j)^2} \Bigg [ \frac{k_{i}^{\nu}}{k_i q} - \frac{k_{j}^{\nu}}{k_j q} -\alpha' q_\rho \sum_{l\neq j} \frac{k_j^\nu k_l^\rho - k_l^\nu k_j^\rho}{\qk{j}} \LN{j}{l} \nonumber \\ & -\alpha' q_\rho \sum_{l\neq i} \frac{k_i^\nu k_l^\rho - k_l^\nu k_i^\rho}{\qk{i}} \left ( \frac{1}{2} \frac{\zz{i}{j}}{\zz{i}{l}} - \LN{i}{l} \right) \Bigg] - \frac{\alpha'}{2} \sum_{l\neq i,j} \frac{k_j^\nu \tbeq{l} + k_l^\nu \tbeq{j}}{(\zz{i}{l})(\zbzb{j}{l})} \nonumber \\ & - \frac{\alpha'}{2} \frac{k_i^\nu \bar{\theta}_j \bar{\epsilon}_j q + k_j^\nu \bar{\theta}_i \bar{\epsilon}_i q }{|z_i-z_j|^2} \left (1 + \sum_{l\neq i} \frac{\qk{l}}{\qk{i}} \frac{\zz{i}{j}}{\zz{i}{l}} \right ) \Bigg\} +\text{c.c.} \, , \\[5mm] S_3^{(2)} = &\, \epsilon_{q \mu} \bar{\epsilon}_{q \nu} \sqrt{\frac{\alpha'}{2}} \sum_{i\neq j}^n \Bigg [ \sum_{l\neq i,j}^n \frac{ \temu{i} (\theta_j \epsilon_j q) (\theta_l \epsilon_l q)}{(\zz{j}{l})(\zz{i}{j})^2} \left (\frac{ k_j^\nu}{\qk{j}} -\frac{k_i^\nu}{\qk{i}} \right ) \nonumber \\ & + \frac{ \temu{j}(\teq{i}) -\temu{i}(\teq{j})}{\qk{i}} \sum_{l\neq i} \frac{\left (k_i^\nu \tbeq{l} + k_l^\nu \tbeq{i} \right )}{(\zz{i}{j})^2 (\zbzb{i}{l})} \Bigg ] + {\rm c.c.} \, . } As a nontrivial consistency check, it is possible to show that the full expression \mbox{$S_1 + S_2 + S_3$} obeys gauge invariance, meaning that it vanishes identically under the replacement $\epsilon_{q\mu} \to q_\mu$ and $\bar{\epsilon}_{q\nu} \to q_\nu$. Actually, the identity is stronger, since the full expression vanishes by replacing only $\epsilon_{q\mu} \to q_\mu$ \emph{or} $\bar{\epsilon}_{q\nu} \to q_\nu$, which can be explicitly checked from the above expressions. In other words, \begin{align} q_\mu M_{n+1}^{\mu \nu} = q_\nu M_{n+1}^{\mu \nu} = 0 \, , \label{gaugeinvariance} \end{align} where $M_{n+1}^{\mu \nu}$ is the soft amplitude stripped of the polarization of the soft particle. We want to find a gauge invariant operator that, when acting on $M_n$, reproduces the above results, i.e. \begin{align} M_{n+1}(q; k_i) = M_n(k_i) \ast S(q,k_i) &= \kappa_D \left ( \hat{S}_q^{(-1)}+\hat{S}_q^{(0)}+\hat{S}_q^{(1)} \right ) M_n(k_i) + {\cal O}(q^2) \, , \label{softexpansion} \end{align} where the superscript of each $\hat{S}_q^{(m)}$ indicates the order $m$ in $q$ of each term.
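Before doing so, it is worth recalling the three elementary ingredients that make the consistency check above work (all standard facts): momentum conservation, $\sum_{i=1}^n k_i^\mu = -q^\mu$; Lorentz invariance of the lower-point amplitude, $\sum_{i=1}^n J_i^{\nu \rho} M_n = 0$; and the antisymmetry of the angular momentum operators, which implies \begin{align} q_\mu q_\rho \, J_i^{\mu \rho} = 0 \, . \end{align} For instance, contracting the leading operator with $q_\mu$ leaves $\sum_{i} k_i^\nu M_n = - q^\nu M_n$, a contribution of order $q$ that is compensated by the gauge variation of the subleading and subsubleading operators; it is this interplay between consecutive orders that fixes the operators below.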
In Ref.~\cite{DiVecchia:2015oba} we showed that the leading and subleading terms, symmetric in the polarization indices $\mu, \nu$, are generated by exactly the same soft operators that one can infer using just gauge invariance of the amplitude, which read: \begin{align} \hat{S}_q^{(-1)} &= \epsilon_{\mu \nu}^S \sum_{i=1}^n \frac{k_i^\mu k_i^\nu}{k_i \cdot q} \ , \quad \hat{S}_q^{(0)} = \epsilon_{\mu \nu}^S\left ( - \frac{i q_\rho}{2} \right )\sum_{i=1}^n \frac{k_i^\mu J_i^{\nu \rho} + k_i^\nu J_i^{\mu \rho}}{k_i \cdot q} \, , \end{align} where \begin{eqnarray} J_i^{\mu \nu} = L_i^{\mu \nu} + \mathcal{S}_i^{\mu \nu} \, , \quad \mathcal{S}_i^{\mu \nu} = S_i^{\mu \nu} + {\bar{S}}^{\mu \nu}_i \ , \label{JLS} \end{eqnarray} \begin{align} L_i^{\mu\nu} =i\left( k_i^\mu\frac{\partial }{\partial k_{i\nu}} -k_i^\nu\frac{\partial }{\partial k_{i\mu}}\right) , \ S_i^{\mu\nu}=i\left( \epsilon_i^\mu\frac{\partial }{\partial \epsilon_{i\nu}} -\epsilon_i^\nu\frac{\partial }{\partial \epsilon_{i\mu}}\right) , \ {\bar{S}}^{\mu\nu}_i=i\left( {\bar{\epsilon}}_i^\mu\frac{\partial }{\partial {\bar{\epsilon}}_{i\nu}} -{\bar{\epsilon}}_i^\nu\frac{\partial }{\partial {\bar{\epsilon}}_{i\mu}}\right) \, . \label{LandS1} \end{align} The new result here is that the subsubleading terms, symmetric in the polarization indices $\mu, \nu$, are uniquely generated by the following soft operator, which can be explicitly checked: \begin{align} S_q^{(1)} = &- \frac{\epsilon_{\mu \nu}^S}{2} \sum_{i=1}^n \left [ \frac{q_\rho J_i^{\mu \rho} q_\sigma J_i^{\nu \sigma}}{k_i \cdot q} + \left ( \frac{k_i^\mu q^\nu}{k_i \cdot q} q^\sigma + q^\mu \eta^{\nu \sigma}- \eta^{\mu \nu} q^\sigma \right ) \frac{\partial}{\partial k_{i}^\sigma} \right . \nonumber \\ &-\left ( \frac{q_\rho q_\sigma \eta_{\mu \nu} - q_\sigma q_\nu \eta_{\rho \mu} - q_\rho q_\mu \eta_{\sigma \nu}}{ k_i\cdot q} \right ) \left (\epsilon_{i}^\rho \frac{\partial}{\partial \epsilon_{i\sigma}} + \bar{\epsilon}_{i}^\rho \frac{\partial}{\partial \bar{\epsilon}_{i\sigma}} \right ) \nonumber \\ &\left . - \alpha' \left (q_\sigma k_{i\nu} \eta_{\rho \mu}+q_\rho k_{i\mu }\eta_{\sigma \nu} - \eta_{\rho\mu}\eta_{\sigma \nu} (k_i \cdot q) - q_\rho q_\sigma \frac{k_{i\mu}k_{i\nu}}{k_i \cdot q} \right ) \left (\epsilon_{i}^\rho \frac{\partial}{\partial \epsilon_{i\sigma}} + \bar{\epsilon}_{i}^\rho \frac{\partial}{\partial \bar{\epsilon}_{i\sigma}} \right ) \right ] . \label{generalsubsub} \end{align} It is thus useful to also define: \begin{align} \Pi_i^{\rho \sigma} = \epsilon_i^\rho\frac{\partial }{\partial \epsilon_{i\sigma}} + {\bar{\epsilon}}_i^\rho\frac{\partial }{\partial {\bar{\epsilon}}_{i\sigma}} \, . \label{sumpoloperator} \end{align} Notice that only the symmetric combination $\Pi_i^{\{\rho, \sigma\}} = \frac{\Pi_i^{\rho \sigma} + \Pi_i^{\sigma \rho}}{2}$ survives the contractions in Eq.~\eqref{generalsubsub}, since the contraction with $\epsilon_{\mu\nu}^S$ is symmetric in $\mu$ and $\nu$. The terms in the first two lines of Eq.~\eqref{generalsubsub}, which are finite in the field theory limit, exactly match the soft theorem derived in Ref.~\cite{DiVecchia:2015jaq} using just on-shell gauge invariance of tree-level gravity amplitudes. The terms in the last line can thus be seen as the string corrections to the field theory soft theorem. Notice that each parenthesis is independently gauge invariant. Notice also that in the field theory limit, if the soft particle is a graviton, only the first term is nonzero, since $\epsilon_{\mu \nu}^{\rm graviton}$ is traceless.
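The statement that each parenthesis in Eq.~\eqref{generalsubsub} is separately gauge invariant can be verified by a one-line computation; for instance, contracting the string-correction bracket with $q^\mu$ (the index carried by the soft polarization) gives \begin{align} q^\mu \left (q_\sigma k_{i\nu} \eta_{\rho \mu}+q_\rho k_{i\mu }\eta_{\sigma \nu} - \eta_{\rho\mu}\eta_{\sigma \nu} (k_i \cdot q) - q_\rho q_\sigma \frac{k_{i\mu}k_{i\nu}}{k_i \cdot q} \right ) = q_\rho q_\sigma k_{i\nu} + q_\rho (k_i \cdot q) \eta_{\sigma \nu} - q_\rho (k_i \cdot q) \eta_{\sigma \nu} - q_\rho q_\sigma k_{i\nu} = 0 \, , \end{align} with the contraction with $q^\nu$ working in the same way.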
The extra terms in the first line were found already in Ref.~\cite{DiVecchia:2015oba} for the case where the $n$ external states were tachyons. In Ref.~\cite{DiVecchia:2015oba}, we also found a soft theorem for the antisymmetric part at subleading order, corresponding to a soft Kalb-Ramond field. At this point, however, it is not clear how the antisymmetric part of our explicit subsubleading results could also be expressed as a soft theorem, since at this order dilogarithmic terms appear in Eq.~\eqref{S10}. We thus leave the analysis of the antisymmetric part for a possible future study. In the next section we specialize the subsubleading operator to the cases of a soft dilaton and a soft graviton, and we give a physical interpretation of the various terms that appear. \section{Soft gravitons and dilatons} \label{softdilasoftgravi} Specifying our main result Eq.~\eqref{generalsubsub} to the cases where the soft particle is either a graviton or a dilaton, we may first simplify the general expression by imposing the transversality condition $\epsilon_{\mu \nu}^S q^\mu = \epsilon_{\mu \nu}^S q^\nu = 0$, leading to: \seq{ S_q^{(1)} = &- \frac{\epsilon_{\mu \nu}^S}{2} \sum_{i=1}^n \left [ \frac{q_\rho J_i^{\mu \rho} q_\sigma J_i^{\nu \sigma}}{k_i \cdot q} - \frac{\eta^{\mu \nu} q_\rho q_\sigma}{k_i \cdot q} \left ( k_i^\rho \frac{\partial}{\partial k_{i\sigma}} + \Pi_i^{\{\rho, \sigma\}} \right ) \right . \\ &\left . - \alpha' \left (q_\sigma k_{i\nu} \eta_{\rho \mu}+q_\rho k_{i\mu }\eta_{\sigma \nu} - \eta_{\rho\mu}\eta_{\sigma \nu} (k_i \cdot q) - q_\rho q_\sigma \frac{k_{i\mu}k_{i\nu}}{k_i \cdot q} \right ) \Pi_i^{\{\rho, \sigma\}} \right ] . \label{subsub} } Considering the soft particle to be a graviton, tracelessness of its polarization gives: \seq{ S_{{\rm graviton}, q}^{(1)} = &- \frac{\epsilon_{\mu \nu}^{\rm graviton}}{2} \sum_{i=1}^n \left [ \frac{q_\rho J_i^{\mu \rho} q_\sigma J_i^{\nu \sigma}}{k_i \cdot q} \right . \\ &\left . - \alpha' \left (q_\sigma k_{i\nu} \eta_{\rho \mu}+q_\rho k_{i\mu } \eta_{\sigma \nu} - \eta_{\rho\mu}\eta_{\sigma \nu} (k_i \cdot q) - q_\rho q_\sigma \frac{k_{i\mu}k_{i\nu}}{k_i \cdot q} \right ) \Pi_i^{\{\rho, \sigma\}} \right ] . \label{softgraviton} } The first term reproduces the subsubleading soft theorem for gravitons. The second line contains the string corrections to the field theory result. We can reduce the derivatives with respect to $\epsilon_i$ and $\bar{\epsilon}_i$ in $\Pi_i$ by acting on the $n$-point amplitude with the polarization vectors stripped off, i.e. \begin{align} M_n(k_i , \epsilon_i, \bar{\epsilon}_i) = \epsilon_1^{\mu_1}\bar{\epsilon}_1^{\nu_1} \cdots \epsilon_n^{\mu_n}\bar{\epsilon}_n^{\nu_n} M_{n,(\mu_1,\nu_1), \ldots, (\mu_n,\nu_n)} (k_i) \, .
\label{strippedMn} \end{align} Then we can express: \seq{ \left (\epsilon_{i}^\rho \frac{\partial}{\partial \epsilon_{i\sigma}} + \bar{\epsilon}_{i}^\rho \frac{\partial}{\partial \bar{\epsilon}_{i\sigma}} \right ) M_n &= \left ( \eta^{\sigma \mu_i} \epsilon_i^\rho \bar{\epsilon}_i^{\nu_i} + \eta^{\sigma \nu_i} \epsilon_i^{\mu_i} \bar{\epsilon}_i^{\rho} \right ) M_{n, (\mu_i \nu_i)} \\ &=2 \eta^{\sigma \mu_i} \left ( \epsilon_i^{\{\rho,} \bar{\epsilon}_i^{\nu_i \}} M_{n,\{\mu_i, \nu_i\}} + \epsilon_i^{[\rho,} \bar{\epsilon}_i^{\nu_i ]} M_{n,[\mu_i, \nu_i]} \right ) \, , \label{gravitonstringpole} } where in the second line we decomposed $M_n$ into its symmetric and antisymmetric parts, as in Eq.~(\ref{poldecomposition}), showing that string corrections can exist for external states $i$ polarized both symmetrically (gravitons and dilatons) and antisymmetrically (Kalb-Ramond). We will comment further on these new string-theory terms in the next section. Projecting instead the soft leg onto the dilaton, using $\epsilon_{\mu \nu}^d = (\eta_{\mu \nu} - q_\mu \bar{q}_\nu - q_\nu \bar{q}_\mu)/\sqrt{D-2}$, with $q\cdot \bar{q}= 1$ and $\bar{q}^2=q^2 = 0$, we get: \begin{align} S_{{\rm dilaton}, q}^{(1)} = \frac{1}{2\sqrt{D-2}} \sum_{i=1}^n &\left[ q^\rho {\hat{K}}_{i \rho} + \frac{q^\rho q^\sigma}{k_i q} \left( \mathcal{S}_{i, \rho \mu} \eta^{\mu \nu} \mathcal{S}_{i \nu \sigma} + D \Pi_{i,\{\rho, \sigma\}} \right) \right . \nonumber \\ & \left . -\alpha'(k_i \cdot q) \left ( \epsilon_i \cdot \frac{\partial}{\partial \epsilon_i} + \bar{\epsilon}_i \cdot \frac{\partial}{\partial \bar{\epsilon}_i} \right ) \right] \, , \label{subsubdilaton} \end{align} where both $k_i \cdot \epsilon_i = k_i \cdot \bar{\epsilon}_i = 0$ and gauge invariance, i.e. $k_i\cdot \frac{\partial}{\partial \epsilon_i}M_n = k_i^\mu M_{n, \mu} = 0$ and $k_i\cdot \frac{\partial}{\partial \bar{\epsilon}_i} M_n = k_i^\nu M_{n, \nu} = 0$, were used, and where we introduced the operator: \begin{eqnarray} && {\hat{K}}_{i\mu} = 2 \left[ \frac{1}{2} k_{i \mu} \frac{\partial^2}{\partial k_{i\nu} \partial k_i^\nu} -k_{i}^{\rho} \frac{\partial^2}{\partial k_i^\mu \partial k_{i}^{\rho}} + i \mathcal{S}_{i,\rho \mu} \frac{\partial}{\partial k_i^\rho} \right] \, . \label{hatDhatKmu} \end{eqnarray} Remarkably, this is exactly the generator of special conformal transformations acting in momentum space. The string correction for the dilaton vanishes due to momentum conservation, since the operator $\epsilon_i \cdot \frac{\partial}{\partial \epsilon_i}$ leaves $M_n$ invariant, yielding (and correspondingly for the barred term) \begin{align} \frac{\alpha'}{2}\sum_{i=1}^n (k_i \cdot q) \epsilon_i \cdot \frac{\partial}{\partial \epsilon_i} M_n = \frac{\alpha'}{2} \sum_{i=1}^n (k_i \cdot q) M_n = - \frac{\alpha'}{2} q^2 M_n = 0 \ . \end{align} In conclusion, we find that the subsubleading dilaton soft operator equals its field theory counterpart and reads: \begin{align} S_{{\rm dilaton}, q}^{(1)} = & \frac{1}{2\sqrt{D-2}} \sum_{i=1}^n \left[ q^\rho {\hat{K}}_{i \rho} + \frac{q^\rho q^\sigma}{k_i q} \left( \mathcal{S}_{i, \rho \mu}\eta^{\mu \nu} \mathcal{S}_{i \nu \sigma} + D \Pi_{i,\{\rho, \sigma\}} \right) \right] . \label{finalsubsubdilaton} \end{align} The subsubleading dilaton soft theorem thus contains a finite piece, which can be fully expressed through the generator of special conformal transformations, and a singular piece that depends only on polarization derivatives.
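As an elementary check on the dilaton projection used above, note that with $q^2 = \bar{q}^2 = 0$ and $q \cdot \bar{q} = 1$ the projector is transverse and has trace $\sqrt{D-2}$: \begin{align} q^\mu \epsilon^d_{\mu\nu} = \frac{q_\nu - q^2 \, \bar{q}_\nu - (q \cdot \bar{q})\, q_\nu}{\sqrt{D-2}} = 0 \, , \qquad \eta^{\mu\nu} \epsilon^d_{\mu\nu} = \frac{D - 2\, q \cdot \bar{q}}{\sqrt{D-2}} = \sqrt{D-2} \, , \end{align} consistent with its use in Eqs.~\eqref{subsubdilaton} and \eqref{finalsubsubdilaton}.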
We may use the polarization-stripped form of $M_n$ in Eq.~\eqref{strippedMn} to understand the singular terms, which after some simplification read (suppressing for brevity the factor $1/\sqrt{D-2}$): \seq{ &\sum_{i=1}^n \frac{q^\rho q^\sigma}{2 k_i q} \left( \mathcal{S}_{i, \rho \mu}\eta^{\mu \nu} \mathcal{S}_{i \nu \sigma} + D \Pi_{i,\{\rho, \sigma\}} \right) M_n \\ & = \sum_{i=1}^n \frac{1}{ k_i q} \Big ( q_{\mu_i}q_{\nu_i} (\epsilon_i \cdot \bar{\epsilon}_i) + \eta_{\mu_i \nu_i} (q \cdot \epsilon_i)(q \cdot \bar{\epsilon}_i) \\ & \hspace{15mm} + (q \cdot \epsilon_i) (q_{\mu_i} \bar{\epsilon}_{\nu_i} - q_{\nu_i} \bar{\epsilon}_{\mu_i} )- (q \cdot \bar{\epsilon}_i) (q_{\mu_i} {\epsilon}_{\nu_i} - q_{\nu_i} {\epsilon}_{\mu_i} ) \Big ) M_n^{(\mu_i, \nu_i)} \, . } The expression evidently separates into a symmetric and an antisymmetric part, which we can express using $M_n^{\mu_i \nu_i} = M_n^{\{\mu_i, \nu_i\}} + M_n^{[\mu_i, \nu_i]}$, reducing the previous expression to: \begin{align} &\sum_{i=1}^n \left (\frac{q_{\mu_i}q_{\nu_i} \eta_{\alpha \beta} + q_\alpha q_\beta \eta_{\mu_i \nu_i} }{ k_i \cdot q} \right ) \epsilon_i^\alpha \bar{\epsilon}_i^\beta M_n^{\{\mu_i, \nu_i\}} + \sum_{i=1}^n 2\left (\frac{q_\alpha q_{\mu_i} \eta_{\beta \nu_i} + q_\beta q_{\nu_i} \eta_{\alpha \mu_i}}{ k_i \cdot q} \right )\epsilon_i^\alpha \bar{\epsilon}_i^\beta M_n^{[\mu_i, \nu_i]} \nonumber \\ =& \sum_{i=1}^n \left (\frac{q_{\mu_i}q_{\nu_i} \eta_{\alpha \beta} + q_\alpha q_\beta \eta_{\mu_i \nu_i} }{ k_i \cdot q} \right ) \epsilon_i^{S \alpha,\beta} M_n^{\{\mu_i, \nu_i\}} + \sum_{i=1}^n 4\frac{ q_\alpha q_{\mu_i} \eta_{\beta \nu_i}}{ k_i \cdot q}\epsilon_i^{B \alpha,\beta} M_n^{[\mu_i, \nu_i]} \, , \end{align} where in the second line we also decomposed the polarization vectors as in Eq.~\eqref{poldecomposition}. The form of these terms suggests that they come from factorizing exchange diagrams, where the soft dilaton is attached to an external leg through a three-point vertex, with the indices $\alpha, \beta$ being the polarization indices of the internal state. In the field theory limit there are only two types of such vertices, one involving two dilatons derivatively coupled to a graviton and one involving two Kalb-Ramond fields derivatively coupled to a dilaton, giving rise to the three types of factorizing diagrams shown in Fig.~\ref{fieldtheoryfactorization}. Indeed, if we project the external leg $i$ onto each of the three massless states, the expression above reduces in each case to one nonzero term: \seal{softdilatonpoles}{ &\text{For } \epsilon_i^{\alpha \beta} = \epsilon_g^{\alpha\beta}: \qquad \frac{ q_\alpha q_\beta}{ k_i \cdot q} \, \eta_{\mu_i \nu_i} \, \epsilon_g^{\alpha,\beta} M_n^{\{\mu_i, \nu_i\}} \, . \\ &\text{For } \epsilon_i^{\alpha \beta} = \epsilon_d^{\alpha\beta}: \qquad \frac{ q_{\mu_i} q_{\nu_i} }{ k_i \cdot q} \, \eta_{\alpha\beta} \, \epsilon_d^{\alpha,\beta} M_n^{\{\mu_i, \nu_i\}} \, . \\ &\text{For } \epsilon_i^{\alpha \beta} = \epsilon_B^{\alpha\beta}: \qquad 4 \frac{ q_{\alpha} q_{\mu_i} }{ k_i \cdot q} \, \eta_{\beta \nu_i} \, \epsilon_B^{\alpha,\beta} M_n^{[\mu_i, \nu_i]} \, .
} \begin{figure}[tb] \includegraphics[width=0.2\textwidth]{Softdilaton.pdf} \hspace{3mm} \includegraphics[width=0.22\textwidth]{DDG-vertex.pdf} \hspace{3mm} \includegraphics[width=0.22\textwidth]{DGD-vertex.pdf} \hspace{3mm} \includegraphics[width=0.22\textwidth]{DBB-vertex.pdf} \caption{Soft dilaton (dashed line) scattering on other massless states (solid lines), and the only three types of exchange diagrams appearing in field theory, involving i) another external dilaton and an internal graviton (double-wavy line), ii) an external graviton and an internal dilaton, iii) an external and an internal Kalb-Ramond field (wavy line). } \label{fieldtheoryfactorization} \end{figure} It is worth noticing from Fig.~\ref{fieldtheoryfactorization} that in the cases where the $i$th particle is either a dilaton or a graviton, there are contributions to the dilaton soft theorem where the $i$th particle in the lower-point amplitude $M_n$ is changed to a graviton or a dilaton, respectively. This means that the subsubleading term of the soft dilaton amplitude separates into two factorized terms when the $i$th state is a dilaton or a graviton. Specifically, for each of the three possible cases we have: \begin{align} M_{n+1}( \phi(q); \phi(k_i), \ldots) &\sim \left (\hat{S}^{(0)} + q^\mu \hat{S}^{(1)}_{ \mu} \right) M_n (\phi(k_i), \ldots ) +q^\mu \hat{S}^{\phi g}_{ \mu} \, M_n (g_{\alpha \beta}(k_i), \ldots ) + {\cal O}(q^2) \, , \nonumber \\[2mm] M_{n+1}( \phi(q); g_{\mu_i\nu_i}(k_i), \ldots) &\sim \left (\hat{S}^{(0)} + q^\mu \hat{S}^{(1)}_{ \mu} \right) M_n (g_{\mu_i\nu_i} (k_i), \ldots ) +q^\mu \hat{S}^{g\phi}_{ \mu} \, M_n (\phi(k_i), \ldots ) + {\cal O}(q^2) \, , \nonumber\\[2mm] M_{n+1}( \phi(q); B_{\mu_i\nu_i}(k_i), \ldots) &\sim \left (\hat{S}^{(0)} + q^\mu \hat{S}^{(1)}_{ \mu} \right) M_n (B_{\mu_i\nu_i}(k_i), \ldots )+ {\cal O}(q^2) \, , \end{align} where $\phi$ denotes a dilaton, $g_{\alpha \beta}$ denotes a graviton, and $B_{\alpha \beta}$ denotes a Kalb-Ramond field. The non-standard factorizing behavior of the first two cases was also noticed in Ref.~\cite{BianchiR2phi} in specific examples. In conclusion, in this section we have provided a physical interpretation of the subsubleading terms that survive in the field theory limit. In the next section we extend our analysis to the terms with string corrections. In particular, we show that they are completely determined by gauge invariance together with the string corrections, appearing in the bosonic string, to the three-point amplitude involving massless particles. \section{String corrections from gauge invariance} \label{gaugeinvariance1} In this section we derive the string corrections to the soft operator, found in the previous section from explicit calculations, by considering the string corrections to the three-point amplitude of three massless closed strings and exploiting on-shell gauge invariance of the amplitude. Let us consider the three-point amplitude of three massless bosonic strings with the following set of momenta and polarizations: \begin{align} (q, \epsilon_q^\mu, \bar{\epsilon}_q^\nu) \, , \quad (k_i, \epsilon_i^{\mu_i}, \bar{\epsilon}_i^{\nu_i}) \, , \quad (-k_i-q, \epsilon_m^{\alpha}, \bar{\epsilon}_m^{\beta}) \, .
\end{align} The polarization-stripped three-point on-shell amplitude then reads: \seq{ M_3^{\mu\nu ;\,\mu_i\nu_i; \,\alpha\beta}= &2 \kappa_D \left( \eta^{\mu\mu_i} q^\alpha-\eta^{\mu\alpha} q^{\mu_i} +\eta^{\mu_i\alpha} k_i^\mu -\frac{\alpha'}{2} k_i^\mu q^{\mu_i} q^\mu\right) \\ &\times\left( \eta^{\nu\nu_i} q^\beta-\eta^{\nu\beta} q^{\nu_i} +\eta^{\nu_i\beta} k_i^\nu -\frac{\alpha'}{2} k_i^\nu q^{\nu_i} q^\nu\right) \, . \label{threepoint} } Contracting this expression with particular polarization tensors yields the explicit expressions for the three-point amplitudes of specific massless states. For instance, considering the case where one of the states is a dilaton, and contracting with the polarization tensor used in Eq.~\eqref{subsubdilaton}, we get the following nonzero three-point amplitudes involving one dilaton: \seq{ M_{ddg} = 2 \kappa_D \epsilon_g^{\alpha \beta} q_\alpha q_\beta & \, , \quad M_{dBB} = \frac{2 \kappa_D}{\sqrt{D-2}} \, 4 \epsilon_B^{\mu_i \nu_i} \eta_{\mu_i \alpha}\epsilon_B^{\alpha \beta} q_{\nu_i} q_\beta \, , \\ M_{dgg} &= - \alpha'\frac{2 \kappa_D}{\sqrt{D-2}} \, \epsilon_g^{\mu_i \nu_i} \epsilon_g^{\alpha \beta} q_{\mu_i} q_{\nu_i} q_\alpha q_\beta \, . \label{3couplings} } Notice that the dilaton-graviton-graviton amplitude vanishes in the field theory limit. From these three-point amplitudes we can immediately write down the contributions from the factorizing exchange diagrams in Fig.~\ref{fieldtheoryfactorization} to the soft theorem, i.e. \begin{align} M_{n+1}^{\rm ex.}&(\phi(q), \phi(k_i), \ldots ) \nonumber \\ \sim& \ M_{ddg}\, \frac{1}{(k_i + q)^2}\, M_n(g_{\alpha \beta}(k_i+q), \ldots ) =\kappa_D\, \frac{q_\alpha q_\beta}{k_i \cdot q}\, \epsilon_g^{\alpha \beta}M_n(g_{\alpha \beta}(k_i+q), \ldots ) \, , \\[5mm] M_{n+1}^{\rm ex.}&(\phi(q), g_{\mu_i\nu_i}(k_i), \ldots ) \nonumber \\ \sim&\ M_{ddg} \,\frac{1}{(k_i + q)^2} \, M_n(\phi(k_i+q), \ldots ) + M_{dgg} \,\frac{1}{(k_i + q)^2} \, M_n(g_{\alpha \beta} (k_i+q), \ldots ) \nonumber \\[2mm] =&\ \kappa_D \, \frac{q_{\mu_i} q_{\nu_i}}{k_i \cdot q}\, \epsilon_g^{\mu_i \nu_i} M_n(\phi(k_i+q), \ldots ) - \frac{\alpha' \, \kappa_D}{\sqrt{D-2}} \, \epsilon_g^{\mu_i \nu_i} \, \frac{q_{\mu_i} q_{\nu_i} q_\alpha q_\beta}{k_i \cdot q} \, \epsilon_g^{\alpha \beta} M_n(g_{\alpha \beta} (k_i+q), \ldots ) \, , \nonumber \\[5mm] M_{n+1}^{\rm ex.}&(\phi(q), B_{\mu_i\nu_i}(k_i), \ldots ) \nonumber \\[2mm] \sim&\ \frac{M_{dBB}}{(k_i + q)^2} \, M_n(B_{\alpha \beta}(k_i+q), \ldots ) =\frac{\kappa_D}{\sqrt{D-2}} \, \epsilon_B^{\mu_i \nu_i } \frac{ 4 q_{\nu_i} q_{\beta} \eta_{\mu_i \alpha}}{k_i\cdot q}\, \epsilon_B^{\alpha \beta} M_n (B_{\alpha \beta}(k_i+q), \ldots ) \, . \end{align} These expressions match, through order ${\cal O}(q)$, exactly the singular terms in the dilaton soft theorem found in the previous section. Specifically, in Eq.~\eqref{softdilatonpoles} (where a factor $\kappa_D/\sqrt{D-2}$ from Eqs.~\eqref{softexpansion} and \eqref{finalsubsubdilaton} was suppressed) one should make the identifications: \sea{ M_n^{\{\mu_i,\nu_i\}} &\equiv \epsilon_g^{\mu_i \nu_i} M_n (g_{\alpha \beta} (k_i), \ldots ) + \epsilon_d^{\mu_i \nu_i} M_n (\phi(k_i), \ldots ) \, , \\ M_n^{[\mu_i,\nu_i]} &\equiv \epsilon_B^{\mu_i \nu_i} M_n (B_{\alpha \beta} (k_i), \ldots ) \, . } Notice that the three-point amplitude proportional to $\alpha'$, involving one dilaton and two gravitons, does not contribute to the dilaton soft theorem, since it is proportional to the fourth power of the soft momentum, and thus contributes at order $q^3$.
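The power counting behind this last remark is immediate: for massless momenta $(k_i + q)^2 = 2\, k_i \cdot q$, so the corresponding exchange diagram behaves as \begin{align} M_{dgg} \, \frac{1}{(k_i+q)^2} \, M_n \; \sim \; {\cal O}(q^4) \times \frac{1}{2\, k_i \cdot q} \; = \; {\cal O}(q^3) \, , \end{align} which is far beyond the subsubleading order ${\cal O}(q)$ considered in this paper.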
Let us now consider the $\alpha'$-correction terms to the graviton soft theorem appearing at subsubleading order in Eq.~\eqref{softgraviton}. Among them there is a term with the propagator pole $1/k_i \cdot q$, which should come from a factorizing exchange diagram. Indeed, expanding the three-point amplitude in terms of $q$, with the soft state now being a graviton, the leading string correction to the three-point amplitude reads: \seq{ M_3^{\mu \nu, \alpha \beta}\Big |_{\alpha'} &\sim - (2 \kappa_D) \frac{\alpha'}{2} k_i^\mu k_i^\nu ( (\epsilon_i \cdot q)q^\alpha \bar{\epsilon}_i^\beta + (\bar{\epsilon}_i \cdot q) q^\beta \epsilon_i^\alpha ) + {\cal O}(q^3) \\ &= - (2 \kappa_D) \frac{\alpha'}{2} k_i^\mu k_i^\nu q_\rho q_\sigma (\eta^{\sigma \alpha} \epsilon_i^\rho \bar{\epsilon}_i^\beta + \eta^{\sigma \beta} \epsilon_i^\alpha \bar{\epsilon}_i^\rho )+ {\cal O}(q^3) \, , \label{M3graviton} } where we contracted with the polarization vectors of leg $i$. Comparing with Eqs.~\eqref{softgraviton} and \eqref{gravitonstringpole}, we see that the singular string correction in Eq.~\eqref{softgraviton} is exactly reproduced by the exchange diagram given by $\frac{M_3}{(k_i + q)^2}M_n$, with the singularity coming from the pole of the exchanged state. We will now show that the remaining factorizing non-singular terms in Eq.~\eqref{subsub} all follow from on-shell gauge invariance of the amplitude. \begin{figure}[ht] \begin{center} \includegraphics[width=.9\textwidth]{stringfactorization.pdf} \caption{Decomposition of the $(n+1)$-point massless closed string amplitude, $M_{n+1}$, into a factorizing part involving the three-point amplitude $M_3$, where the soft state with momentum $q$ directly interacts with the $i$th state, exchanging a third massless closed state with momentum $k_i + q$ with the $n$-point amplitude $M_n$, and the remaining part of the amplitude, $N_{n+1}$, which excludes factorization through this channel. } \label{stringfactorization} \end{center} \end{figure} By using the expression for the three-point amplitude in Eq.~\eqref{threepoint}, specialized to gravitons in the field theory limit $\alpha' \to 0$, it has been shown in Ref.~\cite{BDDN} that by decomposing the $(n+1)$-point amplitude as in Fig.~\ref{stringfactorization}, and imposing gauge invariance, i.e. $q_\mu M_{n+1}^{\mu \nu} = q_\nu M_{n+1}^{\mu \nu} = 0$, one exactly recovers the tree-level soft graviton theorem to subsubleading order. In the previous sections we have explicitly shown, to subsubleading order and for a finite value of $\alpha'$ (see also Ref.~\cite{DiVecchia:2015oba}), that the same gauge invariance also applies to amplitudes where the external states can be dilatons and Kalb-Ramond states as well as gravitons. Based on this, it was shown in Ref.~\cite{DiVecchia:2015jaq} that in the limit $\alpha' \to 0$ the dilaton soft theorem to subsubleading order can also be found using just gauge invariance of the on-shell amplitude. (In Ref.~\cite{DiVecchia:2015jaq} only the cases where the other hard states are either dilatons or gravitons were considered; however, the extension to also include the antisymmetric Kalb-Ramond states trivially yields the expression found in Eq.~\eqref{finalsubsubdilaton}.) Now we extend this line of analysis to also include the string corrections to the three-point amplitude.
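Before doing so, it is instructive to check explicitly that the three-point amplitude \eqref{threepoint} is itself gauge invariant on-shell, including its $\alpha'$ term. Contracting the first factor of Eq.~\eqref{threepoint} with $q_\mu$ gives
\begin{align*}
q_\mu \left( \eta^{\mu\mu_i} q^\alpha-\eta^{\mu\alpha} q^{\mu_i} +\eta^{\mu_i\alpha} k_i^\mu -\frac{\alpha'}{2} k_i^\mu q^{\mu_i} q^\alpha\right) = (k_i \cdot q) \left( \eta^{\mu_i\alpha} - \frac{\alpha'}{2}\, q^{\mu_i} q^\alpha \right) \, ,
\end{align*}
which vanishes because three-point kinematics of massless states forces $2 k_i \cdot q = (k_i+q)^2 = 0$; the contraction of the second factor with $q_\nu$ works in the same way.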
We first decompose the $(n+1)$-point amplitude as in Fig.~\ref{stringfactorization}: \begin{align} M_{n+1}^{\mu \nu} = \sum_{i=1}^n M_3^{\mu \nu}(q; k_i) \frac{1}{(k_i+q)^2} M_n(k_i+q) + N^{\mu \nu} (q; k_i) \, , \end{align} where dependence on all other $k_j \neq k_i$ is implicit. The indices $\mu \nu$ belong to the soft state with momentum $q$, and we assume them to be symmetric, i.e. $M_{n+1}^{\mu \nu} = M_{n+1}^{\nu \mu}$. We now use the explicit form of the three-point string amplitude, $M_3$, and focus here only on the $\alpha'$-terms. We can read off these terms, when $\mu \nu$ is symmetric, from Eq.~\eqref{M3graviton}, and as we noticed before, they can be rewritten as an operator by using Eq.~\eqref{gravitonstringpole} when they multiply $M_n$. Thus we have: \begin{align} M_{n+1}^{\mu \nu}\Big |_{\alpha'} = - \frac{\alpha'}{2} \sum_{i=1}^n \frac{k_i^\mu k_i^\nu }{k_i \cdot q} \, q_\rho q_\sigma \, \Pi_i^{\{\rho, \sigma\}}\, M_n(k_i) + N^{\mu \nu} (q; k_i)\Big |_{\alpha'} + {\cal O}(q^2) \, . \label{stringterms} \end{align} Now, imposing on-shell gauge invariance of the amplitude, we first find from $q_\mu M_{n+1}^{\mu \nu}=0$, by Taylor expanding in $q$: \sea{ N^{\mu \nu} (q=0, k_i) \Big |_{\alpha'} &= 0 \, ,\\[2mm] \frac{1}{2} \left ( \frac{\partial N^{\mu \nu}}{\partial q_\rho} + \frac{\partial N^{\rho \nu}}{\partial q_\mu} \right ) \Big |_{\alpha',\, q=0} &= \alpha' \sum_{i=1}^n k_i^\nu \Pi_i^{\{\rho, \mu\}} M_n(k_i) \, . } Inserting this back into the Taylor-expanded form of Eq.~\eqref{stringterms}, and imposing also $q_\nu M_{n+1}^{\mu \nu}=0$, we get: \begin{align} \frac{1}{2} \left ( \frac{\partial N^{\mu \nu}}{\partial q_\rho} - \frac{\partial N^{\rho \nu}}{\partial q_\mu} \right )\Big |_{\alpha', \, q=0} = \frac{\alpha'}{2} \sum_{i=1}^n \left ( k_i^\mu \Pi_i^{\{\rho, \nu\}} - k_i^\nu \Pi_i^{\{\rho, \mu\}} -k_i^\rho \Pi_i^{\{\mu, \nu\}} \right ) M_n(k_i) \, . \end{align} Thus, in total we find: \begin{align} M_{n+1}^{\mu \nu}\Big |_{\alpha'} = - \frac{\alpha'}{2} q_\rho \sum_{i=1}^n \left ( \frac{k_i^\mu k_i^\nu }{k_i \cdot q} \, q_\sigma \, \Pi_i^{\{\rho, \sigma\}} - k_i^\mu \Pi_i^{\{\rho, \nu\}} - k_i^\nu \Pi_i^{\{\rho, \mu\}} + k_i^\rho \Pi_i^{\{\mu, \nu\}} \right ) M_n(k_i) + {\cal O}(q^2) \, , \end{align} which demonstrates that, when the soft state is a graviton or a dilaton, on-shell gauge invariance of the string amplitude implies a soft theorem at subsubleading order even when string corrections are taken into account. Finally we can rewrite this expression as: \begin{align} M_{n+1}^{\mu \nu}\Big |_{\alpha'} = \frac{\alpha'}{2} \sum_{i=1}^n \left ( q_\rho k_i^\mu \eta^{\nu}_\sigma + q_\sigma k_i^\nu \eta^{\mu}_\rho - (k_i \cdot q) \eta^{\mu}_{\rho} \eta_\sigma^\nu -\frac{k_i^\mu k_i^\nu }{k_i \cdot q} \, q_\rho q_\sigma \right ) \Pi_i^{\{\rho, \sigma\}} M_n(k_i) + {\cal O}(q^2) \, . \end{align} This is exactly the expression we found in Eq.~\eqref{generalsubsub} from explicit calculations. As we noticed in Eq.~\eqref{finalsubsubdilaton}, this expression vanishes when contracted with the polarization of the dilaton. For the graviton, we have explained that the term singular in $q\to 0$ comes from the exchange diagram where the soft graviton scatters on an external massless closed string. When that string is a graviton, respectively a dilaton, the exchanged state is the opposite, i.e. a dilaton, respectively a graviton, meaning that the $n$-point amplitude in the soft theorem may involve external hard states other than those specified in the $(n+1)$-point amplitude.
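One can also verify directly that this final expression is consistent with gauge invariance: contracting the operator in round brackets with $q_\mu$ gives, term by term,
\begin{align*}
(k_i \cdot q)\, q_\rho \eta^{\nu}_\sigma + q_\sigma k_i^\nu q_\rho - (k_i \cdot q)\, q_\rho \eta^{\nu}_\sigma - k_i^\nu q_\rho q_\sigma = 0 \, ,
\end{align*}
so that $q_\mu M_{n+1}^{\mu \nu}\big|_{\alpha'} = 0$ holds identically for each term in the sum over $i$, as it must.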
The non-singular terms on the other hand are, as we have shown, direct consequences of on-shell gauge invariance. For the sake of completeness we conclude this section by comparing with the low-energy effective action that appears in Ref.~\cite{Metsaev:1987zx}, given by \seq{ S = \int d^D x \sqrt{-G} &\left\{ \frac{1}{2 \kappa_D^2} R - \frac{1}{2} \partial_{\mu} \phi \,\, G^{\mu \nu} \partial_{\nu} \phi - \frac{1}{24 \kappa_D^2 } {\rm e}^{- \frac{4 \kappa_D \phi }{\sqrt{D-2}} } H^2 \right. \\ & \left. + \frac{\alpha'}{4} {\rm e}^{- \frac{2 \kappa_D \phi}{\sqrt{D-2} } } \left[ \frac{1}{2 \kappa_D^2} \left( R^2_{\mu \nu \rho \sigma} - 4 R_{\mu \nu}^2 + R^2 \right) + \cdots \right] + {\cal O}(\alpha'^2) \right\} \, , \label{MTcanonical} } where, with respect to Eq.~(3.1) of Ref.~\cite{Metsaev:1987zx}, we have chosen a different overall normalization and have introduced a canonically normalized dilaton. The first line corresponds to the field theory limit, and from it we can reproduce the first two couplings in Eq.~(\ref{3couplings}) together with the field-theoretical three-graviton amplitude. The second line shows the string corrections linear in $\alpha'$, where we have only written down the terms that give corrections to the three-point amplitude, while the ellipsis $\cdots$ denotes the higher-point operators. In particular, the last coupling in Eq.~(\ref{3couplings}) is reproduced by taking the lowest term of the Gauss-Bonnet part together with a dilaton coming from the exponential in front of the Gauss-Bonnet term, while the cubic term in the metric coming from only the Gauss-Bonnet part reproduces the first string correction to the three-graviton amplitude in Eq.~(\ref{threepoint}), as already noticed in Ref.~\cite{Zwiebach:1985uq}. \section{Discussion and Conclusion} \label{conclusion} In this paper we completed the computation of the amplitude involving $n+1$ massless closed string states in the bosonic string to the subsubleading order in the soft momentum expansion of one of the external states. When the soft state is either a graviton or a dilaton we showed that the result can be expressed as a soft theorem to subsubleading order. The graviton soft theorem has string corrections at the subsubleading order in the bosonic string, while the string corrections for the dilaton all vanish. The leading and subleading terms were already obtained in Ref.~\cite{DiVecchia:2015oba}. The calculation extends the technique developed in Ref.~\cite{DiVecchia:2015oba} and is rather involved, but the final result has a very simple explanation: the basic ingredients needed to derive the soft theorems are gauge invariance and the three-point amplitude of massless closed string states, which, in the bosonic string, also contains string corrections. The procedure is an extension of the one followed in Ref.~\cite{BDDN}, where the total amplitude is written as a sum of two types of terms. One corresponds to diagrams where the soft leg is attached to each of the other external legs through the three-point amplitude of the soft state and two other massless states. These terms trivially factorize and in general have a pole when the momentum of the soft state goes to zero. The terms of the second type do not factorize trivially and are finite in the soft limit. The two types of contributions, however, are not separately gauge invariant.
Thus, by imposing on-shell gauge invariance of the full amplitude, one is able to factorize the soft behavior of the total amplitude up to subsubleading order in the soft momentum. With respect to the procedure of Ref.~\cite{BDDN} for the case of a soft graviton, there are, however, two important differences. Gauge invariance imposed on the amplitude $M_{\mu \nu}$ gives not only the soft behavior of the graviton, when saturated with the graviton polarization $\epsilon^{\mu \nu}_g$, but also that of the dilaton, when saturated with the polarization of the dilaton $\epsilon^{\mu \nu}_d$. The second difference with respect to the field-theoretical behavior of an amplitude with only gravitons is that, in general, the state exchanged in the propagator of the pole term is not necessarily the same as the one that appears in the corresponding external leg. This happens in the string-correction term in the amplitude with only gravitons and in the case of a soft dilaton. We thus arrive at the conclusion that the soft theorems for the graviton and the dilaton are both consequences of gauge invariance, and that the string corrections appearing at subsubleading order for the graviton are direct consequences of the three-point amplitude of massless closed strings having string corrections in the bosonic string. Since the three-point amplitude in the superstring has no string corrections, we expect no string corrections in the soft behavior of a graviton in the superstring. A curious outcome of our results is that the soft theorem of a dilaton contains, at subleading order, the generator of scale transformations and, at subsubleading order, the generator of special conformal transformations, as is also the case in the soft theorem of a Nambu-Goldstone boson of spontaneously broken conformal symmetry~\cite{DiVecchia:2015jaq}. Some discussion of this was given in Ref.~\cite{DiVecchia:2015jaq}, but it would be interesting to have a more physical understanding of why this happens. Finally, it would also be interesting to extend our considerations to the soft behavior of the Kalb-Ramond antisymmetric tensor, and to investigate how the results carry over to the case of superstrings. \vspace{-5mm} \subsection*{Acknowledgments} \vspace{-3mm} We thank Massimo Bianchi, Marco Bill{\`{o}}, Marialuisa Frau, Andrea Guerrieri, Alberto Lerda, Carlo Maccaferri, Josh Nohle, Igor Pesando and Congkao Wen for many useful discussions. We owe special thanks to Josh Nohle for many relevant comments on our results.
\section{Introduction} \label{sect_introduction} The great potential of asteroseismology to address some unresolved issues in stellar physics and even, as was discussed during this meeting, to study the stellar populations making up our Galaxy cannot be overstated. Yet this potential cannot be fully realised if some fundamental quantities that are not encoded in seismic data are not accurately known \citep[e.g.,][]{creevey12}. For this reason, by providing the effective temperature and chemical composition (but also other important information such as the $v\sin i$ or the binary status), a traditional field such as stellar spectroscopy will still play an important role in the future for the study of seismic targets. Conversely, asteroseismology can provide the fundamental quantities (e.g., mass, age, evolutionary status in the case of red giants) that are needed to best interpret the abundance data. These two fields are therefore closely connected and can greatly benefit from each other. The large discrepancies between the $\log g$ and [Fe/H] values derived from spectroscopy and those in the {\it Kepler} Input Catalog \citep{bruntt12,thygesen12} illustrate the clear superiority of spectroscopic techniques over photometric ones for the estimation of these two parameters. Determining accurate temperatures from photometric indices is also challenging in the presence of a significant (and patchy) reddening (e.g., for some CoRoT fields that lie close to the Galactic plane). \section{The samples discussed} \label{sect_samples} Numerous spectroscopic analyses of individual seismic targets have been conducted during the last few years \citep[e.g.,][]{mathur13,morel13}. However, we will restrict ourselves here to discussing the results of studies dealing with a sizeable number of stars observed by either the CoRoT or the {\it Kepler} space missions. The CoRoT satellite operated either through the seismology (observations of a limited number of bright stars in the context of seismic studies) or the exoplanet (observations of numerous faint stars to detect planetary transits) channel. The parameters of a large number of stars in various evolutionary stages in the CoRoT exofields have been determined using an automated pipeline by \citet{gazzano10}, while a more comprehensive analysis of 19 red giants in the seismology fields has been presented by \citet{morel14}.\footnote{Note that the sample of \citet{morel14} discussed in the following contains a few stars which were eventually not observed by the satellite, as well as a number of benchmark stars used for validation purposes.} In the latter case, a standard analysis is employed that imposes excitation and ionisation equilibrium of iron based on the equivalent widths of a set of Fe I and Fe II lines. On the other hand, a study of dwarfs and giants in the {\it Kepler} field has been performed by \citet{bruntt12} and \citet{thygesen12}, respectively (the latter study superseding that of \citealt{bruntt11}). In both cases, the analysis relied on the spectral-synthesis software package {\tt VWA} \citep[see, e.g.,][]{bruntt02}. Table~\ref{tab_uncertainties} gives for all the studies mentioned above the uncertainties associated with the determination of the parameters. Based on the (sometimes rather scanty) information provided in these papers, these figures appear to be intended as representative of the {\it accuracy} of the results.
Although interferometric measurements also suffer from limitations (e.g., calibration issues, angular diameter corrections, reddening), the satisfactory agreement with these less model-dependent estimates for stars at near-solar metallicities \citep[e.g.,][]{bruntt10,huber12,morel14} suggests that the values quoted in Table~\ref{tab_uncertainties} for $T_{\rm eff}$ are reasonable in this metallicity regime (however, this may not be true for metal-poor stars where non-LTE and 3D effects become important; \citealt{lind12,dobrovolskas13}). Much more extensive and stringent tests can be expected in the future thanks to the advent of new long-baseline interferometric facilities. A comparison for a subset of {\it Kepler} targets between the parameters obtained by \citet{bruntt12} and \citet{thygesen12}, and those derived by two other methods has recently been presented by \citet{molenda_zakowicz13}. For the reader interested in the differences arising from the use of different spectroscopic methods, see, e.g., \citet{gillon_magain06} and \citet{creevey12}. The impact of the neglect of non-LTE effects on the parameters inferred from excitation and ionisation balance of iron is discussed by, e.g., \citet{lind12} and \citet{bensby14}. \begin{table} \scriptsize \caption{Typical 1-$\sigma$ uncertainty of the parameter determination for the seismic targets. When available, the second row gives for a given study the uncertainties in the case where the gravity is fixed to the seismic value (see Sect.~\ref{sect_adopting_seismic_logg}). References: [1] \citet{gazzano10}; [2] \citet{morel14}; [3] \citet{bruntt12}; [4] \citet{thygesen12}.} \label{tab_uncertainties} \begin{tabular}{p{3.2cm}p{1.8cm}p{2.3cm}p{0.8cm}p{0.9cm}p{0.9cm}p{1.0cm}} \hline\noalign{\smallskip} Type of stars & Magnitude range & Type of data & $\sigma_{\rm \, T_{eff}}$ (K) & $\sigma_{\rm \, \log g}$ (dex) & $\sigma_{\rm \, [Fe/H]}$ (dex) & Reference\\ \noalign{\smallskip}\svhline\noalign{\smallskip} Stars in CoRoT exofields & 12 $<$ $r'$ $<$ 16 & medium resolution$^a$ & 140 & 0.27 & 0.19 & 1\\ Giants in CoRoT seismofields & 6 $<$ $V$ $<$ 9 & high resolution & 85 & 0.20 & 0.10 & 2\\ & & & 60 & 0.07 & 0.08 & 2\\ Dwarfs in {\it Kepler} field & 7 $<$ $V_{\rm \, T}$ $<$ 10.5 & high resolution & 70 & 0.08 & ... & 3\\ & & & 60 & 0.03 & 0.06 & 3\\ Giants in {\it Kepler} field & 7 $<$ $V$ $<$ 12 & high resolution & 80 & 0.20 & 0.15 & 4\\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} $^a$ Also small wavelength coverage ($\sim$200 \AA). \end{table} \section{Adopting the seismic gravity in spectroscopic analyses} \label{sect_adopting_seismic_logg} As has been exhaustively discussed in the recent literature, $\log g$ can be estimated in various ways from seismic observables: either from a detailed modelling of the oscillation spectrum or from scaling relations/grid-based methods that make use of $\Delta \unu$ (the average large frequency separation) and $\unu_{\rm max}$ (the frequency corresponding to maximum oscillation power). A number of empirical tests \citep[e.g.,][and references therein]{chaplin13} indicate that such estimates are likely more accurate than those derived from spectroscopic methods (typically 0.05 vs 0.15-0.20 dex); a simple illustration of such a scaling-based estimate is given below.
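As a minimal sketch (not taken from the studies discussed here), assuming the standard $\unu_{\rm max}$ scaling relation $g \propto \unu_{\rm max} \sqrt{T_{\rm eff}}$ and solar reference values $\unu_{\rm max,\odot} \simeq 3090$ $\mu$Hz, $T_{\rm eff,\odot} = 5777$ K, and $\log g_\odot = 4.44$ dex, a seismic gravity can be computed as follows:
\begin{verbatim}
import math

# Assumed solar reference values: nu_max in muHz, Teff in K, log g in dex
NU_MAX_SUN, TEFF_SUN, LOGG_SUN = 3090.0, 5777.0, 4.44

def seismic_logg(nu_max, teff):
    """log g from the nu_max scaling relation: g ~ nu_max * sqrt(Teff)."""
    return LOGG_SUN + math.log10((nu_max / NU_MAX_SUN)
                                 * math.sqrt(teff / TEFF_SUN))

# Example: a red giant with nu_max = 30 muHz and Teff = 4800 K
print(round(seismic_logg(30.0, 4800.0), 2))  # ~2.39 dex
\end{verbatim}
Note that only a rough $T_{\rm eff}$ is needed here, since $\log g$ depends on it through a square root only.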
There is therefore an advantage in fixing the gravity to the seismic value in spectroscopic analyses, as is indeed now routinely done \citep[e.g.,][]{huber13}.\footnote{The possibility of using an independent and more accurate gravity estimate is also shared by stars with planetary transits \citep[e.g.,][]{torres12}.} We will first discuss in the following the quantitative impact of adopting the seismic gravity on the determination of $T_{\rm eff}$ and [Fe/H], and then turn our attention to the issue of the best metallicity to adopt when such a hybrid approach is employed. \subsection{Impact on the determination of the other parameters} \label{sect_impact_fixing_logg} For the {\it Kepler} targets, there is a good level of agreement in a statistical sense between the spectroscopic and seismic gravities, with no evidence for global systematic offsets: $\langle$$\log g$ (spectroscopy) -- $\log g$ (seismology)$\rangle$ = +0.08$\pm$0.07 dex for dwarfs \citep{bruntt12} and --0.05$\pm$0.30 dex for giants \citep{thygesen12}, respectively. However, large differences can be found on a star-to-star basis (up to 0.7 dex). As shown by \citet{morel11}, even larger discrepancies are evident for the red giants studied by \citet{gazzano10} (see also discussion by \citealt{valentini13} who independently re-analysed these data and found a more satisfactory agreement, especially for spectra with a low signal-to-noise ratio). On the other hand, the values are identical within the errors for all the giants analysed by \citet{morel14}. The effect of fixing the gravity to the seismic value on the $T_{\rm eff}$ and [Fe/H] determinations is illustrated in Fig.~\ref{fig_impact_adopting_seismic_logg}. For giants in the CoRoT seismology fields, a change in $\log g$ of 0.1 dex typically leads to variations in $T_{\rm eff}$ of 15 K and in [Fe/H] of 0.04 dex. The good agreement between the two sets of $\log g$ values only implies relatively small adjustments for $T_{\rm eff}$ and the abundances (generally below 50 K and 0.1 dex). A similar sensitivity of [Fe/H] to changes in $\log g$ is obtained for {\it Kepler} giants. However, variations in the adopted $\log g$ are in this case not accompanied in a coherent way by $T_{\rm eff}$ changes. In particular, it is not completely clear how $\log g$ changes amounting to up to 0.6 dex can lead to exactly identical $T_{\rm eff}$ values. There is also a lack of correlation between the $\log g$ and $T_{\rm eff}$ changes for {\it Kepler} dwarfs. On the other hand, \citet{huber13} found for exoplanet host candidates (mostly solar-like) that a change in $\log g$ of 0.1 dex typically leads to variations of 50 K and 0.03 dex for $T_{\rm eff}$ and [Fe/H], respectively. It is important to note that the figures quoted above for dwarfs and giants cannot be generalised and depend on the exact procedures that are implemented to derive the parameters \citep[see][]{torres12}. \begin{figure} \centering \includegraphics[scale=.55, trim = 10mm 60mm 10mm 85mm, clip]{morel_sesto_fig1.pdf} \caption{Effect on the $T_{\rm eff}$ and [Fe/H] determinations of using either the seismic or the spectroscopic gravity. The (un)constrained results are for the $\log g$ (not) fixed to the seismic value. Red circles: {\it Kepler} dwarfs \citep{bruntt12}, blue squares: {\it Kepler} giants \citep{thygesen12}, green triangles: red giants in CoRoT seismofields \citep{morel14}. Note that the metallicities obtained using the spectroscopic gravities are not available for the {\it Kepler} dwarfs \citep{bruntt12}.
The extreme outlier KIC~4070746 is not included in this figure \citep[see discussion in][]{thygesen12}.} \label{fig_impact_adopting_seismic_logg} \end{figure} \subsection{The ambiguity surrounding the best metallicity value} \label{sect_ambiguity_metallicity} The surface gravity is usually determined from spectroscopic data by requiring that ionisation balance of iron is fulfilled. In many cases, this condition will no longer be satisfied once the seismic gravity is adopted \citep[][]{bruntt12,thygesen12}. As a result, the mean abundances derived from the Fe I and Fe II lines will differ, and there will therefore be an ambiguity as to which iron abundance should be preferred. As an illustration, using the seismic constraints, \citet{bruntt12} obtained [Fe I/H] = --0.02 and [Fe II/H] = +0.32 for KIC~3424541. The metallicity is an essential ingredient of any seismic modelling, and adopting one value or the other will clearly lead to substantially different estimates for the fundamental stellar parameters, such as the age, for instance. The Fe II lines are known in solar-like dwarfs to be less affected than the Fe I lines by both non-LTE and granulation effects \citep[e.g.,][]{asplund00}. The mean Fe II-based abundance hence appears to be a better proxy of the stellar metallicity when using a 1D LTE analysis. However, the choice is not as straightforward for red giants. Although the departures from LTE are also much less severe for the Fe II lines, these features are affected by a number of caveats: (1) they are usually few in number, difficult to measure, and potentially more affected by blends; (2) they are very sensitive to errors in the effective temperature (varying $T_{\rm eff}$ by 50 K while keeping the gravity fixed typically changes the Fe I abundances by only 0.01-0.02 dex, but the Fe II ones by 0.06 dex); (3) they may suffer more than the Fe I lines at near-solar metallicity from the neglect of granulation effects (\citealt{collet07}; \citealt{kucinskas13}; see also fig.~15 of \citealt{dobrovolskas13}). In view of the uncertainties plaguing both the Fe I and Fe II abundances, it is unclear whether the Fe II-based abundances should be deemed (systematically) more reliable for evolved objects. \section{A step beyond the determination of the basic parameters: the detailed chemical composition} \label{sect_abundances} The detailed abundance pattern can be obtained for stars observed with high-resolution spectrographs. Figure \ref{fig_abundances_vs_Fe} shows some abundance ratios with respect to iron as a function of [Fe/H] for the samples of \citet{bruntt12}, \citet{thygesen12}, and \citet{morel14}. Because of the chemical evolution of the Galaxy, it is well established that, depending on their nucleosynthesis, each element displays a distinct behaviour as a function of the iron content. For instance, the abundance ratio of the $\ualpha$ elements (e.g., Ca) increases when [Fe/H] decreases, whereas the iron-peak elements (e.g., Ni) closely follow Fe. An enhancement with respect to solar of some important species such as oxygen should be taken into account when modelling low-metallicity asteroseismic targets. Some elements behave qualitatively as expected in Fig.~\ref{fig_abundances_vs_Fe} (e.g., Si and Ni), but the expected trends at low metallicities are not seen in some cases (e.g., Ti and Cr) and the patterns are generally much noisier than those reported in the literature for disc stars in the solar neighbourhood \citep[e.g.,][]{bensby14}.
This can be at least partly attributed to the limitations of (semi)automated pipelines applied to data of lower quality. The fainter {\it Kepler} targets have often only been observed with 1m- or 2m-class telescopes \citep{bruntt12,thygesen12,molenda_zakowicz13}. The data shown in Fig.~\ref{fig_abundances_vs_Fe} are heterogeneous and any study-to-study difference in the global patterns may be misinterpreted as being of physical origin whereas it merely reflects systematic effects. However, the carbon depletion and nitrogen excess of the CoRoT giants compared to {\it Kepler} dwarfs may be expected because of mixing \citep[see, e.g.,][in the case of C]{luck_heiter07}. More robust conclusions could have been drawn for carbon by comparing the data for {\it Kepler} dwarfs and giants (thanks to the similarity of the analyses carried out by \citealt{bruntt12} and \citealt{thygesen12}), but the results for giants are affected by large uncertainties. \begin{figure} \centering \includegraphics[scale=.62, trim = 10mm 68mm 10mm 68mm, clip]{morel_sesto_fig2.pdf} \caption{Abundance ratios with respect to iron as a function of [Fe/H] for stars in the {\it Kepler} and CoRoT fields. The results have been obtained using a 1D LTE analysis and (except for the CoRoT stars) the seismic gravities. Same symbols as in Fig.~\ref{fig_impact_adopting_seismic_logg}. For the {\it Kepler} stars, [Fe/H] is based on the Fe II lines and the abundances of the other elements on the neutral species. Following \citet{bruntt12}, we only consider mean abundances for {\it Kepler} dwarfs with $v\sin i$ below 25 km s$^{-1}$ and computed based on at least five lines of each element (except for nitrogen and oxygen: 2 and 3 lines, respectively).} \label{fig_abundances_vs_Fe} \end{figure} The extent of mixing experienced by red giants results from the combined action of different physical processes (convective and rotational mixing, as well as arguably thermohaline instabilities) whose relative efficiency is a complex function of their evolutionary status, mass, metallicity, and rotational history \cite[e.g.,][]{charbonnel_lagarde10}. Fortunately, several key indicators with a different sensitivity to each of these processes can be measured in the optical wavelength domain (Li, CNO, Na, and $^{12}$C/$^{13}$C) and used to constrain theoretical models. As can be seen in Fig.~\ref{fig_C_Na_vs_N}, the occurrence of internal mixing phenomena is betrayed in CoRoT red giants by the existence of well-defined trends between the surface abundances of some species \citep[for a discussion of these results, see][]{morel14}. It is important to note that such abundance studies of asteroseismic targets may lead to a leap forward in our understanding of transport phenomena in evolved, low- and intermediate-mass stars because of the availability of an accurate mass estimate and, in some cases, a knowledge of the evolutionary status. \begin{figure}[h] \sidecaption \includegraphics[scale=.59, trim = 45mm 155mm 55mm 35mm, clip]{morel_sesto_fig3.pdf} \caption{Top and bottom panels: [C/Fe] and [Na/Fe] as a function of [N/Fe] for the red giants in the CoRoT seismology fields. The C and Na data have been corrected for the effects of the chemical evolution of the Galaxy \citep[for details, see][]{morel14}. 
The results have been obtained using the spectroscopic gravities.} \label{fig_C_Na_vs_N} \end{figure} \section{Some perspectives} \label{sect_perspectives} A detailed spectroscopic analysis has so far been carried out for only a tiny fraction of all the stars observed by CoRoT and {\it Kepler}. Much more is expected (or may be achievable) in the near future. We briefly mention below two of the most promising avenues of research. Seismic targets are currently used as benchmark stars in various ongoing or soon-to-be-started large-scale surveys, such as APOGEE \citep{meszaros13}, Gaia-ESO \citep{gilmore12}, or GALAH \citep{freeman12}. The combination of these spectroscopic data with the asteroseismic ones for the radii, masses, ages, and distances will be of great relevance for investigating the properties of the stellar populations constituting our Galaxy \cite[see, e.g.,][]{chiappini12,miglio13}. The {\it Gaia} satellite will dramatically contribute to this harvest by providing kinematic information of unprecedented quality. The various evolutionary sequences of red-giant stars can be distinguished from asteroseismic diagnostics \citep[e.g.,][]{stello13,montalban13}. This opens up the possibility of mapping out the evolution of the mixing indicators during the shell-hydrogen and core-helium burning phases for a very large number of stars with accurate masses \citep[see the tentative results for carbon of][]{luck_heiter07}. Knowing the fundamental parameters (e.g., mass, age) of dwarfs and having the possibility of probing their internal structure also make them particularly suitable for investigating the destruction of lithium during the early stages of stellar evolution. \begin{acknowledgement} I acknowledge financial support from Belspo for contract PRODEX GAIA-DPAC. I am very grateful to the Fonds National de la Recherche Scientifique (FNRS) and Annie Baglin for providing the financial resources that made my attendance possible. \end{acknowledgement} \bibliographystyle{spphys}
\section{Causal cone of DMERA and sequentially generated states}\label{sec:causalcone} In this section, we review the constructions for the examples considered in the main article, analyse the growth of the past causal cone and the corresponding implications for the scaling of the error of noisy implementations. \subsection{DMERA} Let us start by briefly recalling for the reader's convenience the construction of DMERA states given in Ref.~\cite{kimswingle}, which is depicted in Fig.~\ref{fig:interactionschemedmera}. We start with a system consisting of one qubit. Then, at iteration $t$ we add $2^{t-1}$ new qubits to the system, placing one qubit to the right of each existing qubit. Furthermore, at each iteration, we apply $D$ layers of two-qubit unitary gates between neighboring qubits. The resulting state has a final number of $2^T$ qubits and it is necessary to implement $(D-1)\lb2^{T+1}-1\right)$ two-qubit gates to prepare the whole state. While we add $2^{t-1}$ qubits in the Schr\"odinger picture, when looking at the Heisenberg picture of the evolution we will discard half of the qubits at each iteration. This ensures that the dynamics in the Heisenberg picture will typically be locally mixing. However, as is the case for the usual MERA, local observables have by design a causal cone that is of polynomial size in $t$, which is crucial to all estimates in the main article. We will now discuss their growth in more detail. \begin{figure}[h!] \includegraphics[scale=0.5,trim={0cm 16cm 0cm 1cm},clip]{meranoobs3.png} \caption{Depiction of the DMERA for iterations $0$ to $4$. The circles (green, filled) denote the system qubits. The thick, black lines indicate where a qubit goes from one iteration to the next and the thin, gray lines indicate which qubits are neighbors at a given iteration. The digits, always next to the first qubit, indicate the iteration. } \label{fig:interactionschemedmera} \end{figure} Let us start with the number of unitaries in the past causal cone in DMERA. Recall that when looking at what happens at each iteration in the Heisenberg picture, after discarding every second qubit present in the previous iteration, we apply a unitary circuit of $D$ layers, always with the restriction that we can only apply unitaries between qubits that are neighbors on the line. When we apply the first layer, only unitaries which act on at least one qubit in the support have a nontrivial effect. Let $R(O_t)$ be the radius of the observable before we apply the first layer of unitaries. Then there are at most $2R(O_t)-1$ nontrivial unitaries acting only on qubits in the support, plus two more unitaries: one acting on the leftmost qubit of the support and the first qubit to its left, and analogously one at the right edge of the support. Thus, we conclude that as we apply the first layer, we have $2R(O_t)+1$ unitaries acting nontrivially, and the support will grow by one qubit to the right and one qubit to the left. The next layer of the unitary circuit will then act on an observable of support with radius at most $R(O_t)+1$. Applying the same reasoning as before, we see that the total number of unitaries that act nontrivially is $2R(O_t)+3$. We conclude that the total number of unitaries that act nontrivially after repeating this process $D$ times is bounded by: \begin{align}\label{equ:numberofunitariesintermsradius} \sum_{k=0}^{D-1}(2R(O_t)+2k+1)=D(2R(O_t)+D).
\end{align} Let us now estimate the size of the radius at each iteration to obtain a more concrete bound on the number of unitaries. As we observed above, if at the beginning of an iteration the radius is $R(O_t)$, it will increase by $D$ and then be halved after we discard the qubits. Thus, it will go from $R(O_t)$ to at most $\ceil{(R(O_t)+D)/2}\leq(R(O_t)+D)/2+1$. Applying this recursive relation, we see that if the initial radius is $R(O_T)$ then at iteration $t$, the radius is bounded by \begin{align*} R(O_t)\leq R(O_T)2^{-(T-t)}+\sum\limits_{k=t}^T\frac{D+2}{2^{T-k}}=R(O_T)2^{-(T-t)}+(D+2)(2-2^{-(T-t)}). \end{align*} Note that this implies that the radius of an observable is bounded by a constant independent of $t$. Combining the bound above on the radius of the observable with Eq.~\eqref{equ:numberofunitariesintermsradius}, we obtain that the number of unitaries added to the cone at iteration $t$ is bounded by: \begin{align}\label{equ:numberunititerationmera} D(2R(O_T)2^{-(T-t)}+2(D+2)(2-2^{-(T-t)})+D). \end{align} From this we can easily bound the total number of unitaries in the past causal cone from iteration $t$ to $T$ by summing the contribution at each step: \begin{align} \label{eq:radius} &N_U(t,R(O_T))\leq \sum\limits_{k=t}^TD(2R(O_k)+D)\leq \sum\limits_{k=t}^T D(2R(O_T)2^{-(T-k)}+2(D+2)(2-2^{-(T-k)})+D)\\&\leq (T-t)D(2R(O_T)+5D+8).\nonumber \end{align} Let us now estimate the number of qubits in the past causal cone. At every iteration, we grow the support by at most $D$ new qubits to the left and $D$ to the right, and we start with at most $2R(O_T)$ qubits. This leads to the bound \begin{align}\label{equ:numberofqubitsdmera} N_Q(t,R(O_T))\leq 2R(O_T)+2D(T-t). \end{align} We will now estimate the error of implementing the past causal cone from iteration $t$ to $T$, which, as explained in the main text, can be bounded by: \begin{align}\label{equ:ourstabilitybound} 2\delta(t,r)+\sum_{k=t+1}^{T}\delta(k,r)\left[\epsilon_U\left( N_U(k,r)-N_U(k+1,r)\right)+\epsilon_P\left( N_Q(k,r)-N_Q(k+1,r)\right)\right], \end{align} where we assume that each unitary is implemented with an error of $\epsilon_U$ in the $1\to1$ norm~\footnote{Strictly speaking, a bound in the diamond norm is required. However, as we will discuss in more detail later, since we only consider two-qubit unitaries, the two norms are related by a factor of four.} and that we can initialize each qubit up to an error $\epsilon_P$, in the sense that we can prepare a state that is $\epsilon_P$-close in trace distance to the ideal one. Let us start by estimating the error stemming from the noisy unitaries. Note that the term $ N_U(k,r)-N_U(k+1,r)$ is nothing but the number of unitaries newly added at iteration $k$, which we bounded in Eq.~(\ref{equ:numberunititerationmera}). It follows that the contribution to the error from the noisy unitaries from iteration $t$ to $T$ is bounded by \begin{align}\label{equ:boundfrommaintext} \epsilon_U\sum\limits_{k=t}^T\left[N_U(k,R(O_T))-N_U(k+1,R(O_T))\right]\delta(k,R(O_T))\leq\epsilon_U\sum\limits_{k=t}^T D(2R(O_T)+5D+8)\delta(k,R(O_T)). \end{align} To illustrate the bound, we assume that $\delta(k,r)=e^{-\lambda (T-k)}$. Consequently, \begin{align}\label{equ:errorunitariesmera} \epsilon_U \sum\limits_{k=t}^T D(2R(O_T)+5D+8)e^{-\lambda (T-k)}= \epsilon_U \frac{D(e^\lambda-e^{-(T-t)\lambda})(2R(O_T)+5D+8)}{e^{\lambda}-1}. \end{align}
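As a quick numerical sanity check of the counting bounds entering these estimates (a minimal sketch of ours, not part of Ref.~\cite{kimswingle}; the parameter values are arbitrary), one can iterate the radius recursion $R\mapsto\ceil{(R+D)/2}$ backwards from $T$ to $t$, accumulate the per-iteration count $D(2R+D)$ of Eq.~\eqref{equ:numberofunitariesintermsradius}, and compare with the closed-form bounds, where we sum the per-iteration bound $D(2R(O_T)+5D+8)$ over all $T-t+1$ steps:
\begin{verbatim}
import math

def dmera_cone(R_T, D, T, t):
    """Iterate the past causal cone backwards from iteration T to t."""
    R, n_unitaries, n_qubits = R_T, 0, 2 * R_T
    for _ in range(T - t + 1):
        n_unitaries += D * (2 * R + D)   # sum_{k=0}^{D-1} (2R + 2k + 1)
        n_qubits += 2 * D                # at most D new qubits per side
        R = math.ceil((R + D) / 2)       # radius after discarding qubits
    return n_unitaries, n_qubits

R_T, D, T, t = 2, 4, 20, 10
n_u, n_q = dmera_cone(R_T, D, T, t)
# Per-iteration bound D(2 R(O_T) + 5D + 8), summed over T - t + 1 steps:
print(n_u <= (T - t + 1) * D * (2 * R_T + 5 * D + 8))  # True
print(n_q <= 2 * R_T + 2 * D * (T - t + 1))            # True
\end{verbatim}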
One can do a similar computation for state preparation errors. As discussed above, at most $2D$ qubits are added to the causal cone at each iteration. Thus, the error caused by initialization between iterations $t$ and $T$ is bounded by: \begin{align}\label{equ:errorsprepmera} \epsilon_P\lb2R(O_T)+2D\sum_{k=t}^T e^{-\lambda(T-k)}\right)=\epsilon_P\left( 2R(O_T)+2D\frac{ \left(e^{\lambda}-e^{-(T-t)\lambda}\right)}{e^\lambda-1}\right). \end{align} Combining Eqs.~\eqref{equ:errorunitariesmera} and~\eqref{equ:errorsprepmera}, we can conclude that the error in estimating the expectation value of an observable by implementing the past causal cone from iteration $t$ to $T$ is bounded by: \begin{align*} &\epsilon_U \frac{D(e^\lambda-e^{-(T-t)\lambda})(2R(O_T)+5D+8)}{e^{\lambda}-1}+ \epsilon_P\left( 2R(O_T)+2D\frac{ \left(e^{\lambda}-e^{-(T-t)\lambda}\right)}{e^\lambda-1}\right)+2e^{-\lambda(T-t)}. \end{align*} Let us now suppose we only implement the past causal cone from iteration $t_{\epsilon_U}=T-\lambda^{-1}\log(\epsilon_U^{-1})$ until $T$. The resulting error will then be at most \begin{align*} &\epsilon_U \left(\frac{D(e^\lambda-\epsilon_U)(2R(O_T)+5D+8)}{e^{\lambda}-1}+2\right)+ \epsilon_P\left( 2R(O_T)+2D\frac{ \left(e^{\lambda}-\epsilon_U\right)}{e^\lambda-1}\right). \end{align*} By approximating $(e^{\lambda}-1)^{-1}$ by $\lambda^{-1}$, we see that the error stemming from the noisy unitaries is at most of order $\epsilon_U D^2\lambda^{-1}$. Similarly, the error from noisy initialization of qubits is at most of order $\epsilon_P\lambda^{-1}D$. Moreover, by inserting $t_{\epsilon_U}$ into Eq.~\eqref{equ:numberofqubitsdmera}, we obtain that the total number of qubits necessary to perform this computation is at most \begin{align*} 2R(O_T)+2D\lambda^{-1}\log(\epsilon_U^{-1}) \end{align*} and the number of unitaries that needs to be implemented is bounded by \begin{align*} \lambda^{-1}\log(\epsilon_U^{-1})D(2R(O_T)+5D+8), \end{align*} which follows from inserting $t_{\epsilon_U}$ into Eq.~\eqref{eq:radius}. Thus, under these assumptions it is possible to compute local expectation values of fixed radius with noisy circuits whose error and size depend only on $\lambda$ and $D$, not $T$. \subsection{MPS and RI-$d$} Another important subclass of states are those that are sequentially generated. The most prominent example is that of matrix product states (MPS). Here, only one qubit interacts with the bath at each iteration. A simple generalization of this is where a group of qubits (arranged according to a $d$-dimensional graph) interacts with a bath (arranged according to a $d'$-dimensional graph) at each iteration, see Fig.~\ref{fig:repeatedinteraction} for an example with $d=0$ and $d'=2$. For this work, we also make the restriction that the interaction is given by a circuit of depth at most $D$. Setting $d=0$ and $d'=1$, i.e. a qubit interacting with qubits on a line, recovers our version of MPS. We will also discuss the case of $d=d'$ in more detail, which we will refer to as RI-$d$. The case $d=d'=1$ encapsulates examples like the holographic computation discussed in~\cite{kim2017b}. \begin{figure}[!tbp] \centering \begin{minipage}[b]{0.3\textwidth} \includegraphics[width=\textwidth,trim={0cm 9cm 0cm 1cm},clip]{mps3.png} \end{minipage} \begin{minipage}[b]{0.3\textwidth} \includegraphics[width=\textwidth,trim={0cm 9cm 0cm 1cm},clip]{mps2.png} \end{minipage} \begin{minipage}[b]{0.3\textwidth} \includegraphics[width=\textwidth,trim={0cm 8cm 0cm 12cm},clip]{mps1.png} \end{minipage} \caption{Three subsequent iterations of a repeated interaction system with $d=0$ and $d'=2$.
That is, the system qubit (green, filled circle) interacts at each iteration with bath qubits (green, empty circles) that are arranged according to a two-dimensional graph. The thick, black line indicates which system qubit interacts with the bath, while the bath qubits interact with their nearest neighbors.} \label{fig:repeatedinteraction} \end{figure} We now discuss the growth of the causal cones and the scaling of the errors in both our version of MPS and RI-$d$. Unlike for DMERA, we will not fix the exact graph that models the interactions in the bath and between system and bath at each iteration, and choose to focus on the scaling of the size of causal cones. More precisely, we will assume that there are constants $C_V$ and $C_E$ such that for every ball of radius $r$ in the interaction graph there are at most $C_Er^d$ edges and $C_Vr^d$ vertices inside the ball. Let us now analyse the growth of causal cones. As was the case with DMERA, if at the beginning of iteration $t$ the radius of an observable is $R(O_t)$, it will then grow to at most $R(O_t)+D$. However, unlike for DMERA, for the interaction schemes considered here we do not discard qubits between different iterations. Thus, the radius at iteration $t$ of an observable is bounded by $R(O_T)+(T-t)D$. This allows us to conclude that the number of qubits in the past causal cone is bounded by: \begin{align*} N_Q(t,R(O_T))\leq C_V(R(O_T)+(T-t)D)^d \end{align*} for RI-$d$ and $C_V(R(O_T)+(T-t)D)$ for MPS. Let us now do a similar computation for the number of unitaries in the past causal cone. Supposing that the radius of the observable is $R(O_t)$ at the beginning of the iteration, there are at most $C_E(R(O_t)+1)^d$ unitaries that act nontrivially in the first layer, and the radius will grow by one. For the second layer, there will be at most $C_E(R(O_t)+2)^d$ and the radius will again grow by one. We conclude that applying the $D$ layers will require a total of at most \begin{align}\label{equ:numberunitaries} C_E\sum\limits_{k=0}^{D-1}(R(O_t)+k+1)^{d} \end{align} unitaries for iteration $t$. As $(R(O_t)+k+1)^{d}$ is monotone increasing in $k$, we have that the number of unitaries added at each iteration is bounded by: \begin{align*} &C_E\sum\limits_{k=0}^{D-1}(R(O_t)+k+1)^{d}\leq C_E\int\limits_{0}^{D}(R(O_t)+x+1)^{d}dx = \frac{C_E}{d+1}\left[\left( R(O_t)+D+1\right)^{d+1}-\left( R(O_t)+1\right)^{d+1}\right]\\ \leq &\frac{C_E}{d+1}\left[\left( R(O_T)+(T-t+1)D+1\right)^{d+1}-\left( R(O_T)+(T-t)D+1\right)^{d+1}\right], \end{align*} where for the last inequality we used our estimate for the radius of the observable at iteration $t$ and the fact that the function $x\mapsto \left( x+D+1\right)^{d+1}-\left( x+1\right)^{d+1}$ is monotone increasing for $x\geq0$ and $D\geq1$, as can be seen by a direct inspection of its derivative. Thus, we can bound the maximum number of unitaries in the causal cone between iteration $t$ and $T$ by: \begin{align}\label{equ:numberofunitariesinfinalRI} N_U(t,r)\leq \frac{C_E}{d+1}\sum_{k=t}^T\left[\left( R(O_T)+(T-k+1)D+1\right)^{d+1}-\left( R(O_T)+(T-k)D+1\right)^{d+1}\right]. \end{align} Let us now estimate this sum. To this end, define the function $f(x)=\left( R(O_T)+xD+1\right)^{d+1}$. By the mean value theorem there exists $\xi_k\in[T-k,T-k+1]$ such that: \begin{align}\label{equ:newunitariesRIdperstep} f(T-k+1)-f(T-k)=f'(\xi_k)=(d+1)D\left( R(O_T)+\xi_kD+1\right)^{d}\leq(d+1)D\left( R(O_T)+(T-k+1)D+1\right)^{d}.
\end{align} Thus, inserting this bound into~\eqref{equ:numberofunitariesinfinalRI} it follows that \begin{align*} &N_U(t,r)\leq C_E(d+1)\sum_{k=t}^TD\left( R(O_T)+(T-k+1)D+1\right)^{d}\leq (d+1)C_ED\int\limits_{1}^{(T-t)+2}\left( R(O_T)+xD+1\right)^ddx\\ &=C_E\left[\left( R(O_T)+(T-t+2)D+1\right)^{d+1}-\left( R(O_T)+D+1\right)^{d+1}\right]. \end{align*} In particular, for MPS this gives a bound of \begin{align*} N_U(t,r)\leq C_E\left[\left( R(O_T)+(T-t+2)D+1\right)^{2}-\left( R(O_T)+D+1\right)^{2}\right]. \end{align*} We now assume that $\delta(t,r)=e^{-(T-t)\lambda}$ to bound the estimation error from implementing the past causal cone, as we did with DMERA. Recall that we bounded the number of new unitaries in the past causal cone at each iteration in Eq.~\eqref{equ:newunitariesRIdperstep}. Once again, combining these estimates with our assumption on the mixing rate function and~\eqref{equ:ourstabilitybound} yields a bound on the error stemming from the unitaries of at most \begin{align*} \epsilon_U C_E\sum\limits_{k=t}^T D(d+1)\left( R(O_T)+(T-k+1)D+1\right)^{d}e^{-\lambda(T-k)}. \end{align*} Let us now estimate this sum. First, define the function \begin{align*} g(x)=\left( R(O_T)+xD+1\right)^{d}e^{-\lambda x}. \end{align*} We have: \begin{align*} g'(x)=\left( R(O_T)+xD+1\right)^{d-1}e^{-\lambda x}\left( dD-\lambda\left( R(O_T)+xD+1\right)\rb. \end{align*} For $x\geq0$, we see that the function is monotone increasing for \begin{align*} x\leq x_0:=\frac{1}{D}\left( \frac{dD}{\lambda}-R(O_T)-1\right) \end{align*} and monotone decreasing for $x\geq x_0$. This allows us to conclude that: \begin{align}\label{equ:inequalityintegral} &\epsilon_U C_E\sum\limits_{k=t}^T D(d+1)\left( R(O_T)+(T-k+1)D+1\right)^{d}e^{-\lambda(T-k)}\leq \epsilon_UC_ED(d+1)\left[\int\limits_{0}^{\ceil{x_0}}g(x)dx+\int\limits_{\lfloor x_0\rfloor}^{T-t+1}g(x)dx\right]\\ &\leq2\epsilon_UC_ED(d+1)\int\limits_{0}^{T-t+1}g(x)dx. \end{align} It now remains to estimate this integral. It is easy to compute the integral above using integration by parts $d$ times, although the resulting expressions are quite involved. We only reproduce them for $d=1$ and $d=2$ here. For $d=1$ we have: \begin{align}\label{equ:estimatefordone} \int\limits_{0}^{T-t+1}(R(O_T)+(x+1)D)e^{-\lambda x}dx=\frac{1-e^{-\lambda(T-t+1)}}{\lambda^2}\left( \lambda R(O_T)+D\lambda+D\right)-\lambda^{-2}e^{-\lambda(T-t+1)}\left( D\lambda(T-t+1)\right), \end{align} and for $d=2$ we obtain: \begin{align}\label{equ:estimatefordtwo} &\frac{1-e^{-\lambda(T-t+1)}}{\lambda^3}\left( \lambda^2(D+R(O_T))^2+2\lambda D(D+R(O_T))+2D^2\right)\\ &-\lambda^{-3}e^{-\lambda(T-t+1)}\left( 2\lambda^2D(R(O_T)+D)(T-t+1)+D^2\lambda^2(T-t+1)^2\right).\nonumber \end{align} It is then possible to obtain explicit bounds by combining the equations above with Eq.~\eqref{equ:inequalityintegral}. But it is easy to see by direct inspection that, assuming $R(O_T)\leq D$, the error will converge exponentially fast in $(T-t)$ to $\mathcal{O}\left(\epsilon_UC_E \left( D\lambda^{-1}\right)^{d+1}\right)$, which is again independent of $T$. It is also possible to obtain more explicit bounds on the asymptotic behaviour of the error, i.e. with $T\to\infty$.
To this end, note that $g(x)\geq0$, thus: \begin{align*} &\int_{0}^{T-t+1}g(x)dx\leq\int_{0}^{+\infty}\left( R(O_T)+xD+1\right)^{d}e^{-\lambda x}dx=D^{-1}\int\limits_{R(O_T)+1}^{\infty}y^de^{-\frac{\lambda}{D}\left( y-R(O_T)-1\right)}dy\\ &\leq D^{-1}\int\limits_{0}^{\infty}y^de^{-\frac{\lambda}{D}\left( y-R(O_T)-1\right)}dy=\frac{e^{\frac{\lambda}{D}\left( R(O_T)+1\right)}}{D}\int\limits_{0}^{+\infty}y^de^{-\frac{\lambda}{D}\left( y\right)}dy=\frac{D^de^{\frac{\lambda}{D}\left( R(O_T)+1\right)}}{\lambda^{d+1}}\int\limits_{0}^{+\infty}z^{d}e^{-z}dz=\frac{d!D^de^{\frac{\lambda}{D}\left( R(O_T)+1\right)}}{\lambda^{d+1}}. \end{align*} This allows us to conclude that the error stemming from the noisy unitaries is bounded by: \begin{align*} 2C_E\epsilon_U \frac{(d+1)!D^{d+1}e^{\frac{\lambda}{D}\left( R(O_T)+1\right)}}{\lambda^{d+1}}. \end{align*} Similar estimates hold for the total initialization errors ($\epsilon_P$). We see that the number of qubits added at iteration $t$ is bounded by: \begin{align*} C_V\left[(R(O_T)+(T-t+1)D)^d-(R(O_T)+(T-t)D)^d\right]\leq C_VdD(R(O_T)+(T-t+1)D)^{d-1}, \end{align*} again using the mean value theorem. Thus, we may estimate the initialization error by: \begin{align}\label{equ:estimatequbiterror} \epsilon_PC_VdD\sum\limits_{k=t}^{T}(R(O_T)+(T-k+1)D)^{d-1}e^{-\lambda (T-k)}. \end{align} The attentive reader must have already realized that the expression in~\eqref{equ:inequalityintegral} coincides with that of~\eqref{equ:estimatequbiterror} up to a constant if we replace $d+1$ by $d$. Thus, we may use the same estimation techniques and conclude that the error is bounded by $\mathcal{O}\left(\epsilon_PC_V \left( D\lambda^{-1}\right)^{d}\right)$. Moreover, we may resort to the expressions in~\eqref{equ:estimatefordone} and~\eqref{equ:estimatefordtwo} if more refined inequalities in terms of $t$ and $R(O_T)$ are desired. Thus, the total error of implementing the causal cone from iteration $t$ to $T$ is bounded by: \begin{align*} 2C_V\epsilon_P \frac{d!D^{d}e^{\frac{\lambda}{D}\left( R(O_T)+1\right)}}{\lambda^{d}}+2C_E\epsilon_U \frac{(d+1)!D^{d+1}e^{\frac{\lambda}{D}\left( R(O_T)+1\right)}}{\lambda^{d+1}}+2e^{-\lambda(T-t)}, \end{align*} up to corrections that are exponentially small in $T-t$. \section{Mixing rates of quantum channels}\label{sec:mixingrates} In this section, we clarify the connections between the mixing rate function and the mixing properties of quantum channels~\cite{Burgarth2013}. \begin{definition}[Mixing quantum channel] A quantum channel $\Lambda:\mathcal{M}_d\to\mathcal{M}_d$ is called \textbf{mixing} if there is a unique state $\sigma$ such that $\Lambda(\sigma)=\sigma$ and for all states $\rho$ we have that \begin{align*} \lim\limits_{n\to\infty}\Lambda^n(\rho)=\sigma, \end{align*} where $\Lambda^n$ denotes the quantum channel composed with itself $n$ times. \end{definition} Given a mixing quantum channel $\Lambda$, the main quantity of interest is $t_1(\epsilon)$, defined as \begin{align*} t_1(\epsilon)=\inf\{n|\sup_{\rho}\|\Lambda^{n}\left( \rho\right)-\sigma\|_1\leq \epsilon\}. \end{align*} For $\epsilon>0$ this quantity measures how long it takes for the quantum channel to converge, i.e., its mixing time~\cite{Burgarth2013,Temme2010}. Here $\|\cdot\|_1$ corresponds to the trace norm. It is well-known that correlations in tensor network or finitely correlated states are governed by mixing properties of the transfer operator~\cite{Fannes1992,Perez-Garcia2006}. We will now show this connection for completeness of the exposition.
Note that \begin{align*} \sup\limits_{\rho}\|\Lambda^n(\rho)-\sigma\|_1 \end{align*} corresponds to the $1\to1$ norm of the linear operator $\Lambda^n-\Lambda_{\infty}$, where $\Lambda_\infty(\rho)=\text{tr}(\rho)\sigma$. It follows from duality that: \begin{align*} \|\Lambda^n-\Lambda_{\infty}\|_{1\to 1}=\|(\Lambda^n)^*-\Lambda_{\infty}^*\|_{\infty\to \infty} \end{align*} and $\Lambda^*_{\infty}(O)=\text{tr}(\sigma O)\mathbb{1}$. Now suppose, for simplicity, that we wish to compute the expectation value of an observable $O$ supported on one qubit in $S_T$ and our interaction scheme is that of MPS. In this case, the qubits only interact with the bath at each iteration and not with each other. Moreover, let us assume that the system is translationally invariant, in the sense that $\mathcal{U}_t$ is the same for all $t$. Now note that \begin{align*} O_T=\Phi_{T}^*(O)=\text{tr}_{S_T A_T}\left( \mathcal{U}_{T}^*\left( O\otimes \mathbb{1}_{S_1\ldots S_{T-1}S_B}\right)\rb \end{align*} will be an observable supported on the bath alone. Furthermore, \begin{align*} \Phi_{t}^*(O_{t+1})=\text{tr}_{S_{t}A_{t}}\left( \mathcal{U}_{t}^*\left( O_{t+1}\otimes \mathbb{1}_{S_1\ldots S_{t}}\right)\rb. \end{align*} Since we have assumed the action of all $\mathcal{U}_{t}$ to be the same, we may define the quantum channel $\Lambda_{B}^*$ from the bath to itself as \begin{align*} \Lambda_{B}^{*}(X)=\text{tr}_{S_{t}A_{t}}\left( \mathcal{U}_{t}^*\left( X\otimes \mathbb{1}_{S_1\ldots S_{t}}\right)\rb. \end{align*} We then have that $O_t=\left( \Lambda_B^*\right)^{T-t}(O_T)$. If $\Lambda_B$ is mixing, which is the generic case~\cite{Burgarth2013}, we may directly bound the mixing rate with a mixing time bound on $\Lambda_B$. Let \begin{align*} \mathcal{B}_r=\{O:R(O)\leq r,\|O\|_\infty\leq1\}. \end{align*} Observe that \begin{align*} \delta(t,r)=\sup\limits_{O\in \mathcal{B}_r}\inf_{c\in \mathds{R}}\|\Phi^*_{[t,T]}\left( O\right)-c\mathbb{1}\|_\infty= \sup\limits_{O\in\mathcal{B}_r}\inf_{c\in \mathds{R}}\| \left( \Lambda_B^*\right)^{T-t}(O_T)-c\mathbb{1}\|_\infty. \end{align*} For $\Lambda_B$ mixing, a natural choice for the constant $c$ is given by $\text{tr}\left( O_T \sigma\right)$, as in this case we have: \begin{align*} \delta(t,r)\leq \sup\limits_{\mathcal{B}_r}\| \left( \Lambda_B^*\right)^{T-t}(O_T)-\text{tr}\left( O_T \sigma\right)\mathbb{1}\|_\infty\leq \|\Lambda_B^{T-t}-\Lambda_{B,\infty}\|_{1\to1}. \end{align*} We conclude that in this case, $\delta(t,r)$ can be bounded using mixing time techniques~\cite{Burgarth2013,Temme2010,Reeb2011,Bardet2017,Muller-Hermes2018}. But note that these might provide a too pessimistic bound on $\delta(t,r)$, as they do not take into account the radius of the support $r$. Although we made the restrictive assumption that all $\mathcal{U}_t$ are the same, it is straightforward to adapt the arguments above to the case where they are different. This, however, implies that the sequence of quantum channels of interest is not homogeneous in time. It is, in general, not known how to estimate the convergence or even certify convergence for a non-homogeneous sequence. One important exception is when the quantum channels change adiabatically in time~\cite{Hanson2017}. Moreover, the results of Refs.~\cite{Gonzalez-Guillen2018,1906.11682} seem to indicate that we should expect an exponential decay of the mixing rate function for generic local circuits of logarithmic depth in the number of qubits, but we leave this investigation for future work.
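To make these notions concrete with an elementary example (ours, not taken from the references), consider the depolarizing qubit channel $\Lambda(\rho)=(1-p)\rho+p\,\text{tr}(\rho)\,\mathbb{1}/2$, which is mixing with fixed point $\sigma=\mathbb{1}/2$ and satisfies $\Lambda^n(\rho)-\sigma=(1-p)^n(\rho-\sigma)$, so that $\sup_\rho\|\Lambda^n(\rho)-\sigma\|_1=(1-p)^n$ decays exponentially. The following minimal sketch verifies this numerically:
\begin{verbatim}
import numpy as np

p = 0.3
I2 = np.eye(2)

def apply_channel(rho, n):
    # Depolarizing channel applied n times
    for _ in range(n):
        rho = (1 - p) * rho + p * np.trace(rho) * I2 / 2
    return rho

def trace_norm(A):
    return np.abs(np.linalg.eigvalsh(A)).sum()

rng = np.random.default_rng(0)
for n in (1, 5, 10):
    worst = 0.0
    for _ in range(200):            # sample random pure states
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        v /= np.linalg.norm(v)
        rho = np.outer(v, v.conj())
        worst = max(worst, trace_norm(apply_channel(rho, n) - I2 / 2))
    print(n, worst, (1 - p) ** n)   # worst matches (1 - p)^n
\end{verbatim}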
\subsection{Correlation length of the produced states}\label{sec:corrlength} Here we discuss how the mixing rate function $\delta(t,r)$ and the geometry of the interaction scheme can be used to bound the correlations present in the produced state. We measure the correlations in the state in terms of the covariance, which we introduce below. \begin{definition}[Covariance] Let $E,F$ be observables with disjoint support in $G_T$. Their covariance with respect to a state $\rho$, $\operatorname{cov}_\rho(E,F)$, is defined as: \begin{align*} \operatorname{cov}_\rho(E,F)=\text{tr}\left( \rho E\otimes F\right)-\text{tr}\left(\rho E\right) \text{tr}\left(\rho F\right). \end{align*} \end{definition} We then have: \begin{prop*}[Correlations of the state]\label{cor:corrlength} Let $E$ and $F$ be observables whose supports are disjoint and each contained in a ball of radius $r$, and let $\rho=\Phi_{[0,T]}(\rho_0\otimes \rho_B)$. Moreover, let $t_0$ be the largest $t$ s.t. $E_{t}$ and $F_{t}$ have supports that intersect. Then \begin{align*} \left|\operatorname{cov}_\rho(E,F)\right|\leq 6\delta(t_0,r)\|E\|_\infty\|F\|_\infty. \end{align*} \end{prop*} \begin{proof} Note that for $t>t_0$ the supports of $E_t$ and $F_t$ are disjoint by definition, that is, $\Phi_{[t,T]}^*(E\otimes F)$ is still a product observable. By the definition of the mixing rate, there are constants $c_E$ and $c_F$ such that: \begin{align*} \Phi_{[t,T]}^*(E\otimes F)=\left( c_E\mathbb{1}+\delta(t,r)E'_t\right)\otimes\left( c_F\mathbb{1}+\delta(t,r)F'_t\right). \end{align*} Here $E'_t$ is an observable satisfying $\|E'_t\|_\infty\leq\|E\|_\infty$ and whose support is contained in the support of $E_t$. Analogous properties apply to $F'_t$. Moreover, note that $|c_E|\leq \|E\|_\infty$. Defining \begin{align*} \tilde{C}=\Phi_{[0,t-1]}^*\left( \left( c_E\mathbb{1}\otimes \delta(t,r)F'_t\right)+\left(\delta(t,r)E'_t\otimes c_F\mathbb{1}\right)+ \delta(t,r)E'_t\otimes \delta(t,r)F'_t\right) \end{align*} we have that \begin{align*} \text{tr}\left( \rho E\otimes F\right)=&\text{tr}\left( \rho_0\otimes \rho_B \Phi_{[0,T]}^*(E\otimes F)\right) =c_Ec_F+\text{tr}\left( \tilde{C}\rho_0\otimes\rho_B\right). \end{align*} An application of the triangle inequality yields $\|\tilde{C}\|_\infty\leq 3\delta(t,r)\|E\|_{\infty}\|F\|_{\infty}$, from which we conclude \begin{align}\label{equ:approxcorr} \left|\text{tr}\left( \rho E\otimes F\right)-c_E c_F\right|\leq 3\delta(t,r)\|E\|_\infty\|F\|_\infty. \end{align} A similar computation yields that \begin{align*} \Phi_{[t,T]}^*(E\otimes \mathbb{1})= \left( c_E\mathbb{1}+\delta(t,r)E'_t\right)\otimes \mathbb{1},\quad \Phi_{[t,T]}^*(\mathbb{1}\otimes F)= \mathbb{1} \otimes \left( c_F\mathbb{1}+\delta(t,r)F'_t\right). \end{align*} We, therefore, have that \begin{align}\label{equ:indepexpect} \text{tr}\left(\rho E\right)=c_E+\text{tr}\left( \rho_0\otimes \rho_B\tilde{C}_E\right),\quad \text{tr}\left(\rho F\right)=c_F+\text{tr}\left( \rho_0\otimes\rho_B\tilde{C}_F\right), \end{align} where $\tilde{C}_E=\delta(t,r)\Phi_{[0,t-1]}^*\left( E'_t\right)$ and $\tilde{C}_F$ is defined analogously. From~\eqref{equ:indepexpect} we conclude that: \begin{align*} |\text{tr}\left(\rho E\right) \text{tr}\left(\rho F\right)-c_E c_F|\leq 3\delta(t,r)\|E\|_\infty\|F\|_\infty.
\end{align*} Combining the last inequality with~\eqref{equ:approxcorr} we finally have that: \begin{align*} \left|\text{tr}\left(\rho E\right) \text{tr}\left(\rho F\right)-\text{tr}\left( \rho E\otimes F\right) \right|\leq |\text{tr}\left(\rho E\right) \text{tr}\left(\rho F\right)-c_E c_F|+|\text{tr}\left( \rho E\otimes F\right)-c_E c_F|\leq 6\delta(t,r)\|E\|_\infty\|F\|_\infty. \end{align*} \end{proof} \section{Connection to the results of Kim et al.}\label{app:resultskim} First, we briefly review our assumptions on the noise in the implementation, which are closely related to those of Kim et al.~\cite{kim2017noise,kimswingle}. Like them, we assume that noisy versions $\mathcal{N}_U$ of the required two-qubit gates $U$ are implemented, which satisfy \begin{align}\label{equ:noisyunitary1} \|\mathcal{U}-\mathcal{N}_U\|_{\diamond}\leq\epsilon_U, \end{align} where the noise acts on the same qubits as $U$. Here $\mathcal{U}$ is just the quantum channel that corresponds to conjugation with $U$ and $\|\cdot\|_{\diamond}$ is the diamond norm. Recall that the diamond norm is defined as \begin{align*} \|\Lambda\|_{\diamond}=\sup\limits_{X\in \mathcal{M}_n\otimes \mathcal{M}_n}\frac{\|\Lambda\otimes \textrm{id}\left( X\right)\|_1}{\|X\|_1} \end{align*} for a linear operator $\Lambda:\mathcal{M}_n\to\mathcal{M}_n$ and $\|\cdot\|_1$ the trace norm. The diamond norm is a natural way of quantifying the noise in our setting, as it also allows us to estimate its effect on systems other than the one the unitary is acting on. However, it should be noted that, as all unitaries considered in this work only act nontrivially on two qubits, the diamond norm can differ by at most a factor of $4$ from $\|\cdot\|_{1\to1}$. That is, \begin{align*} \epsilon_U\leq 4\|\mathcal{U}-\mathcal{N}_U\|_{1\to 1}. \end{align*} We also assume that the initial state preparation is noisy. This can be modelled similarly by assuming further that all qubits are initialized in a state that is $\epsilon_P$-close in trace distance to the ideal one. Let us now connect the mixing rate function of circuits to stability bounds of noisy implementations, which will allow us to recover~\cite[Theorem 2]{kim2017noise} in our language. \begin{cor*}[Stability of noisy implementation]\label{thm:generalizedkim} Let \begin{align*} \rho=\text{tr}_B\left[ \Phi_{[0,T]} \left(\rho_0\otimes \rho_B\right)\right] \end{align*} and $\tilde{\rho}$ be the quantum state obtained by replacing every two-qubit unitary in $\Phi_t$ by a noisy counterpart satisfying~\eqref{equ:noisyunitary1} and every qubit initialized up to a preparation error of $\epsilon_P$. Moreover, let $O$ be an observable supported on a ball of radius $r$ with $\|O\|_\infty\leq 1$. Then for all $0\leq t\leq T$: \begin{align}\label{equ:stabilityboundkim} |\text{tr}\left( O\left( \rho-\tilde{\rho}\right)\right)|\leq \delta(t,r)+\sum_{k=t+1}^{T}\delta(k,r)\left[\epsilon_U\left( N_U(k,r)-N_U(k+1,r)\right)+\epsilon_P\left( N_Q(k,r)-N_Q(k+1,r)\right)\right]. \end{align} \end{cor*} \begin{proof} Let $\tilde{\Phi}_t$ be the noisy counterpart of $\Phi_t$. As in~\cite[Theorem 2]{kim2017noise}, we now consider the decomposition \begin{align*} \Phi_{[0,T]}^*-\tilde{\Phi}_{[0,T]}^*= \left(\Phi_{[0,t-1]}^*-\tilde{\Phi}_{[0,t-1]}^*\right)\circ \Phi_{[t,T]}^*+ \sum\limits_{k=t}^{T}\tilde{\Phi}_{[0,k-1]}^*\circ\left( \Phi_{k}^* -\tilde{\Phi}_{k}^*\right)\circ \Phi_{[k+1,T]}^*, \end{align*} with the convention that $\Phi_{[0,-1]}^*$ and $\Phi_{[T+1,T]}^*$ are the identity.
Let us first estimate the error from the sum by estimating each summand. First, note that, as before, we have: \begin{align*} \Phi_{[k+1,T]}^*(O)=\delta(k+1,r)A_{k+1}+c_{k+1}\mathbb{1}, \end{align*} where once again we have $\|A_{k+1}\|_{\infty}\leq\|O\|_{\infty}$ with the same support as $O_{k+1}$, and $c_{k+1}$ is some constant. Moreover, $\left( \Phi_{k}^* -\tilde{\Phi}_{k}^*\right)$ will map the identity to $0$. Thus, \begin{align}\label{equ:preliminarycontraction} \|\tilde{\Phi}_{[0,k-1]}^*\circ\left( \Phi_{k}^*-\tilde{\Phi}_{k}^*\right)\circ \Phi_{[k+1,T]}^*\left( O\right)\|_\infty= \delta(k+1,r)\|\tilde{\Phi}_{[0,k-1]}^*\circ\left( \Phi_{k}^*-\tilde{\Phi}_{k}^*\right)\left( A_{k+1}\right)\|_\infty. \end{align} As we assumed that the noise is local, that is, it acts on the same qubits as the two-qubit gate~\footnote{It is possible to treat the case in which the noise acts in a constant neighbourhood of the qubits similarly, but we will not discuss this scenario in order not to overcomplicate the presentation.}, the action of $\tilde{\Phi}_{k}$ and $\Phi_{k}$ will be identical outside the support of $A_{k+1}$. This is because both will just map the identity to the identity outside the support. This implies that only the unitary gates in the past causal cone of the observable contribute to the error, each by at most $\epsilon_U$. A similar argument holds for the qubit initialization errors, as only erroneous initializations on the past causal cone contribute to the error. As there are at most $N_U(t-1,r)-N_U(t,r)$ new unitaries at iteration $t-1$ and at most $N_Q(t-1,r)-N_Q(t,r)$ new qubits, we conclude that: \begin{align}\label{equ:errorsthatcontribute} \|\tilde{\Phi}_{[0,k-1]}^*\circ\left( \Phi_{k}^*-\tilde{\Phi}_{k}^*\right)\left( A_{k+1}\right)\|_\infty\leq \epsilon_U\left( N_U(k,r)-N_U(k+1,r)\right)+\epsilon_P\left( N_Q(k,r)-N_Q(k+1,r)\right). \end{align} Thus, combining~\eqref{equ:preliminarycontraction} and~\eqref{equ:errorsthatcontribute} yields: \begin{align*} &\|\sum\limits_{k=t}^{T}\tilde{\Phi}_{[0,k-1]}^*\circ\left( \Phi_{k}^*-\tilde{\Phi}_{k}^*\right)\circ \Phi_{[k+1,T]}^*\left( O\right)\|_\infty\leq \sum\limits_{k=t}^{T}\delta(k+1,r)\|\tilde{\Phi}_{[0,k-1]}^*\circ\left( \Phi_{k}^*-\tilde{\Phi}_{k}^*\right) \left( A_{k+1}\right)\|_\infty\\ & \leq\sum_{k=t}^{T}\delta(k+1,r)\left[\epsilon_U\left( N_U(k,r)-N_U(k+1,r)\right)+\epsilon_P\left( N_Q(k,r)-N_Q(k+1,r)\right)\right]. \end{align*} Now, by the definition of the mixing rate function, there exists an observable $A$ such that \begin{align*} \Phi_{[k,T]}^*(O)=c \mathbb{1}+\delta(k,r)A \end{align*} with $\|A\|_\infty\leq1$. Thus, we see that \begin{align*} \left(\Phi_{[0,k-1]}^*-\tilde{\Phi}_{[0,k-1]}^*\right)\circ \Phi_{[k,T]}^*(O)= \delta(k,r)\left(\Phi_{[0,k-1]}^*-\tilde{\Phi}_{[0,k-1]}^*\right)(A), \end{align*} as the identity is in the kernel of $\Phi_{[0,k-1]}^*-\tilde{\Phi}_{[0,k-1]}^*$. We conclude that \begin{align*} \|\left(\Phi_{[0,T]}^*-\tilde{\Phi}_{[0,T]}^*\right)(O)\|_\infty\leq \delta(t,r)+\sum_{k=t}^{T}\delta(k,r)\left[\epsilon_U\left( N_U(k,r)-N_U(k+1,r)\right)+\epsilon_P\left( N_Q(k,r)-N_Q(k+1,r)\right)\right], \end{align*} from which the claim follows. \end{proof} The stability results of Refs.~\cite{kim2017noise,kimswingle} are captured by this corollary. For instance, the main result of Ref.~\cite{kim2017noise} follows from assuming that there exist constants $r_0,c,\gamma,\alpha,\Delta\geq 0$ independent of the system size such that for all $r\leq r_0$: \begin{align*} \delta(t,r)=cr^{\alpha}e^{-\gamma (T-t)}+\Delta.
\end{align*} Choosing \begin{align*} t_0=T-\frac{1}{\gamma}\log\left(\frac{\epsilon}{Dr^{\alpha}c}\right)^2 \end{align*} then suffices to guarantee an estimate up to $\mathcal{O}\left( D^2\epsilon\log(\epsilon^{-1})^2+\Delta\right)$, as in~\cite{kim2017noise}. By comparing Corollary~\ref{thm:generalizedkim} with our main theorem (see article), we see that this stability comes from the fact that the assumptions on $\delta(t,r)$ imply that there is an ``effective'' circuit of constant size underlying the computation. Moreover, each iteration of the evolution can only change the expectation value by an amount that decreases with time. This is well illustrated when we compare the bound in Eq.~\eqref{equ:stabilityboundkim} and the one we obtained with our main result, reproduced in the supplemental material in Eq.~\eqref{equ:ourstabilitybound}. Note that the two bounds only differ by a factor of $\delta(t,r)$. This difference has a clear interpretation in light of the discussion above: in our result we allowed for an arbitrary initial state $\tilde{\rho}$ when implementing the past causal cone from iteration $t$ to $T$, while above the state at iteration $t$ is given by $\Phi_{[0,t-1]}(\rho_0\otimes \rho_B)$ in the noiseless version. With the previous discussion in mind, we see that \emph{any} change to the state produced from iteration $0$ to $t$ can only change the expectation value by $\delta(t,r)$, which explains the extra $\delta(t,r)$ factor. \section{Certifying mixing}\label{sec:mixing} A close look at the proof of the main theorem shows that $\delta(t,r)$ provides a worst-case estimate for how fast the expectation values stabilize. If we are only interested in estimating the expectation value of a given observable $O$, we see that \begin{align*} \inf\limits_{c\in \mathds{R}}\|O_t-c\mathbb{1}\|_\infty \end{align*} gives an upper bound on the error we obtain when we estimate $\text{tr}(\rho O)$ by only implementing the circuit from iteration $t$ to $T$. Thus, it is not necessary to bound the mixing rate for arbitrary observables, which is expected to be hard in general. E.g., the results of~\cite{Bookatz2013} show that it is QMA-hard to determine the spectral gap~\cite{Temme2010} of certain quantum channels, which is a central quantity in determining the mixing time of quantum channels. We will therefore focus on bounding the mixing for a given observable $O$. We will show that in case $\operatorname{supp} O_t$ is small it is possible to bound $\|O_t-c\mathbb{1}\|_\infty$ on a quantum computer. As can be seen in the proof of the main theorem, if $\delta(t,r)$ is small, then the output of the circuit is essentially independent of the initial state. Thus, it should be expected that the dependence of the expectation value of an observable $O$ on the initial state gives an estimate of the mixing time. Indeed, if we draw a state $\sigma_t$ from a state $2$-design~\cite{Ambainis2007} on the support of $O_t$ and define the random variable $X_t=\text{tr}\left( \sigma_t O_t\right)$, then: \begin{align}\label{equ:variance} \left( 2^{n_t}\left( \mathbb{E}(X_t^2)\left( 2^{n_t}+1\right)-2\mathbb{E}\left( X_t\right)^2\right)\right)^{\frac{1}{2}}\geq \left\|O_t-\text{tr}\left( \Phi^{*}_{[t,T]}\left( O\right)\right)\frac{\mathbb{1}}{2^{n_t}}\right\|_\infty. \end{align} Here $n_t$ is the number of qubits in the support of $O_t$.
As it is possible to generate a state $2$-design using $\mathcal{O}(n_t\log^2(n_t))$ gates~\cite{Cleve2001}, equation~\eqref{equ:variance} gives a protocol to measure how far each local observable is from stabilizing, as long as $n_t$ is small, by estimating the first and second moments of $X_t$. This protocol applies to interaction schemes for which the support of observables has a bounded radius, like DMERA. We now discuss how to derive~\eqref{equ:variance} and its consequences in more detail. We start by recalling the definition of a quantum state design~\cite{Ambainis2007}: \begin{definition}[State design] A distribution $\mu$ over the set of $d$-dimensional quantum states is called a $k$-state design for some $k>0$ if \begin{align*} \int(|\psi\rangle\langle\psi|)^{\otimes k} d\mu=\int(|\psi\rangle\langle\psi|)^{\otimes k} d \mu_U, \end{align*} where $\mu_U$ is the (normalized) uniform measure on the set of pure quantum states. \end{definition} That is, these states have the same first $k$ moments as the uniform distribution on the set of pure states. Let us now compute some relevant moments of the random quantum states: Let $\ket{\psi}$ be drawn from the uniform distribution on $d$-dimensional pure quantum states and $O$ be an observable. Moreover, define the random variable $X=\text{tr}\left( \ketbra{\psi}{\psi} O\right)$. Then: \begin{align}\label{equ:firstmoments} \mathbb{E}(X)=\frac{\text{tr}\left( O\right)}{d},\quad\mathbb{E}(X^2)=\frac{1}{d(d+1)}\left( \text{tr}(O^2)+\text{tr}(O)^2\right). \end{align} This can be derived by e.g. noting that $\ketbra{\psi}{\psi}$ has the same distribution as $U\ketbra{0}{0}U^\dagger$, where $U$ is a Haar random unitary. A simple application of the Weingarten calculus for the moments of the Haar measure on the unitary group~\cite{Collins2006,1902.08539} yields the result. We are now ready to prove equation~\eqref{equ:variance}, which we restate as a lemma for the reader's convenience: \begin{lemma*}[Checking mixing] Let $O$ be an observable and $n_t$ be the number of qubits in the support of $O_t$. Moreover, let $\sigma_t$ be drawn from a state $2$-design on the support of $O_t$ and denote by $X_t$ the random variable $X_t=\text{tr}\left( \Phi_{[t,T]}\left(\sigma_t\right) O\right)$. Then \begin{align*} \left( 2^{n_t}\left( \mathbb{E}(X_t^2)\left( 2^{n_t}+1\right)-2\mathbb{E}\left( X_t\right)^2\right)\right)^{\frac{1}{2}}\geq \left\|\Phi^{*}_{[t,T]}\left( O\right)-\text{tr}\left( \Phi^{*}_{[t,T]}\left( O\right)\right)\frac{\mathbb{1}}{2^{n_t}}\right\|_\infty. \end{align*} \end{lemma*} \begin{proof} Note that \begin{align*} \left\| \Phi^{*}_{[t,T]}\left( O\right)-\text{tr}\left( \Phi^{*}_{[t,T]}\left( O\right)\right)\frac{\mathbb{1}}{2^{n_t}}\right\|_F^2= \text{tr} \left( \Phi^{*}_{[t,T]}\left( O\right)^2\right)-2^{-n_t}\text{tr}\left( \Phi^{*}_{[t,T]}\left( O\right)\right)^2. \end{align*} Here $\|\cdot\|_F$ is the Frobenius norm. It follows from~\eqref{equ:firstmoments} that \begin{align*} 2^{n_t}\left( \mathbb{E}(X_t^2)\left( 2^{n_t}+1\right)-2\mathbb{E}\left( X_t\right)^2\right)\geq \left\| \Phi^{*}_{[t,T]}\left( O\right)-\text{tr}\left( \Phi^{*}_{[t,T]}\left( O\right)\right)\frac{\mathbb{1}}{2^{n_t}}\right\|_F^2 \end{align*} if we draw $\sigma_t$ from the uniform distribution on states. But it is clear that the expression only depends on the first and second moments of the random variable. Thus, a state $2$-design satisfies the same properties. The claim then follows from the fact that $\|\cdot\|_F\geq\|\cdot\|_\infty$.
\end{proof} As a quantum state $2$-design on $n$ qubits can be generated with a circuit consisting of $\mathcal{O}(n\log^2(n))$ two-qubit gates~\cite{Cleve2001}, we see that it is possible to check whether the observable is mixing as long as its support is a small constant. Otherwise, the $2^{n_t}$ factor implies that the precision and number of samples required to certify mixing become infeasible.
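The inequality of the lemma can be checked numerically with the following minimal sketch (our own illustration). It uses Haar-random pure states, which form an exact state $2$-design, in place of an efficiently generated design, and a random Hermitian matrix as a stand-in for $O_t$; in the actual protocol each sample of $X_t$ would instead be obtained by preparing $\sigma_t$, running the circuit from iteration $t$ to $T$, and measuring $O$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_t, d = 2, 4  # support size n_t and dimension d = 2**n_t

# stand-in for O_t: a random Hermitian observable with operator norm <= 1
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
O = (A + A.conj().T) / 2
O /= np.linalg.norm(O, 2)

def haar_state(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# X = tr(|psi><psi| O) for Haar-random |psi> (an exact state 2-design)
X = np.array([np.real(np.vdot(v, O @ v))
              for v in (haar_state(d) for _ in range(50000))])
m1, m2 = X.mean(), (X ** 2).mean()

statistic = np.sqrt(d * (m2 * (d + 1) - 2 * m1 ** 2))
target = np.linalg.norm(O - np.trace(O) * np.eye(d) / d, 2)
print(statistic, target)  # the statistic upper-bounds the operator norm
\end{verbatim}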
\section{Introduction} \label{introduction} Cells are the basic units of living organisms. Despite the diversity of cellular types, they share similar structures. Biological cells are composed of a number of polymers, proteins, and lipids, which form stable structures such as the lipid membrane. At the same time, cells exist in a state far from equilibrium. Inside a cell, complex chemical reactions take place and are converted to mechanical forces by molecular motors. Consequently, biological cells spontaneously exhibit various dynamics, such as locomotion and proliferation. In general, objects that exhibit spontaneous motion are called active matter~\cite{Lauga2009The,Ramaswamy2010The,Vicsek2012Collective,Cates2012Diffusive,Marchetti2013Hydrodynamics,Bechinger2016Active}. In contrast to objects passively driven by external forcing, active matter, including living cells, generates force by itself, which is characterized by a vanishing force monopole due to the action-reaction law. Under this force-free condition, it is necessary to break symmetry to achieve spontaneous motion, such as directional motion. For microorganisms that swim in a fluidic environment, the scallop theorem~\cite{Purcell1977Life} describes the importance of breaking reciprocity to achieve net migration via internal cyclic motions. In nature, there exist a number of microorganisms that crawl on substrates, such as the extracellular matrix and other cells. In contrast to the locomotion of microswimmers, adhesion to the substrate plays an important role in the locomotion of crawling microorganisms. Such crawling motion is often observed in eukaryotic cells, such as Dictyostelium cells and keratocytes. Many models have been introduced to explain various aspects of the dynamics of crawling cells. Examples include the vertex model~\cite{Honda1978Description,Honda2004A,Honda2008Computer,Bi2016Motility-Driven}, the cellular Potts model~\cite{Nishimura2009Cortical,Niculescu2015Crawling}, the continuous model~\cite{Marchetti2013Hydrodynamics,Nier2016Inference}, the phase field model~\cite{Ziebert2016Computational,Shao2010Computational,Taniguchi2013Phase,Shi2013Interaction,Ziebert2012Model,Tjhung2015A}, and the particle-based model~\cite{Newman2007Modeling,Sandersius2008Modeling,Basan2011Dissipative,Zimmermann2016Contact,Smeets2016Emergent}. Here, we construct a particle-based model. In our previous study~\cite{Tarama2018Mechanics}, we systematically investigated the cycle of the typical crawling mechanism~\cite{Ananthakrishnan2007The}: 1) protrusion at the leading edge, 2) adhesion to the substrate at the leading edge, 3) deadhesion from the substrate at the trailing edge, and 4) contraction at the trailing edge. To clarify the role of this cycle in efficient crawling motion, we introduced a simple mechanical model in which a cell is described by two subcellular elements connected by a viscoelastic bond, which includes an actuator that elongates and shrinks cyclically. The substrate friction of each element switches cyclically between the adhered stick state and the deadhered slip state. By tuning the phase shifts between the actuator elongation and the substrate friction of each element, we demonstrated that the order of the four basic processes of the typical crawling mechanism has a great impact on the crawling distance and efficiency, as well as the crawling direction.
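The mechanics of that two-element cycle can be illustrated in a few lines of code. The following minimal one-dimensional sketch (our own illustration with arbitrary parameter values, not those of Ref.~\cite{Tarama2018Mechanics}) integrates the overdamped force balance of two elements joined by an elastic bond with a cyclically elongating actuator and internal dissipation, while the substrate friction of each element switches between a stick and a slip value with a tunable phase shift; for antiphase friction cycles, the pair achieves a net displacement per cycle:
\begin{verbatim}
import numpy as np

kappa, xi = 1.0, 0.1                 # bond elasticity and internal dissipation
zeta_stick, zeta_slip = 10.0, 0.1    # substrate friction: stick vs. slip state
ell, eps, omega = 1.0, 0.3, 2*np.pi  # rest length, actuator amplitude, cycle rate
phi = (0.0, np.pi)                   # phase shifts of the two friction cycles
dt, T = 1e-3, 5.0

def zeta(t, phase):
    # each element sticks during half of the cycle and slips during the other
    return zeta_stick if np.sin(omega*t + phase) > 0 else zeta_slip

x = np.array([0.0, ell])             # positions of the two elements
for t in np.arange(0.0, T, dt):
    L = ell + eps*np.sin(omega*t)    # actuator-modulated free length of the bond
    f = kappa*((x[1] - x[0]) - L)    # bond tension (> 0 when stretched)
    M = np.array([[zeta(t, phi[0]) + xi, -xi],
                  [-xi, zeta(t, phi[1]) + xi]])
    v = np.linalg.solve(M, np.array([f, -f]))  # force balance for the velocities
    x += v*dt

print("net displacement of the center of mass:", x.mean() - ell/2)
\end{verbatim}
Note that the driving forces sum to zero at every instant, so any net migration comes purely from the phase relation between the actuator and the stick-slip friction.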
If we consider the extension of the mechanical model to a model cell consisting of far more than two elements or to two- or three-dimensional (2D or 3D) space, adjusting the phase shift of each element ``by hand'' becomes less feasible. Instead, we need to consider the underlying processes that regulate the actuator elongation and the stick-slip transition of the substrate friction. The aim of this paper is to develop a basic particle-based model for cell crawling. To this end, we consider a cell crawling on a flat substrate and extend our previous mechanical model to two dimensions. We describe a cell by a set of many subcellular elements connected by viscoelastic bonds~\cite{Newman2007Modeling}. In addition, intracellular chemical reactions are represented by simple reaction-diffusion (RD) equations~\cite{Murray2002Mathematical,Murray2003Mathematical,Kuramoto1984Chemical,Epstein1998An,Pismen2006Patterns}, which trigger mechanical activities. We then couple the RD equations and the mechanical model to achieve efficient migration. In particular, we focus on the time delay between the intracellular chemical reactions and the cell mechanics, which corresponds to the ordering of the basic crawling processes. This paper is organized as follows. In the next section, we introduce the model equations that couple cell mechanics and intracellular chemistry. Then, we show that the dependence of the substrate adhesion on the intracellular chemistry determines the direction of the cell crawling in Sect.~\ref{sec:direct_retrograde_waves}. In Sect.~\ref{sec:adhesion_mechanochemical}, we investigate how the cell crawling changes depending on the mechanical and chemical signals that control the substrate adhesion. The impacts of the cell shape and size are studied in Sects.~\ref{sec:cell-shape} and \ref{sec:cell-size}, respectively. In Sect.~\ref{sec:random}, random crawling motion is realized by random excitation of intracellular chemistry, for which we analyze the traction force multipoles in Sect.~\ref{sec:traction}. Sect.~\ref{Summary and Discussion} is devoted to the summary and discussion, and this paper concludes with Sect.~\ref{sec:conclusion}. \section{Model} \label{Model} First, we introduce our mechanical model of a crawling cell and the RD equations representing intracellular chemical reactions. The choice of RD equations is arbitrary, and we employ a previously introduced model. We then couple the mechanical model and the RD equations, which regulate the intracellular mechanical activities. In particular, we confine ourselves to studying possible couplings between the intracellular chemical and mechanical models. \begin{figure}[t] \centering \includegraphics{./figure1.pdf} \caption{ Sketch of the subcellular element model of a cell crawling on a substrate. (a) The cell is described by a set of subcellular elements (magenta circles) connected by viscoelastic bonds (blue lines). The shape of a cell at rest is assumed to be a perfect hexagonal lattice. The element indicated by the star is the activator element. (b) Details of the subcellular elements and the connecting viscoelastic bond. Each element possesses the chemical concentrations $\bm{c}_i$. The actuator length $\ell^{\rm act}_{ij}(t)$ and the substrate friction coefficient $\zeta_i(t)$ change over time.
}% \label{fig:ScEM_schematics} \end{figure} \subsection{Subcellular-element model} \label{Subcellular-element model} We describe a single cell by a set of subcellular elements~\cite{Newman2007Modeling} connected by Kelvin-Voigt-type viscoelastic bonds, as schematically depicted in Fig.~\ref{fig:ScEM_schematics}. Since the typical size of a cell is on the order of ten micrometers, the effect of inertia is negligible. Then, the force balance equation of element $i$ is given by \begin{align} &\zeta_i(t) \bm{v}_i + \sum_{j \in \Omega_i} \xi \ell_{ij} ( \bm{v}_i -\bm{v}_j ) \notag \\ &\hspace{2em} = \sum_{j \in \Omega_i} \frac{\kappa}{\ell_{ij}} \hat{\bm{r}}_{ij} \left\{ r_{ij} - \left(\ell_{ij} +\ell^{\rm act}_{ij}(t) \right) \right\} +\bm{f}^{\rm area}_i, \label{eq:force_balance} \end{align} where $\bm{v}_i$ is the velocity of the element $i$ located at the position $\bm{r}_i$. Here, the abbreviations $r = |\bm{r}|$ and $\hat{\bm{r}} = \bm{r}/r$ are used for the relative position $\bm{r}_{ij} = \bm{r}_j - \bm{r}_i$. The summation is over the set of the elements $\Omega_i$ connected to the element $i$ by the viscoelastic bonds. The first term on the left-hand side of Eq.~\eqref{eq:force_balance} represents the substrate friction with coefficient $\zeta_i(t)$, which changes over time due to intracellular activity. The second term represents intracellular dissipation with the rate $\xi$. The first term on the right-hand side represents intracellular elasticity with the elastic modulus $\kappa$ and free length $\ell_{ij}$. Intracellular activity is also included in the actuator, which tends to elongate the connecting bond by changing the free length over time as $\ell_{ij} +\ell^{\rm act}_{ij}(t)$. Here, $\ell^{\rm act}_{ij}(t)$ represents the actuator elongation, from which the force generated by the actuator is calculated as $\bm{f}^{\rm act}_{ij}(t) = -\kappa \ell^{\rm act}_{ij}(t) \hat{\bm{r}}_{ij} /\ell_{ij}$. We emphasize that the model Eq.~\eqref{eq:force_balance} satisfies the force-free condition since the intracellular force acts symmetrically on each pair of subcellular elements. Namely, the sum of the intracellular forces in Eq.~\eqref{eq:force_balance} vanishes: \begin{align} &\sum_i \bm{f}^{\rm int}_i = 0, \label{eq:force-free} \end{align} where \begin{align} \bm{f}^{\rm int}_i =& - \sum_{j \in \Omega_i} \xi \ell_{ij} ( \bm{v}_i -\bm{v}_j ) \notag\\ &+\sum_{j \in \Omega_i} \frac{\kappa}{\ell_{ij}} \hat{\bm{r}}_{ij} \left\{ r_{ij} - \left(\ell_{ij} +\ell^{\rm act}_{ij}(t) \right) \right\} +\bm{f}^{\rm area}_i \label{eq:intracellular_force} \end{align} is the intracellular force acting on the element $i$. The last term on the right-hand side of Eq.~\eqref{eq:force_balance} prevents the collapse of the subcellular element network and is given by $\bm{f}^{\rm area}_i = -\delta U^{\rm area} /\delta \bm{r}_i$, where $U^{\rm area} = \sum_{\langle i,j,k \rangle} \sigma/S_{ijk}^2$. This potential $U^{\rm area}$ penalizes shrinking of the area of each triangle $\langle i,j,k \rangle$ formed by connected subcellular elements $i$, $j$, and $k$, which is defined by $S_{ijk} = ( \bm{r}_{ij} \times \bm{r}_{ik} ) \cdot \hat{\bm{e}}_z /2$ with $\hat{\bm{e}}_z$ the unit vector perpendicular to the 2D substrate. We scale the system by $L_0 = 10 \, \mu \rm{m}$ for length and $T_0 = 1 \, \rm{min}$ for time, which are physiologically relevant values for typical living cells~\cite{Maeda2008Ordered,Tanimoto2014A}.
In addition, the scale of the force is set to $F_0 = 10 \, {\rm nN}$, which is on the order of the traction force that cells exert on the substrate. The typical values of the mechanical parameters of the model Eq.~\eqref{eq:force_balance} are summarized in Appendix~\ref{sec:scaling}. \subsection{Chemical reaction} \label{Chemical reaction} In the model Eq.~\eqref{eq:force_balance}, the effects of the intracellular activities are included in the actuator elongation $\ell^{\rm act}_{ij}(t)$ and the change in the substrate friction coefficient $\zeta_i(t)$. The former represents the protrusion and contraction processes. The latter corresponds to the adhesion and deadhesion of the cell to the underlying substrate. In actual cells, such cellular activities are caused by various intracellular chemical signals. However, it is not realistic to include all chemical components and their signaling pathways. Therefore, we model the intracellular chemical reactions by simple RD equations. Here, we employ the RD equations proposed by Taniguchi et al.~\cite{Taniguchi2013Phase}, which are two-component activator-inhibitor equations: \begin{align} \frac{\partial U_i}{\partial t} = D_U \nabla^2 U_i +G_U( U_i, V_i ), \cr \frac{\partial V_i}{\partial t} = D_V \nabla^2 V_i +G_V( U_i, V_i ), \label{eq:Taniguchi_RD} \end{align} where $U_i$ and $V_i$ represent the inhibitor and activator concentrations for subcellular element $i$, respectively. In the original paper~\cite{Taniguchi2013Phase}, Eq.~\eqref{eq:Taniguchi_RD} was introduced to model the phosphoinositide signaling pathway of Dictyostelium cells, where the activator $V$ and the inhibitor $U$ correspond to the phosphatidylinositol (3,4,5)-trisphosphate (PIP3) and phosphatidylinositol (4,5)-bisphosphate (PIP2) concentrations, respectively. The details of these RD equations are given in Appendix~\ref{sec:RD}. An important property of the RD equations, Eq.~\eqref{eq:Taniguchi_RD}, is that they are of the Gray-Scott type~\cite{Gray1983Autocatalytic,Gary1984Autocatalytic}. One of the advantages of the Gray-Scott model is that it can show either an excitable or a bistable nature depending on the parameters. Taniguchi et al. claimed that the signaling pathway that they were modeling was excitable, and thus, they considered the parameter region of the excitable case to successfully reproduce the experimental results~\cite{Taniguchi2013Phase}. Interestingly, similar RD equations were studied by Shao et al.~\cite{Shao2010Computational} in the context of cell crawling. However, they assumed a bistable regime to reproduce the steady migration of keratocyte cells. In this paper, we consider the excitable case with the parameters summarized in Table~\ref{table:parameters_RD}, following the study by Taniguchi et al.~\cite{Taniguchi2013Phase}. The Laplacian terms in Eq.~\eqref{eq:Taniguchi_RD} are calculated by using the moving particle semi-implicit (MPS) method~\cite{Koshizuka1998Numerical}. See also Appendix~\ref{sec:RD} for further information. \subsection{Mechanochemical coupling} \label{Mechanochemical coupling} To combine the cell mechanics, Eq.~\eqref{eq:force_balance}, and the RD equations, Eq.~\eqref{eq:Taniguchi_RD}, we consider the coupling of the chemical concentrations to the actuator elongation $\ell^{\rm act}_{ij}(t)$ and the substrate friction coefficient $\zeta_i(t)$ individually. Before introducing the coupling, we tested a prescribed traveling wave that couples to the actuator elongation and the substrate friction change.
We introduced a time delay between the elongation and the substrate friction change. The results showed that there exist an optimum time delay and an optimum wavelength for which the cell exhibits the largest migration distance, as summarized in Appendix~\ref{sec:traveling_wave}. \subsubsection{Actuator elongation} \label{sec:actuator_elongation} First, we introduce the coupling between the RD equations and the actuator elongation. In Ref.~\cite{Taniguchi2013Phase}, actin polymerization was found to be enhanced with increasing PIP3 concentration. Therefore, we presume that the actuator elongation depends on the PIP3 concentration as \begin{equation} \ell^{\rm act}_{ij}( t ) = \ell_V \tanh [ a V_{ij}(t) ], \label{eqTaniguchi_ell_ij} \end{equation} where $V_{ij}(t) = ( V_i(t)+V_j(t) )/2$ is the mean PIP3 concentration for the bond connecting the elements $i$ and $j$. Although $V_i(t)$ is a positive quantity, its maximal value depends on the strength of the initial fluctuation because of the excitable nature of Eq.~\eqref{eq:Taniguchi_RD}. Therefore, $\tanh$ is introduced on the right-hand side of Eq.~\eqref{eqTaniguchi_ell_ij} to prevent an extremely large elongation. Here, $a$ is a constant denoting the sensitivity, and $\ell_V$ is the magnitude of the elongation. We set $a=\pi$ and $\ell_V=\ell_{ij}$. \subsubsection{Substrate adhesion} \label{sec:adhesion} \begin{figure}[t] \centering \includegraphics{./figure2.pdf} \caption{ Functional forms of $h_{\zeta}(\zeta)$, $h_v(v)$, and $h_V(V)$, which determine the substrate friction. The parameters are set to the values given in Table~\ref{table:scales}. }% \label{fig:zeta_function} \end{figure} Next, we consider the adhesion to the substrate underneath and the deadhesion from it. We model the adhesion/deadhesion processes by the transition of the substrate friction coefficient between the adhered stick state and the deadhered slip state. Here, we consider the dependence of the substrate friction coefficient on both mechanical and chemical signals: \begin{equation} \tau_{\zeta} \frac{d \zeta_i}{d t} = h_{\zeta}(\zeta_i) - A_v h_v(v_i) - (1-A_v ) h_V(V_i), \label{eq:dzeta_F_sv_sU} \end{equation} where $\tau_{\zeta}$ is the time delay. $A_v$ takes a value between 0 and 1, representing the ratio of the mechanical and chemical dependences of the stick-slip transition of the substrate friction. See also Fig.~\ref{fig:zeta_function} for a plot of these functions. The function $h_{\zeta}(\zeta)$ is defined by \begin{equation} h_{\zeta} (\zeta) = - \frac{1}{2} \tanh \Big( \frac{\zeta -\zeta_{\rm stick}}{\epsilon_{\zeta}} \Big) -\frac{1}{2} \tanh \Big( \frac{\zeta -\zeta_{\rm slip}}{\epsilon_{\zeta}} \Big), \label{eq:g_zeta} \end{equation} where $\zeta_{\rm stick}$ and $\zeta_{\rm slip}$ are the substrate friction coefficients in the adhered stick state and the deadhered slip state, respectively. The small parameter $\epsilon_{\zeta}$ indicates the sharpness of the adhesion-deadhesion transition. Here, we set the transition sharpness to $\epsilon_{\zeta} = \zeta_{\rm slip} /2$. We note that, if there is no change in the substrate friction coefficient, the cell does not exhibit any translational motion. If we consider an artificial vesicle or droplet sitting on a substrate, its adhesion strength changes depending on the force acting on it~\cite{Schwarz2013Physics}. The term $h_v(v)$ in Eq.~\eqref{eq:dzeta_F_sv_sU} represents this dependence of the cell adhesion to the substrate.
Here, instead of the force acting on each subcellular element, we presume that the local velocity changes the adhesion strength through \begin{equation} h_v(v) = \frac{ ( v/v^* )^2 }{ 1 + ( v/v^* )^2 } -\frac{1}{2}, \label{eq:g_v} \end{equation} where $v^*$ is the threshold value. The subcellular element tends to adhere to the substrate ($\zeta = \zeta_{\rm stick}$) if the speed is smaller than the threshold value, i.e., $v<v^*$, while the element slips on the substrate ($\zeta = \zeta_{\rm slip}$) if $v>v^*$. We set the threshold value to $v^*=1$. This form of the stick-slip transition of the cell-substrate friction depending on the local velocity, i.e., Eq.~\eqref{eq:dzeta_F_sv_sU} with $A_v=1$, was introduced in Ref.~\cite{Barnhart2015Balance}. As a result of the balance of the two functions, $h_{\zeta}(\zeta)$ and $h_v(v)$, the substrate friction switches between the stick and slip states with a sharp transition. In addition to the mechanical dependence, the adhesion strength of a cell can change depending on its internal chemical conditions~\cite{Schwarz2013Physics}. Since the molecular details of cell adhesion are complicated, we assume here that it changes depending on the PIP3 concentration, as does the actuator elongation: \begin{equation} h_V(V) = \frac{1}{2} \tanh( \sigma_V (V -V^*)). \label{eq:g_V} \end{equation} In Eq.~\eqref{eq:g_V}, $\sigma_V$ stands for the sensitivity, and $V^*$ is the threshold concentration. Due to this term, a large value of $V$ prevents strong adhesion if $\sigma_V > 0$, while a large $V$ enhances the adhesion if $\sigma_V < 0$. It is, however, not clear a priori whether PIP3 enhances or diminishes adhesion. \begin{figure}[t] \centering \includegraphics{./figure3.pdf} \caption{ Cell crawling obtained from Eqs.~\eqref{eq:force_balance}--\eqref{eq:dzeta_F_sv_sU} for different signs of $\sigma_V$: (a) $\sigma_V = 2\pi$ and (b) $\sigma_V = -2 \pi$. The cell for positive $\sigma_V$ crawls in the direction opposite to the traveling chemical wave, as shown in panel (a), whereas, for negative $\sigma_V$, it moves in the same direction as the wave, as displayed in panel (b). The other parameters are set to $A_v=0$ and $\tau_{\zeta}=0.01$. The position of each subcellular element is plotted by a circle whose size and color indicate the values of $\zeta_i$ and $V_i$, respectively. The color of the connecting bonds corresponds to $V_{ij}$. The number in the bottom left corner of each subplot represents the time. }% \label{fig:direct_retrograde_flow} \end{figure} \section{Crawling by direct and retrograde waves} \label{sec:direct_retrograde_waves} To determine the sign of $\sigma_V$, we numerically integrated Eqs.~\eqref{eq:force_balance}--\eqref{eq:g_V} for both positive and negative $\sigma_V$. See Appendix~\ref{sec:method} for the details of the simulation. Fig.~\ref{fig:direct_retrograde_flow} depicts time series of snapshots of a crawling cell for $\sigma_V=2\pi$ in Fig.~\ref{fig:direct_retrograde_flow}(a) and $\sigma_V=-2\pi$ in Fig.~\ref{fig:direct_retrograde_flow}(b). Here, we set the threshold in Eq.~\eqref{eq:g_V} to $V^*=0.5$ throughout this paper. If $\sigma_V$ is positive, the cell moves in the direction opposite to the PIP3 traveling wave, as shown in Fig.~\ref{fig:direct_retrograde_flow}(a). Interestingly, however, if the sign of $\sigma_V$ is negative, a qualitatively different result appears: namely, the cell starts to move in the same direction as the traveling wave, as displayed in Fig.~\ref{fig:direct_retrograde_flow}(b).
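For concreteness, the adhesion dynamics of Eqs.~\eqref{eq:dzeta_F_sv_sU}--\eqref{eq:g_V} used in these simulations can be written compactly in code. The following minimal sketch implements the three functions and an explicit Euler update of $\zeta_i$; the values of $\zeta_{\rm stick}$ and $\zeta_{\rm slip}$ are placeholders standing in for the tabulated parameters, whereas $\epsilon_{\zeta}=\zeta_{\rm slip}/2$, $v^*=1$, $V^*=0.5$, and $\sigma_V=2\pi$ follow the text:
\begin{verbatim}
import numpy as np

zeta_stick, zeta_slip = 10.0, 0.1   # placeholder stick/slip friction values
eps_zeta = zeta_slip / 2            # transition sharpness, as set in the text
v_star, V_star, sigma_V = 1.0, 0.5, 2*np.pi

def h_zeta(z):
    # double-tanh term with stable branches at zeta_stick and zeta_slip
    return (-0.5*np.tanh((z - zeta_stick)/eps_zeta)
            - 0.5*np.tanh((z - zeta_slip)/eps_zeta))

def h_v(v):
    # mechanical signal: favours slip when the local speed exceeds v*
    return (v/v_star)**2/(1 + (v/v_star)**2) - 0.5

def h_V(V):
    # chemical signal: large PIP3 weakens adhesion for sigma_V > 0
    return 0.5*np.tanh(sigma_V*(V - V_star))

def step_zeta(z, v, V, A_v, tau_zeta, dt):
    # explicit Euler update of the friction coefficient dynamics
    return z + (dt/tau_zeta)*(h_zeta(z) - A_v*h_v(v) - (1 - A_v)*h_V(V))
\end{verbatim}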
With respect to the migration direction, a traveling wave in the same direction is called a direct wave, while one in the opposite direction is referred to as a retrograde wave~\cite{Iwamoto2014The}. In this sense, the above crawling motion for positive $\sigma_V$ in Fig.~\ref{fig:direct_retrograde_flow}(a) corresponds to motion with a retrograde wave, and the one for $\sigma_V<0$ in Fig.~\ref{fig:direct_retrograde_flow}(b) corresponds to motion with a direct wave. Since the experiments in Ref.~\cite{Taniguchi2013Phase} show that cells move in the direction in which the PIP3 concentration is increased and thus actin polymerization is enhanced, we set $\sigma_V = 2\pi$ for the rest of this paper. \begin{figure*}[t] \centering \includegraphics{./figure4.pdf} \caption{ Normalized migration distance $\Delta R/L_{\rm cell}$ against the time delay $\tau_{\zeta}$ for (a) a hexagonal cell ($L_{\rm cell} = 1$), (b) a circular cell ($L_{\rm cell} = 1$), and (c) a large hexagonal cell ($L_{\rm cell} = 1.4$). The results for several values of $A_v$ are plotted by different lines. }% \label{fig:deltaR} \end{figure*} \section{Mechanical vs.\ chemical control of adhesion} \label{sec:adhesion_mechanochemical} Now, we study the effect of the mechanical and chemical dependences of the substrate friction coefficient on cell crawling. To characterize the cell migration, we measure the migration distance $\Delta R$ of the cell in one cycle, i.e., with one traveling wave, varying the time delay $\tau_{\zeta}$. Varying $A_v$ between 0 and 1 tunes the ratio of the mechanical and chemical dependences. The results are summarized in Fig.~\ref{fig:deltaR}(a), where different lines correspond to different values of $A_v$, as indicated in the legend. Interestingly, mixing the mechanical and chemical dependences of the substrate friction may result in larger values of $\Delta R$ than purely mechanical or chemical control. In Fig.~\ref{fig:deltaR}(a), $\Delta R /L_{\rm cell}$ reaches approximately 0.36 for $A_v=0.6$ and $\tau_{\zeta} =0.01$. This value, however, is smaller than that of a crawling Dictyostelium cell, which moves its body length in approximately two cycles~\cite{Tanimoto2014A}. \section{Impact of cellular shape} \label{sec:cell-shape} Thus far, we have assumed a hexagonal cell shape, where the structure of the subcellular elements is given by a perfect hexagonal lattice when the cell is at rest. However, this structure does not describe real cells, which are instead circular or often have more complicated shapes. To elucidate the impact of the cell shape on the crawling motion, we prepare a cell of circular shape as described in Appendix~\ref{sec:Circular}. We then perform the simulation and measure the migration distance for different values of $A_v$. The results are plotted in Fig.~\ref{fig:deltaR}(b) and are qualitatively the same as those of the hexagonal cell in Fig.~\ref{fig:deltaR}(a). \section{Impact of cell size} \label{sec:cell-size} Now, we study the impact of the size of the cell. We prepare a hexagonal cell of size $L_{\rm cell} = 1.4$, which has approximately twice the area of the previous cells. We again measure the migration distance $\Delta R$, which is normalized by the cell length $L_{\rm cell}$ to facilitate comparison with the previous results for $L_{\rm cell}=1$. The results are summarized in Fig.~\ref{fig:deltaR}(c). Qualitatively, the tendency is the same as that of the results in Fig.~\ref{fig:deltaR}(a) for $L_{\rm cell}=1$.
Namely, the migration distance can be larger if the substrate friction depends on both the mechanical and chemical signals than if it depends on only either one of them. \section{Random excitation} \label{sec:random} \begin{figure*}[t] \centering \includegraphics{./figure5.pdf} \caption{ Cell crawling with a randomly chosen activator element for (a,b) $K_K=5$ and (c,d) $K_K=5.5$. The trajectory of the center-of-mass position is plotted in (a) and (c). The gray circles indicate the initial position at $t=0$, and the red triangles represent the position at every time interval of 10. The orange squares are the final position at time 40. (b,d) Time series of snapshots, (b) where the crawling direction of the cell changes and (d) where the cell undergoes a spinning motion. The numbers at the bottom left corners of the subplots in panels (b) and (d) show the time. The other parameters are the same as in Table~\ref{table:parameters_RD}. }% \label{fig:random} \end{figure*} In reality, cells change their migration direction over time. In our model, we can reproduce such motion by introducing stochasticity, which may originate from, e.g., the complexity of intracellular processes. Here, we randomly choose one element every time interval of $t=0.15$ and add to that element the stimulus $(\delta U,\delta V)=( -I_{\rm excite}, +I_{\rm excite})$ with intensity $I_{\rm excite}=0.75$. The results are summarized in Fig.~\ref{fig:random}. Due to the stochasticity, the cell changes its migration direction frequently, as shown in the trajectory of the center-of-mass position in Fig.~\ref{fig:random}(a). From the snapshots of the cell in Fig.~\ref{fig:random}(b), we can see that the migration direction depends on the position at which the chemical wave occurs. Because of the excitable nature of the RD equations, a stimulus on an element that has relaxed back to the resting state is more likely to be the origin of the next wave. Therefore, a new wave tends to originate from elements that are near the origin of the previous wave. As a result, the cell tends to maintain the same migration direction for some time, as shown in Fig.~\ref{fig:random}(a). In Figs.~\ref{fig:random}(a) and (b), the parameter values are kept the same as in Fig.~\ref{fig:direct_retrograde_flow}(a). If the parameter $K_K$ is slightly increased from 5 to 5.5, the cell changes its migration direction more frequently, as depicted in Fig.~\ref{fig:random}(c). Depending on the random stimuli, the cell switches from directional motion to spinning motion as a spiral wave appears, as shown in Fig.~\ref{fig:random}(d). The spinning motion is rather stable, but the cell can also switch back to directional motion in response to a stimulus. Note that the RD equations maintain their excitable nature at this parameter value. \section{Traction force multipoles} \label{sec:traction} \begin{figure}[t] \centering \includegraphics{./figure6.pdf} \caption{ Traction force multipoles of the randomly crawling cell in Fig.~\ref{fig:random}(a). (a) Time series of each component of the traction force monopole ($M^{(1)}_1$ and $M^{(1)}_2$) and of the diagonal ($M^{(2)}_{11}$ and $M^{(2)}_{22}$) and off-diagonal components ($M^{(2)}_{12}$ and $M^{(2)}_{21}$) of the traction force dipole. (b) Time evolution of the traction force dipole $M^{(2)}_{11}$ and quadrupole $M^{(3)}_{111}$. The color indicates the time. The $M^{(2)}_{11}$ axis is inverted to match the plot in Ref.~\cite{Tanimoto2014A}.
}% \label{fig:traction_multipole} \end{figure} In many experiments on cell crawling, the traction force is measured~\cite{Style2014Traction} since it is of fundamental importance for cell motility. Here, we measure the traction force of our model cell. Since the traction force is the force that a cell exerts on the substrate underneath, it is described by the interaction between the cell and the substrate: $\bm{f}^{\rm traction}_i = \zeta_i(t) \bm{v}_i$. We calculate the traction force multipoles, which are defined as follows, to compare with the experimental results~\cite{Tanimoto2014A}. First, the traction force monopole is defined by \begin{equation} M^{(1)}_{\alpha} = \sum_i \bm{f}^{\rm traction}_{i,\alpha}, \label{eq:traction_monopole} \end{equation} where the summation over the subcellular elements $i$ runs over the entire cell. The subscript $\alpha$ indicates the spatial component, $\alpha = 1,2$. Here, the traction force multipoles are calculated in the comoving frame of the cell, and thus, $\alpha=1$ and 2 represent the components parallel and perpendicular to the center-of-mass velocity, respectively, to allow comparison with the experimental results in Ref.~\cite{Tanimoto2014A}. In the numerical simulation of the model crawling cell, the traction force monopole is equal to 0, as shown in Fig.~\ref{fig:traction_multipole}, which is consistent with the experimental result~\cite{Tanimoto2014A}. This is readily understood from the force balance equation, Eq.~\eqref{eq:force_balance}, and the force-free condition, Eq.~\eqref{eq:force-free}. That is, the traction force monopole vanishes because inertia is negligibly small for crawling cells, on top of the force-free condition. The next lowest mode is the traction force dipole, the elements of which are defined by \begin{equation} M^{(2)}_{\alpha\beta} = \sum_i r_{i,\alpha} f^{\rm traction}_{i,\beta}. \label{eq:traction_dipole} \end{equation} On the one hand, from Fig.~\ref{fig:traction_multipole}(a), the diagonal components of the traction force dipole oscillate around 0. Here, note that positive and negative force dipoles represent extensile and contractile forces, respectively. In the experiment~\cite{Tanimoto2014A}, only a contractile traction force dipole was observed, which our results fail to reproduce. The reason why our model does not reproduce the contractile force dipole is not yet clear. One possibility is that the friction coefficient of the protrusion process may be different from that of the contraction process, which are set equal in our current model. On the other hand, the off-diagonal components $M^{(2)}_{12}$ and $M^{(2)}_{21}$ take the same values, indicating that the traction force dipole is symmetric, although such symmetry is not presumed in its definition, Eq.~\eqref{eq:traction_dipole}. Such a symmetric property of the traction force dipole was also obtained in the experiment~\cite{Tanimoto2014A}. Now, we consider the meaning of the symmetric property of the traction force dipole that is obtained in our simulation as well as in the experiment. In fact, this symmetric property of the traction force dipole indicates the torque-free nature of the cell.
Here, the torque is defined by \begin{align} \bm{T} = \sum_i \bm{r}_i \times \bm{f}^{\rm traction}_i, \label{eq:torque} \end{align} which, in two-dimensional space, becomes \begin{align} T = \sum_i ( r_{i,1} f^{\rm traction}_{i,2} -r_{i,2} f^{\rm traction}_{i,1} ) = M^{(2)}_{12} -M^{(2)}_{21} \label{eq:torque_2d} \end{align} by using Eq.~\eqref{eq:traction_dipole}. Note that, if one describes the force dipole tensor in terms of its invariants, the torque appears as the imaginary part of the eigenvalues. Finally, in Fig.~\ref{fig:traction_multipole}(b), we plot the time evolution of the traction force dipole $M^{(2)}_{11}$ and the traction force quadrupole $M^{(3)}_{111}$, which was also measured in the experiment~\cite{Tanimoto2014A}. Here, the traction force quadrupole is defined by \begin{equation} M^{(3)}_{\alpha\beta\gamma} = \sum_i r_{i,\alpha} r_{i,\beta} f^{\rm traction}_{i,\gamma}. \label{eq:traction_quadrupole} \end{equation} Interestingly, the trajectory in $M^{(2)}_{11}$-$M^{(3)}_{111}$ space shows a counterclockwise rotation, which is qualitatively consistent with the experimental results~\cite{Tanimoto2014A}. \section{Summary and Discussion} \label{Summary and Discussion} To summarize, we have constructed a mechanochemical model of a cell crawling on a substrate. The mechanical part is described by a subcellular-element model, and the chemical part is described by RD equations. To combine them, we introduce two mechanical activities. One is the actuator of the bond connecting each pair of subcellular elements, which elongates depending on the intracellular chemical concentration. The other is the substrate friction coefficient of each subcellular element, which shows a sharp transition between the adhered stick state and the deadhered slip state. We consider the dependence of the substrate friction coefficient on both the local velocity and the intracellular chemical concentration. We also introduce a time delay of the substrate friction change. By using this model, we clarified that the substrate adhesion dynamics affect how the intracellular force is converted into cell crawling motion. Depending on the sign of the sensitivity of the substrate friction coefficient to the PIP3 concentration, the model cell exhibited crawling with a retrograde wave or with a direct wave. For the former case, which is consistent with experimental observations, our model showed that there is an optimum time delay and that the combined effect of the mechanical and chemical signals on the substrate friction coefficient can increase the migration distance. We also investigated the impact of the cell shape and the cell size, which led to qualitatively the same results. In addition, we included stochasticity in the RD equations, enabling the cell to change its migration direction and to change its dynamical mode from translational motion to spinning motion. Further, we performed a multipole analysis of the substrate traction force, which was qualitatively consistent with the experimental results except for the contractile nature of the traction force dipole. Finally, we discuss some possible extensions of our current model. \begin{description} \item[Contraction process] In our current model, the protrusion and contraction processes are both modeled by the actuator elongation, $\ell^{\rm act}_{ij}(t)$, of the bond connecting two subcellular elements. The two processes are distinguished by the sign of $\ell^{\rm act}_{ij}(t)$.
In this paper, however, we consider only the protrusion process, i.e., $\ell^{\rm act}_{ij}(t) \geq 0$, which is related to the PIP3 concentration. One reason for this is that the chemical reactions that regulate the contraction process are not yet well understood. In principle, however, we can also include the contraction process by introducing a dependence on the relevant chemical concentrations. \item[Adhesion dynamics] Cell adhesion is simply modeled by the switching of the substrate friction coefficient of each subcellular element in our model. However, in real cells, adhesion is mediated by adhesion molecules, which can diffuse and form focal adhesions. To represent these processes of the adhesion molecules, we may include detailed dynamics of the concentration of adhesion molecules and their diffusion to other subcellular elements. Then, we can discuss detailed structures such as the footstep-like focal adhesions observed for Dictyostelium cells~\cite{Tanimoto2014A}. \item[Shape deformation] Our model cell shows a lateral expansion with respect to the crawling direction, as shown in Fig.~\ref{fig:direct_retrograde_flow}(a). However, real cells, e.g., Dictyostelium cells, tend to elongate in the direction of motion~\cite{Maeda2008Ordered,Bosgraaf2009The}. One possible reason that our current model fails to reproduce this elongated shape is that the actuator elongation depends only on the absolute value of $V_i$. Therefore, this inconsistency may be resolved by, e.g., introducing a dependence on the gradient of $V_i$. \item[Three dimensions] In this paper, we modeled a cell as a two-dimensional network of viscoelastic springs by assuming crawling on a flat substrate. In reality, however, cells are three-dimensional objects. The extension of our current model to three dimensions is straightforward. \end{description} \section{Conclusion} \label{sec:conclusion} In conclusion, the modeling of crawling cells is still a challenging task due to the complexity of intra- and intercellular processes. The force that a cell generates should satisfy the force-free condition, i.e., the total force vanishes. To achieve net migration under the force-free condition, the cell needs to break symmetry. In our mechanochemical subcellular-element model, the intracellular force acts symmetrically on each pair of subcellular elements; therefore, it naturally satisfies the force-free condition. Symmetry breaking occurs due to the switching of the substrate friction coefficient between the adhered stick state and the deadhered slip state. Therefore, our model clearly distinguishes between intracellular and external forces. To control these mechanical activities, we included RD equations representing intracellular chemical reactions. The RD equations that we employed in this study were introduced to explain the chemical traveling wave observed within Dictyostelium cells~\cite{Taniguchi2013Phase}. However, a number of chemical reactions occur inside a cell, and which chemical reactions are relevant may depend on the phenomena of interest. Nevertheless, we believe that our model can provide a basic framework for the future construction of mechanochemical models of crawling cells by replacing the RD equations with suitable ones for each specific phenomenon. \begin{acknowledgements}% This work was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grants (17H01083, 19H14673), as well as the JSPS bilateral joint research projects. \end{acknowledgements}%
\section*{Acknowledgments} This work was supported in part by the Telekom Malaysia R\&D Grant under Project 2beAware (MMUE/140098), MOHE Grant FRGS/1/2016/ICT02/MMU/02/2, Multimedia University and University of Malaya. \section{Introduction} Facial expression is one of the most common types of nonverbal communication and plays an important role in reflecting one's emotional state. Different combinations of facial muscular movements represent specific types of emotion. According to psychologists, people portray particular emotions on the face in the same way, regardless of race or culture \citep{ekman1971constants}. Furthermore, it was verified by~\cite{matsumoto2009spontaneous} that there is no difference between sighted and blind individuals in the configuration of the facial muscle movements in response to emotional stimuli. In other words, facial expressions are universal. They are commonly classified into six emotion classes: happiness, sadness, fear, anger, disgust and surprise. Generally, facial expressions are categorized into two types, namely, macro-expressions and micro-expressions. The former typically lasts between three quarters of a second and two seconds, and the muscle movements may occur simultaneously at multiple parts of the face. Therefore, macro-expressions are readily perceived by humans in real-time conversations. Over the past few decades, research in automated macro-expression recognition has been an active topic. To date, many of the recognition systems developed achieve more than 95\% expression classification accuracy \citep{lopes2017facial,wang2016facial}, and some of them even reach almost 100\% recognition performance \citep{kharat2009emotion, ali2015facial, rivera2013local}. However, it should be noted that a macro-expression does not accurately imply one's emotional state, as it can easily be faked. Hence, it is worth investigating deeper emotion states through the muscular movements. Among the several types of nonverbal communication, micro-expressions have been found to be more likely to reveal one's true emotions. Micro-expressions typically last between one-twenty-fifth and one-fifth of a second \citep{ekman1971constants}, and they may only appear in a few small regions of the face. Besides, they are stimulated involuntarily, which means that people cannot control their appearance. This makes them capable of exposing one's concealed genuine emotions beyond deliberate control. Owing to this characteristic of potentially exposing a person's true emotions, micro-expressions can be exploited in several applications such as national security, police interrogation, business negotiation, social interaction and clinical practice \citep{seidenstat2009protecting, o2009police, matsumoto2011evidence, turner1997evolution, frank2009see}. Micro-expressions were first discovered by~\cite{haggard1966micromomentary} about fifty years ago, when analyzing films of a number of psychotherapeutic interviews. At that time, they referred to the expression as a ``micromomentary expression (MME)'' and attributed its appearance to repressed feelings. A few years later, \cite{ekman1969nonverbal} made a groundbreaking discovery when watching a slow-motion interview film of a depressed patient who was requesting a weekend pass from the psychiatric hospital to go home.
Through a careful frame-by-frame observation of the video, Ekman and Friesen noticed the appearance of intense negative micro-expressions that the patient was trying to hide. However, the emotions were quickly covered up with another expression (i.e., a smile). In fact, the patient had planned to commit suicide once free of supervision. Since then, the analysis of micro-expressions has been gaining more attention in both the psychological and computer vision fields. Thus far, the identification and annotation of micro-expressions are done manually by psychologists or trained experts. This may lead to inconsistent reliability, as the labeling of the expression depends solely on personal judgment. In addition, it is time- and effort-consuming, as the annotators are required to inspect the tiny facial muscle changes in each frame transition. Therefore, it is essential to implement reliable computer-based micro-expression detection and classification systems to obtain trustworthy, accurate and precise ground truths (i.e., emotion state, action unit, onset, apex and offset indices) for each video. In general, a micro-expression recognition system involves three basic steps: (1) Image preprocessing - enhancement of the image while preserving the significant features; (2) Feature extraction - identification of the important features from the image; (3) Expression classification - recognition of the emotion based on the features extracted. Figure~\ref{fig:basicStep} illustrates the basic flowchart of the recognition process. Each step plays a vital role in obtaining a promising recognition performance, and they are all equally important because each targets unique strategies to address the desired features from a different perspective. In recent years, the number of automated micro-expression systems developed in the literature has been increasing only gradually. This might be due to the lack of suitable databases for training and testing purposes, which hinders further study, especially in performance assessment and investigation. To date, there are three spontaneous publicly available micro-expression databases (i.e., CASME II \citep{casme2}, SMIC \citep{smic} and SAMM \citep{samm}) that contain a sufficiently large number of video samples for experimental evaluation. \begin{figure}[tb] \centering \includegraphics[width=1\linewidth]{pic/basicStep} \caption{Block diagram of a typical facial micro-expression recognition system} \label{fig:basicStep} \end{figure} Recent works~\citep{patel2016selective, takalkar2017image, peng2017dual} have shown the feasibility of adopting deep learning (e.g., convolutional neural networks (CNNs)) in micro-expression recognition systems. However, the recognition accuracy of previous works is still unsatisfactory. To the best of our knowledge, there has not been any attempt to perform cross-database evaluation for the micro-expression recognition task using a CNN mechanism. In this paper, a novel and robust feature extraction approach that can effectively represent the subtle facial muscle contractions for a micro-expression recognition system is presented. Concretely, the contributions of this paper are listed as follows: \begin{enumerate} \item Adoption of only two frames (i.e., onset and apex) from each video to better represent the significant expression details, and application of optical flow-guided techniques to encode the motion flow features (an illustrative sketch is given at the end of Sect.~\ref{sec:related}).
\item Proposal of a novel feature extractor that incorporates both handcrafted (i.e., optical flow) and data-driven (i.e., CNN) features. \item Implementation of a novel CNN architecture that is capable of highlighting valuable input features and improving the emotion state prediction. \item Comprehensive evaluation of the proposed approach on three recent spontaneous micro-expression databases to validate its consistency and effectiveness. \end{enumerate} The remainder of the paper is organized as follows. Section~\ref{sec:related} discusses related work on state-of-the-art apex frame spotting and feature extraction techniques. The proposed recognition system framework, theoretical derivations and the effective use of the CNN are elaborated in Section~\ref{sec:algorithm}. An overview of the databases used and the experimental settings is given in Section~\ref{sec:experiment}. Section~\ref{sec:results} then reports the recognition performance, with discussion and analysis. Finally, conclusions are drawn in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related} In the literature, most automated micro-expression studies have focused on the first and second stages of the recognition system, i.e., image preprocessing and feature extraction. Some promising preprocessing techniques and feature extractors exploited in micro-expression analysis systems are discussed in the following subsections. \subsection{Image Preprocessing} Two properties of micro-expressions are their low intensity and their tendency to occur in specific facial regions. Therefore, some previous works aim to emphasize the facial muscle movements in particular areas, instead of extracting features from the entire face. Focusing on extracting features from several small facial regions can eliminate noticeable background noise captured by the camera (probably due to flickering lights). In addition, considering only the regions of interest (RoIs) accelerates the feature extraction and classification processes, as irrelevant data are eliminated. For instance,~\cite{wang2014micro} encode the expression features from 16 RoIs based on the Facial Action Coding System (FACS) \citep{ekman1978facial}, which relates facial muscle changes to emotional states. However, the shapes and sizes of the 16 RoIs are not flexible, as they rely heavily on the feature coordinates detected by the landmark detector. On the other hand,~\cite{liong2018hybrid} proposed to reduce the number of RoIs to three regions (i.e., ``left eye + left eyebrow", ``right eye + right eyebrow" and ``mouth"). The selection of these three areas is based on the occurrence frequency of the muscle movements in the videos provided by the CASME II and SMIC databases. Although the size and location of the 3 RoIs are not fixed, they are still dependent on the position of the landmark coordinates. Unfortunately, the landmark-based approach might not be sufficiently accurate, and the 3 selected regions are not always the optimal areas for capturing the full expression information. In addition, it is pointed out by~\cite{xu2017microexpression} that fine-scale alignment is an essential preprocessing step.
This is because subtle misalignment resulting from conventional facial registration and alignment tools can degrade the recognition performance. Moreover, some works minimize the information redundancy in micro-expressions by emphasizing only a portion of the frames of each video. For example,~\cite{le2017sparsity} select several important frames for extraction. This is intuitive: since the images are captured using high frame rate cameras, similar facial motion patterns appear in consecutive frames. Therefore, they intend to identify and remove redundant frames as a preprocessing step. Besides, this can boost the discriminative power of the feature vectors. On a similar note, another recent method proposed by~\cite{he2017multi} also describes the expression details from a reduced set of frames. Concretely, the Temporal Interpolation Model (TIM)~\citep{zhou2012image} is applied to normalize all the videos in the SMIC dataset to 20 frames and those in CASME II to 30 frames. It should be noted that the average frame lengths for SMIC and CASME II are 33 and 67, respectively. Although shortening the video length improves efficiency and accuracy, an arbitrary decision has to be made about what frame length should be used. Another remarkable preprocessing technique, proposed in~\cite{liong2018less,liong2017micro}, represents the entire video by utilizing only the apex frame (with the onset frame as reference). Briefly, there are generally three temporal segments in each micro-expression video (i.e., onset, apex and offset). The onset is the instant at which the facial muscles begin to contract and grow stronger. The apex frame indicates the most expressive facial action, when the expression reaches its peak. The offset is the moment when the muscles relax and the face returns to its neutral appearance. The results reported in~\cite{liong2018less,liong2017micro} support the view that encoding the features from the apex frame provides more valuable expression details than a series of frames. Furthermore, the apex-based approach is employed in the related work of~\cite{liong2016automatic}, where it is tested on other micro-expression databases comprising only raw long videos, and promising performance results are obtained. \subsection{Feature Extraction} The first feature extraction method evaluated on the spontaneous micro-expression databases (i.e., CASME II and SMIC) is Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) \citep{zhao2007dynamic}. LBP-TOP was originally designed to describe dynamic texture patterns. In brief, it is capable of capturing local spatio-temporal motion information (i.e., at pixel, region and volume levels). Furthermore, it is robust against geometric variations caused by scaling, rotation or translation. With its discriminative feature representation as well as its computational simplicity, LBP-TOP has been comprehensively studied and modified to accommodate different applications. As a result, several LBP variants have been proposed and some of them have been examined in micro-expression analysis, such as Local Binary Patterns with Six Intersection Points (LBP-SIP) \citep{wang2014lbp}, Spatiotemporal Local Binary Pattern with Integral Projection (STLBP-IP) \citep{huang2015facial} and Spatiotemporal Completed Local Quantization Pattern (STCLQP) \citep{huang2016spontaneous}.
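Since histograms of LBP codes also underpin the D\&C-RoIs apex spotting adopted later in this paper, the basic building block is worth a brief illustration. The following minimal sketch (our illustration with scikit-image, not the implementation of the cited works) computes a per-frame LBP histogram; the neighbourhood parameters are assumptions chosen for concreteness.
\begin{verbatim}
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    # Threshold the P circular neighbours at radius R against each pixel,
    # then summarize the frame as a normalized histogram of pattern codes.
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

frame = np.random.randint(0, 256, (170, 140)).astype(np.uint8)  # dummy frame
print(lbp_histogram(frame).shape)                               # (10,)
\end{verbatim}
LBP-TOP extends this idea by computing such histograms on the XY, XT and YT planes of the video volume and concatenating them.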
Apart from LBP-TOP, optical flow \citep{gibson1950perception} is one of the most popular feature extractors, as it has been very successful in a variety of computer vision tasks, such as action recognition \citep{chaudhry2009histograms}, face tracking \citep{decarlo2000optical} and medical image reconstruction \citep{weng1997three}. Succinctly, optical flow measures the apparent motion of the brightness patterns in a sequence of images in terms of a velocity vector field. Owing to its robust feature representation across data from multiple domains, a number of researchers have explored the potential of optical flow in micro-expression recognition systems. For instance,~\cite{liu2016main} proposed to construct an RoI-based feature vector using optical flow to describe the local motion information and its spatial location. Thus, aside from having a compact feature representation (i.e., a feature dimension of 72 per video), it is robust to translation, rotation and illumination changes. As an extension of optical flow,~\cite{shreve2009towards} derived a higher-order accurate differential approximation, namely optical strain. Optical strain leads to better performance in determining the motion changes compared to optical flow, as it is capable of preserving relatively meaningful facial muscle movements \citep{liong2016spontaneous, liong2014subtle}. Deep learning has emerged as a family of machine learning techniques in which important information is iteratively extracted from data and transformed into the final output features. Deep learning has had a significant impact on a variety of application domains, yielding numerous state-of-the-art results in areas such as speech recognition \citep{amodei2016deep}, face recognition \citep{sun2015deepid3} and scene recognition \citep{zhou2014learning}. However, deep learning has yet to have a widespread impact on micro-expression studies. In particular, the first work that adopts a Convolutional Neural Network (CNN) is by~\cite{patel2016selective}, who evaluate their algorithm on the CASME II and SMIC databases with the Leave-One-Subject-Out Cross-Validation (LOSOCV) protocol during the data training and testing stages. However, the accuracy results obtained in their work do not outperform the conventional methods, as the model is possibly overfitted. Besides,~\cite{takalkar2017image} doubled the number of samples in each dataset using data augmentation. They partitioned all the images into three sets, namely training, testing and validation, comprising 80\%, 1\% and 1\% of the data, respectively. On the other hand, a recent work by~\cite{peng2017dual} directly feeds the CNN model with high-level features (i.e., optical flow). \section{Proposed Algorithm} \label{sec:algorithm} The impressive recognition performance presented in the earlier work of~\cite{liong2018less} has brought the significance of the apex frame into sharp focus, especially in the feature extraction stage. With the rich motion patterns obtained from the apex frame (with the onset as the reference frame), it is possible to select features with minimal redundancy. As a result, the facial regions containing relevant details of the expression can be easily noticed and encoded. The proposed method targets the preprocessing and feature extraction stages. In brief, it incorporates the following three steps: \begin{enumerate} \item Apex frame acquisition: to spot the apex frame location in each video.
\item Optical flow features elicitation: to estimate the horizontal and vertical optical flow from the apex and onset frames. \item Feature enhancement with CNN: to enrich the optical flow features by automatically identifying and learning relevant spatio-temporal context information in a hierarchical way. \end{enumerate} The conceptual framework of this paper is illustrated in Figure~\ref{fig:proposedFlow}. The detailed procedures for each step are described in the following subsections. \begin{figure*}[tb] \centering \includegraphics[width=1\linewidth]{pic/proposedFlow} \caption{Overview of the proposed micro-expression recognition system. It consists of three main steps, namely apex frame acquisition, optical flow features elicitation and feature enhancement with CNN.} \label{fig:proposedFlow} \end{figure*} \subsection{Apex Frame Acquisition} \label{subsec:apex} Three micro-expression databases are exploited in the experiment, namely, CASME II \citep{casme2}, SMIC \citep{smic} and SAMM \citep{samm}. The location of the ground-truth apex frame is provided in CASME II and SAMM, annotated by at least 2 trained experts. Since the apex frame index is absent in SMIC, an automatic apex spotting system has to be applied to approximate the location of the apex frame. It has been demonstrated that the apex spotting mechanism D\&C-RoIs \citep{liong2015automatic} is capable of yielding reasonably good recognition performance \citep{liong2018less,liong2016automatic}. Succinctly, the D\&C-RoIs method first computes the LBP features from three facial sub-regions (i.e., ``left eye+eyebrow", ``right eye+eyebrow" and ``mouth") of each image. Then, a correlation coefficient principle is employed to acquire the change in the LBP features between the onset frame and each of the remaining frames. Finally, a Divide \& Conquer strategy is applied to the rate of the feature difference to search for the apex frame, i.e., the frame index of the local maximum. For clarity, the notations used in this paper are defined and explained in the following sections. The set of micro-expression video clips is expressed as: \begin{equation} S = \left[ s_1, s_2, ... , s_n\right], \end{equation} \noindent where $n$ is the number of video clips. The $i$-th sample video clip is given by: \begin{equation} s_{i} = \{f_{i,j} | i=1,\dots,n; j=1,\dots ,F_{i}\}, \end{equation} \noindent where $F_i$ is the total number of image frames in the $i$-th sequence. There is one apex frame in each video sequence and it can be located at any frame index between the onset (first frame) and offset (last frame). The onset, apex and offset frames are denoted as $f_{i,1}$, $f_{i,\alpha}$ and $f_{i,F_i}$, respectively. The apex frame thus satisfies: \begin{equation} f_{i,\alpha} \in \{f_{i,1},\dots ,f_{i,F_i}\} \end{equation} Thus, $f_{i,\alpha}$ is predicted by applying the D\&C-RoIs approach. \subsection{Optical Flow Features Elicitation} \label{subsec:elicitation} In this stage, higher-level features of reduced dimension are produced. Specifically, the optical flow features are obtained from the raw onset and apex images prior to passing them to the CNN architecture. Optical flow indicates the apparent facial motion changes between frames. It is an approximation of the image motion based on the local derivatives between two images. Specifically, it aims to generate a two-dimensional vector field, i.e., a motion field, that represents the velocity and direction of each pixel.
In order to capture the dynamical movement of the desired expression (i.e., $p_{i,\alpha}$), the intensity difference between the onset (i.e., $f_{i,1}$) and apex (i.e., $f_{i,\alpha}$) frames is estimated. To estimate the optical flow, it is generally assumed that: \begin{itemize} \item The apparent brightness of the moving objects remains unchanged between the source and target frames. Thus, the noise generated by a large variety of imaging variables, such as shadows, highlights, illumination and surface translucency phenomena, is entirely neglected. \item The movement between two consecutive frames is small, as the motion changes gradually over time. \item The image flow field is continuous and differentiable in both the space and time domains. \item The scene is static, the objects in the scene are rigid, and changes of the objects' shape are ignored. \end{itemize} Suppose that the intensity of the reference frame located at the $t$-th position of a video sequence is defined as $I_t(x,y)$. The intensity of the next consecutive frame, the $(t+1)$-th, is denoted as $I_{t+1}(x+\delta x, y + \delta y)$. According to the brightness constancy constraint, the intensities of the two adjacent frames are related by: \begin{equation} \label{eq:It} I_t(x,y) = I_{t+1}(x + \delta x, y + \delta y), \end{equation} \noindent where $\delta x = u^t \delta t$ and $\delta y = v^t \delta t$. Explicitly, $u^t(x,y)$ and $v^t(x,y)$ refer to the horizontal and vertical components of the optical flow field, respectively. Applying a Taylor series expansion to (\ref{eq:It}) yields the expanded form: \begin{equation} \label{eq:It2} I_{t+1}(x + \delta x, y + \delta y) \approx I_t(x,y) + \delta{x} \frac{\partial{I}}{\partial{x}}+\delta{y} \frac{\partial{I}}{\partial{y}} + \delta {t} \frac{\partial{I}}{\partial{t}} \end{equation} Combining (\ref{eq:It}) and (\ref{eq:It2}), the optical flow equation can be succinctly formulated as follows: \begin{equation} \begin{split} I_t(x,y) &= I_t(x,y) + \delta{x} \frac{\partial{I}}{\partial{x}}+\delta{y} \frac{\partial{I}}{\partial{y}} + \delta {t} \frac{\partial{I}}{\partial{t}}, \\ 0 &= \delta{x} \frac{\partial{I}}{\partial{x}}+\delta{y} \frac{\partial{I}}{\partial{y}} + \delta {t} \frac{\partial{I}}{\partial{t}} \end{split} \end{equation} \noindent Dividing both sides of the equation by $\delta t$ gives: \begin{equation} \begin{split} 0 &= \frac{\delta{x}}{\delta{t} } \frac{\partial{I}}{\partial{x}}+\frac{\delta{y}}{\delta{t} }\frac{\partial{I}}{\partial{y}} + \frac{\delta{t}}{\delta{t} }\frac{\partial{I}}{\partial{t}},\\ 0 &= u^t(x,y)\frac{\partial I}{\partial x} + v^t(x,y)\frac{\partial I}{\partial y} + \frac{\partial I}{\partial t} \end{split} \end{equation} For a sufficiently small time interval between the onset and apex frames (i.e., less than 0.2 seconds), it is assumed that the brightness of the surface patches remains constant. Hence, the optimal expression flow feature $p_{i,\alpha}$ can be obtained from: \begin{equation} I_{t = 1} (x,y) = I_{t + \alpha}(x + u^t(x,y)\delta t, y+ v^t(x,y) \delta t) \end{equation} Finally, the optical flow map computed from the two frames (i.e., onset and apex) is formed to represent the entire video: \begin{equation} O_i = \{(u(x,y), v(x,y)) | x = 1, 2, ... , X, y = 1, ... , Y\} \end{equation} \noindent where $X$ and $Y$ denote the width and height of the images $f_{i,j}$, respectively.
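For concreteness, such a flow map can be estimated directly from the onset-apex pair. The snippet below is a minimal sketch using the TV-L1 solver adopted later in this subsection, through the \texttt{opencv-contrib-python} bindings; the resizing to $28\times 28$ anticipates the network input size fixed in Section~\ref{sec:experiment}, and all parameter choices here are illustrative assumptions rather than the authors' implementation.
\begin{verbatim}
import cv2
import numpy as np

def flow_from_onset_apex(onset_gray, apex_gray, size=28):
    # Dense TV-L1 optical flow between the onset and apex frames.
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    flow = tvl1.calc(onset_gray, apex_gray, None)      # shape (Y, X, 2)
    u, v = flow[..., 0], flow[..., 1]                  # horizontal / vertical
    # Resize each component to the fixed network input size.
    return cv2.resize(u, (size, size)), cv2.resize(v, (size, size))

onset = np.random.randint(0, 256, (170, 140)).astype(np.uint8)  # dummy frames
apex = np.random.randint(0, 256, (170, 140)).astype(np.uint8)
u, v = flow_from_onset_apex(onset, apex)
print(u.shape, v.shape)                                # (28, 28) (28, 28)
\end{verbatim}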
In summary, each video sequence $s_i$ is represented by the following two optical flow derived components: \begin{enumerate} \item $u(x,y)$ - Horizontal component of the optical flow field $O_i$ \item $v(x,y)$ - Vertical component of the optical flow field $O_i$ \end{enumerate} The optical flow technique utilized in the experiments is the TV-L1 method \citep{zach2007duality}. This is because it better preserves flow discontinuities and is more robust than the classical optical flow method (i.e., Black and Anandan \citep{black1996robust}) \citep{shreve2009towards}. \subsection{Feature Enhancement with Convolutional Neural Network} \label{subsec:enhancement} The optical flow features contain the spatio-temporal expression details.
They are then fed into a CNN architecture, which is expected to further improve the feature information by reconstructing and refining the selection of the more significant motion details. The CNN is a deep artificial neural network that has been widely used in analyzing visual imagery \citep{tu2018multi, bai2018sequence, ullah2018action}. It consists of several layers, such as the input layer, convolutional layers, pooling layers, fully connected layers and the output layer. CNNs have also recently been exploited in micro-expression recognition mechanisms. For example,~\cite{peng2017dual} designed a 3D-CNN architecture to effectively learn high-level features (i.e., optical flow data). However, in contrast to~\cite{peng2017dual}, the optical flow representations obtained from the previous stage (i.e., Section~\ref{subsec:elicitation}) are two-dimensional maps (i.e., $X\times Y$). Therefore, a new 2D-CNN architecture is proposed to perform the feature learning task. Figure~\ref{fig:proposedCNN} illustrates the conceptual visualization of our proposed OFF-ApexNet (Optical Flow Features from Apex frame Network) architecture. The horizontal and vertical components of the optical flow are used as the input data of the CNN. Two independently trained CNN models (i.e., trained on \textit{u} and \textit{v} separately) are merged to form a resultant feature vector at the fully connected layers. A basic overview of the role of each layer is given as follows. \begin{figure*}[tb] \centering \includegraphics[width=1\linewidth]{pic/proposedCNN} \caption{Framework of the proposed OFF-ApexNet architecture. The input data are the horizontal and vertical optical flow images. They are processed by two convolutional layers and two pooling layers, followed by two fully connected layers.} \label{fig:proposedCNN} \end{figure*} First, in the input layer, all the input data are normalized to a fixed size (i.e., $\aleph\times \aleph$), where the input data in this case are the optical flow based components, such that: \begin{equation} u= \frac{\delta x(t)}{\delta t}, \end{equation} \noindent and \begin{equation} v = \frac{\delta {y (t)}}{\delta t}, \end{equation} \noindent where $u$ and $v$ refer to the horizontal and vertical components of the optical flow, respectively. The normalized data is then convolved with a kernel to form a feature map in the following convolutional layer. Concretely, each pixel $e_{ij}$ in the feature map is calculated by: \begin{equation} \begin{split} e^{l}_{ij} &= \{f^l(x^{l}_{ij} + b^{l})| i = 1, 2, ... , \aleph, j = 1, ... , \aleph\},\\ \text{where } x^{(l)}_{ij} &= \Sigma^{m-1}_{a=0}\Sigma_{b=0}^{m-1}w^{(l)}_{ab} y^{l-1}_{(i+a)(j+b)}, \end{split} \end{equation} \noindent $x^{(l)}_{ij}$ is the weighted sum over the set of units in the small neighborhood corresponding to the pixel $e_{ij}$ at layer $l$, whereas $f^l$ denotes the ReLU activation function at layer $l$. $w$ and $b$ are the coefficient vector and bias, respectively, of the feature map. For an input $x$, the ReLU function is given by: \begin{equation} f(x) = max(0, x) \end{equation} The input optical flow features (i.e., $u$ and $v$) are now transformed into feature map (i.e., $e$) representations. The number of generated feature maps depends on the number of convolution kernels. Conventional kernel sizes chosen in past research are $3\times 3$, $5\times 5$ and $7\times 7$. The subsequent layer is the pooling layer.
It is used as a subsampling operator to progressively reduce the spatial size of the feature map representation. As a result, it effectively reduces the computational complexity of the CNN architecture. The $k$-th unit in the feature map of the pooling layer is obtained by: \begin{equation} Pool_k = f(down(C)*W + b), \end{equation} \noindent where $W$ and $b$ are the coefficient and bias, respectively. $down(\cdot)$ is a subsampling function, which can be expressed as: \begin{equation} down(C) = max\{C_{s,l} | s\in Z^+, l \in Z^+ \le m\}, \end{equation} \noindent where $C_{s,l}$ refers to a pixel value of $C$ in the feature map $e$, and $m$ denotes the sampling size. It can be observed that each layer (i.e., the convolutional layers and pooling layers) in the CNN architecture learns and converts the optical flow features into higher-level features for the subsequent layers. After passing through all the convolution network layers (which may consist of several convolution layers and pooling layers), the final feature representation (denoted as $Out(\tau)$) comprises significant expression information, where $\tau$ is the optical flow based feature of the input images (i.e., $u$ and $v$). Since the total number of videos used in the experiments is relatively small (i.e., 441 from three datasets), the proposed CNN architecture is composed of only four layers (i.e., two convolution layers and two pooling layers). These layers are responsible for generating meaningful features from the input data, and the final output $Out(\tau)$ can be concisely expressed as follows: \begin{equation} \begin{split} Out(u) =& f^4(down(f^3( ( f^2(down(f^1(u \ast W^1 + b^1)) \\ &\ast W^2 + b^2))\ast W^3+b^3)) \ast W^4 + b^4) \end{split} \end{equation} \noindent and \begin{equation} \begin{split} Out(v) =& f^4(down(f^3( ( f^2(down(f^1(v \ast W^1 + b^1)) \\ &\ast W^2 + b^2))\ast W^3+b^3)) \ast W^4 + b^4) \end{split} \end{equation} The high-level reasoning features (i.e., $Out(u)$ and $Out(v)$) derived from the input data are then flattened and merged before being passed to the following fully connected layer. In general, the fully connected layers map the features to the desired number of classes by weighting them according to their importance. There are three emotion classes in the experiments, namely positive, negative and surprise. Note that, similar to the convolutional layers, a ReLU activation function is applied to all of the outputs of the fully connected layers. Next, the transformed features from the fully connected layer are passed into the output layer. The number of neurons in the output layer corresponds to the number of classes to be classified, which is three in this case. The output probabilities of each class are computed using an activation function and should sum to one. However, in practice the output given by the former layers does not guarantee that the total sum of the probabilities over all classes equals one. To resolve this issue, softmax regression is utilized as the activation function. Specifically, the probability of classifying into class $c$ is given by: \begin{equation} \hat{y} = p\left(y = c | \mathbf{x} \right) = \frac{e^{x_c}}{\Sigma_{n = 1}^{C}e^{x_n}}, \quad 1\le c \le C, \end{equation} where $y$ is the ground-truth label for input $\mathbf{x}$ and $C$ is the number of classes.
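As a concrete numerical illustration of this normalization (an example of ours, not drawn from the experiments):
\begin{verbatim}
import numpy as np

logits = np.array([2.0, 1.0, 0.1])  # raw scores: negative/positive/surprise
probs = np.exp(logits) / np.exp(logits).sum()
print(probs.round(3), probs.sum().round(2))  # [0.659 0.242 0.099] 1.0
\end{verbatim}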
The loss function can be defined as follows: \begin{equation} \label{eq1} L(y, \hat{y}) = - \Sigma_{i=1}^N l(y_i) \log(\hat{y_i}), \end{equation} \noindent where $l(\cdot)$ is the indicator function, which returns one when its argument is true and zero otherwise. The gradient of the error can be calculated using (\ref{eq1}). The sum of the errors over multiple inputs is then minimized by updating the weights of the network using the stochastic Adam gradient descent optimization algorithm. It searches for the weights and coefficients of the neural network via backpropagation, so that the actual output moves closer to the target output, thereby decreasing the error of each output neuron and of the network as a whole. \section{Experiment} \label{sec:experiment} \subsection{Database} A total of three micro-expression databases are involved in the experiment, namely SMIC \citep{smic}, CASME II \citep{casme2} and SAMM \citep{samm}. Using multiple databases helps to avoid the issue of overfitting, which occurs when the gap between training and testing errors is large. Since the number of videos in each single database is small (i.e., $\approx$ 150), a model trained on one database alone fits the training set very well but underperforms on new data. Besides, more training data improves the generalization capability. As such, considering all three datasets as a whole can lead to a better predictive model, one that is better at recognizing new (i.e., unseen) faces under different imaging conditions and environments. Note that the databases were preprocessed prior to the release of the recorded videos to the public. For instance, facial alignment was carried out in order to standardize all the faces to a uniform size and shape. This also ensures that the data extracted later are capable of integration. Succinctly, face alignment is the process of detecting and transforming a set of landmark coordinates to map each face to a model face. Specifically, both SMIC \citep{smic} and CASME II \citep{casme2} utilized the Active Shape Model (ASM) \citep{van2002active} to locate the 68 facial landmark points, and then the Local Weighted Mean (LWM) \citep{goshtasby1988image} transformation was employed to register the faces to the model face. For SAMM, the faces were first registered with the Face++ automatic facial point detector \citep{Face}, and then dlib \citep{king2009dlib} was adopted as the face alignment tool. An overview of the micro-expression dataset information used in the experiment is shown in Table~\ref{table:database}. More details are elaborated as follows.
\begin{table}[tb] \begin{center} \caption{Detailed information of the SMIC, CASME II and SAMM databases used in the experiment} \label{table:database} \begin{tabular}{llccc} \noalign{\smallskip} \cline{3-5} \noalign{\smallskip} \multicolumn{2}{l} {} & SMIC & CASME II & SAMM \\ \hline \noalign{\smallskip} \multicolumn{2}{l}{Participants} & 16 & 24 & 28\\ \hline \noalign{\smallskip} \multicolumn{2}{l}{Frame rate (\textit{fps})} & 100 & 200 & 200 \\ \hline \noalign{\smallskip} \multicolumn{2}{l}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Cropped resolution\\ (pixels)\end{tabular}}} & \multicolumn{3}{c}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}} $170 \times 140$\end{tabular}}} \\ \multicolumn{2}{l}{} & \multicolumn{3}{l}{} \\ \hline \noalign{\smallskip} \multicolumn{2}{l}{Avg. frame number} & 34 & 68 & 74 \\ \hline \noalign{\smallskip} \multicolumn{2}{l}{Avg. video duration (\textit{s})} & 0.34 & 0.34 & 0.37 \\ \hline \noalign{\smallskip} \multirow{4}{*}{Expression} & Negative & 70 & 88 & 91\\ & Positive & 51 & 32 & 26\\ & Surprise & 43 & 25 & 15\\ & Total & 164 & 145& 132\\ \hline \noalign{\smallskip} \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Ground-truth\\ (index)\end{tabular}} & Onset & Yes & Yes & Yes\\ & Offset & Yes & Yes & Yes\\ & Apex & No & Yes & Yes\\ \hline \noalign{\smallskip} \multicolumn{2}{l}{Number of coders} & 2 & 2 & 3 \\ \hline \noalign{\smallskip} \multicolumn{2}{l}{Inter-coder reliability} & N/A & 0.846 & 0.82 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{SMIC} The Spontaneous Micro-expression (SMIC) dataset comprises 16 subjects with 164 video clips. The camera used to capture the videos was a PixeLINK PL-B774U with a temporal resolution of 100 $fps$. The cropped images have an average spatial resolution of 170 $\times$ 140 pixels, and each video consists of 34 frames on average (viz., 0.34$s$). The ground-truths are labeled by two annotators and include the emotion state, the action unit, and the onset and offset frame indices. However, the apex frame information of each video is not provided. The videos comprise three classes: positive (51 videos), negative (70 videos) and surprise (43 videos).
A three-class baseline recognition accuracy of 48.78\% is reported, employing LBP-TOP as the feature descriptor and an SVM with the Leave-One-Subject-Out Cross-Validation (LOSOCV) protocol. \subsubsection{CASME II} The Chinese Academy of Sciences Micro-Expression (CASME II) dataset consists of 255 videos, elicited from 26 participants. The videos were recorded using a Point Grey GRAS-03K2C camera with a frame rate of 200 $fps$. The average video length is 0.34$s$, equivalent to 68 frames. Each video's emotion label is annotated by two coders, with a reliability of 0.846. All the images are cropped to $170\times 140$ pixels. The ground-truth information provided by the database includes the emotion state, the action unit, and the onset, apex and offset frame indices. The videos are grouped into seven categories: others (99 videos), disgust (63 videos), happiness (32 videos), repression (27 videos), surprise (25 videos), sadness (7 videos) and fear (2 videos). A 5-class baseline recognition result of 63.41\% is reported, for which the feature extractor utilized was LBP-TOP and the classifier was a Support Vector Machine (SVM) with the Leave-One-Video-Out Cross-Validation (LOVOCV) protocol. To perform the cross-database evaluation in the later experiment, some of the videos are recategorized based on the emotion state. This is to cope with the database (i.e., SMIC) that has fewer expression classes. As a result, three main emotion classes are standardized: positive, negative and surprise. The negative class includes the repression and disgust expressions; happiness is regarded as the positive class, while videos with the others expression are not considered in the experiment. \subsubsection{SAMM} The Spontaneous Actions and Micro-Movements (SAMM) dataset contains 159 spontaneous videos, elicited from 32 participants. The videos were recorded using a Basler Ace acA2000-340km camera with a temporal resolution of 200 $fps$. The average number of frames in the micro-expression video sequences is 74 (viz., 0.37$s$). This dataset provides the cropped face video sequences with a spatial resolution of 400 $\times$ 400 pixels. In an attempt to standardize the image resolution so that it matches the other two databases, all the images are resized to 170 $\times$ 140 pixels. Each video is assigned its emotion label, action unit, and the frame indices of the onset, apex and offset. The reliability of the labels marked by 3 coders is 0.82. This database is composed of eight classes of expressions: anger (57 videos), happiness (26 videos), other (26 videos), surprise (15 videos), contempt (12 videos), disgust (9 videos), fear (8 videos) and sadness (6 videos). A recognition accuracy of 80.06\% is achieved with LBP-TOP as the feature extractor and Random Forest as the classifier with the LOSOCV protocol. For the purpose of the experiment, the videos are reclassified into three main classes: negative (i.e., anger, contempt, disgust, fear and sadness), positive (happiness) and surprise. Note that videos with the other expression are neglected. \subsection{Experiment Settings} In the OFF-ApexNet, the input features (i.e., $u$ and $v$) are resized to $[\aleph\times \aleph]=[28 \times28]$. After that, they are processed by the convolutional, pooling and fully connected layers, and finally the output layer. The parameter setting for each layer is tabulated in Table~\ref{table:CNNsetting}. To reduce overfitting, a dropout regularisation operation is applied after the two fully connected layers.
A ratio of 0.5 is set, so that 50\% of the original outputs are kept. The initial learning rate is set to 0.0001 and a set of epoch values (i.e., 1000, 2000, 3000, 4000 and 5000) is examined. Next, using the softmax classification layer described above, cross-database micro-expression recognition is performed, meaning that the videos from the three databases are combined in the experiment. The total number of videos involved in the experiment is therefore 441, made up of SMIC (164 videos), CASME II (145 videos) and SAMM (132 videos). There are three main emotion classes: negative, positive and surprise. Specifically, a LOSOCV protocol is employed to examine the robustness of the proposed framework. The principle of the LOSOCV protocol is to iteratively leave out the videos of a single subject (participant) as the testing set, while the rest of the videos serve as the training set. This procedure is repeated $k$ times, where $k$ is the number of participants in the experiment. Finally, the recognition results for all the participants are averaged to give the final recognition accuracy. Note that videos of the same subject never appear in both the training and testing sets simultaneously; thus, this is a person-independent approach. To deal with the imbalanced class distribution (i.e., 249 negative videos, 109 positive videos and 83 surprise videos), an alternative recognition performance measurement is exploited, namely the F-measure. Concretely, the F-measure is defined as: \begin{equation}\label{eq:f-measure} \text{F-measure} := 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision + Recall}}, \end{equation} for \begin{equation}\label{eq:recall} \text{Recall} := \frac{\text{TP}}{\text{TP + FN}}, \end{equation} and \begin{equation}\label{eq:precision} \text{Precision} := \frac{\text{TP}}{\text{TP + FP}}, \end{equation} \noindent where TP, FN and FP are the numbers of true positives, false negatives and false positives, respectively. \begin{table*}[tb] \begin{center} \caption{OFF-ApexNet configuration for two convolution layers, two pooling layers, two fully connected layers and an output layer} \label{table:CNNsetting} \begin{tabular}{lccccc} \noalign{\smallskip} \hline \noalign{\smallskip} Layer & Filter size & No. of kernels & Stride & Padding & Output size \\ \hline \noalign{\smallskip} Conv 1 & 5 $\times$ 5 $\times$ 1 & 6 & [1,1,1,1] & Same & 28 $\times$ 28 $\times$ 6 \\ \noalign{\smallskip} Pool 1 & 2 $\times$ 2 & - & [1,2,2,1] & Same & 14 $\times$ 14 $\times$ 6 \\ \noalign{\smallskip} Conv 2 & 5 $\times$ 5 $\times$ 6 & 16 & [1,1,1,1] & Same & 14 $\times$ 14 $\times$ 16 \\ \noalign{\smallskip} Pool 2 & 2 $\times$ 2 & - & [1,2,2,1] & Same & 7 $\times$ 7 $\times$ 16 \\ \noalign{\smallskip} FC 1 & - & - & - & - & 1024 $\times$ 1 \\ \noalign{\smallskip} FC 2 & - & - & - & - & 1024 $\times$ 1 \\ \noalign{\smallskip} Output & - & - & - & - & 3 $\times$ 1 \\ \hline \end{tabular} \end{center} \end{table*}
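To make the configuration in Table~\ref{table:CNNsetting} concrete, the sketch below assembles the two-stream network in Keras. This is our illustration of the described architecture, not the authors' released code: the framework choice, weight initialisation and whether the two streams are pre-trained separately before merging or trained jointly (as sketched here) are assumptions, while the layer shapes, dropout ratio and learning rate follow the settings above.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def stream(name):
    # One branch of OFF-ApexNet: two conv + pool blocks
    # (per the configuration table).
    inp = layers.Input(shape=(28, 28, 1), name=name)   # one flow component
    x = layers.Conv2D(6, 5, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D(2)(x)                      # 14 x 14 x 6
    x = layers.Conv2D(16, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(2)(x)                      # 7 x 7 x 16
    return inp, layers.Flatten()(x)

u_in, u_feat = stream("u")
v_in, v_feat = stream("v")
x = layers.Concatenate()([u_feat, v_feat])             # merge the two streams
x = layers.Dropout(0.5)(layers.Dense(1024, activation="relu")(x))
x = layers.Dropout(0.5)(layers.Dense(1024, activation="relu")(x))
out = layers.Dense(3, activation="softmax")(x)         # neg/pos/surprise

model = Model([u_in, v_in], out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}
Training would then loop over the 68 LOSOCV folds by grouping samples on subject identity (e.g., scikit-learn's \texttt{LeaveOneGroupOut} yields such splits), fitting a fresh model per fold.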
\section{Results and Discussion} \label{sec:results} \subsection{Recognition Performance} To the best of our knowledge, this is the first attempt to evaluate a feature extractor on three micro-expression databases. Table~\ref{table:3db} reports the micro-expression recognition performance, in terms of both accuracy and F-measure, of the OFF-ApexNet method for various epoch sizes. Concisely, all three databases (i.e., SMIC, CASME II and SAMM) are merged and treated as a single database. Therefore, the LOSOCV classification is applied for $k=$ 68 folds, of which 16 come from SMIC, 24 from CASME II and 28 from SAMM. From Table~\ref{table:3db}, it is noticed that the OFF-ApexNet approach achieves the highest accuracy of 74.60\% and F-measure of 0.7104 when the epoch value is set to 3000. \setlength{\tabcolsep}{5pt} \begin{table}[h] \begin{center} \caption{Overall micro-expression recognition accuracy and F-measure evaluated on SMIC, CASME II and SAMM databases using the proposed method, OFF-ApexNet} \label{table:3db} \begin{tabular}{ccc} \noalign{\smallskip} \hline \noalign{\smallskip} Epoch & Accuracy (\%) & F-measure\\ \hline \noalign{\smallskip} 1000 & 72.56 & .6905\\ \noalign{\smallskip} 2000 & 73.47 & .7027\\ \noalign{\smallskip} 3000 & \textbf{74.60} & \textbf{.7104}\\ \noalign{\smallskip} 4000 & 72.79 & .6918\\ \noalign{\smallskip} 5000 & 73.70 & .6998\\ \hline \end{tabular} \end{center} \end{table} On the other hand, Table~\ref{table:compare} shows a comparison of the micro-expression recognition performance of the proposed method (i.e., OFF-ApexNet) with other state-of-the-art feature extraction methods, evaluated on the SMIC, CASME II and SAMM databases individually. In particular, the previous research works (i.e., methods \#1 to \#11) train and test on a single database separately. For methods \#1 to \#11, the number of expressions to be predicted is based on the expression categories suggested in the original papers \citep{casme2, smic, samm}. Some expressions have quite few videos (i.e., less than 10 samples); those videos are neglected in the experiments. Concisely, there are a total of three expressions (i.e., positive, negative and surprise) in SMIC, five expressions (i.e., disgust, happiness, repression, surprise and others) in CASME II and five expressions (i.e., anger, happiness, contempt, surprise and other) in SAMM. In Table~\ref{table:compare}, method \#1 (i.e., LBP-TOP) is commonly known as the baseline approach in the automated micro-expression recognition domain; the recognition results reported are obtained by reproducing the experiments for each database. Since the SAMM database was released very recently, methods \#2 to \#11 have not been examined on this database.
It can be seen that method \#11 (i.e., Bi-WOOF) outperforms the feature descriptors of methods \#1 to \#10. As such, the Bi-WOOF approach is adopted for later comparison with the proposed OFF-ApexNet method. To establish a fair comparison of the effectiveness of the proposed method, two state-of-the-art approaches (i.e., LBP-TOP and Bi-WOOF) are selected and the experimental configurations are set to be similar across the compared methods. More precisely, the videos from the three databases (i.e., SMIC, CASME II and SAMM) are recategorized into exactly three expressions (i.e., positive, negative and surprise). The resulting recognition performance is presented as methods \#12 and \#13. In particular, for the proposed OFF-ApexNet approach (i.e., \#14), the feature extraction process follows the procedure described in Section~\ref{sec:algorithm}. First, the OFF-ApexNet model is trained on the three databases as a whole using the LOSOCV strategy, then tested on each database separately. It is observed that, among all the methods shown in Table~\ref{table:compare}, the OFF-ApexNet method achieves the best recognition results across all three databases. \setlength{\tabcolsep}{5pt} \begin{table*}[tb] \begin{center} \caption{Comparison of micro-expression recognition performance in terms of \textit{Acc} (Accuracy (\%)) and \textit{F-mea} (F-measure) on the SMIC, CASME II and SAMM databases for the state-of-the-art feature extraction methods, and the proposed method} \label{table:compare} \begin{tabular}{llcccccccc} \noalign{\smallskip} \hline \noalign{\smallskip} & \multirow{2}{*}{Methods} & \multicolumn{2}{c}{SMIC} & \multicolumn{2}{c}{CASME II} & \multicolumn{2}{c}{SAMM} \\ \cline{3-8} \noalign{\smallskip} & & Acc & F-mea & Acc & F-mea & Acc & F-mea \\ \noalign{\smallskip} \hline \noalign{\smallskip} & & \multicolumn{2}{c}{3 classes} & \multicolumn{2}{c}{5 classes} & \multicolumn{2}{c}{5 classes} \\ \noalign{\smallskip} \hline \noalign{\smallskip} 1 & \begin{tabular}{@{}l@{}}LBP-TOP \\ \citep{smic,casme2,samm}\end{tabular} & 45.73 & .4600 & 39.68 & .3589 & 35.56 & .1768 \\ \noalign{\smallskip} 2 & OSF \citep{liong2014optical} & 31.98 & .4461 & - & - & - & - \\ \noalign{\smallskip} 3 & OSW \citep{liong2014subtle} & 53.05 & .5431 & 41.70 & .3820 & - & - \\ \noalign{\smallskip} 4 & LBP-SIP \citep{wang2014lbp} & 54.88 & .5502 & 43.32 & .3976 & - & - \\ \noalign{\smallskip} 5 & MRW \citep{oh2015monogenic} & 34.15 & .3451 & 46.15 & .4307 & - & - \\ \noalign{\smallskip} 6 & STLBP-IP \citep{huang2015facial} & 57.93 & .5829 & 59.51 & .5679 & - & - \\ \noalign{\smallskip} 7 & FDM \citep{xu2017microexpression} & 54.88 & .5380 & 41.96 & .2972 & - & - \\ \noalign{\smallskip} 8 & \begin{tabular}{@{}l@{}}Sparse Sampling \\ \cite{le2017sparsity} \end{tabular} & 58.00 & .6000 & 49.00 & .5100 & - & - \\ \noalign{\smallskip} 9 & STCLQP \citep{huang2016spontaneous} & 64.02 & .6381 & 58.39 & .5836 & - & - \\ \noalign{\smallskip} 10 & MDMO \citep{liu2016main} & - & - & 44.25 & .4416 & - & - \\ \noalign{\smallskip} 11 & Bi-WOOF \citep{liong2018less} & 61.59 & .6110 & 57.89 & .6125 & - & - \\ \noalign{\smallskip} \hline \noalign{\smallskip} & & \multicolumn{6}{c}{3 classes} \\ \noalign{\smallskip} \hline \noalign{\smallskip} \noalign{\smallskip} 12 & LBP-TOP & 38.41 & .3875 & 60.00 & .5222 & 59.09 & .3640 \\ \noalign{\smallskip} 13 & Bi-WOOF \citep{liong2018less} & 61.59 & .6110 & 80.69 & .7902 & 58.33 & .3970 \\
\noalign{\smallskip} 14 & \textbf{OFF-ApexNet} & \textbf{67.68} & \textbf{.6709} & \textbf{88.28} & \textbf{.8697} & \textbf{68.18} & \textbf{.5423}\\ \hline \end{tabular} \end{center} \end{table*} \subsection{Analysis and Discussion} In Table~\ref{table:compare}, it can be seen that the accuracy result on the SMIC database is the lowest among the three databases when utilizing OFF-ApexNet. This might be because the apex frames of each video are spotted using an automatic apex spotting system, instead of utilizing the ground-truths. According to~\cite{liong2015automatic}, the average frame difference between the detected and ground-truth apex is 13 frames. Thus, extracting the features from an imprecise apex frame can affect the classification performance. For the SAMM database, the F-measure is only 0.5423. This is due to the imbalanced emotion class distribution, summarized as ratios in Table~\ref{table:distribution}. The SAMM database has the most severe data imbalance issue, with only 10\% surprise videos and 20\% positive videos. It is also observed that, although SMIC has a balanced data distribution, the recognition performance (i.e., accuracy and F-measure) exhibited is lower than on CASME II. This is possibly because the most expressive frames in the SMIC database are not captured by the camera, as it has a much lower frame rate (i.e., 100 $fps$) compared to CASME II (200 $fps$). As a consequence, the precise apex frame may fail to be spotted in such circumstances. \setlength{\tabcolsep}{5pt} \begin{table}[tb] \begin{center} \caption{Emotion ratio distribution of the three databases} \label{table:distribution} \begin{tabular}{lccccc} \noalign{\smallskip} \cline{2-4} \noalign{\smallskip} & SMIC & CASME II & SAMM \\ \noalign{\smallskip} \hline \noalign{\smallskip} Negative & 4 & 6 & 7 \\ Positive & 3 & 2 & 2 \\ Surprise & 3 & 2 & 1 \\ \hline \end{tabular} \end{center} \end{table} To further analyze the three-class recognition performance, confusion matrices are computed and shown in Tables~\ref{table:cf_all} to~\ref{table:cf_samm}. Generally, the confusion matrix is a typical measurement to illustrate the classification rate for each expression. The confusion matrix in Table~\ref{table:cf_all} indicates the overall performance, for which all three databases are treated as a single database for training and testing purposes. The other three confusion matrices (i.e., Tables~\ref{table:cf_smic} to~\ref{table:cf_samm}) are for testing on each database independently. It can be seen that the negative emotion always exhibits the highest prediction rate compared to positive and surprise. The main reason is that the negative emotion is the dominant class across the three databases (refer to Table~\ref{table:distribution}).
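Row-normalized matrices of the kind shown in Tables~\ref{table:cf_all} to~\ref{table:cf_samm}, together with the F-measure of Section~\ref{sec:experiment}, can be reproduced from per-video predictions with standard tooling. The snippet below is our illustration on dummy labels; the macro averaging over the three classes is an assumption about how the per-class scores are combined.
\begin{verbatim}
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

y_true = np.array([0, 0, 0, 1, 1, 2, 2])  # 0=negative, 1=positive, 2=surprise
y_pred = np.array([0, 0, 1, 1, 0, 2, 0])
cm = confusion_matrix(y_true, y_pred, normalize="true")  # rows sum to one
f1 = f1_score(y_true, y_pred, average="macro")           # mean per-class F1
print(cm.round(2), round(float(f1), 4))
\end{verbatim}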
\setlength{\tabcolsep}{5pt} \begin{table}[tb] \begin{center} \caption{Confusion matrix of OFF-ApexNet for the recognition task on all the databases} \label{table:cf_all} \begin{tabular}{lccccc} \noalign{\smallskip} \cline{2-4} \noalign{\smallskip} & Negative & Positive & Surprise \\ \noalign{\smallskip} \hline \noalign{\smallskip} Negative &\bf.84 & .11 & .05 \\ Positive & .35 &\bf.58 & .07\\ Surprise & .20 & .11 &\bf.69 \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{5pt} \begin{table}[tb] \begin{center} \caption{Confusion matrix of OFF-ApexNet for the recognition task on the SMIC database} \label{table:cf_smic} \begin{tabular}{lccccc} \noalign{\smallskip} \cline{2-4} \noalign{\smallskip} & Negative & Positive & Surprise \\ \noalign{\smallskip} \hline \noalign{\smallskip} Negative &\bf.76 & .17 & .07 \\ Positive & .25 &\bf.65 & .10 \\ Surprise & .28 & .14 &\bf.58 \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{5pt} \begin{table}[tb] \begin{center} \caption{Confusion matrix of OFF-ApexNet for the recognition task on the CASME II database} \label{table:cf_casme} \begin{tabular}{lccccc} \noalign{\smallskip} \cline{2-4} \noalign{\smallskip} & Negative & Positive & Surprise \\ \noalign{\smallskip} \hline \noalign{\smallskip} Negative &\bf.93 & .07 & 0 \\ Positive & .31 &\bf.66 & .03\\ Surprise & 0 & 0 &\bf 1 \\ \hline \end{tabular} \end{center} \end{table} \setlength{\tabcolsep}{5pt} \begin{table}[tb] \begin{center} \caption{Confusion matrix of OFF-ApexNet for the recognition task on the SAMM database} \label{table:cf_samm} \begin{tabular}{lccccc} \noalign{\smallskip} \cline{2-4} \noalign{\smallskip} & Negative & Positive & Surprise \\ \noalign{\smallskip} \hline \noalign{\smallskip} Negative &\bf.81 & .10 & .09 \\ Positive & .58 &\bf.35 & .08\\ Surprise & .33 & .20 &\bf.47 \\ \hline \end{tabular} \end{center} \end{table}
On the other hand, instead of utilizing both the horizontal and vertical optical flow components as the input data to the OFF-ApexNet approach, the performance results for the individual flow components are also evaluated. A comparison of the choice of input features is tabulated in Table~\ref{table:uv}, with a variation of the epoch values. Concretely, $u$ takes into account only the horizontal optical flow features, while $v$ considers only the vertical optical flow features. $u+v$ refers to the proposed OFF-ApexNet method, which fuses the $u$ and $v$ motion information as the input data. It is observed that the OFF-ApexNet approach exhibits consistently higher performance results compared to both the $u$-only and $v$-only methods. From all the recognition performance shown, it is believed that OFF-ApexNet delivers satisfactory recognition performance on the three micro-expression databases.
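The $u$-only and $v$-only baselines in Table~\ref{table:uv} amount to dropping one branch of the network. A minimal single-stream variant, under the same illustrative assumptions as the earlier Keras sketch, is:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(28, 28, 1))   # a single flow component (u or v)
x = layers.Conv2D(6, 5, padding="same", activation="relu")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(16, 5, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Flatten()(x)
x = layers.Dropout(0.5)(layers.Dense(1024, activation="relu")(x))
x = layers.Dropout(0.5)(layers.Dense(1024, activation="relu")(x))
out = layers.Dense(3, activation="softmax")(x)
single_stream = Model(inp, out)         # trained on u alone or v alone
\end{verbatim}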
\setlength{\tabcolsep}{5pt} \begin{table*}[tb] \begin{center} \caption{Comparison of the micro-expression recognition accuracy and F-measure when the input data to the network are: $u$, the horizontal optical flow features; $v$, the vertical optical flow features; and $u+v$, both the horizontal and vertical optical flow.} \label{table:uv} \begin{tabular}{ccccccc} \noalign{\smallskip} \hline \noalign{\smallskip} \multirow{2}{*}{Epoch} & \multicolumn{2}{c}{$u$} & \multicolumn{2}{c}{$v$} & \multicolumn{2}{c}{$u$+$v$}\\ \cline{2-7} \noalign{\smallskip} & Accuracy (\%) & F-measure & Accuracy (\%) & F-measure & Accuracy (\%) & F-measure\\ \hline \noalign{\smallskip} 1000 & 67.35 & 0.6224 & 66.89 & 0.6199 & 72.56 & 0.6905\\ \hline \noalign{\smallskip} 2000 & 68.03 & 0.6307 & 67.57 & 0.6253 & 73.47 & 0.7027\\ \hline \noalign{\smallskip} 3000 & 66.21 & 0.6134 & 65.31 & 0.6047 & \textbf{74.60} & \textbf{0.7104}\\ \hline \noalign{\smallskip} 4000 & 67.35 & 0.6287 & 65.99 & 0.6111 & 72.79 & 0.6918\\ \hline \noalign{\smallskip} 5000 & 66.67 & 0.6220 & 66.21 & 0.6150 & 73.70 & 0.6998\\ \hline \end{tabular} \end{center} \end{table*} \section{Conclusion} \label{sec:conclusion} In a nutshell, a novel feature extraction approach, Optical Flow Features from Apex frame Network (OFF-ApexNet), is introduced to recognize micro-expressions. As its name implies, it combines both handcrafted features (i.e., optical flow derived components) and a fully data-driven architecture (i.e., a convolutional neural network). First, the horizontal and vertical optical flow features are computed from the onset and apex frames. Then, the features are fed into a neural network to further highlight the significant expression information. The utilization of both handcrafted and data-driven features achieves promising performance results on three recent state-of-the-art databases, namely SMIC, CASME II and SAMM. Note that this is the first attempt at cross-dataset validation on three databases in this domain. A highest three-class classification accuracy of 74.60\% was achieved, with an F-measure of 0.71, when considering the three databases as a whole. The contributions of this work point to some avenues for further research. For instance, rather than utilizing the optical flow feature, other feature extractors (i.e., LBP, HOG, SIFT, etc.) can be applied to better represent the motion details. As a result, more valuable input data would be passed to the convolutional neural network architecture for feature enrichment and selection, thereby improving the classification performance. Besides, attention can be devoted to handling the data imbalance issues in these databases so that the proposed methods can lead to consistently good recognition results across all the expressions.
\section{Introduction} \subsection{The Einstein-Euler system} In this paper, we consider the Einstein-relativistic Euler equations \begin{subequations}\label{Ein-PF} \begin{align} \text{Ric}[\gb]_{\mu\nu}-\frac12\gb_{\mu\nu}\text{R}[\gb]&=\Tb_{\mu\nu},\label{Ein} \\\label{rel-Eul} \nablab_\mu \Tb^{\mu\nu}&=0,\\ \Tb^{\mu\nu} &= (\rhob + \pb)\vb^\mu \vb^\nu + \pb \gb^{\mu\nu}.\label{EnMom} \end{align} \end{subequations} Here $\rhob$ is the proper energy density of the fluid, $\pb$ the fluid pressure and $\vb^\mu$ the fluid 4-velocity, which we assume is normalized by \begin{equation} \label{vb-norm} \gb_{\mu\nu}\vb^\mu \vb^\nu = -1. \end{equation} The system \eqref{Ein-PF} models a four-dimensional fluid-filled spacetime $(\mathcal{M}, \gb)$, whose time-evolution is determined by the coupled interaction between the spacetime geometry, governed by the Einstein equations \eqref{Ein}, and the fluid, governed by the relativistic Euler equations \eqref{rel-Eul}. The Einstein--relativistic Euler equations are particularly relevant in cosmology, where they are used to model the Universe on astrophysical scales (see e.g. \cite{Ch31,OS39}). For simplicity, we henceforth drop ``relativistic'' when referring to the relativistic Euler equations. From an analytical point of view, the PDE system \eqref{Ein-PF} presents challenging problems since solutions of both the Einstein equations and the Euler equations may develop singularities. Singularities arising in the spacetime can occur in the context of gravitational collapse to black holes or a Big Bang, while singularities in solutions of the Euler equations correspond to fluid shock formation \cite{Ch07}. In the cosmological context, singularity formation for the coupled Einstein-Euler equations can be interpreted as the onset of structure formation in the large scale evolution of spacetime \cite{BiBuKa92}. Structure formation in cosmology describes the process by which regions of high matter density emerge from an initially homogeneous matter distribution, eventually producing structures such as stars, galaxies and galaxy clusters. \subsection{Previous work} To close the system \eqref{Ein-PF}, we consider the linear, barotropic equation of state \begin{equation} \label{eos-lin} \pb = K\rhob. \end{equation} The equation of state parameter $K$ is a constant and gives the fluid speed of sound via $c_s = \sqrt{K}$. On physical grounds, $0 \leq K \leq 1$; however, the most common applications in cosmology use $K \in [0,1/3]$. The case $K=0$ corresponds to a dust fluid, while $K=1/3$ corresponds to a radiation fluid. In the present article we consider cosmological spacetimes with Lorentzian metrics of the form \begin{equation}\label{background-cosmology} -d\tb^2+a(\tb)^2 g_0, \end{equation} where $(M,g_0)$ is a closed Riemannian 3-manifold without boundary. The increasing function $a(\tb)$ is the \emph{scale factor}. We say the geometry exhibits \textit{accelerated expansion} if $\ddot{a}>0$, \textit{linear expansion} if $a(\tb) = \tb$ and \textit{decelerated expansion} if $\ddot{a}<0$. Note that the direction of cosmological expansion corresponds to $a(\tb)\rightarrow \infty$ as $\tb\nearrow\infty$. If \eqref{background-cosmology} solves the Einstein equations, then it is a solution to the Friedmann equations describing an isotropic and (locally) homogeneous cosmology.
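For the power-law scale factors $a(\tb)=\tb^{\alpha}$, $\alpha>0$, that appear repeatedly below, this trichotomy reduces to the sign of $\alpha-1$, since $\ddot{a}=\alpha(\alpha-1)\tb^{\alpha-2}$. The following minimal SymPy sketch (an illustration added here, not part of the analysis) confirms this classification:
\begin{verbatim}
# Power-law scale factors: accelerated iff alpha > 1, linear iff
# alpha = 1, decelerated iff 0 < alpha < 1.
import sympy as sp

tbar, alpha = sp.symbols('tbar alpha', positive=True)
addot = sp.diff(tbar**alpha, tbar, 2)

# Second derivative of the scale factor.
assert sp.simplify(addot - alpha*(alpha - 1)*tbar**(alpha - 2)) == 0

# Sign of addot at tbar = 1: decelerated, linear, accelerated.
for val, expected in [(sp.Rational(1, 2), -1), (1, 0), (2, 1)]:
    assert sp.sign(addot.subs(alpha, val).subs(tbar, 1)) == expected
\end{verbatim}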
A particularly important example for the present paper is the \textit{Milne model}, where $a(\tb)=\tb$ and $(M, g_0)$ is a negative Einstein space satisfying $\text{Ric}[g_0]=-\frac29 g_0$. \subsubsection{Fluid stabilization from cosmological expansion} On a fixed Minkowski spacetime, small perturbations of a class of homogeneous solutions to the Euler equations \eqref{rel-Eul} are known to form shocks in finite time \cite{Ch07}. This result was shown for a large class of equations of state including \eqref{eos-lin}. By contrast, on cosmological backgrounds such as \eqref{background-cosmology}, accelerated spacetime expansion is known to suppress shock formation in fluids. This was first discovered in the Newtonian cosmological setting for a class of exponentially expanding spacetimes in \cite{BRR94}. In the context of Einstein-Euler, stability of homogeneous solutions on exponentially expanding cosmological spacetimes was first studied in \cite{RoSp13}, with several later works \cite{Fr17,HaSp15, LiuOliynyk:2018b,LiuOliynyk:2018a,LVK13,MarshallOliynyk:2022,Oliynyk16, Oliynyk:2021,Sp12}. For various other results on fluid stabilization in the regime of accelerated expansion, we refer to \cite{LeFlochWei:2021,MondalPuskar2022,Wei:2018}. We mention also earlier work \cite{Ri08} concerning the stability of solutions to the Einstein equations undergoing accelerated expansion. To go below accelerated expansion, i.e. when $\dot{a}>0$ but $\ddot{a}\leq 0$, it is illuminating to first study the fluid behaviour on fixed metrics of the form \eqref{background-cosmology}. Fluid stabilization depends on the spacetime expansion rate, the fluid parameter $K$, and the geometry and topology of the expansion-normalized spatial manifold $(M,g_0)$. Roughly speaking, larger speeds of sound and slower expansion rates tend to facilitate singularity formation, while smaller speeds of sound and faster expansion rates suppress it. To be concrete\footnote{We emphasise though that \cite{Sp13} considers more general conditions on the scale factor than we outline in this paragraph.}, consider homogeneous solutions to the Euler equations \eqref{rel-Eul} on fixed cosmological spacetimes undergoing power-law inflation, i.e. $a(\tb)= \tb^{\alpha}$ for $\alpha>0$ with $(M,g_0)$ flat. Note that $\alpha>1$ corresponds to accelerated expansion. In the case of dust ($K=0$), \cite{Sp13} showed that small perturbations of the homogeneous fluid solutions are globally regular for all $\alpha>1/2$. If $K\in(0,1/3)$, work by some of the present authors showed that homogeneous fluid solutions are globally regular under irrotational perturbations for $\alpha=1$ \cite{FOW:2021}. The case $\alpha>1$ without the irrotational restriction was shown in \cite{Sp13}. Finally, if $K=1/3$, then \cite{Sp13} remarkably showed that radiation fluids do not stabilize under linear expansion, i.e. the fluid stabilizes if $\alpha>1$, while shocks develop in finite time if $\alpha=1$. Moving next to the coupled Einstein-Euler equations \eqref{Ein-PF}, the only stability result below accelerated expansion is in the case of dust ($K=0$) and linear expansion. More precisely, some of the present authors showed the stability of the Milne model as a solution to the Einstein-dust equations \cite{FOfW:2021}.
We mention for context that the Milne model is known to be a stable solution to the Einstein vacuum equations \cite{AnderssonMoncrief:2011} as well as a solution to several Einstein-matter models \cite{AF20, BFK19, Wang-KG, FW21, BarzegarFajman20pub}. Note that the case of negative spatial curvature is the only known class of solutions where we can study the fluid dynamics with gravitational backreaction in the regime of linear expansion. For the other FLRW models the long-time dynamics are either recollapse towards a big-crunch singularity (in the case of positive spatial curvature) or a matter-dominated decelerated expansion (in the case of toroidal spatial topology with vanishing curvature) \cite{Re08}. Finally, we emphasise that \textit{without} the backreaction of the fluid on the spacetime, \cite{Sp13} showed dust stability with \textit{de}celerated expansion. \textit{Notation:} Our indexing convention is as follows: lower case Greek letters, e.g.~$\mu, \nu, \gamma$, will label spacetime coordinate indices that run from $0$ to $3$ while lower case Latin letters, e.g.~$i, j, k$, will label spatial coordinate indices that run from $1$ to $3$. \subsection{Results and technical advances in the present paper} This paper is roughly divided into three parts. In the first part, outlined in Section \ref{intro:transf}, we introduce a \emph{novel transformation} that allows us to treat fluids with non-vanishing rotation. In the second part, see Section \ref{intro:middlesec} below, we apply our transformation to show fluid stability with $K\in(0,1/3)$ on \emph{linearly expanding spacetimes}: \begin{thm}\label{thm:intro1} The canonical homogeneous fluid solutions to \eqref{rel-Eul} with a linear equation of state \eqref{eos-lin} and $K \in (0,1/3)$ are nonlinearly stable in the expanding direction of fixed linearly-expanding FLRW spacetimes of the form $$(\mathbb{R}\times\mathbb{T}^3, -d\bar{t}^2 + \bar{t}^2 \delta_{ij}dx^idx^j).$$ \end{thm} The significance of Theorem \ref{thm:intro1} is that it removes the restriction to irrotational fluid perturbations in \cite{FOW:2021}. Finally, in the third part, see Section \ref{intro:finalsec} below, we consider a fully nonlinear problem including backreaction on spacetimes with negative spatial Einstein geometries. We prove the following theorem: \begin{thm}\label{thm:intro2} All four-dimensional FLRW spacetime models with compact spatial slices and negative spatial Einstein geometry are future stable solutions of the Einstein-Euler equations \eqref{Ein-PF} with linear equation of state \eqref{eos-lin} and $K\in (0,1/3)$. \end{thm} \subsubsection{Transformation of the fluid variables}\label{intro:transf} The key advance in this paper is a new version of the Fuchsian method, originally introduced in \cite{FOW:2021}, which is designed to deal with general fluids without the irrotationality restriction. As first identified in \cite{Oliynyk16}, the Euler equations \eqref{rel-Eul} can be written as a symmetric hyperbolic Fuchsian PDE system of the form $$ B^0 \partial_t U + B^k \partial_k U = H.$$ Note that the time function $t$ appearing here is related to the cosmological time function $ \tb $ by $ t=1/\tb $, and hence timelike infinity is located at $ t\to 0 $. Precise definitions are given in Section \ref{sec:transf}; however, the main idea is that the unknown $U = (\zeta, u^i)^{\tr}$ encodes the fluid energy density and velocity.
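As a quick illustration of the time compactification, the following SymPy sketch (added here for the reader's convenience; it is not part of the analysis) verifies that $t=1/\tb$ maps future timelike infinity $\tb\to\infty$ to $t\to 0$, and that $\partial_{\tb}$ picks up the factor $-t^{2}$ which, after rescaling, is responsible for the singular $1/t$ coefficients appearing in the Fuchsian systems below.
\begin{verbatim}
# Check of the time compactification t = 1/tbar.
import sympy as sp

tbar = sp.symbols('tbar', positive=True)
t = 1 / tbar  # compactified time

# dt/dtbar = -1/tbar**2 = -t**2, so d/dtbar = -t**2 d/dt.
assert sp.simplify(sp.diff(t, tbar) + t**2) == 0

# Future timelike infinity tbar -> oo corresponds to t -> 0.
assert sp.limit(t, tbar, sp.oo) == 0
\end{verbatim}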
We then transform to new unknowns $Z= (\psi,z^j)^{\tr}$ by $$ (\zeta, u^i)=\bigl( a(\psi,|z|^2_{\gc}), b(\psi)z^i\bigr).$$ The new variables obey the following PDE: \begin{equation}\label{eq:intro_mod_E_EoM} A^0\del{t}Z + \frac{1}{t}A^k \Dc_k Z = H', \end{equation} where $H'$ is the source term given in Lemma \ref{lem:transf-Eul-general}. By design, the functions $a, b$ are chosen so that the matrices $A^0, A^k$ satisfy the relations \begin{equation}\label{A-relations} |\Pbb A^0 \Pbb^\perp|_{\op} = |\Pbb^\perp A^0 \Pbb|_{\op} = \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr), \qquad |\Pbb^\perp A^k \Pbb^\perp |_{\op} = \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr). \end{equation} In these expressions $ \beta $ represents the shift vector of the dynamical metric and $ \Pbb $ and $ \Pbb^{\perp} $ are projections that single out specific matrix elements. For a fixed expanding FLRW background metric, or small perturbations thereof, the geometric quantities in $ H^{\prime} $ are negligible and thus the source term can be written as \begin{equation*} H'= c(K)\Pbb Z + \text{error}. \end{equation*} Here $c(K)$ is a strictly positive constant, thanks to our choice of equation of state and $K\in(0,1/3)$, and the error is of order $ |z|_{\gc}^{2} $. Extracting this structure on both sides of the equation \eqref{eq:intro_mod_E_EoM} is crucial for the analysis of the system. For further details we refer to Remark \ref{rem:conjeffect}. \subsubsection{Application to fluids on linearly expanding spacetimes}\label{intro:middlesec} In Section \ref{section:fixed_MFLRW} we apply our novel transformation to establish small data stability for the homogeneous solutions to the Euler equations on the fixed Milne-like (i.e.~linearly expanding, spatially flat) FLRW background. After the application of the transformation described above, we show, due to \eqref{A-relations} as well as the decomposition of $ H^{\prime}$, that the conditions of the Fuchsian global existence theorem from \cite[Thm.~4.5]{FOW:2021} are satisfied. Note that Theorem \ref{thm:intro1} is also true for linearly expanding FLRW-type spacetimes $ (\mathbb{R}\times M,-d\bar{t}^{2}+\bar{t}^{2}g_0) $, where $ (M,g_0) $ is an arbitrary closed Riemannian manifold. This extension is discussed in Remark \ref{arbitrarygeometry}. Finally, we expect that our method is robust enough to be applied to spacetime geometries and expansion rates other than those considered here. \subsubsection{Application to Einstein-fluid spacetimes with backreaction}\label{intro:finalsec} In Sections \ref{EinEulBR}--\ref{EinEulBR-3} we prove Theorem \ref{thm:intro2}. The main task is to apply the aforementioned transformation to the setting of a dynamical spacetime geometry. This requires controlling various error terms based on suitable decay properties of the perturbations of the spacetime geometry. A key challenge arises from the non-trivial spatial curvature of the expansion-normalized spatial metric. In particular, unlike the flat scenario considered in \cite{FOW:2021} and Theorem \ref{thm:intro1}, the analysis leads to certain non-vanishing `problematic' terms that arise from the commutation of curved covariant derivatives. For example, the lowest order problematic term is \begin{equation*} \int_{M}\left( g^{mn}\nabla_{m}z^{a}[\nabla_{n},\nabla_{a}]\psi+g^{mn}\nabla_{m}\psi[\nabla_{n},\nabla_{a}]z^{a} \right). \end{equation*} Up to a negligible error term, this reduces to \begin{equation}\label{eq:intro_baddie} \frac{2}{9}\int_{M}z^{a}\nabla_{a}\psi.
\end{equation} Such a nonlinearity cannot be absorbed into a generic error term due to the structure of the energy estimates. Schematically, the energy estimate looks like \begin{equation*} \partial_{T}\left( \| \psi \|_{H^1(M)} + \| z\|_{H^1(M)}\right) \lesssim -C(K)\| z\|_{H^1(M)} + \eqref{eq:intro_baddie} + \text{error}, \end{equation*} for some fixed $ C(K)>0 $ involving the equation of state parameter $K$. The expression \eqref{eq:intro_baddie} is, however, only linear in $ z $ and hence it cannot be absorbed into the negative definite term. To overcome this obstacle, we set up in Section \ref{corr-en} a correction mechanism for the original Fuchsian energy functionals for the fluid as given in Definition \ref{def:energy}. This technique is usually applied to geometric wave equations and consists of adding an indefinite term to a standard geometric $L^2$-energy, which leads to a cancellation of the problematic terms in the energy equality while preserving coercivity of the energy (see e.g. \cite{AnderssonMoncrief:2011}). At the lowest order, our method boils down to adding the correction \begin{equation}\label{eq:intro_correction} -\frac{1}{9}\int_{M}|z|^{2}. \end{equation} Note that this correction term is of order zero, i.e.~one order lower than the energy. We then compute the time evolution of the correction term \eqref{eq:intro_correction} using the equations of motion \eqref{eq:intro_mod_E_EoM}. Properties of the matrix $A^k$ and projection term $\Pbb$ (given precisely in \eqref{Milne-Ckeq}) yield a term of the form \begin{equation*} -\frac{2}{9}\int_{M} \langle \Pbb Z, \Pbb C \nabla Z \rangle =-\frac{2}{9}\int_{M} z^{a}\nabla_{a}\psi \end{equation*} in the energy estimates, which exactly cancels the problematic term \eqref{eq:intro_baddie}. Note that in this case the coefficient in front of the correction is of small modulus and hence does not compromise the equivalence of the corrected energies to the original ones. For details on how to treat more complicated higher-order terms, see Proposition \ref{finalenergy}. \subsection{Conclusions} Given a spacetime of the form \eqref{background-cosmology} with $a(\tb)\approx \tb^\alpha$, we define, in this paper, the threshold rate $\alpha_c(K)$ to be the critical rate above which fluid stabilization occurs and below which fluid singularities form from arbitrarily small perturbations of homogeneous solutions. \textit{A priori} the threshold rate also depends on the presence of gravitational backreaction and on whether general fluids are considered or only irrotational ones. From the perspective of cosmology, the threshold rate sets limits on the epoch of cosmological evolution where structure formation could have occurred. The present paper shows that fluid stabilization occurs in the presence of gravitational backreaction in the regime of linear expansion for the full range $K \in (0,1/3)$. Thus, together with \cite{FOfW:2021}, we obtain \begin{equation*} \alpha_c(K)<1 \qquad\mbox{for}\quad K \in [0, 1/3). \end{equation*} Since it is known in addition that fluids with $0\leq K<\frac13$ stabilize when the geometry undergoes accelerated expansion, we conclude that structure formation in a cosmological spacetime filled with non-radiation relativistic fluids requires an epoch of \textit{de}celerated expansion. Finally, we mention that for fluids with $0\leq K<\frac13$ the present paper completes the analysis for linear expansion rates in the neighborhood of the canonical Friedmann models. \subsection{Acknowledgements} D.F. and M.O.
acknowledge support by the Austrian Science Fund (FWF): [P 34313-N]. Z.W. thanks the African Institute of Mathematical Sciences Rwanda for hospitality during part of the work on this paper. \section{The Fluid Transformation}\label{sec:transf} In this section we introduce the transformation of the fluid variables. \subsection{The conformal Euler equations} Rather than working with the physical variables $\gb_{\mu\nu}$ and $\vb^\mu$, we begin by defining conformal variables. \begin{Def}[Conformal variables $g, v^\mu, \Gamma, \nabla$] For $\Psi$ an arbitrary scalar, define the \textit{conformal metric} $g_{\mu\nu}$ and the \textit{conformal four-velocity} $v^\mu$ by \begin{equation} \label{g-v-def} \gb_{\mu\nu}= e^{-2\Psi}g_{\mu\nu}, \quad v^\mu = e^{-\Psi}\vb^\mu. \end{equation} Let $\Gammab^\gamma_{\mu\nu}$ and $\Gamma^\gamma_{\mu\nu}$ denote the Christoffel symbols of $\gb_{\mu\nu}$ and $g_{\mu\nu}$, respectively, and let $\nabla_\mu$ denote the Levi-Civita connection of $g_{\mu\nu}$. \end{Def} \begin{lem}[The conformal Euler equations] The Euler equations \eqref{rel-Eul} can be rewritten in terms of conformal variables as \begin{align} v^\gamma\del{\gamma}\rhob + (\rhob+\pb) L^\gamma_j \delta^j_\nu \nabla_\gamma v^\nu &= 3(\rhob+\pb)v^\gamma \del{\gamma}\Psi, \label{conf-Eul-B.1} \\ (\rhob+\pb)M_{ij}\delta^j_\nu v^\gamma\nabla_\gamma v^\nu + L^\gamma_i \del{\gamma}\pb &= (\rhob+\pb)L^\gamma_i \del{\gamma}\Psi, \label{conf-Eul-B.2} \end{align} where \begin{equation}\label{Lgammai-Mij-def} L^\gamma_i := \delta^\gamma_i -\frac{v_i}{v_0}\delta^\gamma_0, \AND M_{ij}:= g_{ij}-\frac{v_i}{v_0}g_{0j}-\frac{v_j}{v_0}g_{0i}+\frac{v_iv_j}{v_0^2}g_{00}. \end{equation} \end{lem} \begin{proof} By \eqref{g-v-def} it is straightforward to verify that the Christoffel symbols are related via \begin{equation*} \Gammab^\gamma_{\mu\nu} -\Gamma^\gamma_{\mu\nu} = - g^{\gamma\lambda}(g_{\mu\lambda}\del{\nu}\Psi + g_{\nu\lambda}\del{\mu}\Psi - g_{\mu\nu}\del{\lambda}\Psi). \end{equation*} With the help of this relation, we can express the Euler equations \eqref{rel-Eul} as \begin{equation}\label{conf-Eul-A} \nabla_\mu \Tb^{\mu\nu} = 6 \Tb^{\mu\nu}\del{\mu}\Psi - g_{\lambda\gamma}\Tb^{\lambda\gamma}g^{\mu\nu}\del{\mu}\Psi. \end{equation} We refer to these equations as the \textit{conformal Euler equations}. In \cite{Oliynyk:CMP_2015} (see, in particular, equations (2.36)-(2.37) and (2.40)-(2.43) of that article) it is shown that, in an arbitrary coordinate system $(x^\mu)$, the conformal Euler equations \eqref{conf-Eul-A} reduce to \eqref{conf-Eul-B.1}-\eqref{conf-Eul-B.2}. \end{proof} \begin{Def}[The ADM decomposition of the conformal metric] Let $(I\times \Sigma,g)$ be a Lorentzian spacetime where $I$ is an interval in $\Rbb$ and $\Sigma$ is a three-manifold. We assume that $t=x^0$ is a time function, that is, it takes values in $I$, and that the level sets $\Sigma_{\tau}=t^{-1}(\tau)$, $\tau\in I$, are diffeomorphic to $\Sigma$. If $n$ denotes the unit conormal to the spatial slices $\Sigma_t$, then we can express it as \begin{equation} \label{n-down-def} n=-\alpha dt \quad \Longleftrightarrow \quad n_\mu = -\alpha \delta_\mu^0, \end{equation} where the positive function $\alpha$ is known as the \textit{lapse} and in the following we view it as a time-dependent scalar field on $\Sigma$.
The \textit{shift vector} $ \beta=\beta^i\del{i}$, which we can view as a time-dependent vector field on $\Sigma$, is then determined via the expression \begin{equation*} n^\sharp =\frac{1}{\alpha}(\del{t}-\beta) \quad \Longleftrightarrow \quad n^\mu = \frac{1}{\alpha}(\delta^\mu_0-\beta^i\delta_i^\mu), \end{equation*} which characterises the difference between the contravariant form $n^\sharp$ of the unit conormal and $\del{t}$. The $3+1$ decomposition of the conformal spacetime metric $g$ is then given by \begin{equation}\label{3+1-g} g= -\alpha^2 dt\otimes dt +\gc_{ij}(dx^i +\beta^i dt)\otimes (dx^j +\beta^j dt), \end{equation} where $ \gc=\gc_{ij}dx^i\otimes dx^j $ is the induced spatial metric on the spatial slices $\Sigma_t$, which we view as a time-dependent Riemannian metric on $\Sigma$, while the shift and lapse are defined by $\beta_j=g_{j0}$ and $\alpha = (-g^{00})^{-\frac{1}{2}}$, respectively. We denote the Levi-Civita connection of $\gc_{ij}=g_{ij}$ by $\Dc_k$ and its Christoffel symbols by $\gamma^k_{ij}$. \end{Def} Spacetime co-vector fields can be projected to three dimensional co-vector fields using the operator \begin{equation}\label{P-def} P^\mu_i = \delta^\mu_i, \end{equation} which we note by \eqref{n-down-def} satisfies $P^\mu_i n_\mu = 0$. By our conventions, where the spacetime and spatial metrics are respectively used to raise and lower spacetime and spatial indices, we have from \eqref{P-def} that \begin{equation} \label{P-rl-def} P^i_\mu = g_{\mu\nu}\gc^{ij}P_j^\nu. \end{equation} An explicit formula for this operator is then easily computed from \eqref{3+1-g} and \eqref{P-def} and is given by \begin{equation} \label{P-up-form} P^i_\mu = \delta^i_\mu + \beta^i\delta^0_\mu. \end{equation} It is also worth noting that the spacetime metric can be expressed using the operator \eqref{P-rl-def} as $g_{\mu\nu}=\gc_{ij}P^i_\mu P^j_\nu - n_\mu n_\nu$ and that the identities $P^\mu_i P_\mu^j = \delta^j_i$ and $P^i_\mu n^\mu =0$ hold. \begin{Def}[Christoffel components $\Upsilon^i$ and $\Xi^i_j$] A calculation shows that the Christoffel symbols $\Gamma^\gamma_{\mu\nu}$ of the four dimensional conformal metric $g_{\mu\nu}$ are given by the following: \begin{equation}\label{3+1-Christ-0s} \Gamma^{0}_{00} = \frac{1}{\alpha}(\del{t}\alpha+\beta^j \Dc_j \alpha - \beta^i \beta^j \Kc_{ij} ) , \qquad \Gamma^{0}_{i0} = \frac{1}{\alpha}(\Dc_i\alpha- \beta^j \Kc_{ij}) , \qquad \Gamma^{0}_{ij} =-\frac{1}{\alpha}\Kc_{ij}, \end{equation} and \begin{equation}\label{3+1-Christ-ij0} \begin{split} \Gamma^{i}_{00} &=\Upsilon^i:=\alpha \Dc^i \alpha - 2\alpha\beta^j \Kc_j^i-\frac{1}{\alpha}(\del{t}\alpha+\beta^j\Dc_j\alpha-\beta^j\beta^k\Kc_{jk})\beta^i+\del{t}\beta^i +\beta^j\Dc_j\beta^i , \\ \Gamma^{i}_{j0} &=\Xi^i_j:= -\frac{1}{\alpha}\beta^i(\Dc_j \alpha -\beta^k\Kc_{kj})-\alpha \Kc_j^i +\Dc_j \beta^i , \end{split} \end{equation} and \begin{equation}\label{3+1-Christ-kij} \Gamma^{k}_{ij} = \gamma^k_{ij} +\frac{1}{\alpha}\beta^k\Kc_{ij} , \end{equation} where $\Kc_{ij} = -\frac{1}{2\alpha}(\del{t}\gc_{ij} -\Dc_i \beta_j-\Dc_j\beta_i)$ and $\Kc_i^j = \gc^{jk}\Kc_{ik}$. \end{Def} \begin{rem} It is worth noting from \eqref{3+1-Christ-0s}-\eqref{3+1-Christ-kij} that each of the groups of Christoffel symbols defines a time-dependent tensor field on $\Sigma$ except for the last one \eqref{3+1-Christ-kij}, which is not a tensor due to the appearance of the Christoffel symbols $\gamma^k_{ij}$.
\end{rem} \subsection{The ADM decomposition of the conformal Euler equations} We first use the normal vector $n_\mu$ and the operator $P^\mu_i$ to decompose the four-velocity. \begin{Def}[Decomposition of conformal four-velocity $\nu, w_j, \mu, u^j$] We define \begin{equation} \label{conform-decomp} \nu:=-\frac{1}{\alpha} n_\nu v^\nu , \quad w_j:= P^\mu_j v_\mu, \quad \mu :=(\alpha n^\mu + \beta^i P_i^\mu)v_\mu \AND u^j := P^j_\mu v^\mu -\nu\beta^j. \end{equation} \end{Def} On account of \eqref{n-down-def}-\eqref{P-def} and \eqref{P-rl-def}-\eqref{P-up-form}, we have \begin{equation*} \nu = v^0, \quad w_j=v_j, \quad \mu =v_0 \AND u^j = v^j. \end{equation*} Furthermore, using this notation, we observe from \eqref{3+1-g} and \eqref{conform-decomp} that $M_{ij}$ can be expressed as \begin{equation}\label{Mij-3+1} M_{ij}=\gc_{ij}-\frac{1}{\mu}(w_i\beta_j +w_j \beta_i) + \frac{-\alpha^2+|\beta|_{\gc}^2}{\mu^2}w_i w_j. \end{equation} We further observe that the fields \eqref{conform-decomp} are not independent due to the relation $v_\mu =g_{\mu\nu}v^\nu$ and, on account of \eqref{vb-norm} and \eqref{g-v-def}, the following normalization condition holds \begin{equation} \label{v-norm} g_{\mu\nu}v^\mu v^\nu= -1. \end{equation} In fact, $\nu$, $w_j$ and $\mu$ can be expressed in terms of $u^j$. To see why this is the case, we observe from \eqref{conform-decomp} and \eqref{P-rl-def} that \begin{equation*} u_j = P^\mu_j v_\mu - \nu\beta_j=w_j -\nu \beta_j \end{equation*} or equivalently \begin{equation} \label{wj-form} w_j = u_j + \nu \beta_j. \end{equation} Using this, we then have by \eqref{conform-decomp} that \begin{equation} \label{mu-form} \mu = \beta^i u_i + (|\beta|_{\gc}^2 -\alpha^2)\nu. \end{equation} Additionally, by \eqref{3+1-g}, \eqref{conform-decomp} and \eqref{v-norm}, we have that \begin{equation*} (-\alpha^2+|\beta|_{\gc}^2)\nu^2 + 2\beta_j u^j \nu + 1+|u|_{\gc}^2 =0. \end{equation*} Solving this quadratic equation for $\nu$, we find the two roots given by \begin{equation} \label{nu-form} \nu =\frac{1}{-\alpha^2+|\beta|^2_{\gc}} \Bigl(-\beta_j u^j \pm \sqrt{(\beta_j u^j)^2 +(1+|u|_{\gc}^2)(\alpha^2-|\beta|^2_{\gc})}\Bigr). \end{equation} \begin{rem} The choice of root, i.e. the choice of the $\pm$ sign, in \eqref{nu-form} determines the orientation of the conformal four velocity $v^\mu$. If we want $v^\mu$ to point in the direction of \textit{increasing} $t$ then we take the ``$-$'' sign. On the other hand, if we want $v^\mu$ to point in the direction of \textit{decreasing} $t$, then we take the ``$+$'' sign. In either case, we always have that \begin{equation*} \mu \nu < 0, \end{equation*} as can be verified from \eqref{mu-form} and \eqref{nu-form}. 
\end{rem} \begin{lem}[The conformal Euler equations in ADM variables] The conformal Euler equations \eqref{conf-Eul-B.1}-\eqref{conf-Eul-B.2} can be written as \begin{align} \del{t}\rhob -\frac{(\rhob+\pb)}{\nu\mu}w_j\del{t}u^j+\frac{1}{\nu}u^k \Dc_k \rhob +\frac{(\rhob+\pb)}{\nu}\Dc_ju^j &= 3(\rhob+\pb)\Bigl(\del{t}\Psi + \frac{1}{\nu}u^k \Dc_k \Psi\Bigr) + (\rhob+\pb)\ell , \label{conf-Eul-C.1} \\ (\rhob+\pb)M_{ij}\del{t}u^j-\frac{K}{\nu\mu}w_i \del{t}\rhob+ \frac{(\rhob+\pb)}{\nu}M_{ij}u^k\Dc_k u^j + \frac{K}{\nu} \Dc_i \rhob &= (\rhob+\pb)\Bigl(-\frac{1}{\nu\mu}w_i\del{t}\Psi + \frac{1}{\nu}\Dc_i\Psi\Bigr)+(\rhob+\pb)m_i, \label{conf-Eul-C.2} \end{align} where, from the barotropic equation of state $ \pb = f(\rhob)$, the square of the sound speed is \begin{equation} \label{K-def} K = \frac{d\pb}{d\rhob}=f'(\rhob) \end{equation} and we define \begin{align} \ell &:= -\frac{1}{\nu\alpha}\Kc_{jk}\beta^ju^k-\Xi^j_j +\frac{1}{\mu}\Upsilon^j w_j + \frac{1}{\nu\mu}\Xi^j_k w_j u^k \label{ell-def} \intertext{and} m_i &:= - M_{ij}\Bigl( \frac{1}{\nu\alpha}\Kc_{lk}\beta^j u^l u^k + \nu \Upsilon^j +2 \Xi^j_k u^k\Bigr). \label{mi-def} \end{align} \end{lem} \begin{proof} For an arbitrary scalar field $h$, we observe from \eqref{Lgammai-Mij-def} and \eqref{conform-decomp} that \begin{equation} \label{L-del-h} L^\gamma_i \del{\gamma}h =-\frac{1}{\mu}w_i\del{t}h+\Dc_i h. \end{equation} We also observe using \eqref{Lgammai-Mij-def} and \eqref{conform-decomp} that \begin{align*} L^\gamma_j \delta^j_\nu \nabla_\gamma v^\nu &= L^\gamma_j \delta^j_\nu(\del{\gamma}v^\nu + \Gamma^\nu_{\gamma\sigma}v^\sigma) \\ &= -\frac{1}{\mu}w_j\del{t}u^j +\del{j}u^j + \Gamma^j_{j0}\nu + \Gamma^j_{jk}u^k -\frac{1}{\mu}w_j \Gamma^j_{00}\nu - \frac{1}{\mu}w_j \Gamma_{0k}^j u^k. \end{align*} Then, with the help of \eqref{3+1-Christ-0s}, \eqref{3+1-Christ-ij0} and \eqref{3+1-Christ-kij}, it follows that we can write the above expression as \begin{equation}\label{L-nabla-nu-A} L^\gamma_j \delta^j_\nu \nabla_\gamma v^\nu= -\frac{1}{\mu}w_j\del{t}u^j +\Dc_ju^j +\frac{1}{\alpha}\Kc_{jk}\beta^ju^k+\Xi^j_j\nu -\frac{\nu}{\mu}\Upsilon^j w_j - \frac{1}{\mu}\Xi^j_k w_j u^k. \end{equation} Using a similar calculation, it is also not difficult to verify that \begin{equation}\label{L-nabla-nu-B} \delta^j_\nu v^\gamma \nabla_\gamma v^\nu = \nu \del{t}u^j + u^k\Dc_k u^j + \frac{1}{\alpha}\Kc_{ik}\beta^j u^i u^k + \nu^2 \Upsilon^j +2\nu \Xi^j_i u^i. \end{equation} The identities \eqref{L-del-h}, \eqref{L-nabla-nu-A} and \eqref{L-nabla-nu-B} then allow us to obtain the required form of the conformal Euler equations \eqref{conf-Eul-B.1}-\eqref{conf-Eul-B.2}. \end{proof} To proceed, we introduce a modified density variable $\zeta$ defined by subtracting $3\Psi$ from the fluid enthalpy. \begin{Def}[Modified fluid density variable $\zeta$] For a given barotropic equation of state $ \pb = f(\rhob)$, the \textit{modified fluid density} is defined by \begin{equation} \label{zetadef} \zeta := \int^{\rhob}_{\rhob_0}\frac{d\xi}{\xi+f(\xi)} - 3\Psi, \end{equation} where $\rhob_0$ is any positive constant. \end{Def} \begin{rem} For the linear equation of state \eqref{eos-lin}, we note that the square of the sound speed $K$ is constant and lies in $[0,1]$, by assumption, while the modified fluid density is given by \begin{equation*} \zeta = \int^{\rhob}_{\rhob_0}\frac{d\xi}{(1+K)\xi} - 3\Psi= \frac{1}{1+K}\ln\Bigl(\frac{\rhob}{\rhob_0}\Bigr)-3 \Psi.
\end{equation*} \end{rem} \begin{lem}\label{lem:conf-Eul-U} The conformal Euler equations \eqref{conf-Eul-C.1}-\eqref{conf-Eul-C.2} can be written as \begin{equation} \label{conf-Eul-E} B^0 \del{t}U + B^k\Dc_k U = H, \end{equation} where $U= (\zeta,u^j)^{\tr}$, and the matrices $B^0, B^k$ and the source term $H$ are given by \begin{align} B^0 &= \begin{pmatrix}K & -\frac{K}{\nu\mu}w_j \\ -\frac{K}{\nu\mu}w_i & M_{ij} \end{pmatrix}, \quad B^k = \begin{pmatrix} \frac{K}{\nu}u^k & \frac{K}{\nu}\delta^k_j \\ \frac{K}{\nu} \delta^k_i & \frac{1}{\nu} M_{ij} u^k \end{pmatrix}, \AND H = \begin{pmatrix} K \ell \\ \Bigl(\frac{3 K -1}{\nu\mu}w_i\del{t}\Psi + \frac{1- 3K}{\nu}\Dc_i\Psi\Bigr)+m_i\end{pmatrix}.\label{B-def} \end{align} \end{lem} \begin{proof} Differentiating the expression \eqref{zetadef}, we find that \begin{equation*} \del{t}\zeta = \frac{\del{t}\rhob}{\rhob+\pb} -3 \del{t}\Psi \AND \Dc_k\zeta = \frac{\Dc_k\rhob}{\rhob+\pb} -3 \Dc_k\Psi. \end{equation*} With the help of these expressions, a short calculation shows that the conformal Euler equations \eqref{conf-Eul-C.1}-\eqref{conf-Eul-C.2}, when expressed in terms of the modified density $\zeta$, become \begin{align*} \del{t}\zeta -\frac{1}{\nu\mu}w_j\del{t}u^j+\frac{1}{\nu}u^k\Dc_k \zeta +\frac{1}{\nu}\Dc_ju^j &= \ell , \\ M_{ij}\del{t}u^j-\frac{K}{\nu\mu}w_i \del{t}\zeta+ \frac{1}{\nu}M_{ij}u^k\Dc_k u^j + \frac{K}{\nu}\Dc_i \zeta &= \Bigl(\frac{3 K -1}{\nu\mu}w_i\del{t}\Psi + \frac{1- 3K}{\nu}\Dc_i\Psi\Bigr)+m_i. \end{align*} It is then a simple exercise to put these into the desired matrix form. \end{proof} \begin{rem} It is worth noting that \eqref{conf-Eul-E} is manifestly symmetric hyperbolic. This guarantees the existence of local-in-time solutions by standard existence results for symmetric hyperbolic systems of equations, e.g. see Propositions 1.4, 1.5 and 2.1 from \cite[Chapter 16]{TaylorIII:1996}. While this at least yields the existence of local solutions, there is still the question of whether or not global solutions exist. \end{rem} In order to address the long-time existence of solutions, we need to bring the system \eqref{conf-Eul-E} into a more favourable form. This will be accomplished by the following change of variables. \begin{Def}[Fluid transformation] Define $Z= (\psi,z^j)^{\tr}$ and a transformation given by \begin{equation}\label{cov} U=(\zeta,u^i)^{\tr} =:\bigl( a(\psi,|z|^2_{\gc}), b(\psi)z^i\bigr)^{\tr}, \end{equation} where $a(\cdot,\cdot)$ and $b(\cdot)$ are, for the moment, arbitrary functions.
\end{Def} \begin{lem}\label{lem:transf-Eul-general} If we choose the functions $a(\psi,|z|^2_{\gc})$ and $b(\psi)$ as \begin{align*} a(\psi, |z|^2_{\gc}) &= c_1- \ln(4)+\ln((\psi+c_2)^2)+ \frac{\kappa |z|_{\gc}^2}{(\psi+c_2)^2}, \quad \text{ and } \quad b(\psi) = \frac{2}{\psi+c_2}, \end{align*} where $ c_{1,2} $ are arbitrary constants and $ \kappa:= K^{-1}-2 $, then the equations given in \eqref{conf-Eul-E} take the form \begin{equation}\label{conf-Eul-F} A^0\del{t}Z + \frac{1}{\nu}A^k \Dc_k Z = Q^{\tr}(H-B^0Y), \end{equation} and the following conditions hold: \begin{align} A^0_j &= \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr), \qquad A^k_0 = \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr) ,\label{A-exp-A} \intertext{and} \frac{D_1 a}{b} &= c_0 + \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr), \label{ab-fix-A} \end{align} for $c_0$ a non-zero constant, where \begin{equation*} A^0 =\begin{pmatrix} A^0_0 & A^0_j \\ A^0_i & A^0_{ij} \end{pmatrix} \AND A^k =\begin{pmatrix} A^k_0 & A^k_j \\ A^k_i & A^k_{ij} \end{pmatrix}. \end{equation*} \end{lem} \begin{rem} Note that the explicit forms of the functions $ a $ and $ b $ given in Lemma \ref{lem:transf-Eul-general} are smooth in a neighborhood of $ (\psi,|z|_{\gc}^{2})=(0,0) $ for any choice of $ c_{2}\neq 0 $. \end{rem} \begin{proof} Differentiating $U$, we find after a short calculation that \begin{align} \label{dU-2-dZ} \del{t}U = Q\del{t}Z + Y \AND \Dc_{k}U = Q\Dc_k Z, \end{align} where \begin{align} Q&= \begin{pmatrix} D_1 a & 2 D_2 a z_j \\ b'z^i & b\delta^i_j\end{pmatrix} \label{Q-Y-def} \quad \text{ and } \quad Y= \begin{pmatrix} D_2 a \del{t}\gc_{ij} z^i z^j \\ 0 \end{pmatrix} . \end{align} Here $D_1 a$ and $D_2 a$ denote the partial derivatives with respect to the first and second variables of $a=a(\psi,|z|^2_{\gc})$. Using \eqref{dU-2-dZ} and the fact that $\Dc_k \gc_{ij}=0$, it is then clear that we can write \eqref{conf-Eul-E} as \begin{equation*} A^0\del{t}Z + \frac{1}{\nu}A^k \Dc_k Z = Q^{\tr}(H-B^0Y), \end{equation*} where \begin{equation} A^0=Q^{\tr}B^0 Q, \quad A^k=Q^{\tr}B^k Q, \AND \label{Q-tr} Q^{\tr}= \begin{pmatrix} D_1 a &b'z^j \\ 2 D_2 a z_i & b\delta^i_j\end{pmatrix}. \end{equation} We then have by \eqref{B-def}, \eqref{Q-Y-def} and \eqref{Q-tr}, that \begin{align} A^0_0 &= K (D_1 a)^2 - \frac{2 K b' D_1 a }{\nu\mu}z^k w_k + (b')^2M_{kl}z^k z^l, \label{A00} \\ A^0_j &= 2K D_1 a D_2 a z_j -2\frac{K b'}{\nu\mu}D_2 a z_j z^k w_k -\frac{K b D_1 a}{\nu\mu} w_j + bb' z^k M_{kj}, \notag\\ A^0_{ij} &= b^2 M_{ij} - \frac{2 K b D_2a }{\nu \mu}(z_i w_j + z_j w_i) + 4K (D_2 a)^2 z_i z_j ,\notag\\ A^k_0 &= K (D_1 a)^2u^k+2 K b' D_1 a z^k+(b')^2M_{lm}z^l z^m u^k,\label{Ak0} \\ A^k_j &=K b D_1 a\delta^k_j+ 2 K D_1 a D_2 a u^k z_j + 2 K b' D_2 a z^k z_j + b b'M_{lj}z^l u^k, \notag \intertext{and} A^k_{ij} &=b^2 M_{ij} u^k + 2 K b D_2 a (\delta^k_j z_i +\delta^k_i z_j) + 4 K (D_2 a)^2u^k z_i z_j.\label{Akij} \end{align} Now, our goal is to choose $a$ and $b$ so that \eqref{A-exp-A}-\eqref{ab-fix-A} hold. By \eqref{nu-form} and \eqref{cov}, we observe that \begin{equation*} \nu = -\frac{1}{\alpha}+ \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr), \end{equation*} which, in turn, implies by \eqref{mu-form} that \begin{equation*} \mu = \alpha + \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr). \end{equation*} Using these, we then see from \eqref{Mij-3+1}, \eqref{wj-form} and \eqref{cov} that \begin{equation}\label{A0j-exp-B} A^0_j = \bigl(KD_1 a(2 D_2 a + b^2)+b b'\bigr)z_j+\Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr).
\end{equation} We see also from \eqref{cov} and \eqref{Ak0} that \begin{equation} \label{Ak0-exp-B} A^k_0 = K D_1 a(b D_1 a + 2 b')z^k + \Ord(|z|_{\gc}^2). \end{equation} By \eqref{A0j-exp-B} and \eqref{Ak0-exp-B}, we then see that if $a$ and $b$ are chosen so that \begin{align} b D_1 a + 2b' &= \Ord(|z|_{\gc}^2), \label{ab-fix-B.1} \\ KD_1 a(2 D_2 a + b^2)+b b'&= \Ord(|z|_{\gc}^2),\label{ab-fix-B.2} \end{align} then the conditions \eqref{A-exp-A}-\eqref{ab-fix-A} are satisfied. Now, writing \eqref{ab-fix-B.1} as \begin{equation*} \del{\psi}\bigl( a(\psi,|z|_{\gc}^2)+ \ln\bigl( b(\psi)^2\bigr) \bigr) = \Ord(|z|_{\gc}^2), \end{equation*} it is clear that we can ensure that this condition holds by choosing $a$ so that \begin{equation} \label{ab-fix-C} a(\psi,|z|_{\gc}^2) = c_1 -\ln(b(\psi)^2)+c(\psi)|z|_{\gc}^2, \end{equation} where $c(\psi)$ is an arbitrary function and $c_1$ is an arbitrary constant. Inserting this into \eqref{ab-fix-B.2} and multiplying through by $\frac{b}{b'}$ yields \begin{equation*} -2 K(2c+b^2) + b^2 = \Ord(|z|_{\gc}^2), \end{equation*} where here we are viewing $K$ as a function of $\psi$ via\footnote{It will also, of course, be a function of $\Psi$, but we can for the purpose of this argument treat it as a ``constant''.} \eqref{K-def}, \eqref{zetadef} and \eqref{cov}. It is then clear that we can guarantee that this condition holds by setting \begin{equation*} c= \frac{1}{4}\kappa b^2, \quad \kappa= K^{-1} -2. \end{equation*} Now, \eqref{ab-fix-C} implies that \begin{equation} \label{ab-fix-D} a =c_1 - \ln(b^2)+ \frac{1}{4}\kappa b^2 |z|_{\gc}^2. \end{equation} Substituting this into \eqref{ab-fix-A}, it follows that \eqref{ab-fix-A} will hold provided \begin{equation*} -2\frac{b'}{b^2}= c_0. \end{equation*} Solving this yields \begin{equation*} b = \frac{2}{c_0 \psi + c_2}, \end{equation*} where $c_2$ is an arbitrary constant. We then fix the constant $c_0$ by setting $ c_0=1$. By \eqref{ab-fix-D}, this gives \begin{align*} a(\psi, |z|^2_{\gc}) &= c_1- \ln(4)+\ln((\psi+c_2)^2)+ \frac{\kappa |z|_{\gc}^2}{(\psi+c_2)^2}, \quad \text{ and } \quad b(\psi) = \frac{2}{\psi+c_2}. \end{align*} By construction, the conditions \eqref{A-exp-A}-\eqref{ab-fix-A} with $c_0=1$ hold. Note that this choice gives \begin{equation}\label{transf-derivs1} D_1 a = \frac{2}{\psi+c_2}\Bigl( 1 - \kappa \frac{|z|_{\gc}^2}{(\psi+c_2)^2}\Bigr), \quad D_2 a = \frac{\kappa}{(\psi+c_2)^2}, \quad b'(\psi) = \frac{-2}{(\psi+c_2)^2}. \end{equation} \end{proof} \section{The Euler equations on Milne-like FLRW spacetimes}\label{section:fixed_MFLRW} We now apply the transformation of Section \ref{sec:transf} to the FLRW-type geometries considered in \cite{FOW:2021} with a linear equation of state \eqref{eos-lin}. \begin{Def}[Milne-like FLRW spacetimes] Consider the manifold $\Rbb_{>0} \times \Tbb^3$ with the spacetime metric \begin{equation} \label{MFLRW-g} \gb = \frac{1}{t^2} g, \quad\text{where} \quad g= -\frac{1}{t^2}dt\otimes dt + \delta_{ij}dx^i \otimes dx^j. \end{equation} We refer to the pair $(\Rbb_{>0} \times \Tbb^3,\gb)$ as a \textit{Milne-like FLRW spacetime}. \end{Def} Note that, with respect to the coordinates used in \eqref{MFLRW-g}, the future is located in the direction of decreasing $t$ and future timelike infinity is located at $t=0$. If we choose \begin{equation*} \Psi=\ln(t), \end{equation*} then the Milne-like FLRW metric $ \gb $ is of the form \eqref{g-v-def} where the conformal metric is given by $ g $ as written in \eqref{MFLRW-g}.
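Before specializing the transformation to this background, we record a quick symbolic sanity check (an illustration added here, not part of the original argument) of the defining properties of $a$ and $b$ from Lemma \ref{lem:transf-Eul-general}: with $s=|z|_{\gc}^{2}$, the combinations $bD_1a+2b'$ and $KD_1a(2D_2a+b^{2})+bb'$ vanish at $s=0$, and $D_1a/b=1$ at $s=0$, i.e.\ $c_0=1$.
\begin{verbatim}
# SymPy check of the algebra behind Lemma lem:transf-Eul-general.
import sympy as sp

psi, s, K, c1, c2 = sp.symbols('psi s K c1 c2', positive=True)
kappa = 1/K - 2
a = c1 - sp.log(4) + sp.log((psi + c2)**2) + kappa*s/(psi + c2)**2
b = 2/(psi + c2)

D1a = sp.diff(a, psi)  # derivative in the first slot
D2a = sp.diff(a, s)    # derivative in the second slot
db = sp.diff(b, psi)

conditions = [b*D1a + 2*db,                 # (ab-fix-B.1)
              K*D1a*(2*D2a + b**2) + b*db,  # (ab-fix-B.2)
              D1a/b - 1]                    # (ab-fix-A) with c0 = 1
# Each condition is O(s) = O(|z|^2): it vanishes when s = 0.
assert all(sp.simplify(c.subs(s, 0)) == 0 for c in conditions)
\end{verbatim}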
Moreover, the $3+1$ decomposition of the conformal metric $g$ in \eqref{MFLRW-g} yields \begin{align}\label{MFLRW} \alpha = t^{-1},\quad \beta_j = 0, \quad \gc_{ij}= \delta_{ij}. \end{align} The spatial manifold on which these fields are defined is $\Tbb^3$. \begin{lem}\label{MFLRW-transformation-variables} On the Milne-like FLRW spacetime we have the simplifications \begin{equation}\label{MFLRW-vars}\begin{split} \nu &= v^0, \; w_j = v_j, \;\mu = - t^{-2} \nu,\; u^j = v^j, \\ M_{ij} &= \delta_{ij} - \Bigl( \frac{t}{\nu}\Bigr)^2 v_i v_j = \delta_{ij} - \Bigl( \frac{t}{\nu}\Bigr)^2 b^2 z_i z_j, \AND H= \begin{pmatrix} 0 \\ \frac{3K-1}{t \nu \mu} bz_j\end{pmatrix}, \end{split}\end{equation} where we defined $z_i = \delta_{ij} z^j$. Moreover, $\mathcal{K}_{ij}, \Upsilon^i, \Xi^i_j, m_i, \ell, Y$ all identically vanish. \end{lem} \begin{proof} This is a straightforward computation using \eqref{MFLRW} in the appropriate definitions, and lowering and raising indices with \eqref{MFLRW}. \end{proof} \begin{lem}[Homogeneous solutions]\label{homsolutions} The homogeneous background solutions of the Euler equations \eqref{rel-Eul} on a fixed Milne-like FLRW spacetime $(\Rbb_{>0} \times \Tbb^3,\gb)$ are given by $U \equiv 0$. Furthermore, this reduces to $Z \equiv 0$ and on these background solutions we have the simplifications \begin{equation}\label{MFLRW-simp} D_1 a = (2/c_2), \quad D_2 a = \kappa/{c_2^2}, \quad b'= - 2/{c_2^2}, \quad \nu \mu = -1, \AND t/\nu = -1\,. \end{equation} \end{lem} \begin{proof} The homogeneous solutions considered in \cite{FOW:2021} are given by \begin{equation*} (\vb^\mu_H, \rhob_H) = (-t^2 \delta^\mu_0, \big( (1-3K) c_H \big)^{\frac{1+K}{K}} t^{3(1+K)}), \end{equation*} where $c_H>0$ is a constant. On such a homogeneous solution we have $$v^\mu_H = -t \delta^\mu_0, \quad \zeta_H = 0\,,$$ provided we pick $ \rho_0 = ((1-3K)c_H)^{\frac{1+K}{K}}$. This implies that $U\equiv 0$ and $Z= (C_H, 0)$ for some constant $C_H$. We can in fact set $C_H=0$ by picking $c_2 = - 2\exp( - c_1/2)$. Inspecting the statements given in \eqref{transf-derivs1} and simply substituting in $ Z\equiv 0 $ yields the desired result for $ D_{1}a $, $ D_{2}a $ and $ b'$ in \eqref{MFLRW-simp}. By Lemma \ref{MFLRW-transformation-variables} and using $ v^{0}=-t $ on the homogeneous solutions, we have that $ t/\nu=t/{v^{0}}=-1 $ as well as \begin{equation*} \nu \mu = -t^{-2}\nu^{2}=-t^{-2}(v^{0})^{2}=-1. \end{equation*} \end{proof} \begin{lem} The Euler equations on a fixed Milne-like FLRW spacetime take the form \begin{equation}\label{relEuler_fixedMLRW} M_0 \del{t} Z + \frac1t (\mathcal C^k + M^k(Z)) \del{k}Z = \frac1t \Bc(Z) \Pi Z + \frac1t F(Z)\,, \end{equation} where \begin{equation}\begin{split} \label{MLRW:Mk-ests} M_0(Z) & = \begin{pmatrix} 1 & 0 \\ 0 & K^{-1} \delta_{ij} \end{pmatrix} + \begin{pmatrix} 0 & \Ord\bigl(|z|_{\gc}^2\bigr) \\ \Ord\bigl(|z|_{\gc}^2\bigr) & \Ord\bigl(|z|_{\gc}^2\bigr) \end{pmatrix}, \\ M^k(Z) & = \frac{t}{\nu} \begin{pmatrix} 0 & 0 \\ 0 & K^{-1}\Bigl(\frac{2}{\psi+c_2}\Bigr)\delta_{ij} z^k + \frac{\kappa}{(\psi+c_2)} (\delta^k_j z_i +\delta^k_i z_j) \end{pmatrix}+ \Ord\bigl(|z|_{\gc}^2\bigr)\,, \\ \mathcal C^k &= \begin{pmatrix} 0 & \delta^k_j \\ \delta^k_i & 0 \end{pmatrix}\,, \quad F(Z) = \Ord\bigl(|z|_{\gc}^2\bigr) \,, \end{split}\end{equation} and \begin{align*} \Bc &:= (K^{-1}-3)\id, \qquad \Pi := \begin{pmatrix} 0 & 0 \\0 & \delta^i_j\end{pmatrix}, \qquad \Pi^\perp := \id - \Pi = \begin{pmatrix} 1 & 0 \\0 & 0\end{pmatrix}. 
\end{align*} \end{lem} \begin{rem}\label{M0component} Note that the non-leading-order term in the upper left component of $ M_0 $ is zero, and hence $ D\bigl(\Pbb^{\perp} M_{0}\Pbb^{\perp}\bigr)=0 $ for any derivative $ D $. \end{rem} \begin{proof} From the PDE system \eqref{conf-Eul-F}, we define \begin{align*} M_0(Z) &:= (A^0_0)^{-1} \begin{pmatrix} A^0_0 & A^0_j \\ A^0_i & A^0_{ij} \end{pmatrix},\quad \mathcal C^k := (A_0^0)^{-1} \begin{pmatrix} A^k_0 & A^k_j \\ A^k_i & A^k_{ij} \end{pmatrix}\Big|_{z^i\equiv 0}, \quad M^k(Z):= \frac{t}{\nu} (A^0_0)^{-1} \begin{pmatrix} A^k_0 & A^k_j \\ A^k_i & A^k_{ij} \end{pmatrix}- \mathcal C^k. \end{align*} Using \eqref{A00}-\eqref{Akij}, we compute \begin{align*} A^0_0 &= \Bigl(\frac{2\sqrt{K}}{\psi+c_2}\Bigr)^2 + \Ord\bigl(|z|_{\gc}^2\bigr), \quad A^0_j =\Ord\bigl(|z|_{\gc}^2\bigr), \quad A^0_{ij} = \Bigl(\frac{2}{\psi+c_2}\Bigr)^2 \delta_{ij} +\Ord\bigl(|z|_{\gc}^2\bigr) , \\ A^k_0 &= \Ord\bigl(|z|_{\gc}^2\bigr),\quad A^k_j =\Bigl(\frac{2\sqrt{K}}{\psi+c_2}\Bigr)^2 \delta^k_j + \Ord\bigl(|z|_{\gc}^2\bigr),\AND \\ A^k_{ij} &=\Bigl(\frac{2}{\psi+c_2}\Bigr)^3\delta_{ij} z^k + \Bigl(\frac{2\sqrt{K}}{\psi+c_2}\Bigr)^2 \frac{\kappa}{(\psi+c_2)} (\delta^k_j z_i +\delta^k_i z_j) + \Ord\bigl(|z|_{\gc}^2\bigr). \end{align*} Multiplying the system \eqref{conf-Eul-F} from the left by $(A_0^0)^{-1}$ yields $$ (A_0^0)^{-1} A^0\del{t}Z + \frac{1}{t} \frac{t}{\nu} (A_0^0)^{-1}A^k \del{k} Z =(A_0^0)^{-1} Q^{\tr}(H-B^0Y) . $$ Using that $(A_0^0)^{-1} = \bigl(\frac{2\sqrt{K}}{\psi+c_2}\bigr)^{-2} + \Ord\bigl(|z|_{\gc}^2\bigr) $, we obtain \begin{equation}\label{MLRW:M0} \begin{split} M_0(Z) & = \begin{pmatrix} 1 & 0 \\ 0 & K^{-1} \delta_{ij} \end{pmatrix} + \begin{pmatrix} 0 & \Ord\bigl(|z|_{\gc}^2\bigr) \\ \Ord\bigl(|z|_{\gc}^2\bigr) & \Ord\bigl(|z|_{\gc}^2\bigr) \end{pmatrix} , \quad \mathcal C^k = \begin{pmatrix} 0 & \delta^k_j \\ \delta^k_i & 0 \end{pmatrix}\,,\\ M^k(Z) & = \frac{t}{\nu} \begin{pmatrix} 0 & 0 \\ 0 & \frac1{K}\Bigl(\frac{2}{\psi+c_2}\Bigr)\delta_{ij} z^k + \frac{\kappa}{(\psi+c_2)} (\delta^k_j z_i +\delta^k_i z_j) \end{pmatrix}+ \Ord\bigl(|z|_{\gc}^2\bigr) . \end{split} \end{equation} Finally, for the right hand side of the equation we use \eqref{Q-tr}, \eqref{MFLRW-vars} and \eqref{MFLRW-simp} to compute $$ (A_0^0)^{-1} Q^{\tr}(H-B^0Y) = - \frac{1}{t}(1-3K)(\nu \mu)^{-1} b (A_0^0)^{-1} \begin{pmatrix} b' z^jz_j \\ b z_i\end{pmatrix} = \frac1t(1-3K)K^{-1} \begin{pmatrix} 0 \\ z_i\end{pmatrix}+ \frac1t F(Z), $$ where \begin{equation}\label{MLRW-estsF} F(Z) := t (A_0^0)^{-1} Q^{\tr}H- \Bc(Z)\cdot \Pi Z = - \frac{ (K^{-1}-3)}{(\psi+c_2)} \begin{pmatrix} z^j z_j \\ 0\end{pmatrix}+ \Ord\bigl(|z|_{\gc}^2\bigr) = \Ord\bigl(|z|_{\gc}^2\bigr). \end{equation} \end{proof} \begin{Def}[Matrix norm] For any $ M\in \mathbb{R}^{d\times d} $ we write \begin{equation*} |M|_{\op}=\sup\{|Mv|:v\in \mathbb{R}^{d},\ |v|=1\}. \end{equation*} We will use this to estimate inner products by operator norms. \end{Def} \begin{rem}\label{rem:conjeffect} The projection matrix $\Pbb$ is used to extract particular components of an arbitrary matrix $$ A = \begin{pmatrix} A_0 & A_i \\ A_j & A_{ij} \end{pmatrix}. $$ For example, $$ \Pbb A \Pbb = \begin{pmatrix} 0 & 0 \\ 0 & A_{ij} \end{pmatrix}$$ and so an estimate on $\Pbb A \Pbb$ is precisely an estimate on the lower-right block $A_{ij}$ of the matrix $A$. Similarly, $\Pbb A \Pbb^\perp$ extracts the $A_i$ component, $\Pbb^\perp A \Pbb$ the $A_j$ component and $\Pbb^\perp A \Pbb^\perp$ the $A_0$ component.
With this perspective, we can also reinterpret condition \eqref{A-exp-A} as ensuring that the fluid PDE \begin{equation}\label{eq:444} A^0\del{t}Z + \frac{1}{\nu}A^k \Dc_k Z = Q^{\tr}(H-B^0Y) \end{equation} enjoys the estimates $$ |\Pbb A^0 \Pbb^\perp|_{\op} = |\Pbb^\perp A^0 \Pbb|_{\op} = \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr) \AND |\Pbb^\perp A^k \Pbb^\perp |_{\op} = \Ord\bigl(|\beta|_{\gc} +|z|_{\gc}^2\bigr). $$ Furthermore, the right hand side of \eqref{eq:444} explicitly reads \begin{equation*} \begin{pmatrix} D_1 a & b'z^i \\ 2 D_2 a z_j& b\delta^i_j\end{pmatrix} \left(\begin{pmatrix} K \ell \\ \Bigl(\frac{3 K -1}{\nu\mu}w_i\del{t}\Psi + \frac{1- 3K}{\nu}\Dc_i\Psi\Bigr)+m_i\end{pmatrix}-\begin{pmatrix}K & -\frac{K}{\nu\mu}w_j \\ -\frac{K}{\nu\mu}w_i & M_{ij} \end{pmatrix} \begin{pmatrix} D_2 a \del{t}\gc_{ij} z^i z^j \\ 0 \end{pmatrix}\right). \end{equation*} If we assume perturbations of the metric to be negligible, we may set $ \ell=m_{i}=\partial_{t}\gc_{ij}=0 $. Furthermore, we may restrict ourselves to the case in which $ \Psi=\Psi(t) $ so that $\Dc \Psi=0$. In this setting we have \begin{equation*} Q^{\tr}(H-B^{0}Y)\simeq \frac{3 K -1}{\nu\mu}\begin{pmatrix}b^{\prime}z^{i}w_i\del{t}\Psi\\ bw_j\del{t}\Psi\end{pmatrix}. \end{equation*} As can be seen in the next section, an expanding FLRW--type model forces $ \nu\mu<0 $ and $ w\simeq b z $. Hence, we have that \begin{equation*} Q^{\tr}(H-B^{0}Y)\simeq c(K)\begin{pmatrix} 0 \\ z^{j}\end{pmatrix} + \Ord(|z|^2) = c(K)\Pbb Z + \Ord(|z|^2), \end{equation*} for $ \partial_{t}\Psi>0 $ and some fixed constant $c(K)>0$. Note that this is in the regime of \emph{compactified} time. A switch to physical time translates $ c(K)\mapsto -c(K) $ and thus yields a \emph{negative} term. \end{rem} We conclude this section with some estimates on the quantities appearing in \eqref{relEuler_fixedMLRW}. \begin{lem} Let $ \delta>0 $ and assume that $ |Z|\leq \delta $. Then the following estimates hold: \begin{equation}\label{MLRW-ests1} \begin{split} |M_0(Z)|_{\op} + |\mathcal C^k|_{\op} + |(M_{0})^{-1}(Z)|_{\op} &\lesssim 1, \\ |M^k(Z)|_{\op} &\lesssim |\Pi Z|, \end{split} \end{equation} and \begin{equation}\label{MLRW-ests2} \begin{split} |\Pbb\del{a}M^k(Z)\Pbb|_{\op} &\lesssim |\Pi Z|+|\Pi DZ|,\\ |\Pbb^{\perp}\del{a}M^k(Z)\Pbb|_{\op} +|\Pbb^{\perp}\del{a}M^k(Z)\Pbb^{\perp}|_{\op}& \\ {}+|\Pbb \del{a}M^k(Z)\Pbb^{\perp}|_{\op} +|\del{a}M_0(Z)|_{\op} &\lesssim |\Pi Z|^{2}+|\Pi DZ|^{2}, \end{split} \end{equation} and \begin{equation}\label{MLRW-estsFH} |F(Z)| \lesssim |\Pi Z|^2 . \end{equation} \end{lem} \begin{rem} In this paper, see also the earlier work \cite{Oliynyk16}, we write $ |f|\lesssim |g| $, or $f = \Ord(g)$, to denote that there exists a universal constant $C>0$ such that $|f|\leq C |g|$, where $f, g$ are some functions. Note that this is not simply an estimate of absolute values but also tacitly implies that \begin{equation*} f=h(g) \end{equation*} for some function $ h\in\mathcal{C}^{\infty} $ with $ a<|h|<b$, $a,b>0 $. Thus, for $ s\in \mathbb{N} $, \begin{equation*} \sum_{|l|\leq s }|\nabla^{l}f|\lesssim \sum_{|l|\leq s }|\nabla^{l}g|. \end{equation*} \end{rem} \begin{proof} It is straightforward to check \eqref{MLRW-ests1} using the explicit form of these matrices given in \eqref{MLRW:Mk-ests} and \eqref{MLRW:M0}. Turning to \eqref{MLRW-ests2}, we note that the leading order piece of $M^k$ is $ \Pbb M^{k}\Pbb $. Thus, we start with $|\Pbb \del{a}M^k(Z)\Pbb |_{\op} $.
Using the expression for $\nu$ given in \eqref{nu-form}, we have $$|\del{a}(t/\nu)|= \Big|\frac{t}{\nu} \frac{\del{a} \nu}{\nu}\Big| = \Big|\Big(\frac{t}{\nu}\Big)^2 \frac{v_i \del{a}(v^i)}{\sqrt{1+|v|^2}}\Big|. $$ Using the transformation $v^j = b(\psi) z^j$, we get $$ |\del{a}(t/\nu)\cdot (A_0^0)^{-1} A^k(Z)|_{\op} \lesssim |z|^2 + |Dz|^2. $$ The other part of $\Pbb \del{a}M^k(Z)\Pbb $ is simple to estimate and in total \begin{equation*} |\Pbb \del{a}M^k(Z)\Pbb |_{\op} \lesssim |z| + |Dz|. \end{equation*} The remaining $ \Pbb $ and $ \Pbb^{\perp} $ conjugations of $ \partial_{a}M^{k} $ are easier to estimate. The estimates on $\partial_a M_0$ and \eqref{MLRW-estsFH} use \eqref{MLRW:M0} and \eqref{MLRW-estsF} respectively. \end{proof}
\subsection{Extended system} In order to apply the global existence result from \cite{FOW:2021} we wish to consider the Fuchsian system for $\bar{Z} := (Z, DZ)^{\tr}$. Thus, we apply the differential operator $M_0 \del{a} (M_0)^{-1}$ to \eqref{relEuler_fixedMLRW} to get $$M_0(Z)\del{t} \del{a} Z + \frac1t (\mathcal C^k + M^k(Z)) \del{k}\del{a}Z = \frac1t\Bc \Pi\del{a} Z + \frac1t G_a(\bar{Z}), $$ where we introduce \begin{align*} G_a(\bar{Z}) &:= M_0(Z) \Big[ -\del{a}\left[ M_0(Z)^{-1} \right](\mathcal C^k +M^k(Z))\del{k}Z - M_0(Z)^{-1} \left( \del{a}M^k(Z)\right) \del{k} Z \\&\qquad\qquad + \del{a}\left[ M_0(Z)^{-1} \right] \Bc \Pi Z +\del{a}\left[ M_0(Z)^{-1} F(Z)\right] \Big]. \end{align*} \begin{lem}\label{conditions:relEuler} The extended system for $\bar{Z} := (Z, DZ)^{\tr}$ is governed by the PDE: \begin{equation}\label{fixed-EoM} B^0(\bar{Z}) \del{t}\bar{Z}+\frac1t(C^k + B^k(\bar{Z}))\del{k}\bar{Z} = \frac1t \Bc(\bar{Z})\Pbb \bar{Z} + \frac1t H(\bar{Z}), \end{equation} where \begin{align*} B^0(\bar{Z}) := \begin{pmatrix} M_0(Z) & 0 \\ 0 & M_0(Z) \end{pmatrix}, \quad C^k := \begin{pmatrix} \mathcal C^k & 0 \\ 0 & \mathcal C^k \end{pmatrix}, \quad B^k(\bar{Z}):= \begin{pmatrix} M^k(Z) & 0 \\ 0 & M^k(Z) \end{pmatrix}, \intertext{and} \Pbb := \begin{pmatrix} \Pi & 0 \\ 0 & \Pi \end{pmatrix}, \quad \mathcal B(\bar{Z}) := \begin{pmatrix} \Bc & 0 \\ 0 & \Bc \end{pmatrix}, \quad H(\bar{Z}) := \begin{pmatrix} F(Z) \\ G(Z, DZ) \end{pmatrix}. \end{align*} Moreover, let $ \delta>0 $ and assume that $ |\bar{Z}|\leq \delta $.
Then, the following estimates hold: \begin{align} \notag |\Pbb (B^0(\bar{Z})- B^0(0)) \Pbb|_{\op} + |\Pbb^\perp(B^0(\bar{Z})- B^0(0))\Pbb^\perp|_{\op} &\label{est:B0}\\ + |\Pbb^\perp B^0(\bar{Z}) \Pbb|_{\op} +|\Pbb B^0(\bar{Z}) \Pbb^\perp|_{\op} &\lesssim |\Pbb \bar{Z}|^2,\\ |\Pbb H(\bar{Z})| &\lesssim |\Pbb \bar{Z}|,\label{est:PH}\\ |\Pbb^\perp H(\bar{Z})| &\lesssim |\Pbb \bar{Z}|^2 ,\label{est:PperpH}\\ |\Pbb^\perp B^k(\bar{Z})\Pbb|_{\op} + |\Pbb^\perp B^k(\bar{Z})\Pbb^\perp|_{\op}+ |\Pbb B^k(\bar{Z})\Pbb^\perp|_{\op}& \lesssim |\Pbb \bar{Z}|^2,\label{est:BK1}\\ |\Pbb B^k(\bar{Z})\Pbb|_{\op}& \lesssim |\Pbb \bar{Z}|,\label{est:BK2}\\ |\Pbb {\rm{div}} B(t,v,w)\Pbb|_{\op}&\lesssim |t|^{-1} ,\label{est:div1}\\ |\Pbb {\rm{div}} B(t,v,w)\Pbb^\perp |_{\op} + |\Pbb^\perp {\rm{div}} B(t,v,w)\Pbb|_{\op} &\lesssim |t|^{-1} |\Pbb v|,\label{est:div2} \\ |\Pbb^\perp {\rm{div}} B(t,v,w)\Pbb^\perp|_{\op}&\lesssim |t|^{-1} |\Pbb v|^2,\label{est:div3} \end{align} where \begin{equation*} {\rm{div}} B(t,v,w) := D_v B^0(v) \cdot (B^0(v))^{-1} \left( - \frac1t (C^k + B^k(v))w_k + \frac1t \Bc(v)\Pbb v + \frac1t H(v) \right) + \frac1t D_v B^k(v) w_k. \end{equation*} Following the conventions in \cite{beyeroliynyk}, we use $ v $ and $ w $ to denote $ \Bar{Z} $ and $ D \Bar{Z} $, respectively. Furthermore, the following auxiliary estimates hold: \begin{equation}\label{MLRW-ests1} \begin{split} |B^0(Z)|_{\op} + | C^k|_{\op} + |(B^{0})^{-1}(Z)|_{\op} &\lesssim 1, \\ |B^k(Z)|_{\op} &\lesssim |\Pi Z|, \\ |\del{a}B^k(Z)|_{\op} + |\del{a}(B^0)^{-1}(Z)|_{\op} &\lesssim |\Pbb \bar{Z}|, \end{split} \end{equation} as well as \begin{equation}\label{MLRW-estsFH} |F(Z)| \lesssim |\Pbb Z|^2 \AND |G|\lesssim |\bar{Z}||\Pbb \bar{Z}|. \end{equation} \end{lem} \begin{proof} \eqref{est:B0} immediately follows from the definition of $ B^{0} $ and $ M_{0} $ given in \eqref{MLRW:M0}. For \eqref{est:PH} we use \eqref{MLRW-estsFH} for the estimate on $ F $. To estimate $ G $, we define \begin{equation} \begin{split} \label{MLRW-comms1} C_1:= [\Pi, M_0]& = \begin{pmatrix} 0 & - A^0_j \\ A^0_i & 0 \end{pmatrix} , \quad C_2:= [\Pi, M_0^{-1} ] = - M_0^{-1} ([\Pi, M_0])M_0^{-1}, \AND C_3^k := [\Pi^\perp, M^k]\,. \end{split}\end{equation} A computation shows that \begin{equation}\label{MLRW-ests2} \begin{split} |\del{a}C_2|_{\op} &\lesssim |z|^2 + |Dz|^2 \lesssim |\Pbb \bar{Z}|^2\,, \\ |C_1|_{\op} + |C_2|_{\op} + |C_3|_{\op} &\lesssim |z|^2\,. \end{split} \end{equation} Using \eqref{MLRW-comms1} and \eqref{MLRW-ests2}, we find \begin{align*} \Pi^\perp G_a&= C_1 G_a + M_0 \left[ - \del{a}C_2 \cdot (\mathcal C^k+M^k)\del{k}Z - C_2 (\del{a}M^k)\del{k}Z + \del{a}C_2 \cdot \Bc \Pi Z\right] \\& + M_0 \Big[ -\del{a}\left( M_0^{-1} \right)(\Pi^\perp \mathcal C^k +\Pi^\perp M^k)\del{k}Z - M_0^{-1} \del{a}\left( \Pi^\perp M^k\Pi^\perp\right) \del{k} Z \\&\qquad\qquad - M_0^{-1} \del{a}\left( \Pi M^k\Pi^\perp\right) \del{k} \Pi Z +\Pi^\perp \del{a}\left( M_0^{-1} F\right) \Big]. \end{align*} Note that it is crucial that the term $ M_{0}^{-1}\del{a}\left( \Pi M^k\Pi^\perp\right) \del{k} \Pi Z $ is of higher order in $ \Pbb Z $. The estimates \eqref{est:BK1} and \eqref{est:BK2} follow immediately from the estimates in \eqref{MLRW-ests1}. Using \eqref{MLRW-ests1}, \eqref{MLRW-estsFH} and \eqref{MLRW-ests2} we may conclude \eqref{est:PH} as well as \eqref{est:PperpH}. Finally, we study the map $$ \text{div}B(t,v,w) = D_v B^0(v) \cdot (B^0(v))^{-1} \left( - \frac1t (C^k + B^k(v))w_k + \frac1t \Bc(v)\Pbb v + \frac1t H(v) \right) + \frac1t D_v B^k(v) w_k.
$$ We compute $$|D_v B^0(v)| \lesssim |D_Z(f(\psi)\cdot |z|^2_{\gc})| \lesssim |\Pbb v|,$$ where $f$ is some smooth bounded function that comes from the choice of transformation. Also, $B^0 (v)^{-1} = \id + \Ord\bigl(|\Pbb v|^2\bigr)$. Together with \eqref{MLRW:Mk-ests}, \eqref{MLRW-ests1}, \eqref{est:PH} as well as Remark \ref{M0component}, this yields the desired results as given in \eqref{est:div1}--\eqref{est:div3}. \end{proof} \begin{prop} Suppose that $ k\geq 4 $ and $ \bar{Z}_{0}\in H^{k}(\mathbb{T}^{3}) $ is initial data for the initial value problem posed by \eqref{fixed-EoM}. Then, there exists an $ \epsilon>0 $ such that if $ \|\bar{Z}_{0}\|_{H^{k}}<\epsilon $, there exists a unique global solution $\bar{Z}$ to \eqref{fixed-EoM} with \begin{equation*} \bar{Z}\in C^{0}((0,T_{0}],H^{k}(\mathbb{T}^{3}))\cap C^{1}((0,T_{0}],H^{k-1}(\mathbb{T}^{3})). \end{equation*} \end{prop} \begin{proof} After a transformation $ t\to -t $, our only task is to check the assumptions from \cite[\textsection 4.1]{FOW:2021}. It is straightforward to check that \begin{equation*} \Pbb^2 = \Pbb, \quad \Pbb^{\tr}=\Pbb, \quad \del{t} \Pbb = 0, \quad\del{j} \Pbb = 0\,, \end{equation*} and \begin{equation*} (C^k)^{\tr} = C^k, \quad\del{t}C^k = 0, \quad\del{i} C^k = 0\,. \end{equation*} Using that $0<K < \frac13$, there exist constants $\gamma_1, \gamma_2, \gamma_3>0$ such that \begin{equation*} \frac{1}{\gamma_1} \id \leq M_0(Z) \leq \frac1\gamma_3\Bc(Z) \leq \gamma_2\id. \end{equation*} Since $A^0$ is symmetric, we have $(B^0(\bar{Z}))^{\tr} = B^0(\bar{Z})$. Also \begin{equation*} [\Pbb, \Bc(\bar{Z})] =0,\qquad \del{k}(\Pbb \Bc(\bar{Z})) = 0,\AND |\Pbb \Bc(\bar{Z})- \Pbb \Bc(0)|_{\op} = 0 \,. \end{equation*} Since $A^k$ and $C^k$ are symmetric, so are the $B^k$. Together with Lemma \ref{conditions:relEuler} we conclude that all the conditions in \cite[\textsection 4.1]{FOW:2021} are met. \end{proof} \section{The Einstein-Euler equations near a Milne background}\label{EinEulBR} We now consider solutions to the Einstein-Euler equations \eqref{Ein-PF}, allowing for the dynamical metric $\bar{g}$ to be a perturbation away from the following linearly expanding Einstein spacetime: \begin{Def}[The Milne spacetime] Let $(M, \gamma)$ be a closed negative Einstein space of dimension 3 with $\text{Ric}[\gamma]=-\frac29 \gamma$. Then, the Lorentz cone $\mathcal{M}= \mathbb{R}\times M$ with metric \begin{equation} \label{background}g_M = -d\bar{t}^2 + \frac{\bar{t}^2}{9}\gamma \end{equation} is a Lorentzian solution to the vacuum Einstein equations. We term $(\mathcal{M}, g_M)$ the (compactified) \textit{Milne} spacetime and refer to $\bar{t}$ as cosmological (or physical) time. \end{Def} In this section we prove the following theorem. \begin{thm}\label{thm:Milne_stability} Let $ (\mathcal{M}, g_M) $ be the Milne spacetime and consider the Einstein-Euler equations \eqref{Ein-PF} with linear equation of state \eqref{eos-lin} where $K \in (0, 1/3)$.
Let $ (g_{0},k_{0},\rho_{0},u_{0}) $ be initial data satisfying the constraint equations \eqref{EoM-constraints} at physical time $ t_{0} $ with $\rho_0>0$. Then, there exists an $ \epsilon>0 $ sufficiently small such that if \begin{equation} (g_{0},k_{0},\rho_{0},u_{0})\in \mathscr{B}^{6,5,5,5}_{\epsilon}\big(\tfrac{t_{0}^{2}}{9}\gamma,-\tfrac{t_{0}}{9}\gamma,0,0\big),\label{initialsmallness} \end{equation} then the future development of $ (g_{0},k_{0},\rho_{0},u_{0}) $ under the Einstein-Euler equations is complete and admits a constant mean curvature foliation labelled by $ \tau\in[\tau_{0},0) $, such that the induced metric and second fundamental form on constant mean curvature slices converge as \begin{equation*} (\tau^{2}g,\tau k)\overset{\tau \to 0}{\longrightarrow} \big(\gamma,\frac{1}{3}\gamma\big). \end{equation*} \end{thm} \begin{rem} In Theorem \ref{thm:Milne_stability} above, $ \mathscr{B}_{\epsilon}^{6, 5, 5, 5}(\cdot, \cdot, \cdot, \cdot) $ denotes the ball of radius $\epsilon$ centered at the argument in the space $ H^{6}(M)\times H^{5}(M)\times H^{5}(M)\times H^{5}(M) $, with the canonical Sobolev norms given in Definition \ref{Sobolevspaces} below. \end{rem} \begin{Def}[ADM decomposition and CMCSH gauge] We reparametrise the physical dynamical metric $\bar{g}$ in terms of the ADM variables via \begin{equation}\label{399} \bar{g} = -\tilde{N}^2 d t^2 +\tilde{g}_{ab}(d x^a+\tilde{X}^ad t)(d x^b + \tilde{X}^b d t). \end{equation} That is, $\tilde{N}$ is the lapse, $\tilde{X}$ is the shift and $\tilde{g}$ is the induced Riemannian metric on $M$. Furthermore, we denote the mean curvature by $\tau$ and decompose the second fundamental form\footnote{We assume that all spatial indices are raised and lowered using the metric $ g $. } into its trace-free part $\tilde\Sigma_{ab}$ via $$ \tau:= \text{tr}_{\tilde g} \tilde k = \tilde{g}^{ab} \tilde{k}_{ab},\qquad \tilde k_{ab}:= \tilde\Sigma_{ab}+\tfrac13 \tau \tilde g_{ab}. $$ We follow the standard approach for the Milne spacetime \cite{AM03, AnderssonMoncrief:2011} and impose the \emph{constant mean curvature} and \textit{spatial harmonic gauge} conditions \begin{equation}\label{eq:spatial_harmonic} t=\tau, \qquad H^a:= \tilde{g}^{cb}(\Gamma[\tilde{g}]^{a}_{cb}-\Gamma[\gamma]^a_{cb})=0. \end{equation} \end{Def} \begin{Def}[Rescaled geometric variables] We define \begin{equation*} g_{ab}:=\tau^2\tilde{g}_{ab}, \quad g^{ab}:=(\tau^2)^{-1}\tilde{g}^{ab}, \quad N:=\tau^2\tilde{N}, \quad X^a:=\tau \tilde{X}^a,\quad \Sigma_{ab}:=\tau\tilde \Sigma_{ab}, \end{equation*} as well as $\hat{N}:= N-3$. We also define the \emph{logarithmic time} $ T $ as $T\coloneqq -\ln(\frac{\tau}{e\tau_{0}})$, so that $\frac{dT}{d\tau}=-\frac{1}{\tau}$ and hence $T$ satisfies the relation $\partial_T = - \tau \partial_\tau$. \end{Def} \begin{rem} The expression for the Milne geometry given in \eqref{background} is with respect to cosmological time $ \bar{t} $. Moving to CMC time, given by $ \tau =-3/\bar{t}$ (so that $d\bar{t}=3\tau^{-2}d\tau$), we see that the metric takes the form $$ g_M = \frac{1}{\tau^2}\left( - \frac{9}{\tau^2}d\tau^2 + \gamma\right).$$ Thus, the Milne spacetime, when written in CMCSH-gauge and rescaled variables, is given by \begin{equation*} (g_{ij},\Sigma_{ij},N,X^{i})=(\gamma_{ij},0,3,0). \end{equation*} \end{rem} We also define various components of the energy-momentum tensor that contribute to the dynamics under the Einstein flow, see \cite{AF20}.
\begin{Def}[Matter variables $E, \jmath, \eta, S$]\label{matterquant} We define the following physical matter quantities \begin{align*} \tilde{E}:= \Tb^{\mu\nu}n_{\mu}n_{\nu}, \quad \tilde{\jmath}^{a}:=\tilde{g}^{ab}\tilde{N}\Tb^{0\mu}\bar{g}_{b\mu}, \quad \tilde{\eta}:=\tilde{E}+\tilde{g}^{ab}\Tb_{ab}, \quad \tilde{S}_{ab}:=\Tb_{ab}-\frac{1}{2}\tr_{\bar{g}}\Tb\cdot \tilde{g}_{ab}, \end{align*} and the rescaled matter quantities by \begin{align*} E:=(-\tau)^{-3}\tilde{E}, \quad \jmath^{a}:=(-\tau)^{-5}\tilde{\jmath}^{a}, \quad \eta:= (-\tau)^{-3} \tilde{\eta}, \quad S_{ab}:= (-\tau)^{-1}\tilde{S}_{ab}. \end{align*} \end{Def} \begin{Def}[Elliptic operators $\Delta_{g,\gamma}, \mathscr{L}_{g, \gamma}$] Let $ V $ be a symmetric $ (0,2) $-tensor on $ M $. We define the following self-adjoint, elliptic differential operators \begin{equation*} \begin{aligned} \Delta_{g,\gamma} V_{ij}&\coloneqq \big(\sqrt{g}\big)^{-1}\nabla[\gamma]_{a}\big(\sqrt{g}g^{ab}\nabla[\gamma]_{b}V_{ij}\big), \\ \mathscr{L}_{g,\gamma}V_{ij}&\coloneqq -\Delta_{g,\gamma}V_{ij}-2\text{Riem}[\gamma]_{iajb}\gamma^{ac}\gamma^{bd}V_{cd}. \end{aligned} \end{equation*} \end{Def} The equations of motion for an Einstein-matter system in CMCSH gauge are derived in \cite{AF20}. Denoting the Levi-Civita connection of $g$ by $ \nabla$, the PDEs are as follows. \begin{lem}[Equations of motion for the geometric variables] The Einstein equations \eqref{Ein} reduce to two constraint equations \begin{subequations}\label{EoM} \begin{equation}\begin{aligned}\label{EoM-constraints} \text{R}(g)-|\Sigma|_g^2+\tfrac{2}{3}&= -4 \tau E , \\ \nabla^a \Sigma_{ab} &= 2 \tau^2 \jmath_b , \end{aligned}\end{equation} two elliptic equations for the lapse and shift variables \begin{equation*}\begin{aligned} (\Delta - \tfrac{1}{3})N &= N \left( |\Sigma|_g^2 - \tau \eta \right)-1, \\ \Delta X^a + \text{Ric}[g]^a{}_b X^b &= 2 \nabla_b N \Sigma^{ba} - \nabla^a \hat{N}+ 2 N \tau^2 \jmath^a - (2N \Sigma^{bc} - \nabla^b X^c)(\Gamma[g]^a_{bc} - \Gamma[\gamma]^a_{bc}), \end{aligned}\end{equation*} and two evolution equations for the induced metric and trace-free part of the second fundamental form \begin{equation}\begin{aligned}\label{eq:EoM-pT-g-Sigma} \partial_T g_{ab} &= 2N \Sigma_{ab} + 2\hat{N} g_{ab} - \mathcal{L}_X g_{ab}, \\ \partial_T \Sigma_{ab} &= -2\Sigma_{ab} - N(\text{Ric}[g]_{ab} +\tfrac{2}{9}g_{ab} ) + \nabla_a \nabla_b N + 2N \Sigma_{ac} \Sigma^c_b \\ & \quad -\tfrac{1}{3} \hat{N} g_{ab} - \hat{N} \Sigma_{ab} - \mathcal{L}_X \Sigma_{ab} + N \tau S_{ab}. \end{aligned}\end{equation} \end{subequations} \end{lem} \begin{rem} In the above equations \eqref{EoM}, $\mathcal{L}_X$ denotes the Lie derivative with respect to $X$. We also recall from \cite{AM03} the following decomposition of the curvature term in the spatially harmonic gauge: \begin{equation}\label{eq:Ricci-decomp}\text{Ric}[g]_{ab} +\frac29 g_{ab} = \frac12 \mathscr{L}_{g,\gamma}g_{ab} +J_{ab}, \end{equation} where there is a constant $C>0$ such that $ \| J\|_{H^{s-1}} \leq C \| g-\gamma\|_{H^s}$. \end{rem} \subsection{Equations of motion for the fluid variables} In CMCSH-coordinates and rescaled variables the dynamical metric \eqref{399} is given by \begin{equation*}\bar{g} = \frac{1}{\tau^2} \left[ - \tfrac{N^2}{\tau^2} d\tau^2 + g_{ab} (dx^a+ \tfrac{X^a}{\tau}d\tau)(dx^b+\tfrac{X^b}{\tau} d\tau) \right].
\end{equation*} This is in the form considered in \eqref{3+1-g}, provided we make the identifications $$ x^0 \equiv \tau, \quad \alpha \equiv \frac{N}{\tau}, \quad \beta^a \equiv \frac{X^a}{\tau}, \quad g_{ab} \equiv \gc_{ab}, \quad \Psi(\tau) = \ln(-\tau). $$ \begin{lem} The Euler equations \eqref{rel-Eul} with respect to the dynamical metric $\bar{g}$ can be written as \begin{equation} \label{Milne_rel_Eul} B^0 \del{\tau} U + B^k\nabla_k U = H, \end{equation} where $ U = (\zeta, u^j)^{\tr}$ and \begin{equation*} B^0 = \begin{pmatrix}K & -\frac{K}{ \nu \mu} w_j \\ -\frac{K}{ \nu \mu} w_i & M_{ij} \end{pmatrix}, \quad B^k = \begin{pmatrix} \frac{K}{ \nu} u^k & \frac{s^2}{ \nu}\delta^k_j \\ \frac{K}{ \nu} \delta^k_i & \frac{1}{ \nu} M_{ij} u^k \end{pmatrix}, \quad H= \begin{pmatrix} K \ell \\ \Bigl(\frac{1-3 K }{ \nu \mu} w_i\Bigr)+ m_i\end{pmatrix}. \end{equation*} \end{lem} \begin{proof} Recall from \eqref{conform-decomp} that $ \nu = v^\tau, \; w_j =v_j, \; \mu = v_\tau$ and $u^j = v^j$. Evaluating the expressions given in \eqref{3+1-Christ-ij0}, \eqref{Mij-3+1}, \eqref{ell-def} and \eqref{mi-def} in CMCSH-coordinates, we compute the geometric quantities to be \begin{equation}\label{345a}\begin{split} \Kc_{ij} &= \frac{1}{2N}(\del{T}g_{ij} + \nabla_i X_j+\nabla_j X_i), \\ \Xi^i_j&=\frac{1}{\tau}\Big[-\frac{1}{N}X^i(\nabla_j N -X^k\Kc_{kj})-N \Kc_j^i +\nabla_j X^i\Big], \\ \Upsilon^i&= \frac{1}{\tau^2}\Big[N \nabla^i N - 2NX^j \Kc_j^i-\frac{X^i }{N}(-\del{T}N-N+X^j\nabla_j N-X^jX^k\Kc_{jk}) \\&\qquad\qquad -\del{T}X^i - X^i+X^j\nabla_jX^i \Big], \end{split}\end{equation} while the matter quantities are \begin{equation}\label{345b}\begin{split} M_{ij} &=g_{ij}-\frac{1}{\tau \mu}(w_iX_j +w_j X_i) + \frac{-N^2+|X|_{g}^2}{\tau^2 \mu^2}w_i w_j, \\ \ell &= -\frac{1}{\nu N}\Kc_{jk}X^j u^k-\Xi^j_j +\frac{1}{\mu}\Upsilon^j w_j + \frac{1}{\nu\mu}\Xi^j_k w_j u^k , \\ m_i &= - M_{ij}\Bigl( \frac{1}{\nu N}\Kc_{lk}X^j u^l u^k + \nu \Upsilon^j +2 \Xi^j_k u^k\Bigr). \end{split}\end{equation} The result then follows from Lemma \ref{lem:conf-Eul-U}. \end{proof} \begin{rem} On the background Milne geometry the tensors $\Kc_{ij}, \tau \Xi^i_j$ and $\tau^2 \Upsilon^i$ identically vanish. \end{rem} \begin{lem} The homogeneous fluid solutions to \eqref{rel-Eul} are given by $U=(\zeta, u^a)=0$. \end{lem} \begin{proof} The modified density variable as defined in Section \ref{sec:transf} takes the form \begin{equation}\label{zeta:dynamic} \zeta = \frac{1}{1+K} \ln(\bar{\rho}/\bar{\rho}_0) - 3 \ln(-\tau). \end{equation} Considering the Euler equations \eqref{rel-Eul} with $\bar{g}=g_M$ and $\vb^i = 0$ (i.e.~a homogeneous regime) we arrive at the ODEs \begin{equation*} \big((\bar{v}^{\bar{t}})^{2}-K/(1+K)\big)\partial_{\bar{t}}\bar{\rho}+2\bar{\rho} \bar{v}^{\bar{t}}\partial_{\bar{t}}\bar{v}^{\bar{t}}+\frac{3}{\bar{t}}(\bar{v}^{\bar{t}})^{2}\bar{\rho}=0,\qquad \partial_{i}\bar{\rho}=0. \end{equation*} These are satisfied by $$\vb^{\bar{t}}=1 \AND \bar{\rho} = \bar{\rho}_0 \bar{t}^{-3(1+K)}$$ for some constant $\bar{\rho}_0>0$.
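Indeed, as a quick check, substituting $\vb^{\bar{t}}=1$ reduces the first ODE to
\begin{equation*}
\Big(1-\frac{K}{1+K}\Big)\partial_{\bar{t}}\bar{\rho}+\frac{3}{\bar{t}}\bar{\rho}
=\frac{1}{1+K}\Big(\partial_{\bar{t}}\bar{\rho}+\frac{3(1+K)}{\bar{t}}\bar{\rho}\Big)=0,
\end{equation*}
which integrates directly to the stated power law.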
This implies $\bar{\rho} = c_h (-\tau)^{3(1+K)}$ for some constant $c_h>0$ and thus on the background $$ \zeta = \frac{1}{1+K} \ln (\bar{\rho}/\bar{\rho}_0)- 3\Psi=3 \ln (-\tau) - 3 \Psi= 0,$$ provided we pick $\bar{\rho}_0 = c_h$. Hence, our homogeneous solution is $U=(\zeta, u^a)=0$. \end{proof} \begin{rem} On the above homogeneous fluid solutions, we have the simplifications \begin{align}\label{nu-background} \C\nu = v^\tau= -\frac{\tau}{3}, \quad \C\mu=v_\tau =\frac{3}{\tau}, \quad \C\nu\C\mu = -1, \quad M_{ij} = \gamma_{ij}. \end{align} Additionally, $ w_j, u^j, w^j, \tau \ell$ and $\tau m_i$ identically vanish. \end{rem} Due to the factor of $\tau$ appearing in the expression for $\nu$ in \eqref{nu-background}, we introduce an additional rescaling. \begin{Def}[$\hat{v}^\tau$] We rescale the time-component of the conformal fluid four-velocity by $$ \hat{v}^{\tau}:= (-\tau)^{-1}v^{\tau}.$$ \end{Def} \subsection{Preliminary notation and computations} In this subsection we assume that $ (M,g) $ is a closed $ 3 $-dimensional Riemannian manifold with Levi-Civita connection $\nabla$. \begin{Def}[Inner products] Let $ v $ and $ w $ be vector fields on $M$. We define $ \langle \cdot,\cdot \rangle_g$ as \begin{equation*} \langle v,w \rangle_{g} \coloneqq g_{ij}v^{i}w^{j}. \end{equation*} If $ \ell\geq 1 $, then we define \begin{align*} \langle \nabla^{\ell}v, \nabla^{\ell}w\rangle_{g}&\coloneqq g^{a_{1}b_{1}}\cdots g^{a_{\ell}b_{\ell}}\langle\nabla_{a_{1}}\cdots\nabla_{a_{\ell}}v,\nabla_{b_{1}}\cdots\nabla_{b_{\ell}}w\rangle_{g}. \end{align*} This definition is extended in the usual way to arbitrary tensor fields. We also define the modulus as $ |v|_{g}^{2}\coloneqq \langle v,v \rangle_{g} $. If the subscript is omitted, we assume the bracket to be the Euclidean inner product, i.e.~$\langle v,w \rangle \coloneqq v^{T}(w) $. \end{Def} \begin{Def}[Sobolev norms and measures]\label{Sobolevspaces} We denote by $ \mu_{g} $ the Riemannian measure associated to $ g $, which is given locally by $ \sqrt{\det g}dx^{1}\wedge dx^{2} \wedge dx^{3} $. When the context is unambiguous, we suppress the measure and write \begin{equation*} \int_{M} f = \int_{M} f \mu_{g} \end{equation*} for $f$ some function. For a tensor field $ V $ and $s\in\mathbb{N}$, we define the \emph{Sobolev norm of order $s$} as \begin{align*} \|V\|_{H^{s}}^{2}&\coloneqq \sum_{0 \leq \ell\leq s}\int_{M}|\nabla^{\ell}V|_{g}^{2}. \end{align*} We frequently consider the norms of an abstract vector quantity $ \mathcal{V}=(f,v)^{T} $ which consists of a function $ f $ and a spatial vector field $ v $. We simply write \begin{equation*} \|\mathcal{V}\|_{H^{k}}\coloneqq \|f\|_{H^{k}}+\|v\|_{H^{k}}. \end{equation*} \end{Def} We also require the following result that allows us to commute the time-derivative with integration. \begin{lem}\label{timederivative} For an arbitrary scalar function $ f $ the following estimate holds: \begin{equation*} \frac{d}{d T} \int_{M}f\; \mu_g \lesssim \|N-3\|_{H^{2}}\|f\|_{L^{1}}+\|X\|_{H^{3}}\|f\|_{L^{1}}+\int_{M}\partial_{T}f\; \mu_g. \end{equation*} \end{lem} \begin{proof} The following identity from \cite{ChoquetBruhatMoncrief:2001}, together with integration by parts on the shift term, yields \begin{align*} \frac{d}{dT}\Big(\int_{M}f\; \mu_g\Big)&=\int_{M}\big(3(N-3)f+\partial_{T}f-f\nabla_{i}X^{i}\big)\mu_{g}. \end{align*} The result then follows by the Hölder inequality and the Sobolev embedding $H^2(M) \hookrightarrow L^\infty(M)$.
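Explicitly, the lapse term, for instance, is handled as
\begin{equation*}
\Big|\int_{M}3(N-3)f\;\mu_{g}\Big|\leq 3\|N-3\|_{L^{\infty}}\|f\|_{L^{1}}\lesssim \|N-3\|_{H^{2}}\|f\|_{L^{1}},
\end{equation*}
and the shift term is treated analogously.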
\end{proof} We also state an additional lemma which will be needed later on, when discussing the behavior of the energy functionals. \begin{lem}\label{Riemannestimate} Suppose that $\gamma$ is another Riemannian metric on $M$ such that $\text{Ric}[\gamma]=-\frac29 \gamma$, the harmonic condition \eqref{eq:spatial_harmonic} holds, and the bound \begin{equation*} \| g-\gamma\|_{H^s} \leq C \epsilon \end{equation*} holds for $s\in\mathbb{N}$. Then \begin{equation*} \| {\rm{Riem}}[g]-{\rm{Riem}}[\gamma]\|_{H^{s}}\lesssim \|g-\gamma\|_{H^{s+2}}. \end{equation*} \end{lem} \begin{proof} By the Ricci decomposition of the Riemann tensor in 3 dimensions, we have that \begin{align} \text{Riem}[\gamma]_{ijkl}&=-\frac{\text{R}[\gamma]}{6}(\gamma_{il}\gamma_{jk}-\gamma_{ik}\gamma_{jl})=\frac{1}{9}(\gamma_{il}\gamma_{jk}-\gamma_{ik}\gamma_{jl}),\notag\\ \text{Riem}[g]_{ijkl}&=-\frac{\text{R}[g]}{6}(g_{il}g_{jk}-g_{ik}g_{jl})+(V_{il}g_{jk}-V_{jl}g_{ik}-V_{ik}g_{jl}+V_{jk}g_{il}),\label{Riemann} \end{align} where $ V_{jk}:=-\text{Ric}[g]_{jk}+\frac{1}{3}\text{R}[g]g_{jk} $ and we used that $\text{Ric}[\gamma]=-\frac{2}{9}\gamma $. From \eqref{eq:Ricci-decomp}, we have that \begin{equation*} \text{Ric}[g]_{ab} -\text{Ric}[\gamma]_{ab}= -\frac29 g_{ab} + \frac12 \mathscr{L}_{g,\gamma}(g_{ab}-\gamma_{ab}) +J_{ab} + \frac29 \gamma_{ab}. \end{equation*} Thus, by elliptic regularity (see e.g. \cite{Besse}) \begin{align*} \| \text{Ric}[g]_{ab} -\text{Ric}[\gamma]_{ab}\|_{H^{s}}&\lesssim \|g-\gamma\|_{H^s} + \| \mathscr{L}_{g,\gamma}(g-\gamma)\|_{H^s} + \| J\|_{H^s} \lesssim \| g-\gamma\|_{H^{s+2}}. \end{align*} To compute the difference of the Riemann tensors, one then expands all of the terms in \eqref{Riemann}, replacing $ g $ by $ (g-\gamma) +\gamma $ and $ \text{Ric}[g] $ by $ (\text{Ric}[g]-\text{Ric}[\gamma])+\text{Ric}[\gamma] $. Applying the triangle inequality then shows that $ \|V\|_{H^{s}}\lesssim \|g-\gamma\|_{H^{s+2}} $, from which the desired estimate follows. \end{proof} \subsection{Local existence and bootstrap assumptions} \begin{thm} Let $ k\geq 6 $ be a fixed integer. At $ T=T_{0} $, suppose that we have CMC initial data satisfying the constraints \eqref{EoM-constraints} with regularity \begin{equation*} (g_{0},k_{0},N_{0},X_{0},\rho_{0},u_{0})\in H^{k}\times H^{k-1}\times H^{k}\times H^{k}\times H^{k-1}\times H^{k-1}. \end{equation*} Then, there exists a unique classical solution $ (g,k,N,X,\rho,u) $ to \eqref{EoM} on $ [T_{0},T_{+}) $ with $ T_{+}>T_{0} $. This local solution satisfies \begin{align*} g,N,X\in C^{0}([T_{0},T_{+}),H^{k})\cap C^{1}([T_{0},T_{+}),H^{k-1}),\\ k\in C^{0}([T_{0},T_{+}),H^{k-1})\cap C^{1}([T_{0},T_{+}),H^{k-2}),\\ u,\rho\in C^{0}([T_{0},T_{+}),H^{k-1}),\\ \partial_{T}\rho,\partial_{T}u\in C^{0}([T_{0},T_{+}),H^{k-2}). \end{align*} In addition, the norms as well as the time of existence $ T_{+} $ depend continuously on the initial data. By the continuation principle, the maximal time of existence $ T_{\text{max}} $ is either $ T_{\text{max}}=\infty $, i.e.~global existence, or \begin{align*} \lim_{T\to T_{\text{max}}}\sup_{[T_{0},T]}\|g-\gamma\|_{H^{k}}+\|\Sigma\|_{H^{k-1}}+\|N-3\|_{H^{k}}+\|X\|_{H^{k}}\\+\|\partial_{T}N\|_{H^{k-1}}+\|\partial_{T}X\|_{H^{k-1}}+\|\rho\|_{H^{k-1}}+\|u\|_{H^{k-1}}>\delta, \end{align*} where $ \delta>0 $ is a fixed constant. \end{thm} In light of the above local existence theorem, we can now make certain bootstrap assumptions that hold in a non-empty time interval.
Let $\lambda<1$ be a fixed positive constant and, again, $k \geq 6$ a fixed integer. We assume that, for all $T_0 \leq T \leq T'$, where $T'>T_0$, there is a constant $C>0$ such that the following bootstrap assumptions hold: \begin{equation}\label{bootstrap} \begin{aligned} \|U\|_{H^{k-1}}&\leq C \epsilon, \\ \| g-\gamma\|_{H^{k}} + \| \Sigma\|_{H^{k-1}} &\leq C \epsilon e^{-\lambda T}, \\ \|N-3\|_{H^{k}}+ \|X\|_{H^{k}} + \|\partial_T N\|_{H^{k-1}} + \|\partial_T X\|_{H^{k-1}} &\leq C \epsilon e^{-T}.\\ \end{aligned} \end{equation} The results in the rest of this paper will be derived under these bootstrap assumptions. \begin{lem} Under \eqref{bootstrap} we have that $$\| \hat{v}^{\tau} -\tfrac13\|_{H^k} \lesssim \epsilon.$$ \end{lem} \begin{proof} The normalization condition (\ref{vb-norm}) implies \begin{align*} \bar{v}^{\tau}&=\frac{1}{2(-\tilde{N}^{2}+\tilde{X}_{a}\tilde{X}^{a})}\Big(-2\tilde{X}_{a}\tilde{v}^{a}+\big[4(\tilde{X}_{a}\tilde{v}^{a})^{2}-4(-\tilde{N}^{2}+\tilde{X}_{a}\tilde{X}^{a})(\tilde{g}_{ab}\tilde{v}^{a}\tilde{v}^{b}+1)\big]^{\frac{1}{2}}\Big)\\ &=\frac{\tau^{2}}{X_{a}X^{a}-N^{2}}\Big(-X_{a}v^{a}+\big[(X_{a}v^{a})^{2}+(N^{2}-X_{a}X^{a})(g_{ab}v^{a}v^{b}+1)\big]^{\frac{1}{2}}\Big), \end{align*} and so the final estimate is obtained by applying \eqref{bootstrap}. \end{proof} \begin{rem} From the geometric equations of motion \eqref{eq:EoM-pT-g-Sigma} we may immediately conclude that \begin{equation}\label{estgtime} \|\partial_{T}g\|_{H^{k-1}}\lesssim \|N\Sigma\|_{H^{k-1}}+\|\hat{N}g\|_{H^{k-1}}+\|\mathscr{L}_{X}g\|_{H^{k-1}}\lesssim \epsilon e^{-\lambda T}. \end{equation} \end{rem} We now explain why the bootstrap estimate on $ U $ yields a similar estimate on $ Z $. \begin{lem} Under the bootstrap assumptions \eqref{bootstrap} we have that \begin{equation*} \|Z\|_{H^{k-1}}\lesssim \epsilon. \end{equation*} \end{lem} \begin{proof} Consider the transformation $ Z=(\psi, z^i) \mapsto U =(\zeta, u^i)$ as given in \eqref{cov}, which we will denote by $ \varphi:\mathbb{R}^{4}\to \mathbb{R}^{4} $. By the bootstrap \eqref{bootstrap} as well as Sobolev embedding we have that \begin{equation*} \|bz\|_{L^{\infty}}+\|a\|_{L^{\infty}}\lesssim \|U\|_{H^{k-1}}\lesssim \epsilon. \end{equation*} Using the explicit form of $ |b| $, which is bounded from below, we conclude that $ \|z\|_{L^{\infty}}\lesssim \epsilon $. Choosing $ c_{1,2} $ as in the proof of Lemma \ref{homsolutions}, we see that smallness of $ a $ implies $ \|\psi\|_{L^{\infty}}\lesssim \epsilon $. Computing the Jacobian of $ \varphi $, we see that, due to the smallness of $ Z $, the map $ \varphi $ is a diffeomorphism. Also note that $ z^{i}=f(\psi)u^{i} $ is the product of a scalar and a vector, and hence a vector; due to the structure of $ a $, $ \psi $ is then a scalar as well.
Let us now consider the $ \dot{H}^{1} $-norm of $ \psi $: \begin{align*} \|\psi\|_{\dot{H}^{1}}^{2}&=\int_{M}\big(g^{mn}\nabla_{m}(\psi(\zeta,|u|_{g}^{2}))\nabla_{n}(\psi(\zeta,|u|_{g}^{2}))\big)\\ &=\int_{M}\big(g^{mn}\big[(\partial_{1}\psi)(\zeta,|u|_{g}^{2})\partial _{m}\zeta+(\partial_{2}\psi)(\zeta,|u|_{g}^{2})\partial _{m}|u|_{g}^{2}\big]\big[(\partial_{1}\psi)(\zeta,|u|_{g}^{2})\partial _{n}\zeta+(\partial_{2}\psi)(\zeta,|u|_{g}^{2})\partial _{n}|u|_{g}^{2}\big]\big)\\ &\lesssim \int_{M} \big(g^{mn}\nabla_{m}\zeta\nabla_{n}\zeta +g^{mn}u_{i}\nabla_{m}u^{i}u_{k}\nabla_{n}u^{k}+g^{mn}\nabla_{m}\zeta u_{i}\nabla_{n}u^{i}\big)\lesssim \|U\|_{H^{1}}^{2}, \end{align*} where we used Cauchy-Schwarz and the fact that $ \partial_1\psi, \partial_2 \psi $ are continuous functions on a compact manifold and hence bounded. Since $ z^{i}=f(\psi)u^{i} $ we have, by the Leibniz rule for the covariant derivative, \begin{equation*} \|z\|_{\dot{H}^{1}}\lesssim \|U\|_{H^{1}}. \end{equation*} For higher derivatives the same calculation goes through with further applications of the Leibniz rule. \end{proof} \subsection{Transformed Fuchsian system for the Euler equations} In this subsection we derive the expression for the equations of motion of the fluid. \begin{lem}\label{lem:rough-sources} Under \eqref{bootstrap} the following estimates hold: \begin{equation*}\begin{split} |\tau \ell| + |\tau m^i|_g+|\Kc_{ij}|_g+ |\tau \Xi^i_j|_g+|\tau^2 \C\Upsilon^i|_g &\lesssim |X|_g+|\nabla X|_g+|\hat{N}|+|\nabla N|_g+|\Sigma|_g+|\partial_{T}X|_g , \\ |M_{ij}-\gamma_{ij}|_g & \lesssim |g-\gamma|_g+|X|_g+ |z|_g^2. \end{split} \end{equation*} \end{lem} \begin{proof} Using the expressions given in \eqref{345a} as well as the equations of motion of the geometry, \eqref{eq:EoM-pT-g-Sigma}, we obtain \begin{align*} |\Kc_{ij}|_g &\lesssim |\partial_T g |_g + |\nabla X|_g \lesssim |\nabla X|_g+|X|_g+|\hat{N}|+|\Sigma|_g , \\ |\tau \Xi^i_j|_g&\lesssim |\partial_T g |_g + |\nabla X|_g + |X \nabla N|_g \lesssim |\nabla X|_g + |\nabla N|_g+|X|_g+|\hat{N}|+|\Sigma|_g , \\ |\tau^2 \C\Upsilon^i|_g&\lesssim |\nabla N|_g + |\partial_T X|_g + |X|_g. \end{align*} Similarly, using the expressions given in \eqref{345b} we find \begin{align*} |M_{ij}-\gamma_{ij}|_g &\lesssim |g-\gamma|_g + |X|_g^2|\nu| + |z|_g|X| + |z|_g^2 \lesssim |g-\gamma|_g+|X|_g+ |z|_g^2, \\ |\tau \ell|&\lesssim \left( |\tau/\nu| |\Kc|_g|X|_g + |\tau^2 \Upsilon|_g|\tau\mu|^{-1}+|\tau\Xi|_g\right) |z^j|_g + \left(|\tau \Xi|_g+ |\tau^2 \Upsilon|_g \right)| X|_g|\nu|+ |\tau \Xi|_g, \\ |\tau m^i|_g&\lesssim |M||z^j|_g\left( |\tau/\nu||\Kc|_g|X|_g|z^j|_g + |\tau \Xi|_g\right)+ |M| |\nu/\tau||\tau^2 \Upsilon|_g.
\end{align*} \end{proof} \begin{lem}[Fluid equations of motion] The Euler equations \eqref{Milne_rel_Eul} can be rewritten as \begin{equation}\label{Milne:fluid-PDE} M^0 \partial_T Z - (C^k + M^k) \nabla_k Z = -\Bc \Pbb Z - F, \end{equation} where \begin{align*} M^0(Z) &= \begin{pmatrix} 1 & 0 \\ 0 & K^{-1} g_{ij} \end{pmatrix} + e^{-2T} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + \Ord\bigl(|X|^2_g\bigr)+\begin{pmatrix} 0 & | z|_{g}^2 \\ | z|_{g}^2 & | z|_{g}^2 \end{pmatrix} , \\ M^k(Z) &= \begin{pmatrix} 0 & 0 \\ 0 & \Ord(|z|_g) \end{pmatrix} + \Ord\bigl(|X|^2_g + | z|_{g}^2\bigr), \AND \end{align*} \begin{align}\label{Milne-Ckeq} \Bc &:= (K^{-1} -3)\id, \quad \Pbb := \begin{pmatrix} 0 & 0 \\0 & \delta^i_j\end{pmatrix}, \quad C^k = \begin{pmatrix} 0 & \delta^k_j \\ \delta^k_i & 0 \end{pmatrix}, \end{align} and \begin{equation}\begin{split}\label{estF} |\Pbb F| +|\Pbb^\perp F| &\lesssim \Ord( |z|_g^2+|X|_g+|\nabla X|_g+|\hat{N}|+|\nabla N|_g+|\Sigma|_g+|\partial_{T}X|_g). \end{split} \end{equation} \end{lem} \begin{proof} We transform to new variables $Z=( \psi, z^a)$ and define $z_i = g_{ij} z^j$. We first compute $$ w_a = v_a = g_{a\mu} v^\mu = X_a v^\tau + g_{ac} z^c b( \psi) = X_a \nu + b z_a \,. $$ Using \eqref{conf-Eul-F} and \eqref{transf-derivs1}, we compute (using, e.g., that $\mu^{-1} \sim \tau \lesssim 1$) \begin{align*} A^0_0 &= b^2 K \left[ 1 + \frac{2}{\mu (\psi+c_2)} X_k z^k\right] + \Ord\bigl(| z|_{g}^2\bigr) \,, \\ A^0_j &= b^2 K \frac{X_j}{\mu}\left[ -1 - \frac{K^{-1}}{(\psi+c_2)} z^k X_k \frac{\nu}{\tau} \left( - 2+ \frac{\nu}{\tau\mu} (-N^2 + |X|_g^2)\right)\right] + \Ord\bigl(| z|_{g}^2\bigr), \\ A^0_{ij}& = b^2 \left[ g_{ij} + X_i X_j \frac{\nu}{\tau \mu} \Big( -2 + (-N^2 + |X|_g^2)\frac{\nu}{\tau\mu}\Big) \right] \\&\qquad + 2 b^2 z_{(i} X_{j)}\left[- \frac{b}{\tau \mu} + (-N^2 + |X|_g^2)\frac{b\nu}{\tau\mu} - 2\frac{K}{\mu}\right] + \Ord\bigl(| z|_{g}^2\bigr) , \\ A^k_0 &= \Ord\bigl(| z|_{\gc}^2\bigr), \qquad A^k_j =b^2 K \delta^k_j + \Ord\bigl(| z|_{\gc}^2\bigr), \\ A^k_{ij} &=b^2 \left[ \Bigl(\frac{2}{\psi+c_2}\Bigr)g_{ij} z^k + \frac{\kappa K}{(\psi+c_2)} (\delta^k_j z_i +\delta^k_i z_j) \right] \\&\qquad + b^3 X_i X_j z^k\frac{\nu}{\tau\mu}\left[ -2 + (-N^2 + |X|^2_g)\frac{\nu}{\tau\mu}\right] + \Ord\bigl(| z|_{g}^2\bigr)\,. \end{align*} Premultiplying the PDE system \eqref{conf-Eul-F} by $(A_0^0)^{-1}$, we rewrite it as $$ (A_0^0)^{-1} A^0\del{\tau}Z + \frac{1}{\tau} \frac{\tau}{\nu} (A_0^0)^{-1}A^k \nabla_{k} Z =(A_0^0)^{-1} Q^{\tr}(H-B^0Y) \,. $$ We define \begin{align*} M^0(Z) &:= (A^0_0)^{-1} A^0(Z), \quad C^k := \left( (A^0_0)^{-1} A^k \right)|_{z^i=0}, \\ M^k(Z) & := \frac\tau\nu(A^0_0)^{-1} A^k(Z) - C^k \AND (\mathcal{F}_0 \,, \mathcal{F}_j )^{\tr} := \tau (A_0^0)^{-1} Q^{\tr}(H-B^0Y)\,.
\end{align*} Then, thanks to Lemma \ref{lem:transf-Eul-general}, the following PDE holds $$ M^0 \del{\tau} Z +\frac1\tau (C^k + M^k) \nabla_{k}Z = \frac{1}{\tau} (\mathcal{F}_0, \mathcal{F}_j)^{\tr}.$$ Noting that $(A_0^0)^{-1} = (b^2 K)^{-1} + \Ord\bigl(|X|^2_g + | z|_{g}^2\bigr)$ we compute \begin{align} \begin{split}\label{M0} M^0(Z) & = \begin{pmatrix} 1 & 0 \\ 0 & K^{-1} g_{ij} \end{pmatrix} + \begin{pmatrix} 0 & X_j \mu^{-1} \\ X_j \mu^{-1} & \Ord(|X|^2|z|_g\mu^{-1}) \end{pmatrix} + \Ord\bigl(|X|^2_g + | z|_{g}^2\bigr) , \\&= \begin{pmatrix} 1 & 0 \\ 0 & K^{-1}g_{ij} \end{pmatrix} + |\tau| e^{- T} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + \Ord\bigl(|X|^2_g + | z|_{g}^2\bigr) , \end{split} \end{align} and \begin{align}\notag M^k(Z) &= \begin{pmatrix} 0 & 0 \\ 0 & \frac1{K}\Bigl(\frac{2}{\psi+c_2}\Bigr)g_{ij} z^k + \frac{\kappa}{(\psi+c_2)} (\delta^k_j z_i +\delta^k_i z_j) \end{pmatrix} + \begin{pmatrix} 0 & 0 \\0 & \Ord(|X|^2|z|_g\mu^{-1}) \end{pmatrix} + \Ord\bigl(|X|^2_g + | z|_{g}^2\bigr) \\ &= \begin{pmatrix} 0 & 0 \\ 0 & \Ord(|z|_g) \end{pmatrix} + \Ord\bigl(|X|^2_g + | z|_{g}^2\bigr) . \label{Mk} \end{align} Next, we introduce the following notation \begin{align*} R_i := \frac{1-3K}{\mu \nu} w_i \partial_\tau \Psi + m_i + \frac{K}{ \mu \nu} w_i D_2 a \partial_\tau g_{lk} \cdot z^l z^k. \end{align*} Using Lemma \ref{lem:rough-sources} we compute \begin{align*} |\mathcal{F}_0| &\lesssim | \tau (bs)^{-2} \left[K D_1a ( \ell - D_2 a \partial_\tau g_{ij} \cdot z^i z^j) + b' z^j R_j\right]| + \Ord(|X|_g^2 + |z|_g^2) \\ &\lesssim |\psi+c_2| \left( |\tau \ell| + |\partial_T g| |z|_g^2\right) + |z|_g|X||\nu| + |z|_g|\tau m_j|_g + \Ord(|X|_g^2 + |z|_g^2) \\ &\lesssim \Ord(|X|_g^2 + |z|_g^2+|X|_g+|\nabla X|_g+|\hat{N}|+|\nabla N|_g+|\Sigma|_g+|\partial_{T}X|_g). \end{align*} Similarly, for each $j$ we have \begin{align*} \mathcal{F}_j &= \tau (bs)^{-2}\left[2K D_2a z_j(\ell-D_2a\partial_\tau g_{ij} \cdot z^i z^j) + b R_j\right]+\Ord(|X|_g^2 + |z|_g^2) \\&\lesssim (3-K^{-1})z_j + (3-K^{-1})|\nu| |X|_g + |\psi+c_2||\tau m_i|_g + |z|_g|\tau \ell| + \Ord\bigl(|X|_g^2 + | z|_{g}^2\bigr) \\&\lesssim (3-K^{-1})z_j + \Ord(|X|_g^2 + |z|_g^2+|X|_g+|\nabla X|_g+|\hat{N}|+|\nabla N|_g+|\Sigma|_g+|\partial_{T}X|_g). \end{align*} Defining $F = \mathcal{F} - \Bc \Pbb Z $, we see that the above bounds on the components of $\mathcal{F}$ imply the estimate on $F$ in \eqref{estF}. \end{proof} \subsection{Definition and coercivity properties of the fluid energy} Now we are equipped to introduce the higher order energy functionals, which are coercive with respect to the Sobolev norms of their respective order. \begin{Def}\label{def:energy} Let $ 0\leq s \leq k-1 $. We define the following energy functionals \begin{align*} E_{s}(Z)&=\frac{1}{2}\sum_{l\leq s}\int_{M}\langle \nabla^{l}Z,M^{0}\nabla^{l}Z\rangle_g \,\mu_g , \quad E^{p}_{s}(Z)=\frac{1}{2}\sum_{l\leq s}\int_{M}\langle \Pbb \nabla^{l}Z,M^{0}\Pbb\nabla^{l}Z\rangle_g \,\mu_g, \\ \dot{E}_{s}(Z)&=\int_{M}\langle \nabla^{s}Z,M^{0}\nabla^{s}Z\rangle_g \,\mu_g . \end{align*} We refer to $ E^{p}_{s} $ as the \emph{parallel energy} and $ \dot{E}_{s} $ as the \emph{homogeneous energy} of order $s$, respectively. \end{Def} \begin{lem}[Equivalence of the energy norms] Under the bootstrap assumptions \eqref{bootstrap}, for $ 0\leq s\leq k-1 $, we have that \begin{equation*} E_{s}(Z)\cong \|Z\|_{H^{s}}^{2}, \quad E^{p}_{s}(Z)\cong \|\Pbb Z\|_{H^{s}}^{2}, \quad \dot{E}_{s}(Z)\cong \|Z\|_{\dot{H}^{s}}^{2}.
\end{equation*} \end{lem} \begin{proof} From \eqref{M0} we see that $ M^{0}$ is approximately the diagonal matrix $\text{diag}(1,K^{-1}g) $. Together with Sobolev embedding, this yields \begin{equation*} |\dot{E}_{s}(Z)-\|Z\|_{\dot{H}^{s}}^{2}|\lesssim \|Z\|_{\dot{H}^{s}}^{2}+(e^{-(1+\lambda)T}+\||X|_{g}^{2}+|z|_{g}^{2}\|_{L^{\infty}})\|Z\|_{\dot{H}^{s}}^{2}\lesssim \epsilon. \end{equation*} The other expressions involve similar estimates, and so we find the energy norms are equivalent to their Sobolev counterparts. \end{proof} \subsection{Fluid energy estimates of lower order} Before we consider the time evolution of the lowest order energy $ E_{0}(Z) $, we first derive some preliminary estimates and identities for the coefficient matrices. \begin{lem}[Properties of the coefficient matrices]\label{est:matrices} We have that \begin{align*} |\Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb|_{\op}+|\Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb^{\perp}|_{\op} \quad & \\ +|\Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb|_{\op}&\lesssim |\Pbb Z|+|\Pbb \nabla Z| + \epsilon e^{-\lambda T},\\ |\Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb^{\perp}|_{\op}&\lesssim |\Pbb Z|^{2}+|\Pbb \nabla Z|^{2} + \epsilon e^{-\lambda T}. \end{align*} Furthermore, for any square matrix $ A$, we have the following identity \begin{equation*} \langle Z, AZ\rangle=\langle \Pbb Z, (\Pbb A \Pbb)\Pbb Z \rangle+\langle \Pbb^{\perp} Z, (\Pbb^{\perp} A \Pbb)\Pbb Z \rangle+\langle \Pbb Z, (\Pbb A \Pbb^{\perp})\Pbb^{\perp} Z \rangle+\langle \Pbb^{\perp} Z, (\Pbb^{\perp} A \Pbb^{\perp})\Pbb^{\perp}Z \rangle. \end{equation*} \end{lem} \begin{proof} First note that, using the equations of motion \eqref{Milne:fluid-PDE}, $ \partial_{T}M^{0} $ can be schematically rewritten using the chain rule as \begin{equation*} \partial_{T}M^{0}(Z,g,X,N)=D_{Z}M^{0}\cdot\partial_{T}Z+D_{g}M^{0}\cdot\partial_{T}g+D_{X}M^{0}\cdot\partial_{T}X+D_{N}M^{0}\partial_{T}N. \end{equation*} The last three terms involving the geometry can be estimated by \begin{equation*} \|D_{g}M^{0}\partial_{T}g+D_{X}M^{0}\partial_{T}X+D_{N}M^{0}\partial_{T}N\|_{L^{\infty}}\lesssim \|\partial_{T}g\|_{H^{2}}+\|\partial_{T}X\|_{H^{2}}+\|\partial_{T}N\|_{H^{2}}\lesssim \epsilon e^{-\lambda T}, \end{equation*} using Sobolev embedding as well as the bootstrap assumptions and \eqref{estgtime}. Next, we inspect \begin{equation*} D_{Z}M^{0}\cdot\partial_{T}Z=D_{Z}M^{0}\cdot(M^{0})^{-1}\left((C^{k}+M^{k})\nabla_{k}Z-\Bc \Pbb Z -F\right). \end{equation*} Every term in this expression is a sum of matrices of the form $ f(Z, \nabla Z)D_{Z}M^{0} $, where the function $ f $ schematically indicates everything coming from $ (M^{0})^{-1}\left((C^{k}+M^{k})\nabla_{k}Z-\Bc \Pbb Z -F\right) $. Now from \eqref{M0} we see that to leading order $ M^{0} $ is given by the diagonal matrix $ \text{diag}(1,K^{-1}g_{ij}) $. Thus, these leading order terms vanish under the derivative $ D_{Z} $. The remaining terms in $D_{Z}M^0$ involve $X,\nabla X$ or $ |z|_{g}$ or $ |z|_{g}^{2} $, where we used the fact that $ D_{z^{i}}(|z|_{g}^{2})=\Ord(|z|_{g}) $. Hence, the first inequality follows for $ \partial_{T}M^{0} $. As explained in Remark \ref{M0component}, we again find that $ \Pbb^{\perp}D_{Z}M^{0}\Pbb^{\perp}=\Ord(|X|_{g}) $ and hence the second inequality of the statement is trivially satisfied for $ \partial_{T}M^{0} $. Inspecting $ M^{k} $ using \eqref{Mk}, we see the leading order term is the diagonal matrix $ \text{diag}(0, \Ord(|z|_{g})) $.
We immediately infer that each piece of $\nabla_kM^k$, except for $ \Pbb \nabla_{k}M^{k} \Pbb $, may be estimated in the same way as the error terms of $ \partial_{T}M^{0} $. For the $ \Pbb \nabla_{k}M^{k} \Pbb $ part, the leading order term consists of terms of order $ |z|_{g}$ and $|\nabla z|_{g} $. These are estimated by $|\Pbb Z|$ and $|\Pbb \nabla Z|$, respectively. Finally, using $\Pbb^2 =\Pbb$, $\Pbb^{\tr} = \Pbb$ and $\id = \Pbb+\Pbb^\perp$, we calculate \begin{align*} \langle Z, AZ \rangle &= \langle (\Pbb +\Pbb^{\perp})Z, A (\Pbb+\Pbb^{\perp})Z \rangle\\ &=\langle \Pbb\Pbb Z, A \Pbb\Pbb Z \rangle+\langle \Pbb^{\perp}\Pbb^{\perp} Z, A \Pbb\Pbb Z \rangle+\langle \Pbb\Pbb Z, A \Pbb^{\perp}\Pbb^{\perp} Z \rangle+\langle \Pbb^{\perp}\Pbb^{\perp} Z, A \Pbb ^{\perp}\Pbb^{\perp}Z \rangle\\ &=\langle \Pbb Z, (\Pbb A \Pbb)\Pbb Z \rangle+\langle \Pbb^{\perp} Z, (\Pbb^{\perp} A \Pbb)\Pbb Z \rangle+\langle \Pbb Z, (\Pbb A \Pbb^{\perp})\Pbb^{\perp} Z \rangle+\langle \Pbb^{\perp} Z, (\Pbb^{\perp} A \Pbb^{\perp})\Pbb^{\perp}Z \rangle. \end{align*} \end{proof} Equipped with the preliminary results above, we can now establish an estimate for the time evolution of the energy of order zero. \begin{prop}\label{est:zeroenergy} Under \eqref{bootstrap} there exists a constant $C>0$ such that \begin{equation*} \partial_{T}E_{0}(Z)\leq (3-K^{-1}+C\epsilon)E^{p}_{0}(Z)+C\epsilon e^{-\lambda T}+C\epsilon E^{p}_{1}(Z). \end{equation*} \end{prop} \begin{rem} The parameter $ K<\frac{1}{3} $ is fixed, and so $ 3-K^{-1}+C\epsilon $ is negative provided $ \epsilon $ is sufficiently small. This overall negative sign of the parallel part of the energy is essential to closing the final energy estimates. \end{rem} \begin{proof} A computation using the equations of motion (\ref{Milne:fluid-PDE}) and Lemma \ref{timederivative} yields \begin{align*} \partial_{T}E_{0}(Z)&\leq \int_{M}\langle Z,M^{0}\partial_{T}Z\rangle +\frac{1}{2} \int_{M}\langle Z,(\partial_{T}M^{0})Z\rangle+C\epsilon e^{-\lambda T}E_{0}(Z)\\ &=\int_{M}\langle Z,(C^{a}+M^{a})\nabla_{a}Z\rangle -\int_{M}\langle \Pbb Z,\Bc \Pbb Z\rangle -\int_{M}\langle Z, F \rangle \\ &\qquad +\frac{1}{2}\int_{M}\langle Z,(\partial_{T}M^{0})Z\rangle+C\epsilon e^{-\lambda T}E_{0}(Z). \end{align*} Using integration by parts and the fact that the matrices $ C^{a} $ and $ M^{a} $ are symmetric, we find \begin{equation*} \int_{M}\langle Z,(C^{a}+M^{a})\nabla_{a}Z\rangle=-\frac{1}{2}\int_{M}\langle Z,\nabla_{a}(C^{a}+M^{a})Z\rangle=-\frac{1}{2}\int_{M}\langle Z,(\nabla_{a}M^{a})Z\rangle. \end{equation*} Note that $ a $ is an index associated with a spatial vector field (stemming from $ z $), which is what justifies the integration by parts.
Hence we have \begin{equation*} \partial_{T}E_{0}(Z) \leq (3-K^{-1})E^{p}_{0}(Z) + \int_{M}\langle Z,(\partial_{T}M^{0}-\nabla_{a} M^{a})Z\rangle-\int_{M}\langle Z, F \rangle +C\epsilon e^{-\lambda T}E_{0}(Z). \end{equation*} We start with the term involving $ F $. Using equation (\ref{estF}), we obtain \begin{align*} \Big|\int_{M}\langle Z,F\rangle\Big|\lesssim \|Z\|_{L^{2}}\|F\|_{L^{2}}\lesssim \|Z\|_{L^{2}}\|F\|_{L^{\infty}}\lesssim \epsilon^{2} e^{-\lambda T}. \end{align*} We next analyse the term involving $ \partial_{T}M^{0}-\nabla_{a}M^{a} $. Using the matrix identity in Lemma \ref{est:matrices} as well as Hölder's inequality, we find \begin{align*} \int_{M}&\langle Z, (\partial_{T}M^{0}-\nabla_{a}M^{a})Z\rangle \\&=\int_{M}\langle \Pbb Z, \Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb\Pbb Z\rangle +\int_{M}\langle \Pbb^{\perp} Z, \Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb\Pbb Z\rangle \\ &\qquad +\int_{M}\langle \Pbb Z, \Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb^{\perp}\Pbb^{\perp} Z\rangle+\langle \Pbb^{\perp} Z, \Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb^{\perp}\Pbb^{\perp} Z\rangle \\ &\lesssim E^{p}_{0}(Z)|\Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb|_{\op}+\|Z\|_{L^{\infty}}^{2} \int_{M}|\Pbb^{\perp}(\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb^{\perp}|_{\op}\\ &\qquad +\|Z\|_{L^{\infty}}\int_{M}\big(|\Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb|_{\op}+|\Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb^{\perp}|_{\op}\big)|\Pbb Z|\\ &\lesssim E^{p}_{0}(Z)(|\Pbb Z|+|\Pbb \nabla Z|+\epsilon e^{-\lambda T})+\|Z\|_{L^{\infty}}^{2}(E^{p}_{0}(Z)+E^{p}_{1}(Z)+\epsilon e^{-\lambda T})\\ &\qquad+\|Z\|_{L^{\infty}}\|\Pbb Z\|_{L^{\infty}}(E^{p}_{0}(Z)+E^{p}_{1}(Z)+\epsilon e^{-\lambda T}). \end{align*} In conclusion, using Sobolev embedding and the smallness assumption, we find that \begin{align*} \int_{M}\langle Z, (\partial_{T}M^{0}-\nabla_{a}M^a)Z\rangle &\lesssim (\sqrt{\epsilon}+\epsilon e^{-\lambda T}) E^{p}_{0}(Z)+\epsilon (E^{p}_{0}(Z)+E^{p}_{1}(Z))+\epsilon e^{-\lambda T}, \end{align*} where we used Young's inequality and Cauchy-Schwarz in the last step. Altogether we get \begin{equation*} \partial_{T}E_{0}(Z)\leq (3-K^{-1}+C \epsilon)E^{p}_{0}(Z)+C \epsilon e^{-\lambda T}+C\epsilon E^{p}_{1}(Z). \end{equation*} \end{proof} \begin{rem} Note that, due to the inclusion of $ E^{p}_{1}(Z) $, the energy estimate in Proposition \ref{est:zeroenergy} does not close. However, this is not an issue: the problem is unique to the lowest order, and this term can be absorbed into a negative definite term of the type $ -cE^{p}_{1}(Z) $, $ c>0 $, appearing in the final estimate. \end{rem} \section{Estimates of the higher order fluid energy} In this section we derive an estimate for the higher-order fluid energies. These estimates will be weaker compared to the zero order case; however, this will eventually be remedied by exploiting the lower order estimate. We start by deriving an expression for the time-evolution of the homogeneous part of the energy of order $ \ell\geq 1$. \begin{lem} Let $1 \leq \ell \leq k-1$.
Under \eqref{bootstrap} there is a constant $C>0$ such that \begin{equation}\label{higherordercalc} \begin{aligned} \frac{1}{2}\partial_{T}\Big( \int_{M}\langle \nabla^{\ell}Z,M^{0}\nabla^{\ell}Z\rangle\Big) &\leq (3-K^{-1}) \int_M \langle \nabla^\ell Z, \Pbb \nabla^\ell Z\rangle + \frac12 \int_{M}\langle \nabla^{\ell}Z,(\partial_T M^0 -\nabla_{a}M^{a})\nabla^{\ell}Z \rangle \\&\qquad + \int_M \langle \nabla^\ell Z, G^{\ell}\rangle + C\epsilon e^{-\lambda T}E_{\ell}(Z), \end{aligned} \end{equation} where \begin{align*} G^\ell &:= -\Bc [\nabla^{\ell},\Bc^{-1} M^{0}](M^{0})^{-1}\big((C^{a}+M^{a})\nabla_{a}Z-\Bc \Pbb Z-F\big)\\ &\qquad + \Bc [\nabla^{\ell},\Bc^{-1}(C^{a}+M^{a})]\nabla_{a}Z-\Bc \nabla^{\ell}(\Bc^{-1} F) -M^{0}[\nabla^{\ell},\partial_{T}]Z-(C^{a}+M^{a})[\nabla^{\ell},\nabla_{a}]Z. \end{align*} \end{lem} \begin{proof} Let $ \ell\geq 1 $. Applying the operator $ \Bc \nabla^{\ell} \Bc^{-1} $ to the fluid equations of motion \eqref{Milne:fluid-PDE} yields \begin{align}\label{eomhigherorder} M^{0}\partial_{T}\nabla^{\ell}Z-(C^{a}+M^{a})\nabla_{a}\nabla^{\ell}Z&=-\Bc \Pbb \nabla^{\ell}Z+G^{\ell}. \end{align} The error term $ G^{\ell}$ is computed as \begin{align*} G^{\ell}&= -\Bc [\nabla^{\ell},\Bc^{-1} M^{0}]\partial_{T}Z+\Bc [\nabla^{\ell},\Bc^{-1}(C^{a}+M^{a})]\nabla_{a}Z-\Bc \nabla^{\ell}(\Bc^{-1} F)\\ &\qquad -M^{0}[\nabla^{\ell},\partial_{T}]Z-(C^{a}+M^{a})[\nabla^{\ell},\nabla_{a}]Z\\ &=-\Bc [\nabla^{\ell},\Bc^{-1} M^{0}](M^{0})^{-1}\big((C^{a}+M^{a})\nabla_{a}Z-\Bc \Pbb Z-F\big)\\ &\qquad + \Bc [\nabla^{\ell},\Bc^{-1}(C^{a}+M^{a})]\nabla_{a}Z-\Bc \nabla^{\ell}(\Bc^{-1} F) -M^{0}[\nabla^{\ell},\partial_{T}]Z-(C^{a}+M^{a})[\nabla^{\ell},\nabla_{a}]Z, \end{align*} where we simply inserted the equations of motion \eqref{Milne:fluid-PDE} in the last step. Using Lemma \ref{timederivative} and \eqref{eomhigherorder}, we have \begin{equation*} \begin{aligned} \frac12 \partial_{T}\int_{M}\langle \nabla^{\ell}Z,M^{0}\nabla^{\ell}Z\rangle&\leq \int_{M}\langle \nabla^{\ell}Z,M^{0}\partial_{T}\nabla^{\ell}Z\rangle+\frac{1}{2}\int_{M}\langle \nabla^{\ell}Z,(\partial_{T}M^{0})\nabla^{\ell}Z\rangle+C\epsilon e^{-\lambda T}E_{\ell}(Z)\\ &=\frac12 \int_{M}\langle \nabla^{\ell}Z,(\partial_T M^0 -\nabla_{a}M^{a})\nabla^{\ell}Z\rangle -\int_M \langle \nabla^\ell Z, \Bc \Pbb \nabla^{\ell}Z\rangle \\&\qquad+ \int_M \langle \nabla^\ell Z, G^{\ell}\rangle +C\epsilon e^{-\lambda T}E_{\ell}(Z). \end{aligned} \end{equation*} \end{proof} We now proceed by introducing some preliminary estimates.
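For orientation, we briefly indicate the mechanism behind the commutator term $M^{0}[\nabla^{\ell},\partial_{T}]Z$ appearing in $G^{\ell}$: in local coordinates one has, for any one-form $\omega$, the standard identity
\begin{equation*}
[\partial_{T},\nabla_{a}]\omega_{b}=-\big(\partial_{T}\Gamma[g]^{c}_{ab}\big)\omega_{c},
\end{equation*}
so such terms are controlled by the norm of $\partial_{T}\Gamma$, which decays by the equations of motion \eqref{eq:EoM-pT-g-Sigma}; this is made precise in the proof of Lemma \ref{auxhigherorder} below.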
\begin{lem}\label{auxestmatrices} The fluid coefficient matrices and the inhomogeneity $ F $ obey the following estimates \begin{align} |\nabla\Pbb M^{0}\Pbb^{\perp}|_{\op}+ |\nabla\Pbb^{\perp} M^{0}\Pbb|_{\op}&\lesssim e^{-T} \Ord(|X|_{g})+\Ord(|X|_{g}^{2}+|z|_{g}^{2}+|\nabla z|_{g}^{2}),\label{M0conj}\\ |\Pbb (M^{0})^{-1}\Pbb^{\perp}|_{\op}+ |\Pbb^{\perp} (M^{0})^{-1}\Pbb|_{\op}&\lesssim e^{-T} \Ord(|X|_{g})+\Ord(|X|_{g}^{2}+|z|_{g}^{2}),\notag\\ |\Pbb^{\perp}(M^{a}\nabla_{a}Z-\Bc \Pbb Z-F)\Pbb|+&\notag\\ |\Pbb^{\perp}(M^{a}\nabla_{a}Z-\Bc \Pbb Z-F)\Pbb^{\perp}|&\lesssim \epsilon e^{-\lambda T}+ \Ord(|X|_{g}^{2}+|z|_{g}^{2}+|\nabla z|_{g}^{2}),\notag\\ | M^{a}\nabla_{a}Z-\Bc \Pbb Z-F|&\lesssim \epsilon e^{-\lambda T}+\Ord(|z|_{g}+|X|_{g}^{2}),\notag\\ |\Pbb^{\perp} C^{a}\nabla_{a}Z|&= |\Pbb^{\perp} C^{a} \Pbb \nabla_{a}Z|\lesssim \Ord(|\nabla z|_{g}).\notag \end{align} \end{lem} \begin{proof} Recall from Remark \ref{rem:conjeffect} that conjugating matrices with $ \Pbb $ and $ \Pbb^{\perp} $ picks out specific matrix elements. Considering the explicit form of $ M^{0} $ in \eqref{M0}, we see that the leading order term is given by $ \text{diag}(1,K^{-1}g) $, which is annihilated by the covariant derivative. Hence \eqref{M0conj} follows. Inspecting \eqref{Mk}, we see that the leading order term in $ M^{a} $ is of order $ \Ord(|z|_{g}) $ and is picked out by the conjugation $ \Pbb M^{a}\Pbb $. In conjunction with \eqref{estF}, we conclude that the statements about $ M^{a}\nabla_{a}Z-\Bc \Pbb Z-F $ above are true. Note that we treated the $ \nabla Z $ terms as negligible using Sobolev embedding. The rest of the statements follow in a similar fashion. \end{proof} \begin{lem}\label{auxhigherorder} Let $1 \leq \ell\leq k-1 $. Under \eqref{bootstrap}, there is a constant $C>0$ such that the following estimates hold \begin{equation}\label{inhomhigh}\begin{split} \int_{M}\langle \nabla^{\ell}Z,(\partial_{T}M^{0}-\nabla_{a}M^{a})\nabla^{\ell}Z\rangle &\leq C\epsilon \dot{E^{p}}_{\ell}(Z)+C\epsilon E^{p}_{k-1}(Z)+C\epsilon e^{-\lambda T},\\ \int_{M}\langle \nabla^{\ell}Z,G^{\ell}\rangle &\leq C\epsilon e^{-\lambda T}E_{k-1}(Z)^{\frac{1}{2}}+C\epsilon E^{p}_{k-1}(Z)+B^{\ell}, \end{split} \end{equation} where the problematic error term $ B^{\ell} $ is defined as \begin{align*} B^{\ell}&:= \int_{M}\langle \Pbb \nabla^{\ell}Z,\Pbb C^{a}\Pbb^{\perp}[\nabla^{\ell},\nabla_{a}]Z\rangle+\langle \Pbb^{\perp} \nabla^{\ell}Z,\Pbb^{\perp} C^{a}\Pbb[\nabla^{\ell},\nabla_{a}]Z\rangle. \end{align*} \end{lem} \begin{rem} The error terms $ B^{\ell} $ involve the matrix $C^a$, defined in \eqref{Milne-Ckeq}. The off-diagonal structure of $C^a$ causes a `mixing' effect in $ B^{\ell} $ by leading to terms of the type $ \Ord(\psi \cdot |z|_{g}) $. Such terms cannot be controlled in terms of the energy $ E^{p} $, since $\psi$ is not controlled by the parallel energy. Furthermore, note that these terms arise due to the curved geometry and are not present in a setting in which derivatives commute. Hence, these terms require additional treatment, given below in the form of modified energies. \end{rem} \begin{proof} The proof of the divergence estimate is essentially the same as in the lowest order case. However, we repeat the calculation and add some details.
Using the identity from Lemma \ref{est:matrices}, we compute \begin{align*} \int_{M}&\langle \nabla^{\ell}Z, (\partial_{T}M^{0}-\nabla_{a}M^{a})\nabla^{\ell}Z\rangle \\ &=\int_{M}\langle \Pbb \nabla^{\ell}Z, \Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb\Pbb\nabla^{\ell} Z\rangle +\int_{M}\langle \Pbb^{\perp}\nabla^{\ell} Z, \Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb\Pbb \nabla^{\ell}Z\rangle \\ &\qquad +\int_{M}\langle \Pbb \nabla^{\ell}Z, \Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb^{\perp}\Pbb^{\perp}\nabla^{\ell} Z\rangle +\int_{M}\langle \Pbb^{\perp} \nabla^{\ell}Z, \Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a})\Pbb^{\perp}\Pbb^{\perp} \nabla^{\ell}Z\rangle\\ &\lesssim \dot{E^{p}}_{\ell}(Z)\|\Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb\|_{L^{\infty}} +\dot{E}_{\ell}(Z) \|\Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb^{\perp}\|_{L^{\infty}}\\ &\qquad +\int_{M}\big(|\Pbb^{\perp} (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb|_{\op} +|\Pbb (\partial_{T}M^{0}-\nabla_{a}M^{a}) \Pbb^{\perp}|_{\op}\big)|\Pbb \nabla^{\ell} Z| |\nabla^{\ell}Z|. \end{align*} Performing a similar estimate as in the proof of Proposition \ref{est:zeroenergy}, we find \begin{equation*} \int_{M}\langle \nabla^{\ell}Z, (\partial_{T}M^{0}-\nabla_{a}M^{a})\nabla^{\ell}Z\rangle\lesssim \epsilon \dot{E^{p}}_{\ell}(Z)+\epsilon E^{p}_{k-1}(Z)+\epsilon e^{-\lambda T}. \end{equation*} Deriving \eqref{inhomhigh} is slightly more involved. We start by establishing a statement for a general tensor-valued vector $ V $ (i.e.~the components of $ V $ are tensor fields on $ M $): \begin{equation}\label{200}\begin{split} \int_{M}&\langle \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}M^{0}](M^{0})^{-1}V\rangle \\ &=\int_{M}\langle \Pbb\nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}M^{0}](M^{0})^{-1} \Pbb V\rangle+\int_{M}\langle \Pbb^{\perp}\nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}M^{0}](M^{0})^{-1} \Pbb ^{\perp}V\rangle\\ &\qquad+\int_{M}\langle \Pbb\nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}\Pbb M^{0}]( \Pbb+ \Pbb^{\perp})(M^{0})^{-1} \Pbb^{\perp}V\rangle\\ &\qquad +\int_{M}\langle \Pbb^{\perp}\nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}\Pbb^{\perp}M^{0}]( \Pbb+ \Pbb^{\perp})(M^{0})^{-1} \Pbb V\rangle. \end{split}\end{equation} The first line on the right hand side of \eqref{200} can be estimated by \begin{equation*} \|\Pbb\nabla^{\ell}Z\|_{L^{2}}\|\nabla M^{0}\|_{H^{k-2}}\|\Pbb V\|_{H^{k-2}}+\|\nabla^{\ell}Z\|_{L^{2}}\|\nabla M^{0}\|_{H^{k-2}}\|\Pbb^{\perp} V\|_{H^{k-2}}, \end{equation*} where we used standard estimates for the commutator (see e.g.~\cite[Theorem A.3]{beyeroliynyk}). The second and third terms on the right hand side of \eqref{200} can be expanded as \begin{align*} &\int_{M}\langle \Pbb\nabla^{\ell}Z, \Bc \big([\nabla^{\ell},\Bc^{-1}\Pbb M^{0}\Pbb^{\perp}](M^{0})^{-1} \Pbb^{\perp}V+[\nabla^{\ell},\Bc^{-1}\Pbb M^{0}]\Pbb(M^{0})^{-1} \Pbb^{\perp}V\big)\rangle\\ &\qquad+\int_{M}\langle \Pbb^{\perp}\nabla^{\ell}Z, \Bc \big([\nabla^{\ell},\Bc^{-1}\Pbb^{\perp} M^{0}\Pbb](M^{0})^{-1} \Pbb V+[\nabla^{\ell},\Bc^{-1}\Pbb^{\perp} M^{0}]\Pbb^{\perp}(M^{0})^{-1} \Pbb V\big)\rangle\\ &\lesssim\|\Pbb \nabla^{\ell}Z\|_{L^{2}}\left(\|\nabla \Pbb M^{0}\Pbb^{\perp}\|_{H^{k-2}}\|(M^{0})^{-1}\Pbb^{\perp }V\|_{H^{k-2}}+\|\nabla\Pbb M^{0}\|_{H^{k-2}}\|\Pbb (M^{0})^{-1}\Pbb^{\perp}\|_{H^{k-2}}\|V\|_{H^{k-2}}\right)\\ &\qquad +\| \nabla^{\ell}Z\|_{L^{2}}\left(\|\nabla \Pbb^{\perp} M^{0}\Pbb\|_{H^{k-2}}\|(M^{0})^{-1}\Pbb V\|_{H^{k-2}}+\|\nabla\Pbb^{\perp} M^{0}\|_{H^{k-2}}\|\Pbb^{\perp} (M^{0})^{-1}\Pbb\|_{H^{k-2}}\|V\|_{H^{k-2}}\right).
\end{align*} Utilizing these estimates for $ V=(M^{0})^{-1}\big((M^{a}+C^{a})\nabla_{a}Z-\Bc\Pbb Z -F\big) $, in conjunction with Lemma \ref{auxestmatrices}, we conclude that \begin{equation*} \int_{M}\langle \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}M^{0}](M^{0})^{-1}\big((M^{a}+C^{a})\nabla_{a}Z-\Bc\Pbb Z -F\big)\rangle\lesssim \epsilon E^{p}_{k-1}(Z) + \epsilon e^{-\lambda T} E_{k-1}(Z)^{\frac{1}{2}}. \end{equation*} Now we tackle the rest of the inhomogeneity described in $ G^{\ell} $: \begin{align*} \int_{M}&\langle \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}(C^{a}+M^{a})]\nabla_{a}Z\rangle\\ &=\int_{M}\langle \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}C^{a}]\nabla_{a}Z\rangle+\int_{M}\langle \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}M^{a}]\nabla_{a}Z\rangle\\ &=\int_{M}\langle \Pbb \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}\Pbb M^{a}(\Pbb+\Pbb^{\perp})]\nabla_{a}Z\rangle+\int_{M}\langle\Pbb^{\perp} \nabla^{\ell}Z, \Bc [\nabla^{\ell},\Bc^{-1}\Pbb^{\perp}M^{a}(\Pbb+\Pbb^{\perp})]\nabla_{a}Z\rangle\\ &\lesssim \|\Pbb \nabla^{\ell}Z\|_{L^{2}}\|\nabla \Pbb Z\|_{H^{k-2}}\|\nabla Z\|_{H^{k-2}}+\|\nabla (\Pbb Z)\|_{H^{k-2}}^{2}\|\nabla Z\|_{H^{k-2}}+\|X\|_{H^{k-2}}^{2}\|Z\|_{H^{k-2}}^{2}, \end{align*} where we used that $ M^{a} $ only has leading order terms in the $ \Pbb M^{a}\Pbb $ component, as seen in \eqref{Mk}, and the last term stems from the error terms in $ M^{a} $. Furthermore, using \eqref{estF}, we have that \begin{align*} \int_{M}\langle \nabla^{\ell}Z,\Bc \nabla^{\ell}(\Bc^{-1}F)\rangle\lesssim \|Z\|_{H^{k-1}}\left(\|\Pbb Z\|_{H^{k-1}}^{2}+\|X\|_{H^{k-1}}^{2}+\epsilon e^{- T}\right). \end{align*} Lastly, using $\Pbb C^{a} \Pbb= \Pbb^{\perp} C^{a} \Pbb^{\perp}=0$, we find that \begin{align*} \int_{M}\langle \nabla^{\ell}Z,(C^{a}+M^{a})[\nabla^{\ell},\nabla_{a}]Z\rangle &=\int_{M}\langle (\Pbb+\Pbb^{\perp}) \nabla^{\ell}Z,(C^{a}+M^{a})(\Pbb+\Pbb^{\perp})[\nabla^{\ell},\nabla_{a}]Z\rangle\\ &=\int_{M}\langle \Pbb \nabla^{\ell}Z,\Pbb M^{a}\Pbb[\nabla^{\ell},\nabla_{a}]Z\rangle+\|X\|_{H^{k-1}}^{2}\|Z\|_{H^{k-1}}^{2}+B^{\ell} \\& \leq C\epsilon\|\Pbb Z\|_{H^{k-1}}^{2} +C \|X\|_{H^{k-1}}^{2}\|Z\|_{H^{k-1}}^{2}+B^{\ell}. \end{align*} Note that we estimated the non-leading order terms of $ M^{a} $ by $ C \|X\|_{H^{k-1}}^{2}\|Z\|_{H^{k-1}}^{2} $. Applying the bootstrap assumptions, we see that these terms are exponentially decaying. The last term to control in $G^\ell$ is due to the geometry: \begin{align*} \int_{M}\langle \nabla^{\ell}Z, M^{0} [\nabla^{\ell},\partial_{T}]Z\rangle&\lesssim \|\partial_{T}\Gamma\|_{H^{k-1}}E_{k-1}(Z). \end{align*} The norm of the time derivative of the Christoffel symbols can simply be estimated by $ \epsilon e^{-\lambda T} $ by applying the equations of motion of the geometry in \eqref{eq:EoM-pT-g-Sigma}. \end{proof} \subsection{Corrected Fluid Energies}\label{corr-en} We now introduce a corrected energy which allows us to compensate for the problematic terms $ B^{\ell} $. We start by giving explicit expressions for the first and second order corrected energies and show how these remedy the problem. Then, we present a process to extend these concepts to higher orders. \begin{Def}[First and second order corrected energies] \label{corrected12} Define corrected first order energies by \begin{align*} \tilde{E}_{1}(Z) &:= E_{1}(Z)-\frac{1}{9}\int_{M} \langle \Pbb Z, M^{0} \Pbb Z\rangle,\qquad \dot{\tilde{E}}_{1}(Z) := \dot{E}_{1}(Z)-\frac{1}{9}\int_{M} \langle \Pbb Z, M^{0} \Pbb Z\rangle.
\end{align*} We define the corrected second order energies $\tilde{E}_{2}$ and $\dot{\tilde{E}}_{2}$ by \begin{align*} \tilde{E}_{2}(Z) &:= E_{2}-\frac{4}{9}\int_{M} \langle \Pbb \nabla Z,M^{0} \Pbb\nabla Z\rangle-\frac{4}{81}\int_{M} \langle \Pbb Z, M^{0} \Pbb Z\rangle, \\ \dot{\tilde{E}}_{2}(Z) &:= \dot{E}_{2}-\frac{4}{9}\int_{M} \langle \Pbb \nabla Z,M^{0} \Pbb\nabla Z\rangle-\frac{4}{81}\int_{M} \langle \Pbb Z, M^{0} \Pbb Z\rangle. \end{align*} \end{Def} \begin{rem} Note that while the coefficients of the correction terms are negative, their modulus is less than $1/2$. Hence, the functionals $ \tilde{E} $ are equivalent to the energies $ E $ and thus also to the Sobolev norms of the respective order. \end{rem} With these new energies at our disposal we are now able to formulate improved estimates. \begin{prop}\label{higherorderestimate} For $ \ell=1,2 $ we have \begin{equation*} \begin{aligned} \partial_{T}\tilde{E}_{\ell}(Z)\leq (3-K^{-1}+C\epsilon)E^{p}_{\ell}(Z)+C\epsilon e^{-\lambda T}+ C\epsilon E^{p}_{k-1}(Z). \end{aligned} \end{equation*} \end{prop} \begin{proof} First, note that by using the explicit form of $C^a$, the critical term $B^\ell$ can be written as \begin{align*} B^\ell &=\int_{M}\nabla^{\ell}z^{a}[\nabla^{\ell},\nabla_{a}]\psi+\nabla^{\ell}\psi [\nabla^{\ell},\nabla_{a}]z^{a}. \end{align*} We start by considering the first order case $\ell=1$. Using Lemma \ref{auxhigherorder} as well as (\ref{higherordercalc}), we find \begin{equation*} \begin{aligned} \partial_{T}\dot{\tilde{E}}_{1} \leq (3-K^{-1}+C\epsilon)\dot{E^{p}}_{1}(Z)+C\epsilon e^{-\lambda T}+ C\epsilon E^{p}_{k-1}(Z)+B^{1}-\frac{2}{9}\int_{M} \langle \Pbb Z, M^{0}\partial_{T} \Pbb Z\rangle. \end{aligned} \end{equation*} Note that we used that $ |\partial_{T}M^{0}|\lesssim \epsilon e^{-\lambda T}+E^{p}_{k-1} $. When $\ell=1 $, we obtain \begin{align}\label{22a} B^1 &= \int_{M}g^{mn} \nabla_{m}\psi [\nabla_{n},\nabla_{a}]z^{a}=\int_{M}g^{mn} \nabla_{m}\psi \text{Riem}[g]^{a}{}_{bna}z^{b} = -\int_{M}g^{mn} \nabla_{m}\psi \text{Ric}[g]_{bn}z^{b} . \end{align} We first consider \eqref{22a}, when all the geometric quantities are with respect to $\gamma$ instead of $g$: \begin{align*} - \int_{M}\gamma^{mn} \partial_{m}\psi \text{Ric}[\gamma]_{nb}z^{b} &= \frac29 \int_{M} \gamma^{mn} \; \gamma_{bn}z^{b}\nabla_{m}\psi = \frac{2}{9}\int_{M}z^{m}\nabla_{m}\psi . \end{align*} So we find that the error term is \begin{align*} &\int_{M} g^{mn} (\text{Ric}[g]_{bn}-\text{Ric}[\gamma]_{bn})z^{b}\partial_{m}\psi\mu_{g} + \int_{M} (g-\gamma)^{mn} \text{Ric}[\gamma]_{bn}z^{b}\partial_{m}\psi\mu_{g} \lesssim \|g-\gamma\|_{H^{2}}\int_{M}|z\nabla\psi| \lesssim \epsilon e^{-T}, \end{align*} where we used Lemma \ref{Riemannestimate}. Employing (\ref{Milne:fluid-PDE}), we further calculate \begin{align*} -\frac{2}{9}\int_{M} \langle \Pbb Z, \partial_{T} \Pbb Z\rangle&=-\frac{2}{9}\int_{M} \langle \Pbb Z, \Pbb\big((C^{a}+M^{a})\nabla_{a}Z-\Bc \Pbb Z-F\big)\rangle\\ &\leq \underbrace{(K^{-1}-3)\frac{2}{9}\int_{M}\langle \Pbb Z, \Pbb Z \rangle}_{\eqqcolon(*)}-\frac{2}{9}\int_{M}z^{m}\nabla_{m}\psi +\epsilon \Ord(|z|_{g}^{2}+e^{-\lambda T}). \end{align*} Note that the positive term $ (*) $ from $ \Bc $ is not a problem, since Proposition \ref{est:zeroenergy} gives us the negative term $ (3-K^{-1}+\epsilon) E^{p}(Z) $ in the estimate for $ E_{0}(Z) $ and it is easy to see that \begin{equation*} (3-K^{-1}+C\epsilon) E^{p}(Z)+(K^{-1}-3)\frac{2}{9}\int_{M}\langle \Pbb Z, \Pbb Z \rangle\leq C(3-K^{-1}+C\epsilon) E^{p}(Z), \end{equation*} where $ C>0 $.
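For orientation, the mechanism behind the first order correction can be recorded schematically (this merely restates the computation above): modulo exponentially decaying error terms, \begin{equation*} B^{1}\simeq \frac{2}{9}\int_{M}z^{m}\nabla_{m}\psi \quad\text{and}\quad \partial_{T}\Big(-\frac{1}{9}\int_{M}\langle \Pbb Z, M^{0}\Pbb Z\rangle\Big)\simeq (*)-\frac{2}{9}\int_{M}z^{m}\nabla_{m}\psi, \end{equation*} so the correction term in Definition \ref{corrected12} removes the problematic term $B^{1}$ from the energy estimate at the cost of the term $(*)$, which is absorbed as explained above.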
Next, we compute the problematic term in the case $ \ell=2 $: \begin{align*} B^2 = \int_{M}g^{mn}g^{rs}& \nabla_{m}\nabla_{r}\psi [\nabla_{n}\nabla_{s},\nabla_{a}]z^{a}+g^{mn}g^{rs} \nabla_{m}\nabla_{r}z^{a} [\nabla_{n}\nabla_{s},\nabla_{a}]\psi\\ &=\int_{M}g^{mn}g^{rs} \nabla_{m}\nabla_{r}\psi (\nabla_{n}[\nabla_{s},\nabla_{a}]z^{a}+[\nabla_{n},\nabla_{a}]\nabla_{s}z^{a})+g^{mn}g^{rs} \nabla_{m}\nabla_{r}z^{a} [\nabla_{n},\nabla_{a}]\nabla_{s}\psi\\ &=\int_{M}g^{mn}g^{rs} \nabla_{m}\nabla_{r}\psi \big(\nabla_{n}(\text{Riem}^{a}{}_{bsa}z^{b})+\text{Riem}^{a}{}_{bna}\nabla_{s}z^{b}-\text{Riem}^{b}{}_{sna}\nabla_{b}z^{a}\big)\\ &\qquad-\int_{M}g^{mn}g^{rs} \nabla_{m}\nabla_{r}z^{a} \text{Riem}^{b}{}_{sna}\nabla_{b}\psi. \end{align*} We proceed by replacing the Riemann tensor with the one on the background and estimating the result by $ \|g-\gamma\|_{H^{k-1}} $ as before. Furthermore, by changing the measure to $ \mu_{\gamma} $ and the connection to $ \hat{\nabla} $, we pick up exponentially decaying error terms as in the case of $ \ell=1 $. We continue the calculation on the background: \begin{align*} \int_{M}\gamma^{mn}\gamma^{rs}& \hat{\nabla}_{m}\hat{\nabla}_{r}\psi \big(\frac{2}{9}\gamma_{bs}\hat{\nabla}_{n}z^{b}+\frac{2}{9}\gamma_{bn}\hat{\nabla}_{s}z^{b}-\frac{1}{9}( \gamma^{b}{}_{a}\gamma_{sn}-\gamma^{b}{}_{n}\gamma_{sa})\hat{\nabla}_{b}z^{a}\big)\\ &\qquad-\frac{1}{9}\int_{M}\gamma^{mn}\gamma^{rs}\hat{\nabla}_{m}\hat{\nabla}_{r}z^{a}(\gamma^{b}{}_{a}\gamma_{sn}-\gamma^{b}{}_{n}\gamma_{sa})\hat{\nabla}_{b}\psi\\ &=\frac{5}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{b}\psi\hat{\nabla}_{n}z^{b}-\frac{1}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{n}\psi\hat{\nabla}_{b}z^{b}-\frac{1}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{n}z^{b}\hat{\nabla}_{b}\psi \\&\quad +\frac{1}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{a}z^{a}\hat{\nabla}_{n}\psi=\frac{6}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{b}\psi\hat{\nabla}_{n}z^{b}+\frac{2}{9}\int_{M}\gamma^{mn}z^{b}\hat{\nabla}_{b}\hat{\nabla}_{m}\hat{\nabla}_{n}\psi \\ &=\frac{6}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{b}\psi\hat{\nabla}_{n}z^{b}+\frac{2}{9}\int_{M}\gamma^{mn}z^{b}\big(\hat{\nabla}_{m}\hat{\nabla}_{b}+[\hat{\nabla}_{b},\hat{\nabla}_{m}]\big)\hat{\nabla}_{n}\psi \\ &=\frac{4}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{b}\psi\hat{\nabla}_{n}z^{b}-\frac{2}{9}\int_{M}\gamma^{mn}z^{b}\text{Riem}[\gamma]^{a}{}_{nbm}\hat{\nabla}_{a}\psi \\ &=\frac{4}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{b}\psi\hat{\nabla}_{n}z^{b}+\frac{4}{81}\int_{M}z^{b}\hat{\nabla}_{b}\psi. \end{align*} We also see that, utilizing Lemma \ref{auxhigherorder}, \begin{align*} -\frac{4}{9}&\int_{M}g^{mn}\langle \Pbb \nabla_{m}Z,M^{0}\partial_{T}\Pbb \nabla_{n}Z\rangle =-\frac{4}{9}\int_{M}g^{mn}\langle \Pbb \nabla_{m}Z,\Pbb \left((C^{a}+M^{a})\nabla_{a}\nabla_{n}Z-\Bc \Pbb \nabla_{n}Z+G_{m}\right)\rangle \\ &\lesssim (K^{-1}-3)\frac{4}{9}\int_{M}g^{mn}\langle \Pbb \nabla_{m}Z,\Pbb \nabla_{n}Z \rangle -\frac{4}{9}\int_{M}g^{mn}\nabla_{m}z^{b}\nabla_{n}\nabla_{b}\psi\\ &\qquad-\frac{4}{9}\int_{M}g^{mn}\underbrace{\langle \Pbb \nabla_{m}Z, \Pbb C^{a}[\nabla_{a},\nabla_{n}]Z\rangle}_{=0}+C \epsilon e^{-\lambda T}, \end{align*} where, in the last line, we used that $ \Pbb C^{a}[\nabla_{a},\nabla_{n}]Z=(0,[\nabla_{i},\nabla_{n}]\psi)^{\tr}=0 $.
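For bookkeeping, the computation above can be summarized as follows: modulo exponentially decaying error terms, \begin{equation*} B^{2}\simeq \frac{4}{9}\int_{M}\gamma^{mn}\hat{\nabla}_{m}\hat{\nabla}_{b}\psi\,\hat{\nabla}_{n}z^{b}+\frac{4}{81}\int_{M}z^{b}\hat{\nabla}_{b}\psi, \end{equation*} and these are precisely the two contributions compensated by the correction terms with coefficients $\frac{4}{9}$ and $\frac{4}{81}$ in Definition \ref{corrected12}, in complete analogy with the case $\ell=1$.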
In the same spirit as in the case of $ \ell=1 $ we can conclude that the term introduced by $ \Bc $ does not affect the overall estimate as it can be absorbed into the negative definite term $ (3-K^{-1}+C\epsilon)E^{p}_{0} $. Summing our results for $ \dot{\tilde{E}}_{1} $ and $ \dot{\tilde{E}}_{2} $ over $ \ell\leq 1 $ and $ \ell\leq 2 $ respectively, together with Proposition \ref{est:zeroenergy}, yields the desired result. \end{proof} \begin{prop}[Higher order corrections]\label{finalenergy} Corrected energies $ \tilde{E}_{\ell} $ for all orders $0\leq \ell\leq k-1 $ that are equivalent to the energies in Definition \ref{def:energy} can be constructed and satisfy the estimate in Proposition \ref{higherorderestimate}, i.e. \begin{equation*} \begin{aligned} \partial_{T}\tilde{E}_{\ell}(Z)\lesssim (3-K^{-1}+\epsilon)E^{p}_{\ell}(Z)+\epsilon e^{-\lambda T}+ \epsilon E^{p}_{k-1}(Z). \end{aligned} \end{equation*} \end{prop} \begin{rem} Constructing these energies, while in principle straightforward, is a tedious process and does not add in a meaningful way to the arguments. Hence, we omit an explicit construction and instead give a recipe for constructing them and prove that they satisfy the above estimate. \end{rem} \begin{proof} The ideas presented in the proof of Proposition \ref{higherorderestimate} can be generalized to higher orders in the following fashion. Schematically, at order $ \ell $, the error $ B $ takes the form \begin{equation*} |\int_{M}\nabla^{\ell}z^{a}[\nabla^{\ell},\nabla_{a}]\psi+\nabla^{\ell}\psi [\nabla^{\ell},\nabla_{a}]z^{a}|\lesssim |\int_{M}\nabla^{\ell}z\ast\nabla^{\ell-1}\psi +\nabla^{\ell}\psi\ast \nabla^{\ell-1}z|\lesssim |\int_{M}\nabla^{\ell}\psi\ast\nabla^{\ell-1}z|, \end{equation*} where we integrated by parts in the last step. By $ \ast $ we denote an arbitrary contraction $ T_{1}\ast T_{2} $ with the metric $ g $. Furthermore, note that $ z $ here is always contracted with one of the covariant derivatives, so these tensors are indeed of the same valence. In the expression above there are terms of the form $ \nabla\text{Riem} $. We tacitly assumed these to be negligible, since we can always replace any geometric quantities like $ \text{Riem} $ or $ \nabla $ by their background counterparts and $ \hat{\nabla}\text{Riem}[\gamma]=0 $ by the Ricci decomposition as done in Lemma \ref{Riemannestimate}. Now the terms in this $ \ast $-product may involve contractions over multiple derivatives of $ z $ or $ \psi $, e.g.~$ \gamma^{mn}\nabla_{m}\nabla_{n}z $, or divergence-like terms, e.g.~$ \nabla_{m}\nabla_{n}z^{m} $. All the terms are of order $ \ell $ and $ \ell-1 $. In these cases we can always interchange derivatives, in which case the curvature produces terms of order e.g.~$ \nabla^{\ell-2}\psi\nabla^{\ell-1}z $, which can then be rearranged using integration by parts. After all these manipulations, one ends up with an error of the type \begin{align*} \int_{M}c_{\ell\ell}\nabla^{\ell-1}\nabla_{a}\psi\nabla^{\ell-1}z^{a}+c_{\ell\ell-1}\nabla^{\ell-2}\nabla_{a}\psi\nabla^{\ell-2}z+\dots+c_{\ell1}z^{a}\nabla_{a}\psi . \end{align*} To illustrate this idea in practice, we note that the explicit constants $ c_{22}= \frac{4}{9}$ and $ c_{21}=\frac{4}{81} $ in the case of $ \ell=2 $ were derived explicitly in the proof of Proposition \ref{higherorderestimate}. In the worst case one of these constants, w.l.o.g.\ let us say $ c_{\ell \ell} $, is large and positive. This forces us to add a negative definite term to our energy.
However, we may put a small coefficient in front of the top order term in our energy, i.e.~ \begin{align*} \delta_{\ell}\int_{M}\langle \nabla^{\ell}Z,M^{0}\nabla^{\ell}Z\rangle, \end{align*} so that the coefficient appears as $ c_{\ell \ell}\delta_{\ell}\ll1 $. Thus, the correction term needed is \begin{align*} -c_{\ell \ell}\delta_{\ell}\int_{M}\langle \Pbb \nabla^{\ell-1}Z, \Pbb M^{0}\nabla^{\ell-1}Z\rangle, \end{align*} which preserves the positive definiteness and coercivity of the energy. After differentiating this term in time and employing the equations of motion, another potential problem is the positive definite term \begin{align*} c_{\ell \ell}\delta_{\ell}\int_{M}\langle \Pbb \nabla^{\ell-1}Z, \Bc \Pbb \nabla^{\ell-1}Z\rangle. \end{align*} However, this does not pose a threat for closing the energy estimates, since $ \delta_{\ell} $ is fixed and as small as necessary, and hence in the end the term can be absorbed by the negative definite term $ (3-K^{-1})E^{p}_{\ell-1} $, as $ 3-K^{-1}<0 $ is fixed. Another potential problem is posed by the term generating the error term $ B $ initially. We end up with a term of the form \begin{equation*} -c_{\ell \ell}\delta_{\ell}\int_{M}\langle \Pbb \nabla^{\ell-1}Z,\Pbb C^{a}[\nabla^{\ell-1},\nabla_{a}]Z\rangle=-c_{\ell \ell}\delta_{\ell}\int_{M}\nabla^{\ell-1}z^{i}[\nabla^{\ell-1},\nabla_{i}]\psi. \end{equation*} However, we see that this again straightforwardly falls into our sum \begin{align*} \int_{M}\tilde{c}_{\ell\ell-1}\nabla^{\ell-1}\psi\nabla^{\ell-2}z+\dots+\tilde{c}_{\ell 1}\nabla\psi z \end{align*} with new constants $ \tilde{c}_{\ell i} $. The same process may now be repeated until the lowest order is reached. In this case the term generated by the matrix $ C $ simply vanishes. \end{proof} \section{Geometric energy}\label{EinEulBR-3} \begin{lem}[Matter source terms]\label{mattersources} In the case of a relativistic fluid stress energy tensor \eqref{EnMom}, the rescaled matter quantities take the form \begin{align*} E&=\rho_{0}e^{a(1+K)}e^{-3KT}\left((1+K)(\hat{v}^{\tau})^{2}N^{2}+K\right), \qquad j^{a}=\rho_{0}e^{a(1+K)}e^{(1-3K)T}N(1+K)\hat{v}^{\tau}v^{a},\\ \eta&=\tilde{E}+\rho_{0}e^{a(1+K)}e^{-3KT}\left((1+K)(|X|_{g}^{2}(\hat{v}^{\tau})^{2}-2 g_{ab}X^{a}v^{b}\hat{v}^{\tau}+|v|_{g}^{2})+3K\right),\\ S_{ab}&=\rho_{0}e^{a(1+K)}e^{(-1-3K)T} \big((1+K)\big(X_{a}X_{b}(\hat{v}^{\tau})^{2}- X_{a}\hat{v}^{\tau}v_{b}-X_{b}\hat{v}^{\tau}v_{a}+v_{a}v_{b}\big)+K(2+K)\big).
\end{align*} \end{lem} \begin{proof} Using Definition \ref{matterquant}, a straightforward computation shows that \begin{align*} \tilde{E}&=\tilde{T}^{\mu\nu}n_{\mu}n_{\nu}=\tilde{\rho}\left((1+K)(\bar{v}^{\tau})^{2}\tilde{N}^{2}+K\bar{g}^{00}\tilde{N}^{2}\right),\\ \tilde{j}_{a}&=\tilde{N}\tilde{T}^{0\mu}\bar{g}_{\mu a}=\tilde{N}\left(\tilde{\rho}(1+K)\bar{v}^{\tau}\bar{v}^{\mu}+K\tilde{\rho}\bar{g}^{0\mu}\right)\bar{g}_{a\mu},\\ \tilde{\eta}&=\tilde{E}+\tilde{g}^{ab}\tilde{T}_{ab}=\tilde{g}^{ab}\bar{g}_{a\mu}\bar{g}_{b\nu}\left((1+K)\tilde{\rho}\bar{v}^{\mu}\bar{v}^{\nu}+K\tilde{\rho}\bar{g}^{\mu\nu}\right)\\ &=\tilde{E}+(1+K)\tilde{\rho}\tau^{-2}\tilde{g}^{ab}\left(X_{a}X_{b}(\hat{v}^{\tau})^{2}- X_{a}\hat{v}^{}v_{b}- X_{b}\hat{v}^{\tau}v_{a}+v_{a}v_{b}\right)+3K\tilde{\rho},\\ \tilde{S}_{ab}&=\tilde{T}_{ab}-\frac{1}{2}\tr_{\bar{g}}\tilde{T}\cdot \tilde{g}_{ab}=\tilde{T}_{ab}-\frac{1}{2}\bar{g}_{\mu\nu}\tilde{T}^{\mu\nu}\cdot\tilde{g}_{ab}\\ &=K\tilde{\rho}\tau^{-2}g_{ab}+(1+K)\tilde{\rho}\tau^{-2}\left(X_{a}X_{b}(\hat{v}^{\tau})^{2}- X_{a}\hat{v}^{\tau}v_{b}- X_{b}\hat{v}^{\tau}v_{a}+v_{a}v_{b}\right)\\ &\qquad-2\tau^{-2}K\tilde{\rho}g_{ab}+2\tilde{\rho}(1+K)\tau^{-2}g_{ab}. \end{align*} Furthermore, inverting $ \zeta $ using \eqref{zeta:dynamic}, we find \begin{align*} \tilde{\rho} = \rho_{0}e^{\zeta(1+K)}(-\tau)^{3(1+K)}. \end{align*} Using this relation, we end up with the desired results. Note that we used $ \tilde{j}^{a}=\tilde{g}^{ab}\tilde{j}_{a}=(-\tau)^{2}g^{ab}\tilde{j}_{ab} $. \end{proof} \begin{lem}[Estimates on the matter variables]\label{matterest} The matter components obey the following estimates \begin{align*} |\tau|\|\eta\|_{H^{k-1}}+|\tau|^{2}\|j\|_{H^{k-1}}+|\tau|\|\partial_{T}\eta\|_{H^{k-2}}+|\tau|\|\partial_{T}j\|_{H^{k-2}}&\lesssim \epsilon e^{-(1+3K)T},\\ |\tau|\|S\|_{H^{k-1}}&\lesssim \epsilon e^{-(2+3K)T}. \end{align*} \end{lem} \begin{proof} Using the expressions for the matter components given in Lemma \ref{mattersources}, a straightforward application of Sobolev estimates leads to the following: \begin{equation*} |\tau|\|\eta\|_{H^{k-1}}\lesssim |\tau|^{1+3K}\|e^{a(\psi,|z|_{g})}(\hat{v})^{2}N^{2}\|_{H^{k-1}}\lesssim |\tau|^{1+3K}\|Z\|_{H^{k-1}}\|(\hat{v})^{2}\|_{H^{k-1}}\|N^{2}\|_{H^{k-1}}\lesssim \epsilon e^{-(1+3K)T}. \end{equation*} The calculation for $ j $ and $ S $ is analogous. However, the estimate of the time derivatives require a more careful analysis. We have the following bounds on the individual bounds \begin{align*} \|\partial_{T}(e^{a(\psi,|z|_{g})})\|_{H^{k-2}}&\lesssim \|D_{1}a\partial_{T}\psi\|_{H^{k-2}}+\|D_{2}a\partial_{T}|z|_{g}^{2}\|_{H^{k-2}}\lesssim \epsilon +\epsilon e^{-\lambda T},\\ \|\partial_{T}\hat{v}\|_{H^{k-2}}&\lesssim \|\partial_{T}N\|_{H^{k-2}}+\|\partial_{T}X\|_{H^{k-2}}+\|\partial_{T}\Pbb Z\|_{H^{k-2}}\lesssim \epsilon+\epsilon e^{-\lambda T},\\ \|\partial_{T}g\|_{H^{k-2}}&\lesssim \epsilon e^{-\lambda T}. \end{align*} These rough estimates straightforwardly imply the estimates of the time derivatives. \end{proof} \subsection{Geometric energy} Since $ \mathscr{L}_{g,\gamma} $ is an elliptic operator on the compact space $M$, it has a discrete spectrum of eigenvalues. On account of a result from \cite{KK-15}, the smallest eigenvalue $ \lambda_{0} $ of the operator $ \mathscr{L}_{g,\gamma} $ satisfies $ \lambda_{0}\geq \tfrac{1}{9} $ and the operator has trivial kernel. 
\begin{Def}[Geometric energy]\label{def:geomtricenergy} We define the constants $ \alpha=\alpha(\lambda_{0},\delta_{\alpha}) $ and $ c_{E}$ as \begin{equation*} \alpha \coloneqq \begin{cases} 1, &\lambda_{0}>\frac{1}{9},\\ 1-\delta_{\alpha}, &\lambda_{0}=\frac{1}{9}, \end{cases} \AND c_{E}\coloneqq \begin{cases} 1, &\lambda_{0}>\frac{1}{9},\\ 9(\lambda_{0}-\xi), &\lambda_{0}=\frac{1}{9}, \end{cases} \end{equation*} where $ \delta_{\alpha}=\sqrt{1-9(\lambda_{0}-\xi)} $ with $ 0<\xi<1 $, which remains to be fixed. Given these constants, we recall from \cite{AnderssonMoncrief:2011, AF20} the geometric energy $ E^{(g)}_{m} $ and the correction term $ \Gamma^{(g)}_{m} $ of order $ m\geq 1 $ as \begin{equation*} E^{(g)}_{m}=\frac{1}{2}\int_{M}\langle 6\Sigma, \mathcal{L}_{g,\gamma}^{m-1}6\Sigma\rangle +\frac{9}{2}\int_{M}\langle (g-\gamma),\mathcal{L}_{g,\gamma}^{m}(g-\gamma)\rangle,\qquad \Gamma^{(g)}_{m}=\int_{M}\langle 6\Sigma, \mathcal{L}_{g,\gamma}^{m-1}(g-\gamma)\rangle. \end{equation*} Finally we define the corrected geometric energy $ \mathcal{E}_{s} $ of order $ s\geq 1 $ as \begin{equation*} \mathcal{E}_{s}:= \sum_{1\leq m\leq s}\big(E^{(g)}_{m}+c_{E}\Gamma^{(g)}_{m}\big). \end{equation*} \end{Def} \section{Proof of global existence and stability} \begin{proof}[Proof of Theorem \ref{thm:Milne_stability}] The smallness condition of the initial data in (\ref{initialsmallness}) implies the existence of a constant $ C_{0} $ such that \begin{equation*} \|N-3\|_{H^{k}}+\|X\|_{H^{k}}+\|\partial_{T}N\|_{H^{k-1}}+\|\partial_{T}X\|_{H^{k-1}}+\tilde{E}_{k-1}(Z)+\mathcal{E}_{k}\leq C_{0}\epsilon. \end{equation*} Employing the inequality of Proposition \ref{finalenergy} yields \begin{equation*} \partial_{T}\tilde{E}_{k-1}(Z)\leq (3-K^{-1}+C\epsilon){E^{p}}_{k-1}(Z)+C\epsilon e^{-\lambda T}. \end{equation*} Hence, if $ \epsilon $ is sufficiently small, and using that $3-K^{-1}+C\epsilon <0$, we have that \begin{equation*} \|\rho\|_{H^{k-1}}+\|u\|_{H^{k-1}}\lesssim \tilde{E}_{k-1}(Z)(T)\leq C\epsilon\int_{T_{0}}^{T}e^{-\lambda s}ds\leq C_{0}\epsilon, \end{equation*} where we potentially redefined $ T_{0} $. Next, we improve the bootstrap on the lapse and shift using an estimate given in \cite[Prop. 17]{AF20} together with our matter estimates from Lemma \ref{matterest}: \begin{align*} \|N-3\|_{H^{k}} + \|X\|_{H^{k}}&\lesssim \|\Sigma\|_{H^{k-2}}^{2}+|\tau|\|\eta\|_{H^{k-2}}+|\tau|^{2}\|Nj\|_{H^{k-2}}+\|g-\gamma\|_{H^{k-1}}^{2} \\&\lesssim \epsilon^{2}e^{-2\lambda T}+\epsilon e^{-(1+3K)T}, \end{align*} which closes the bootstrap assumptions on the lapse and shift made in \eqref{bootstrap}. Similarly, to improve the bootstrap on $\partial_T N, \partial_T X$ we use \cite[Lem. 18]{AF20} (see also the correction in \cite[Prop. 7.2]{BarzegarFajman20pub}) and the matter estimates of Lemma \ref{matterest} to get \begin{align*} \|\partial_{T}N\|_{H^{k-1}}&\lesssim\|\hat{N}\|_{H^{k-1}}+\|X\|_{H^{k-1}}+\|\Sigma\|_{H^{k-2}}^{2}+\|g-\gamma\|_{H^{k-1}}^{2} +|\tau|\|S\|_{H^{k-3}}+|\tau|\|\eta\|_{H^{k-3}} \\&\quad +|\tau|\|\partial_{T}\eta\|_{H^{k-3}} \\&\lesssim \epsilon^2 e^{-2\lambda T} + \epsilon e^{-(1+3K)T}, \intertext{and} \|\partial_{T}X\|_{H^{k-1}}&\lesssim\|\hat{N}\|_{H^{k-3}}+\|\partial_{T}\hat{N}\|_{H^{k-2}}+\|X\|_{H^{k-1}}+\|\Sigma\|_{H^{k-2}}^{2}+\|g-\gamma\|_{H^{k-2}}^{2}+|\tau|^{2}\|j\|_{H^{k-3}}\\ &\qquad+|\tau|^{2}\|\partial_{T}j\|_{H^{k-3}}+|\tau|\|\partial_{T}\eta\|_{H^{k-3}} \\&\lesssim \epsilon^2 e^{-2\lambda T} + \epsilon e^{-(1+3K)T} .
\end{align*} The evolution of the geometric energy is given by \begin{align*} \partial_{T}\mathcal{E}_{k}&\leq -2\alpha \mathcal{E}_{k}+6\mathcal{E}_{k}^{\frac{1}{2}}\|NS\|_{H^{k-1}}+C\mathcal{E}_{k}^{\frac{3}{2}}+C\mathcal{E}_{k}^{\frac{1}{2}}\big(|\tau|\|\eta\|_{H^{k-1}}+|\tau|^{2}\|Nj\|_{H^{k-2}}\big) \\&\leq -2\alpha\mathcal{E}_{k}+6C\mathcal{E}_{k}^{\frac{1}{2}}\epsilon^{2}e^{-(1+3K)T}+C\mathcal{E}_{k}^{\frac{3}{2}}. \end{align*} The first line above is from \cite[Lem. 20]{AF20} and the second uses our matter estimates. This energy estimate can be closed in a similar way as in \cite{AF20}. This amounts to rescaling the energy by an exponential factor depending on $ \delta_{\alpha} $ as introduced in Definition \ref{def:geomtricenergy}. For further details of this construction see \cite[\textsection 9]{AF20}. Also note that the decay in this estimate is stronger due to the additional factor of $ e^{-KT} $. Thanks to \cite[Lem. 19]{AF20}, we have the coercivity \begin{equation*} \|g-\gamma\|_{H^{k}}+\|\Sigma\|_{H^{k-1}}\lesssim \mathcal{E}_{k}. \end{equation*} Finally, future completeness relies on the rate of decay of the perturbation of the unrescaled geometry and matter fields. The exact details are very similar to those given in \cite{AF20} and the conclusion uses the completeness criterion given in \cite{CB_Cotsakis}. \end{proof} \begin{rem}\label{arbitrarygeometry} Note that all the arguments in Section \ref{EinEulBR} and the following sections are not specific to a closed manifold close to the \emph{Milne-geometry} but, except for a few details, carry over in a straightforward manner to a \emph{fixed spatial background geometry}. The main motivation to consider constant negative Einstein-curvature is to guarantee stability of the geometry. However, for an arbitrary non-dynamical spatial background we have to adapt our strategy for the energy corrections. As noted before, these stem from the non-commutativity of the spatial derivatives. Since we lack the luxury of an explicit background we cannot employ the same idea presented in Definition \ref{corrected12}, i.e.~replace $ \text{Ric}[g] $ by $ \text{Ric}[\gamma] $ and exploit the fact that their difference is small. However, since our spatial manifold $ (M,g) $ is fixed and closed, we know that \begin{equation*} 0 \leq |\text{Ric}[g]|_{\op}\leq \alpha \end{equation*} for some $ \alpha>0 $. Recall that the first order error term is proportional to \begin{equation*} \delta_{1}\int_{M}\text{Ric}[g]_{mn}z^{n}\nabla^{m}\psi, \end{equation*} where $ \delta_{1}>0 $ is a possibly small constant that we place in front of the top order part of the energy to scale this error term. We may then correct our energy as \begin{equation*} \tilde{E}_{1}(Z)\coloneqq E_{1}(Z)-\frac{\delta_{1}}{2}\int_{M}\langle\Pbb Z,\text{Ric}[g]M^{0}\Pbb Z\rangle, \end{equation*} which is still equivalent, provided $ \delta_{1} $ is sufficiently small. The time derivatives of $ M^{0} $ and $ \text{Ric}[g] $, as well as additional terms from Lemma \ref{timederivative}, are negligible. Employing the equations of motion as in the proof of Proposition \ref{higherorderestimate}, in addition to the compensating term we also pick up the additional term \begin{equation*} \delta_{1}\int_{M}\langle\Pbb Z,\text{Ric}[g]\Bc\Pbb Z\rangle\leq \alpha\delta_{1}E^{p}_{0}(Z). \end{equation*} Given that $ \delta_{1} $ is sufficiently small, this term gets absorbed into the negative definite term $ -cE^{p}_{0}(Z) $.
Note that we can always decompose the curvature tensor into $ \text{Ric}[g] $ and $ g $. Hence, we can, in a similar fashion as in Proposition \ref{finalenergy}, put a small constant in front of the top order $ n $ derivatives in the energy and compensate for the terms by adding an appropriate term of order $ n-1 $ depending only on the fluid velocity. The result then follows as a corollary of Theorem \ref{thm:Milne_stability}. \end{rem} \bibliographystyle{amsplain}
\section{Introduction} \label{sec:int} \astrobj{7 Aql} (HD~174532, SAO~142696, HIP~92501) was discovered to be a pulsating star of $\delta$ Scuti type by \cite{garrido1} in a search for new variables in preparation for the COROT mission. Its multiperiodic nature was established by \cite{fox1}, who detected six oscillation frequencies. \astrobj{8 Aql} (HD~174589, SAO~142706, HIP~92524) was reported as a new multiperiodic $\delta$ Scuti variable with three pulsation frequencies by \cite{fox1}. The dominant modes detected by \cite{fox1} were confirmed by \cite{fox3} using CCD photometry. Both stars are of interest for asteroseismology since they are slightly evolved, and hence located in the HR diagram in the ambiguous transition phase between core hydrogen burning and thick shell hydrogen burning. This phase is sensitive to the treatment of the core overshooting processes. Moreover, \astrobj{7 Aql} has been selected as a secondary target of the COROT seismology program \citep{uytt}. The COROT space mission \citep{baglin}, successfully launched in December 2006, is providing a huge number of detected oscillation frequencies in individual $\delta$ Scuti stars \citep{poretti1,garcia}. In order to fully exploit the asteroseismic data by using stellar evolutionary models, accurate stellar physical parameters are needed. For \astrobj{7 Aql} no $uvby-\beta$ indices have been reported to date. In contrast, a number of Str\"omgren indices [$(b-y)$, $m_{1}$, $c_{1}$, $H_{\beta}$] have been reported for \astrobj{8 Aql}, although based on only a few measurements. Concerning the spectral classification of the stars, the reported types are not unique in the literature. Namely, the Michigan Catalogue of HD stars, Vol.5 (Houk+, 1999) reports F0V and A9IV for \astrobj{8 Aql} and \astrobj{7 Aql} respectively, whereas the SAO Star Catalog J2000 (SAO Staff 1966; USNO, ADC 1990) lists A2 for \astrobj{7 Aql} and A3 for \astrobj{8 Aql}. The Bright Star Catalogue, 5th Revised Ed. (Hoffleit+, 1991) gives F2III for \astrobj{8 Aql}. These classifications are based mainly upon photographic spectra, which are less accurate than those obtained with modern equipment. The aim of this paper is to present more precise information about \astrobj{7 Aql} and \astrobj{8 Aql} by using both Str\"omgren photometry and spectroscopy. Furthermore, the differential time series in the Str\"omgren bands $uvby$ are also analyzed. \section{Observations and data reduction} \subsection{Photometric observations} The observations were secured in 2007 on the nights of June 21, 22, 23, 28, 30 and July 07 and 08 at the Observatorio Astr\'onomico Nacional-San Pedro M\'artir (OAN-SPM), Baja California, Mexico. The 1.5-m telescope with the six-channel Str\"omgren spectrophotometer was employed. The observing routine consisted of five 10-s integrations of the star, from which five 10-s integrations of the sky were subtracted. Two constant comparison stars were observed as well, namely HD 174046 and HD 174625. Along with \astrobj{7 Aql} and \astrobj{8 Aql}, the $\delta$ Scuti variable HD 170699 was also observed. The results for this particular object will be given elsewhere \citep{alvarez1}. A set of standard stars was also observed each night to transform instrumental observations onto the standard system and to correct for atmospheric extinction.
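For illustration, the reduction chain just described can be sketched in a few lines of Python. All counts, the airmass and the extinction coefficient below are hypothetical values chosen only to show how a sky-subtracted, extinction-corrected differential magnitude is formed; the transformation to the standard system is then performed with the relations given below.

\begin{verbatim}
import numpy as np

def instrumental_mag(star_counts, sky_counts, airmass, k_ext):
    # Mean of five 10-s star integrations minus mean of five 10-s sky
    # integrations, converted to an extinction-corrected magnitude.
    flux = np.mean(star_counts) - np.mean(sky_counts)
    return -2.5 * np.log10(flux) - k_ext * airmass

sky    = np.array([1190.0, 1204.0, 1188.0, 1201.0, 1195.0])      # hypothetical
target = np.array([51210.0, 51180.0, 51302.0, 51177.0, 51254.0]) # hypothetical
comp   = np.array([4412.0, 4399.0, 4420.0, 4391.0, 4405.0])      # hypothetical

m_t = instrumental_mag(target, sky, airmass=1.15, k_ext=0.15)
m_c = instrumental_mag(comp, sky, airmass=1.15, k_ext=0.15)
# To first order the extinction cancels in the difference:
print("Delta y (target - comparison):", round(m_t - m_c, 4))
\end{verbatim}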
The instrumental magnitudes ($_{\rm inst}$) and colours, once corrected for atmospheric extinction, were transformed to the standard system ($_{\rm std}$) through the well-known transformation relations given by \cite{stromgren}: \[ V_{\rm std} = A + y_{\rm inst} + B(b-y)_{\rm inst} \] \[(b-y)_{\rm std} = C + D(b-y)_{\rm inst} \] \[ m_{1,{\rm std}} = E + Fm_{1,{\rm inst}} + G(b-y)_{\rm inst} \] \[ c_{1,{\rm std}} = H + Ic_{1,{\rm inst}} + J(b-y)_{\rm inst} \] \[ H_{\beta,{\rm std}} = K + LH_{\beta,{\rm inst}} \] \noindent where $V$ is the magnitude in the Johnson system, and the $m_{1}$ and the $c_{1}$ indices are defined in the standard way: $m_{1} \equiv (u-v) -(v-b)$ and $c_{1} \equiv (v-b) - (b-y)$. Applying the above equations to the standard stars, an estimation of the mean errors for the transformations to the standard system can be obtained: $\sigma_{v}= 0.011$, $\sigma_{(b-y)} = 0.006$, $\sigma_{m_{1}}=0.015$, $\sigma_{c_{1}}=0.015$, $\sigma_{H_{\beta}}=0.015$. The photometric precision in the instrumental system was: $\sigma_{u}=0.017$, $\sigma_{v}=0.013$, $\sigma_{b}=0.011$ $\sigma_{y}= 0.009$. The averaged standard magnitudes and indices for target and comparison stars are given in Table~\ref{tab:index_pp}. The Str\"omgren indices for \astrobj{7 Aql} and the comparison stars are reported for the first time in the present paper, whereas a number of photometric indices [$(b-y)$, $m_{1}$, $c_{1}$, $H_{\beta}$] are available for \astrobj{8 Aql}. In particular, \cite{crawford} gives (0.178, 0.178, 0.834, 2.747), \cite{gronbech, gronbech1} list (0.176, 0.183, 0.831, 2.752) and \cite{hauck} give (0.177, 0.181, 0.832, 2.749). These are in agreement with those reported in Table~\ref{tab:index_pp} within the 1-$\sigma$ errors. \begin{table*}[!t]\centering \caption{Averaged standard magnitudes and indices for target and comparison stars. The numbers of $uvby$ and $H_{\beta}$ measurements are indicated as $N_{uvby}$ and $N_{\beta}$ respectively. } \label{tab:index_pp} \begin{tabular}{lcccccc} \hline \hline Star& $V$& $(b-y)$& $m_{1}$& $c_{1}$& $H_{\beta}$& $N_{uvby}$/$N_{\beta}$ \\ & (mag)& (mag)& (mag)& (mag)& (mag)&\\ \hline \astrobj{7 Aql} & 6.894 & 0.171 & 0.180 & 0.873 & 2.755 & 291/26 \\ \astrobj{8 Aql} & 6.075 & 0.178 & 0.185 & 0.822 & 2.730 & 288/26 \\ \astrobj{HD 174046} (c1) & 9.570 & 0.276 & 0.078 & 1.097 & 2.867 & 285/26\\ \astrobj{HD 174625} (c2) & 9.436 & 0.363 & 0.101 & 0.553 & 2.664 & 292/26\\ \hline \end{tabular} \end{table*} \subsection{Spectroscopic observations}\label{sec:spec_obs} Spectroscopic observations were conducted at the 2.12-m telescope of the same observatory during July 24, 2008 (UT). We used the Boller \& Chivens spectrograph installed in the Cassegrain focus of the telescope. The 600 lines/mm grating was used to cover a wavelength range from 3900 to 6000 \AA. A dispersion of 2.05~\AA\ per pixel with a resolution of 5.6~\AA\ was employed. The SITE3 $1024 \times 1024$ pixel CCD with a 0.24 $\mu$m pixel size was attached to the spectrograph. Fig.~\ref{fig:spectra} displays examples of the spectra, which were reduced in the standard way using the IRAF package. A comparison of the normalized spectra with those of well classified stars available in the literature was carried out. The spectrum of \astrobj{8 Aql} is very similar to that of HD 89254 (F2III) of the library of stellar spectra STELIB \citep{leborgne}.
On the other hand, the spectrum of the star HD 90277 (F0V) of the same library reproduces our spectra of \astrobj{7 Aql} fairly well, although its resolution is twice as high as ours. \section{Physical parameters}\label{sec:par} We have used the standard indices $uvby-\beta$ listed in Table~\ref{tab:index_pp} to estimate the reddening as well as the unreddened colours of our target stars. The calibrations of \cite{nissen}, which are based on the calibrations of \cite{crawford, crawford2, crawford1} for A- and F-type stars, were applied. The derived physical parameters are listed in Table~\ref{tab:nissen_par}. The reliability of the physical parameters can be assessed by comparing the distances derived in the present study with those estimated from accurate trigonometric parallaxes. In particular, the HIPPARCOS parallax measurement for \astrobj{7 Aql} is $7.70 \pm 0.80$ mas and for \astrobj{8 Aql} is $11.80 \pm 0.78$ mas. The corresponding distances are $130 \pm 15$ pc and $85 \pm 6$ pc respectively. Thus, there is a good agreement between trigonometric and photometric distances. The $T_{\rm eff}$, $\log g$ and metallicity from observed colours have been determined by means of the code TempLogG \citep{rogers, kupka}. The resulting physical parameters are listed in Table~\ref{tab:templog_par}. The spectral types and luminosity classes of the stars can be determined through the relationship between MK spectral types and the reddening free indices $\beta$, $[m_{1}]=m_{1}+0.18(b-y)$, $[c_{1}]=c_{1}-0.20(b-y)$ by \cite{oblak}. Considering the indices listed in Table~\ref{tab:index_pp} with their errors, we have found a spectral classification for the target stars between A9 and F2, with a luminosity class of either III or V. Therefore, no unique spectral type can be obtained from Str\"omgren photometry. This is due to the fact that the Str\"omgren standard indices used to assign a determined MK type in \citet{oblak} have a rather high standard deviation, hence more than one MK type could be assigned to the target stars. \begin{table*}[!t]\centering \caption{Physical parameters for the target stars derived from Nissen's (1988) calibrations.} \label{tab:nissen_par} \begin{tabular}{lcccccccr} \hline \hline Star & $E(b-y)$& $(b-y)_{0}$& $m_{0}$& $c_{0}$& $\beta_{0}$&$m_{V}$&$M_{V}$& Distance\\ &(mag)&(mag)&(mag)&(mag)&(mag) &(mag)&(mag)&(pc)\\ \hline \astrobj{7 Aql} & 0.000 &0.173 & 0.180 & 0.873 &2.755 & 6.89 & 1.25 & 134 \\ \astrobj{8 Aql} & 0.000& 0.196 & 0.185 & 0.822 &2.730 & 6.07 & 1.27 & 92 \\ \hline \end{tabular} \end{table*} \begin{figure}[!t] \centering \includegraphics[width=7cm]{foxm_fig9.eps} \caption{Low resolution spectra of target stars \astrobj{8 Aql} and \astrobj{7 Aql}.} \label{fig:spectra} \end{figure} \section{Differential light curves and frequency analysis}\label{sec:difphot} Figure~\ref{fig:curves_pp} displays examples of the differential light curves in the $y$ Str\"omgren filter of the target stars for three selected nights (vertical panels). The last three horizontal panels, from left to right, correspond to the differential light curve \astrobj{HD 174046}~-~\astrobj{HD 174625}. \begin{figure*}[!t] \begin{center} \includegraphics[width=14cm]{foxm_fig3A.eps} \caption{Examples of the differential light curves taken with the Str\"omgren spectrophotometer using the $y$ filter with the reference stars HD 174046 $=$ c1 and HD 174625 $=$ c2.
The name of each differential light curve is indicated at the left.} \label{fig:curves_pp} \end{center} \end{figure*} A period analysis has been performed by means of standard Fourier analysis and least-squares fitting. In particular, the amplitude spectra of the differential time series were obtained by means of an iterative sine wave fit (ISWF; \citealt{ponman}). The amplitude spectra of the differential $v$ light curves \astrobj{7 Aql}$-$c2, \astrobj{8 Aql}$-$c2 are shown in the top panels of each plot of Figure~\ref{fig:prewh_pp}(a). The subsequent panels of each plot in the figure, from top to bottom, illustrate the process of detection of the frequency peaks in each amplitude spectrum. We followed the same procedure as explained in \citet{alvarez} and employed by \cite{fox4, fox, fox5, fox6}. We have used a threshold of 3.7-$\sigma$ above the mean noise level of the 100 $\mu$Hz closest to the peak in the amplitude spectrum to consider a frequency as statistically significant, as described in \citet{alvarez}. \begin{figure*}[!t] \subfigure[]{\includegraphics[width=11cm]{foxm_fig6.eps}} \subfigure[]{\includegraphics[width=7cm]{foxm_fig7.eps}} \caption{(a) Pre-whitening process in the spectra derived from the PP $v$ differential light curves 7 Aql$-$c2 (left) and 8 Aql$-$c2 (right). (b) Spectral window in amplitude.} \label{fig:prewh_pp} \end{figure*} The window function of the observations is shown in Fig.~\ref{fig:prewh_pp}(b). The resolution measured from the FWHM of the main lobe in the spectral window is $\Delta\,\nu=1.1$ $\mu$Hz. The results obtained from the prewhitening process of the Str\"omgren $vby$ time series are listed in Table~\ref{tab:frec1}, where the detected frequencies with their corresponding amplitudes and phases are given. The data from the Str\"omgren $u$ band are omitted hereafter for the sake of clarity. The formal errors derived from non-weighting fits are also listed. We note that for uncorrelated observations like ours these uncertainties may underestimate the real errors in amplitudes and phases. The detected frequencies in the Str\"omgren $v$ band ($\lambda = 4110\,$\AA, $\Delta \lambda = 170\,$\AA) can be compared with those reported by \cite{fox1}, whose observations were obtained through a similar interferometric blue filter ($\lambda \approx 4200\,$\AA, $\Delta \lambda \approx 190\,$\AA). Nonetheless, the time series analyzed by \cite{fox1} are based on a multisite campaign and therefore the final resolution is better. They detected three frequency peaks in \astrobj{8 Aql}, namely 108.04 $\mu$Hz (4.1 mmag), 110.20 $\mu$Hz (6.1 mmag), 143.36 $\mu$Hz (9.6 mmag). As can be seen in Table~\ref{tab:frec1} we have detected the same frequencies in this season, but with smaller oscillation amplitudes. We note, however, that the amplitude ratios $A_{\nu_{1}} / A_{\nu_{2}}$ and $A_{\nu_{1}} / A_{\nu_{3}}$ are almost the same in both studies. In particular, the amplitude of the dominant mode (143.36 $\mu$Hz) is 1.8 times smaller in our one-site observations. Even so, the oscillation amplitudes are large enough to be detected in our run. Regarding \astrobj{7 Aql}, \cite{fox1} detected six significant frequency peaks, namely 193.28 $\mu$Hz (2.8 mmag), 201.05 $\mu$Hz (3.8 mmag), 222.08 $\mu$Hz (3.6 mmag), 223.96 $\mu$Hz (3.4 mmag), 236.44 $\mu$Hz (6.1 mmag), 295.78 $\mu$Hz (1.5 mmag). Among these we have confirmed only two frequency peaks.
$\nu_{1}$ most likely corresponds to 201.05 $\mu$Hz of \cite{fox1} with similar amplitude, while $\nu_{2}$ is beyond doubt the dominant mode at 236.44 $\mu$Hz, with a smaller oscillation amplitude. From the amplitude ratios of the detected modes in \cite{fox1} a smaller amplitude is expected for $\nu_{1} \sim 201$ $\mu$Hz. Therefore, the amplitude of this peak was probably affected by the side lobes. We think that the difference in amplitude of the modes in both seasons is a consequence of the poor coverage rather than of intrinsic amplitude variability. In fact, if we define the oscillation amplitude ($A_{\rm osc}$) as the maximum constructive interference of the observed modes, a short time series might induce an underestimation of the amplitude, especially in the presence of beating phenomena due to close frequencies. As shown by \cite{fox1}, the oscillation amplitudes of the modes in \astrobj{8 Aql} are on average 1.9 times larger than those of \astrobj{7 Aql}. This explains the fact that we have only detected two oscillation modes in \astrobj{7 Aql}, but all three in \astrobj{8 Aql}. \begin{table*}[]\centering \caption{Fundamental parameters for the target stars computed with the TempLogG code.} \label{tab:templog_par} \begin{tabular}{lccccr} \hline \hline Star & $M_{V}$& D &$T_{\rm eff}$ &$\log g$& [Fe/H]\\ &(mag)&(pc)&(K)&&\\ \hline \astrobj{7 Aql} & 1.22 & 136 & 7257 &3.62 &0.01 \\ \astrobj{8 Aql} & 1.23 & 92 & 7051 & 3.51 &0.14 \\ \hline \end{tabular} \end{table*} \begin{table}[!t]\centering \setlength{\tabcolsep}{1.0\tabcolsep} \caption{Frequency peaks detected in the light curves \astrobj{7 Aql}~$-$~c2 and \astrobj{8 Aql}~$-$~c2. S/N is the signal-to-noise ratio in amplitude after the prewhitening process. The origin of $\varphi$ is at 24544272.72047} \label{tab:frec1} \begin{tabular}{ccccr} \hline $\nu$&Freq.& A & $\varphi$ & $S/N$ \\ &($\mu$Hz)&(mmag)&(rad)&\\ \hline &&\astrobj{7 Aql} &&\\ &Filter $v$&&&\\ $\nu_{1}$ &$200.90 \pm 0.05$ & $3.64 \pm 0.3$ & $ -0.53 \pm 0.03$ & 5.9\\ $\nu_{2}$ &$236.53 \pm 0.05$ & $3.22 \pm 0.3$ & $ +0.10 \pm 0.03$ & 4.2\\ &Filter $b$&&&\\ $\nu_{1}$&$200.91 \pm 0.04$ & $3.38 \pm 0.3$ & $ -0.57 \pm 0.03$ & 6.1\\ $\nu_{2}$&$236.55 \pm 0.06$ & $2.48 \pm 0.3$ & $ +0.06 \pm 0.04$ & 3.8\\ &Filter $y$&&&\\ $\nu_{1}$&$200.89 \pm 0.05$ & $2.64 \pm 0.3$ & $ -0.50 \pm 0.03$ & 5.6\\ $\nu_{2}$&$236.54 \pm 0.06$ & $2.22 \pm 0.3$ & $ +0.23 \pm 0.04$ & 4.2\\ \hline &&\astrobj{8 Aql} &&\\ &Filter $v$&&&\\ $\nu_{1}$&$143.38 \pm 0.04$ & $5.38 \pm 0.3$ & $-2.54 \pm 0.02 $ & 11.5 \\ $\nu_{2}$&$110.24 \pm 0.05$ & $3.56 \pm 0.3$ & $ -0.67 \pm 0.03$& 6.7 \\ $\nu_{3}$&$107.99 \pm 0.06$ & $2.69 \pm 0.3$ & $ +2.48 \pm 0.04$ & 5.1\\ &Filter $b$&&&\\ $\nu_{1}$&$143.37 \pm 0.04$ & $4.92 \pm 0.3$ & $-2.42 \pm 0.02$ & 13.3\\ $\nu_{2}$&$110.26 \pm 0.04$ & $2.86 \pm 0.3$ & $-0.74 \pm 0.03$& 7.0\\ $\nu_{3}$&$107.96 \pm 0.06$ & $2.33 \pm 0.3$ & $+2.59 \pm 0.04$& 5.7 \\ &Filter $y$&&&\\ $\nu_{1}$&$143.36 \pm 0.03$ & $ 3.98 \pm 0.3$ & $ -2.32 \pm 0.03$& 10.8\\ $\nu_{2}$&$110.23 \pm 0.04$ & $ 1.80 \pm 0.3$ & $ -0.44 \pm 0.05$& 4.3\\ $\nu_{3}$&$107.92 \pm 0.05$ & $ 1.49 \pm 0.3$ & $ +2.91 \pm 0.06$& 3.6\\ \hline \end{tabular} \end{table} \section{Preliminary comparison with theoretical models}\label{sec:models} In this section the pulsation constants will be computed in order to try to disentangle possible radial modes. Then the frequencies listed in Table~\ref{tab:frec1} will be used in an attempt at multicolour mode identification.
A more complete modelling considering the frequency modes detected by \cite{fox1} will be given in a forthcoming paper. Figure~\ref{fig:models} shows the de-reddened position of the target stars in a $T_{\rm eff}$-magnitude diagram. The computation of the theoretical evolutive sequences is explained in \citet{fox2}. The observed absolute magnitudes $M_{V}$ were taken from Table~\ref{tab:nissen_par}, while the $T_{\rm eff}$ are from Table~\ref{tab:templog_par}. Error bars of 0.1 mag for $M_{V}$ and 100 K for $T_{\rm eff}$ have been adopted. The dotted lines are evolutive sequences of non-rotating models without overshooting giving a range of masses suitable for \astrobj{7 Aql}. The dashed line corresponds to an evolutive track of models of 2.20 $M_{\odot}$ which approximately matches the observational position of \astrobj{8 Aql}. We have used an initial chemical composition of [Fe/H] = 0.066 for \astrobj{7 Aql} and [Fe/H] = 0.148 for \astrobj{8 Aql}. According to the models depicted in Fig.~\ref{fig:models} the mass of \astrobj{8 Aql} is 0.2 $M_{\odot}$ larger than that of \astrobj{7 Aql}. Their ages should be between 800 and 1000 Myr. We note that the effect of rotation has been neglected. However, as shown by \cite{perez}, the effect of rotation is important not only for the location of the stars in a colour-magnitude diagram but also for the pulsation modes, even for slow rotators like \astrobj{7 Aql} ($v\sin i=32$ km/s). The pulsation constant $Q$ is expressed in terms of four observable parameters as follows \citep{breger1}: \begin{equation} \log Q = -6.456 + \log P + 0.5 \log g + 0.1 M_{\rm bol} + \log T_{\rm eff} \end{equation} Using the parameters listed in Table~\ref{tab:templog_par} and considering the bolometric corrections for the target stars \citep{balona}, we find for \astrobj{7 Aql} $Q_{\nu_{1}}=0.0127$ and $Q_{\nu_{2}}=0.0108$. For \astrobj{8 Aql} we have $Q_{\nu_{1}}=0.0153$, $Q_{\nu_{2}}=0.0199$ and $Q_{\nu_{3}}=0.0203$. Comparing these $Q$-values with the theoretical ones \citep[2.0M48 model by ][]{fitch} we find that the oscillation modes $\nu_{1}$ and $\nu_{2}$ of \astrobj{7 Aql} are indicative of $p$ modes of either $l=0, 2\; {\rm or}\; 3$ with overtones $n=5$ and $n=7$, respectively. On the other hand, the 2.0M49 model \citep{fitch}, which approximately matches the parameters of \astrobj{8 Aql}, yields either the identifications $(l=1, n=4)$ or $(l=2, n=4)$ for $\nu_{1}$, while the frequencies $\nu_{2}$ and $\nu_{3}$ are consistent with either radial pulsations $(l=0, n=2)$ or non-radial oscillations $(l=2, n=2)$. \begin{figure}[!t] \centering \includegraphics[width=7cm]{foxm_fig8.eps} \caption{HR diagram showing the location of the target stars. The slightly cooler star is \astrobj{8 Aql}. Evolutive sequences of non-rotating models without overshooting are shown by dotted ([Fe/H] = 0.066) and dashed lines ([Fe/H] = 0.148). The error bars give the position of the stars according to the uncertainties discussed in Sect~\ref{sec:par}.} \label{fig:models} \end{figure}
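The evaluation of the above relation is easily reproduced; the short Python sketch below does so for the frequencies of Table~\ref{tab:frec1} and the parameters of Table~\ref{tab:templog_par}. Since the adopted bolometric corrections are not quoted above, a value of $BC \approx 0.03$ mag is assumed here, which recovers the quoted $Q$-values to within rounding.

\begin{verbatim}
import numpy as np

def Q(nu_muHz, logg, M_bol, Teff):
    # log Q = -6.456 + log P[d] + 0.5 log g + 0.1 M_bol + log Teff
    P_days = 1.0 / (nu_muHz * 1e-6) / 86400.0
    logQ = -6.456 + np.log10(P_days) + 0.5 * logg \
           + 0.1 * M_bol + np.log10(Teff)
    return 10.0 ** logQ

BC = 0.03                              # assumed bolometric correction
for nu in (200.90, 236.53):            # 7 Aql: Teff = 7257, log g = 3.62
    print("7 Aql:", nu, "muHz -> Q =", round(Q(nu, 3.62, 1.22 + BC, 7257), 4))
for nu in (143.38, 110.24, 107.99):    # 8 Aql: Teff = 7051, log g = 3.51
    print("8 Aql:", nu, "muHz -> Q =", round(Q(nu, 3.51, 1.23 + BC, 7051), 4))
\end{verbatim}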
\begin{figure*}[!t] \subfigure[]{\includegraphics[width=7cm]{foxm_fig10.eps}} \subfigure[]{\includegraphics[width=7cm]{foxm_fig11.eps}} \caption{Phase-amplitude diagram for the Str\"omgren filters showing the comparison between the observed phase differences and amplitude ratios (boxes) and theoretical predictions (asterisks). (a) $b$ and $y$ bands for the frequency $\nu_{1}=143.38\,\mu$Hz of \astrobj{8 Aql}. (b) $v$ and $y$ bands for the frequency $\nu_{1}=200.90\,\mu$Hz of \astrobj{7 Aql}.} \label{fig:phase} \end{figure*} \subsection{Multicolour photometry} It is well known that the amplitudes and phases observed with the Str\"omgren filters can be used to estimate the spherical degree $l$ of the mode \citep{watson,garrido}. This estimation is done by comparing the observed amplitude ratios and phase differences with theoretical predictions. One equilibrium model per star has been obtained, passing through the centre of the observed photometric error box. The numerical code CESAM \citep{morel} is used to obtain these models, fixing standard physics for $\delta$ Scuti stars \citep[see][]{casas}. The non-adiabatic pulsational code GraCo \citep{moya04,moya08} has also been used for obtaining the variation of the flux and the phase-lags necessary to compute the variation of the magnitude as a function of the wavelength. As both stars are close to the red edge of the $\delta$ Scuti instability strip, the outer convection zone is well developed. Therefore, the equations describing the convection-pulsation interaction are required. To do so, the Time Dependent Convection treatment \citep{grigahcene} has been used in GraCo. Examples of the final comparison between observations and theoretical predictions are depicted in Fig.~\ref{fig:phase}. The horizontal axis shows the phase difference in degrees, while the vertical one shows the amplitude ratio. Unfortunately, with the large error bars derived for the present sparse observations, in some cases the oscillations are compatible with all possible non-radial and radial modes up to $l=3$. In others, the discrimination is not good. Even so, most identifications point towards the presence of degrees with $l \geq 2$ values. If this hypothesis is correct, the radial identifications derived from Fitch's models can be excluded. Therefore, the two detected frequencies in \astrobj{7 Aql} could be identified as $l=2$ and $n=5, 7$; while the three frequencies in \astrobj{8 Aql} would be consistent with $l=2$ and $n=4, 2, 3$. However, continuous multicolour time series are required for a more conclusive study. \section{Conclusions} We have presented the results obtained in a one-site observational photometric campaign on the $\delta$ Scuti stars \astrobj{7 Aql} and \astrobj{8 Aql}. Photoelectric photometric $uvby-\beta$ data were acquired at the 1.5-m telescope of the SPM observatory by using the Str\"omgren six-channel spectrophotometer. Str\"omgren standard indices for \astrobj{7 Aql}, \astrobj{8 Aql} and the comparison stars are reported. The main physical parameters have been derived from the Str\"omgren photometry. These have been used to place the target stars in a $T_{\rm eff}$-magnitude diagram. The star 8 Aql is about 0.2 $M_{\odot}$ more massive than 7 Aql, while their absolute magnitudes are rather similar. The pulsation constant $Q$ has been computed for the modes detected in the present study. An attempt at mode identification by means of multicolour photometry has been carried out. The stars seem to oscillate with modes of degree $l=2$ or higher. However, longer multicolour time series are required for a more conclusive study. The analysis of a few low-resolution spectra indicates that 7 Aql and 8 Aql are stars of spectral type F0V and F2III, respectively. \bigskip {\bf \noindent Acknowledgments} This work has received financial support from the UNAM under grants PAPIIT IN108106 and IN114309. A. M. acknowledges financial support from a ``Juan de la Cierva'' contract of the Spanish Ministry of Education and Science.
Special thanks are given to the technical staff and night assistant of the San Pedro M\'artir Observatory. This research has made use of the SIMBAD database operated at CDS, Strasbourg (France). \bigskip {\noindent \bf References} \medskip \bibliographystyle{elsarticle-harv}
\section{Introduction} Compressive sensing is a powerful signal acquisition approach with which one can recover signals beyond bandlimitedness from noisy under-determined measurements whose number is closer to the intrinsic complexity of the target signals than to the Nyquist rate \cite{CandesRombergTao:2006,Donoho:2006,FazelCandesRcht:2008,FoucartRauhut:2013}. Quantization, which transforms the infinite-precision measurements into discrete ones, is necessary for storage and transmission \cite{Sayood:2017}. The binary quantizer, an extreme case of scalar quantization that codes each measurement into a binary value with a single bit, has been introduced into compressed sensing \cite{BoufounosBaraniuk:2008}. The 1-bit compressed sensing (1-bit CS) has drawn much attention because of its low cost in hardware implementation and storage and its robustness in the low signal-to-noise ratio scenario \cite{Laska:2012}. \subsection{Related work} A lot of effort has been devoted to studying the theoretical and computational issues in 1-bit CS under the sparsity assumption, i.e., $\|x^*\|_0\leq s \ll m$. Support recovery can be achieved in both the noiseless and noisy settings provided that $m > \mathcal{O}(s\log n)$ \cite{GopiJain:2013,JacquesDegraux:2013,PlanVershynincpam:2013,BauptBaraniuk:2011,JacquesLaska:2013,GuptaNowakRech:2010,BauptBaraniuk:2011,PlanVershynin:2013,ZhangYiJin:2014,Ahsen:2019}. Greedy methods \cite{LiuGongXu:2016,Boufounos:2009,JacquesLaska:2013} and first order methods \cite{BoufounosBaraniuk:2008, LaskaWenYinBaraniuk:2011,YanYangOsher:2012,DaiShenXuZhang:2016} have been developed to minimize the sparsity-promoting nonconvex objective functions arising from the unit sphere constraint or from nonconvex regularizers. Convex relaxation models have also been proposed \cite{ZhangYiJin:2014,PlanVershynin:2013,PlanVershynincpam:2013, ZymnisBoydCandes:2010,PlanVershynin:2017,Vershynin:2015,HuangShiYan:2015} to address the nonconvex optimization problem. Using least squares to estimate parameters in the scenario of model misspecification goes back to \cite{Brillinger:2012}; see also \cite{LiDuan:1989} and the references therein for related developments in the setting $m \gg n$. Recently, with this idea, \cite{PlanVershynin:2016,Neykov:2016,HuangJiao:2018,ding2020robust} proposed $\ell_1$/$\ell_0$ regularized least squares and the generalized lasso to estimate parameters from general under-determined nonlinear measurements. In addition to the sparse structure of signals/images under certain linear transforms \cite{Mallat:2008}, natural signals/images have been shown to have low intrinsic dimension, i.e., they can be represented by a generator $G$, such as a pretrained neural network, that maps from $\mathbb{R}^k$ to $\mathbb{R}^{n}$ with $k\ll n$. Such a $G$ can be obtained via a GAN \cite{goodfellow14}, a VAE \cite{kingma14} or a flow based method \cite{rezende2015variational}. In these models, the generative part learns a mapping from a low dimensional representation space $z\in \mathbb{R}^k$ to the high dimensional sample space $G(z)\in \mathbb{R}^n$. During training, this mapping is encouraged to produce vectors that resemble the vectors in the training dataset.
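To fix ideas, the following minimal Python sketch builds the kind of generator assumed throughout: a $d$-layer ReLU network $G:\mathbb{R}^k\rightarrow\mathbb{R}^n$ with $k\ll n$. The architecture and the random weights are illustrative only (in practice a trained GAN/VAE decoder would be used); since the ReLU is 1-Lipschitz, the product of the layer spectral norms gives an upper bound on the Lipschitz constant $L$ of $G$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def make_relu_generator(k=5, n=200, width=64, depth=3):
    # Random d-layer ReLU network standing in for a trained decoder.
    dims = [k] + [width] * (depth - 1) + [n]
    Ws = [rng.normal(size=(dims[i + 1], dims[i])) / np.sqrt(dims[i])
          for i in range(depth)]
    def G(z):
        h = z
        for W in Ws[:-1]:
            h = np.maximum(W @ h, 0.0)   # ReLU is 1-Lipschitz
        return Ws[-1] @ h
    # Lipschitz bound: product of the layer spectral norms
    L = np.prod([np.linalg.norm(W, 2) for W in Ws])
    return G, L

G, L = make_relu_generator()
print("Lipschitz bound for G:", round(float(L), 2))
\end{verbatim}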
With this generative prior, several tasks have been studied, such as image restoration \cite{ulyanov2018deep}, phase retrieval \cite{hand2018phase}, compressed sensing \cite{wu2019deep,bora2017compressed,huang2018provably,liu2020information} and nonlinear single index models under certain measurement and noise models \cite{wei2019statistical,liu2020generalized}. In \cite{bora2017compressed}, the authors propose the least squares estimator (\ref{ls1})-(\ref{ls2}) to recover signals in standard compressed sensing with a generative prior, and sharp sample complexities are proved in \cite{liu2020information}. Surprisingly, the sharp sample complexity for the least squares decoder (\ref{ls1})-(\ref{ls2}) can be derived in this paper even if the measurements are highly quantized and corrupted by noise and sign flips. Very recently, under a generative prior, \cite{liu2020sample} and \cite{qiu2020robust} derived sample complexity results for 1-bit CS. The sample complexity obtained in \cite{liu2020sample} is $O(k\log L)$ under the assumption that the generator $G$ is $L$-Lipschitz continuous and the rows of $A$ are i.i.d. sampled from $\mathcal{N}(\textbf{0},\mathbf{I})$. However, the estimator proposed in \cite{liu2020sample}, $\hat{x} = G(\hat{z})$ with $\hat{z} \in \{z: y=\mathrm{sign}(AG(z))\}$, is quite different from our least squares decoder, and the analysis techniques used there are also not applicable to our decoder. \cite{qiu2020robust} proposed unconstrained empirical risk minimization for recovery in 1-bit CS and derived a sample complexity of $\mathcal{O}(kd\log n)$ for a $d$-layer ReLU network $G$, assuming the rows of $A$ are i.i.d. samples from subexponential distributions. However, the 1-bit CS model considered in \cite{qiu2020robust} does not include sign flips and requires an additional quantization threshold before sampling, and the empirical risk minimization decoder used there also differs from our least squares decoder (\ref{ls1})-(\ref{ls2}). The results in \cite{liu2020generalized} can be applied to the 1-bit CS model; however, they require that the target signals be exactly contained in the range of the generator. In contrast, we only need the more realistic assumption that the target signals can be approximated by a generator. \subsection{Notation and Setup}\label{setting} We use $[n]$ to denote the set $\{1,...,n\}$, and use $A_i\in \mathbb{R}^{m\times1}, i \in [n]$ and $a_j \in \mathbb{R}^{n\times 1},j\in [m]$ to denote the $i$th column and $j$th row of $A$, respectively. The multivariate normal distribution is denoted by $\mathcal{N}(\textbf{0},\Sigma)$ with a symmetric and positive definite matrix $\Sigma$. Let $\|x\|_{\Sigma} = (x^{t}\Sigma x)^{\frac{1}{2}}$, and let $\|x\|_p = (\sum_{i=1}^{n}|x_{i}|^p)^{1/p}, p\in [1,\infty)$ be the $\ell_p$-norm of $x$. Without causing confusion, $\|\cdot\|$ defaults to $\|\cdot\|_2$. The sign function $\textrm{sign}(\cdot)$ is defined componentwise as $\textrm{sign}(z) =1$ for $z \geq 0$ and $\textrm{sign}(z) = -1$ for $z<0$. We use $\odot $ to denote the Hadamard product. For any set $B$, $|B|$ is defined as the number of elements contained in $B$. Following \cite{PlanVershynincpam:2013,HuangJiao:2018}, we consider the following 1-bit CS model \begin{equation}\label{setup} y = \eta \odot\textrm{sign} (A x^* + \epsilon), \end{equation} where $y\in \mathbb{R}^{m}$ are the binary measurements and $x^{*}\in \mathbb{R}^{n}$ is an unknown signal. The measurement matrix $A \in \mathbb{R}^{m\times n}$ is a random matrix whose rows $a_i, i \in [m]$ are i.i.d.
random vectors sampled from $\mathcal{N}(\textbf{0}, \Sigma)$ with an unknown covariance matrix $\Sigma $, $\eta\in \mathbb{R}^{m}$ is a random vector modeling the sign flips of $y$ whose coordinates $\eta_i$ are i.i.d. satisfying $\mathbb{P}[\eta_i = 1] = 1- \mathbb{P}[\eta_i = -1] = q \neq \frac{1}{2},$ and $\epsilon \in \mathbb{R}^{m}$ is a random vector sampled from $\mathcal{N}(\textbf{0},\sigma^2\textbf{I}_m)$ with an unknown noise level $\sigma$ modeling errors before quantization. We assume $\eta_i, \epsilon_i$ and $a_i$ are independent. Model \eqref{setup} is unidentifiable under positive scaling; the best one can do is to recover $x^*$ up to the constant $c=(2q-1)\sqrt{\frac{2}{\pi(\sigma^2+1)}}$, as proved in \cite{HuangJiao:2018}. Without loss of generality we may assume $\| x^*\|_{\Sigma} = 1$. Let the $\ell$-dimensional unit sphere and the $\ell^p$-norm ball be $$\mathcal{S}_{p}^{\ell-1} = \{x\in\mathbb{R}^{\ell}: \|x\|_p=1\}, \quad \mathcal{B}_p^{\ell}(r)=\{z\in\mathbb{R}^{\ell}: \|z\|_p\leq r\}.$$ For an $L$-Lipschitz generator $G: \mathbb{R}^k\rightarrow\mathbb{R}^{n}$, denote by $$\mathcal{G}_{k,\tau,p}(r) = \{x\in\mathbb{R}^{n}: \exists z\in \mathcal{B}_{2}^{k}(r), \ \ s. t. \ \ \|cx-G(z)\|_p \leq \tau\}$$ the set of signals whose rescaled versions $cx$ can be approximated by $G$ within tolerance $\tau$. When $p=2$, we denote $\mathcal{G}_{k,\tau,2}(r)$ by $\mathcal{G}_{k,\tau}(r)$ for simplicity. The target signal $x^*$ is assumed to have low generative intrinsic dimension, i.e., $x^*\in \mathcal{G}_{k,\tau,p}(r)$ for some $p$ and $r$. \subsection{Contributions} It is a challenging task to decode from nonlinear, noisy, sign-flipped and under-determined ($m\ll n$) binary measurements. For a given Lipschitz generator $G$, we use $\hat{x} =G(\hat{z})$ to estimate $x^*$ in the 1-bit CS model \eqref{setup} by exploring the intrinsic low dimensional structure of the target signals, where the latent code $\hat{z}$ is solved by the least squares problem \eqref{ls2}. \begin{itemize} \item[(1)] We prove that, with high probability, the estimation error satisfies the sharp bound $\|\hat{x}- c x^*\| \leq \mathcal{O} (\sqrt{\frac{k\log (Ln)}{m}})$ provided that the sample size satisfies $m\geq \mathcal{O}( k \log (Ln))$ and the target signal $x^*$ can be approximated well by the generator $G$. \item[(2)] By constructing a ReLU network with properly chosen depth and width, we verify that the desired approximation in (1) holds if the target signals have low intrinsic dimension. \item[(3)] Extensive numerical simulations and comparisons with state-of-the-art methods show that the proposed least squares decoder is robust to noise and sign flips, as predicted by our theory. \end{itemize} The rest of the paper is organized as follows. In Section 2 we consider the least squares decoder and prove several bounds on $\|\hat{x} -cx^* \|$. In Section 3 we conduct numerical simulations and compare with existing state-of-the-art 1-bit CS methods. We conclude in Section 4. \section{Analysis of the Least Squares Decoder} We first describe the least squares decoder in detail. Consider the following least squares problem for the latent code $z$: \begin{equation}\label{ls2} \hat{z} \in \arg \min_{z\in \mathcal{B}_2^{k}(r)} \frac{1}{2m}\|y - AG(z)\|^2. \end{equation} Then, for a given $L$-Lipschitz generator $G$, the signal is approximated by \begin{equation}\label{ls1} \hat{x} =G(\hat{z}). \end{equation}
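A self-contained numerical illustration of the decoder (\ref{ls2})-(\ref{ls1}) is sketched below; this is not the experimental setup of Section 3. For simplicity we take $\Sigma=\mathbf{I}$ and a linear generator $G(z)=Wz$, for which the gradient of the objective is available in closed form (for a network generator one would backpropagate instead); the dimensions, step size and random seed are arbitrary choices, and the recovered $\hat{x}=G(\hat{z})$ is compared with $cx^*$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, n, k, r = 500, 1000, 10, 1.0
sigma, q = 0.1, 0.9                       # noise level, P[eta_i = 1] = q

W = rng.normal(size=(n, k)) / np.sqrt(n)  # linear "generator" G(z) = W z
x_star = W @ rng.normal(size=k)
x_star /= np.linalg.norm(x_star)          # ||x*|| = 1, x* in range(G)

A = rng.normal(size=(m, n))               # rows i.i.d. N(0, I)
eps = sigma * rng.normal(size=m)
eta = np.where(rng.random(m) < q, 1.0, -1.0)
y = eta * np.sign(A @ x_star + eps)       # 1-bit data with noise/sign flips

z = np.zeros(k)                           # projected gradient descent
for _ in range(200):
    z -= 0.5 * W.T @ (A.T @ (A @ (W @ z) - y)) / m
    nz = np.linalg.norm(z)
    if nz > r:
        z *= r / nz                       # projection onto B_2^k(r)

c = (2 * q - 1) * np.sqrt(2 / (np.pi * (sigma ** 2 + 1)))
print("||G(z_hat) - c x*|| =",
      round(float(np.linalg.norm(W @ z - c * x_star)), 3))
\end{verbatim}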
\subsection{Contributions} It is a challenging task to decode from nonlinear, noisy, sign-flipped and underdetermined ($m\ll n$) binary measurements. For a given Lipschitz generator $G$, we use $\hat{x} =G(\hat{z})$ to estimate $x^*$ in the 1-bit CS model \eqref{setup} by exploring the intrinsic low dimensional structure of the target signals, where the latent code $\hat{z}$ solves the least squares problem \eqref{ls2}. \begin{itemize} \item[(1)] We prove that, with high probability, the estimation error satisfies the sharp bound $\|\hat{x}- c x^*\| \leq \mathcal{O} (\sqrt{\frac{k\log (Ln)}{m}})$ provided that the sample complexity satisfies $m\geq \mathcal{O}( k \log (Ln))$, if the target signal $x^*$ can be approximated well by the generator $G$. \item[(2)] By constructing a ReLU network with properly chosen depth and width, we verify that the approximation required in (1) holds if the target signals have low intrinsic dimension. \item[(3)] Extensive numerical simulations and comparisons with state-of-the-art methods show that the proposed least squares decoder is robust to noise and sign flips, as predicted by our theory. \end{itemize} The rest of the paper is organized as follows. In Section 2 we consider the least squares decoder and prove several bounds on $\|\hat{x} -cx^* \|$. In Section 3 we conduct numerical simulations and compare with existing state-of-the-art 1-bit CS methods. We conclude in Section 4. \section{Analysis of the Least Squares Decoder} We first describe the least squares decoder in detail. Consider the following least squares problem for the latent code $z$: \begin{equation}\label{ls2} \hat{z} \in \arg \min_{z\in \mathcal{B}_2^{k}(r)} \frac{1}{2m}\|y - AG(z)\|^2. \end{equation} Then, for a given $L$-Lipschitz generator $G$, the signal is approximated by \begin{equation}\label{ls1} \hat{x} =G(\hat{z}). \end{equation} In this section we prove that, under proper assumptions on the generator and the sample complexity, the error between the decoder $\hat{x}$ and the underlying signal $x^*$ can be bounded; see Theorems \ref{thel1} and \ref{theorem2}. Moreover, we provide the construction of a ReLU network for which the required approximation of the target signals is satisfied; see Theorem \ref{thapp}. \begin{theorem}\label{thel1} Let $G$ be a Lipschitz generator satisfying $G(\mathcal{B}_2^k(1))\subset \mathcal{B}_1^n(1)$. Assume the 1-bit CS model \eqref{setup} holds with $x^* \in \mathcal{G}_{k,\tau,1}(1)$ and $m \geq \mathcal{O} \left(\max \{\log n, k\log {\frac{L}{\tau}}\}\right)$. Then, with probability at least $1-O(\frac{1}{n^2})-e^{-O(m/k)}$, the least squares decoder defined in (\ref{ls2})-(\ref{ls1}) (for $r=1$) satisfies \begin{equation*} \|\hat{x}-c{x^*}\|\leq \mathcal{O}\left(\sqrt{\tau} + \left(\frac{\log n}{m}\right)^{1/4}\right). \end{equation*} \end{theorem} To prove Theorem \ref{thel1} we need some technical lemmas. First we introduce the concepts of the S-REC (with some minor changes) and the $\epsilon$-net, both defined in \cite{bora2017compressed}. \begin{definition}\cite{bora2017compressed}. Let $S\subseteq\mathbb{R}^n$ and let $\gamma>0, \delta>0$ be two positive parameters. The matrix $A\in \mathbb {R}^{m\times n}$ is said to satisfy the S-REC$(S, \gamma, \delta)$ if, $\forall x_1, x_2\in S$, \begin{equation} \frac{1}{m}\|A(x_1-x_2)\|^2\geq\gamma\|x_1 - x_2\|^2 - \delta. \end{equation} \end{definition} \begin{definition} Let $N \subseteq S\subseteq\mathbb{R}^n$ and $\epsilon > 0$. We say that $N$ is an $\epsilon$-net of $S$ if, $\forall s \in S$, there exists an $\tilde{s} \in N$ such that $\|s-\tilde{s}\|\leq \epsilon$. \end{definition} \begin{lemma}\label{cov}\cite{boucheron2013concentration} $\forall \epsilon > 0$, there exists an $\epsilon$-net $N_{\epsilon}$ of $\mathcal{B}^k_2(r)$ with finitely many points such that $$\log|N_\epsilon|\leq k\log(\frac{4r}{\epsilon}).$$ \end{lemma} The proof follows directly from standard volume arguments, see \cite{boucheron2013concentration}. \begin{lemma}\label{A_delta} Let $G: \mathbb{R}^k\rightarrow\mathbb{R}^n$ be an $L$-Lipschitz function. If $N$ is a $\frac{\delta}{L}$-net on $\mathcal{B}^k_2(r)$, then $G(N)$ is a $\delta$-net on $G(\mathcal{B}^k_2(r))$, i.e., \begin{equation}\label{eq:1} \forall z\in \mathcal{B}^k_2(r), \quad \exists z_1 \in N, \; s.t. \;\|G(z)-G(z_1)\|\leq \delta. \end{equation} Furthermore, let $A\in\mathbb{R}^{m\times n}$ be a random matrix whose rows are i.i.d. random vectors sampled from the multivariate normal distribution $\mathcal{N}(\mathbf{0},\Sigma)$. Then \begin{equation}\label{eq:2} \frac{1}{\sqrt{m}}\|AG(z)-AG(z_1)\|\leq\mathcal{O}(\delta) \end{equation} holds with probability $1-e^{- O(m)}$ as long as $m = \mathcal{O}\left(k\log\frac{L}{\delta}\right)$. \end{lemma} \begin{proof} Let $N$ be a $\frac{\delta}{L}$-net on $\mathcal{B}_2^k(r)$ satisfying $$\log|N|\leq k\log(\frac{4Lr}{\delta}).$$ Since $G$ is an $L$-Lipschitz function, by definition $G(N)$ is a $\delta$-net on $G(\mathcal{B}_2^k(r))$.
For fixed $\delta >0$, let $N_i$ be a $\frac{\delta_i}{L}$-net on $\mathcal{B}^k_2(r)$ satisfying $\log|N_i|\leq k\log\frac{4Lr}{\delta_i}$ with $\delta_i = \frac{\delta}{2^i}$, and $$N = N_0\subset N_1\subset\ldots\subset N_l,$$ with $2^l>\sqrt{n}.$ \\ $\forall x \in G(\mathcal{B}^k_2(r))$, $\exists x_i\in G(N_i)$, such that $$\|x - x_l\|\leq\frac{\delta}{2^l} \text{ and } \|x_{i+1} - x_i\|\leq\frac{\delta}{2^i}, i= 0, \ldots, l-1.$$\\ By the triangle inequality we get \begin{equation}\label{eqlem1} \begin{array}{l} \frac{1}{\sqrt{m}}\|Ax-Ax_0\|\\ = \|\frac{1}{\sqrt{m}}A\sum_{i=0}^{l-1}(x_{i+1}-x_i) + \frac{1}{\sqrt{m}}A(x-x_l)\|\\ \leq\sum_{i=0}^{l-1}\frac{1}{\sqrt{m}}\|A(x_{i+1}-x_i)\|+\|\frac{1}{\sqrt{m}}A(x-x_l)\|. \end{array} \end{equation} By construction, the last term satisfies \begin{equation}\label{eqlem2} \begin{array}{l} \|\frac{1}{\sqrt{m}}A(x-x_l)\|\leq(2+\sqrt{\frac{n}{m}})\frac{\delta}{2^l}=\mathcal{O}({\delta}). \end{array} \end{equation} Let $\widetilde{A} = A\Sigma^{-1/2}$. By Lemma 1.3 in \cite{vempala2005random}, with probability at least $1-\exp(-\mathcal{O}(\epsilon_i^2m))$ the following holds: \begin{equation*} \|\frac{1}{\sqrt{m}}\widetilde{A}(x_{i+1}-x_i)\|^2\leq(1+\epsilon_i)\|x_{i+1}-x_i\|^2, \end{equation*} equivalently, \begin{equation*} \begin{array}{l} \|\frac{1}{\sqrt{m}}A\Sigma^{-\frac{1}{2}}\Sigma^{\frac{1}{2}}(x_{i+1}-x_i)\|^2\\ \leq(1+\epsilon_i)\|\Sigma^{\frac{1}{2}}\|^2\|x_{i+1}-x_i\|^2, \end{array} \end{equation*} i.e., \begin{equation}\label{ineq_A_x_i} \|\frac{1}{\sqrt{m}}A(x_{i+1}-x_i)\| \leq(1+\frac{\epsilon_i}{2})\|\Sigma^{\frac{1}{2}}\|\|x_{i+1}-x_i\|, \end{equation} where the last inequality follows from $\sqrt{1+\epsilon_i}\leq 1+\frac{\epsilon_i}{2}, \text{ }\epsilon_i\in(0,1)$. Setting $\epsilon_i^2 =\epsilon +\frac{ik}{m}$ and using the union bound together with \eqref{ineq_A_x_i}, we have, $\forall i\in[l]$, \begin{equation}\label{eqlem3} \|\frac{1}{\sqrt{m}}A(x_{i+1}-x_i)\| \leq(1+\frac{\epsilon_i}{2})\|\Sigma^{1/2}\|\|x_{i+1}-x_i\|, \end{equation} with probability at least $1-\exp{(-\mathcal{O}(\epsilon m))}$. Then it follows from \eqref{eqlem1}, \eqref{eqlem2} and \eqref{eqlem3} that \begin{equation*} \begin{array}{l} \frac{1}{\sqrt{m}}\|Ax-Ax_0\|\\ \leq\|\frac{1}{\sqrt{m}}A\sum_{i=0}^{l-1}(x_{i+1}-x_i)\| + \mathcal{O}(\delta)\\ \leq\sum_{i=0}^{l-1}(1+\frac{\epsilon_i}{2})\|\Sigma^{\frac{1}{2}}\|\frac{\delta}{2^i} + \mathcal{O}(\delta)\\ \leq \delta\|\Sigma^{\frac{1}{2}}\|\sum_{i=0}^{l-1}\frac{\sqrt{\epsilon}}{2^{i+1}}(1+\frac{ik}{m\epsilon})+\mathcal{O}(\delta)\\ =\mathcal{O}(\delta). \end{array} \end{equation*} \end{proof} \begin{lemma}\label{A_REC} Let $G: \mathbb{R}^k\rightarrow\mathbb{R}^n$ be an $L$-Lipschitz generator, let $S = G(\mathcal{B}^k_2(r))$, and let $A\in\mathbb{R}^{m\times n}$ be a random matrix whose rows are i.i.d. random vectors sampled from the multivariate normal distribution $\mathcal{N}(\mathbf{0},\Sigma)$. If $m = \mathcal{O}\left(k\log\frac{Lr}{\delta}\right)$, then $A$ satisfies the S-REC$(S, \frac{1}{2}\sqrt{\sigma_{min}(\Sigma)}, O(\delta))$ with probability $1 - e^{-O(m/k)}$. \end{lemma} \begin{proof} We construct a $\frac{\delta}{L}$-net on $\mathcal{B}_2^k(r)$, denoted by $N$, which satisfies $\log|N|\leq k\log(\frac{4Lr}{\delta}).$ Since $G$ is an $L$-Lipschitz function, by Lemma \ref{A_delta}, $G(N)$ is a $\delta$-net on $G(\mathcal{B}_2^k(r))$, i.e.,\\ $\forall z, z'\in \mathcal{B}_2^k(r), \exists z_1, z_2\in N$ s.t.
\begin{equation}\label{lem2eq1} \begin{aligned} \|z-z_1\|\leq\frac{\delta}{L}, \ \ \|G(z)-G(z_1)\|\leq\delta,\\ \|z'-z_2\|\leq\frac{\delta}{L}, \ \ \|G(z')-G(z_2)\|\leq\delta. \end{aligned} \end{equation} By the triangle inequality, Lemma \ref{A_delta} and \eqref{lem2eq1}, we get \begin{equation}\label{ineq_G} \begin{array}{ll} \|G(z)-G(z')\| &\leq\|G(z)-G(z_1)\|+\|G(z_1)-G(z_2)\| +\|G(z_2)-G(z')\| \\[1.5ex] & \leq 2\delta + \|G(z_1)-G(z_2)\| \end{array} \end{equation} and \begin{equation}\label{A_z1_z2} \begin{array}{ll} \frac{1}{\sqrt{m}}\|AG(z_1)-AG(z_2)\| &\leq\frac{1}{\sqrt{m}}\left(\|AG(z_1)-AG(z)\|+\|AG(z)-AG(z')\| +\|AG(z')-AG(z_2)\|\right)\\[1.5ex] &\leq \mathcal{O}(\delta) + \frac{1}{\sqrt{m}}\|AG(z)-AG(z')\|. \end{array} \end{equation} Recall that $N$ is a $\frac{\delta}{L}$-net on $\mathcal{B}_2^k(r)$, and consider $$G(N) = \{G(z): z\in N\}, \quad T= \Sigma^{\frac{1}{2}}G(N) = \{t: t=\Sigma^{\frac{1}{2}} G(z), z\in N\}. $$ Then $|T| \leq |G(N)| \leq |N|\leq (\frac{4Lr}{\delta})^k$. As in Lemma \ref{A_delta}, let $\widetilde{A}=A\Sigma^{-\frac{1}{2}}$; then the rows of $\widetilde{A}$ are i.i.d. standard Gaussian vectors. By the Johnson-Lindenstrauss Lemma, the projection $F: \mathbb{R}^n\rightarrow \mathbb{R}^m$ with $F(t) = \frac{1}{\sqrt{m}}A\Sigma^{-\frac{1}{2}}t$ preserves distances in the sense that, given any $\epsilon\in (0,1)$, with probability at least $1-e^{-\mathcal{O}(\epsilon^2 m/k)}$, for all $t_1, t_2\in T$, \begin{equation*} (1-\epsilon)\|t_1 - t_2\|\leq\|F(t_1)-F(t_2)\|\leq(1+\epsilon)\|t_1-t_2\| \end{equation*} provided that $m \geq \mathcal{O}(\frac{k}{\epsilon^2}\log\frac{Lr}{\delta}).$ We may choose $\epsilon = 0.5$ and hence \begin{equation}\label{J-L_A} \frac{1}{\sqrt{m}}\|AG(z_1)-AG(z_2)\| \geq 0.5\|\Sigma^{\frac{1}{2}}(G(z_1)-G(z_2))\| \geq 0.5 \sqrt{\sigma_{min}(\Sigma)} \|G(z_1)-G(z_2)\| \end{equation} holds with probability at least $1-e^{-\mathcal{O}(m/k)}.$ It follows from \eqref{ineq_G}-\eqref{J-L_A} that \begin{equation*} \begin{array}{ll} \frac{1}{\sqrt{m}}\|AG(z)-AG(z')\| &\geq \frac{1}{\sqrt{m}}\|AG(z_1)-AG(z_2)\| - \mathcal{O}(\delta) \\ [1.5ex] &\geq 0.5 \sqrt{\sigma_{min}(\Sigma)} \|G(z_1)-G(z_2)\| - \mathcal{O}(\delta) \\ [1.5ex] &\geq 0.5 \sqrt{\sigma_{min}(\Sigma)} \|G(z)-G(z')\| - \mathcal{O}(\delta). \end{array} \end{equation*} The above inequality implies that $A$ satisfies the S-REC$(G(\mathcal{B}^k_2(r)),0.5\sqrt{\sigma_{min}(\Sigma)}, O(\delta))$ with probability at least $1-e^{-\Omega(m/k)}$, for $m \geq \mathcal{O}(k\log\frac{Lr}{\delta}).$ \end{proof} The next lemma, which holds in the more general sub-Gaussian setting, provides the concentration bounds used to control the least squares decoder. \begin{lemma}\label{linf} \cite{HuangJiao:2018} Let $A\in\mathbb{R}^{m\times n}$ be a random matrix whose rows $a_i\in\mathbb{R}^n$ are independent sub-Gaussian vectors with mean $\mathbf{0}$ and covariance matrix $\Sigma$. If $m\geq \mathcal{O}(\log n)$, then \begin{equation} \left\|\sum_{i=1}^{m}\left(\mathbb{E}\left[a_{i} y_{i}\right]-a_{i} y_{i}\right) / m\right\|_{\infty} \leq \mathcal{O}(\sqrt{\frac{\log n}{m}}) \end{equation} holds with probability at least $1-\frac{2}{n^3}$, and \begin{equation} \left\|A^T A / m-\Sigma\right\|_{\infty} \leq \mathcal{O}(\sqrt{\frac{\log n}{m}}) \end{equation} holds with probability at least $1-\frac{1}{n^2}$, where $\|\Psi\|_\infty$ is the maximum pointwise absolute value of $\Psi$. \end{lemma}
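The second bound of Lemma \ref{linf} is easy to probe numerically. The following sketch (an illustration with arbitrary parameter values, not part of the proof) checks that the ratio of $\|A^TA/m-\Sigma\|_{\infty}$ to $\sqrt{\log n/m}$ stays bounded as $m$ grows:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, nu = 200, 0.3
Sigma = nu ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Lchol = np.linalg.cholesky(Sigma)            # Sigma = Lchol Lchol^T

for m in [200, 800, 3200, 12800]:
    A = rng.standard_normal((m, n)) @ Lchol.T    # rows ~ N(0, Sigma)
    dev = np.max(np.abs(A.T @ A / m - Sigma))
    print(m, dev / np.sqrt(np.log(n) / m))       # ratio stays O(1)
\end{verbatim}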
Now we are ready to prove Theorem \ref{thel1}. \begin{proof} Recall that $$y = \eta\odot \mathrm{sign}(Ax^*+\epsilon)$$ and \begin{equation} \widehat{z} = \arg\min_{z\in \mathcal{B}_2^k(1)}\frac{1}{2m}\|y - AG(z)\|^2. \end{equation} Our goal is to bound $\|G(\widehat{z})-\widetilde{x^*}\|_2$ with $\widetilde{x^*}=cx^*$. By the triangle inequality, \begin{equation*} \begin{aligned} \|G(\widehat{z})-\widetilde{x^*}\| &=\|G(\widehat{z})- G(\overline{z}) + G(\overline{z})-\widetilde{x^*}\|\\ &\leq \|G(\widehat{z})- G(\overline{z})\| + \|G(\overline{z})-\widetilde{x^*}\|, \end{aligned} \end{equation*} where $\overline{z}\in \mathcal{B}_{2}^{k}(1)$ is chosen such that $\|G(\overline{z})-\widetilde{x^*}\|_1 \leq \tau$, which is possible by the assumption $x^*\in \mathcal{G}_{k,\tau,1}(1)$. Since $\|\cdot\|\leq\|\cdot\|_1$, we have \begin{equation}\label{the1} \|G(\widehat{z})-\widetilde{x^*}\| \leq \|G(\widehat{z})- G(\overline{z})\|+ \tau. \end{equation} From the definition of $\widehat{z}$ we have $$\|AG(\widehat{z}) - y\|^2 \leq \|AG(\overline{z}) - y\|^2.$$ Direct computation shows that \begin{equation*} \begin{aligned} 0 &\geq \|AG(\widehat{z}) - y\|^2 -\|AG(\overline{z}) - y\|^2 = \|AG(\widehat{z}) - AG(\overline{z})+ AG(\overline{z})- y\|^2 -\|AG(\overline{z}) - y\|^2 \\% [1.5ex] & = \|AG(\widehat{z}) - AG(\overline{z})\|^2 + 2\langle G(\widehat{z}) - G(\overline{z}), A^T(AG(\overline{z})- y) \rangle, \end{aligned} \end{equation*} and hence \begin{equation}\label{eql} \begin{array}{ll} \frac{1}{m}\|AG(\widehat{z}) - AG(\overline{z})\|^2 &\leq 2\langle G(\widehat{z}) - G(\overline{z}), \frac{1}{m}A^T(y -AG(\overline{z})) \rangle\\ [1.5ex] &\leq 2\|G(\widehat{z}) - G(\overline{z})\|_1\|\frac{1}{m}A^T(y -AG(\overline{z}))\|_\infty \\[1.5ex] &\leq 4 \|\frac{1}{m}A^T(y -AG(\overline{z}))\|_\infty, \end{array} \end{equation} where the last step follows from the assumption $G(\mathcal{B}_2^k(1))\subset \mathcal{B}_1^{n}(1) \Rightarrow \|G(\widehat{z}) - G(\overline{z})\|_1\leq 2$. Next we bound $\frac{1}{m}\|A^T(y - AG(\overline{z}))\|_\infty$. By the triangle inequality, \begin{equation}\label{the5} \begin{array}{ll} \frac{1}{m}\|A^T(y - AG(\overline{z}))\|_\infty &= \frac{1}{m}\|A^T(y - A\widetilde{x^*} + A\widetilde{x^*} - AG(\overline{z}))\|_\infty \\[1.5ex] &\leq\frac{1}{m}\|A^T(y - A\widetilde{x^*})\|_\infty + \frac{1}{m}\|A^T(A\widetilde{x^*} - AG(\overline{z}))\|_\infty. \end{array} \end{equation} The first term in \eqref{the5} can be estimated by \begin{equation}\label{theq7} \begin{array}{ll} \frac{1}{m}\|A^T(y - A\widetilde{x^*})\|_\infty &=\|\frac{1}{m}A^Ty -\Sigma\widetilde{x^*} + \Sigma\widetilde{x^*} - \frac{1}{m}A^TA\widetilde{x^*}\|_\infty\\[1.5ex] &\leq\|\frac{1}{m}A^Ty -\Sigma\widetilde{x^*}\|_\infty + \|\Sigma\widetilde{x^*} - \frac{1}{m}A^TA\widetilde{x^*}\|_\infty\\[1.5ex] &\leq\frac{1}{m}\|A^Ty - \mathbb{E}[A^Ty]\|_\infty + \|\Sigma - \frac{1}{m}A^TA\|_\infty\|\widetilde{x^*}\|_1\\[1.5ex] &= \frac{1}{m}\|\sum_{i=1}^m (A_iy_i-\mathbb{E}[A_iy_i])\|_\infty + \|\Sigma - \frac{1}{m}A^TA\|_\infty\|\widetilde{x^*}\|_1\\[1.5ex] & \leq \mathcal{O}\left(\sqrt{\frac{\log n}{m}}\right), \end{array} \end{equation} where we used the identity $\mathbb{E}[A^Ty] = m\Sigma\widetilde{x^*}$ \cite{HuangJiao:2018}, and the last inequality follows from Lemma \ref{linf}.
To estimate the second term in \eqref{the5}, denote $\widetilde{\Delta} = \widetilde{x^*} - G(\overline{z})$; we then have \begin{equation}\label{the8} \begin{array}{ll} \frac{1}{m}\|A^T(A\widetilde{x^*} - AG(\overline{z}))\|_\infty &= \frac{1}{m}\|A^TA\widetilde{\Delta}\|_\infty\\[1.5ex] &= \|\frac{1}{m}A^TA\widetilde{\Delta} - \Sigma\widetilde{\Delta} + \Sigma\widetilde{\Delta}\|_\infty\\[1.5ex] &\leq \|(\frac{1}{m}A^TA - \Sigma)\widetilde{\Delta}\|_\infty + \|\Sigma\widetilde{\Delta}\|_\infty\\[1.5ex] &\leq \|\widetilde{\Delta}\|_1(\|\frac{1}{m}A^TA - \Sigma\|_\infty + \|\Sigma\|_\infty)\\[1.5ex] &\leq \mathcal{O}\left(\sqrt{\frac{\log n}{m}}+1\right)\tau. \end{array} \end{equation} From Lemma \ref{A_REC}, $A$ satisfies the S-REC$(G(\mathcal{B}_2^k(1)), 0.5\sqrt{\sigma_{min}(\Sigma)}, O(\delta))$ with probability $1 - e^{-O(m/k)}$ as long as $m = O(k\log\frac{L}{\delta})$, i.e., \begin{equation}\label{the3} \frac{1}{m}\|AG(\widehat{z}) - AG(\overline{z})\|^2 \geq 0.5\sqrt{\sigma_{min}(\Sigma)}\|G(\widehat{z}) - G(\overline{z})\|^2 - \mathcal{O}(\delta). \end{equation} Substituting \eqref{the5} - \eqref{the3} into \eqref{eql} we obtain \begin{equation}\label{eq2} 0.5\sqrt{\sigma_{min}(\Sigma)}\|G(\widehat{z}) - G(\overline{z})\|^2 - \mathcal{O}(\delta) \leq \mathcal{O}\left(\sqrt{\frac{\log n}{m}}+\tau + \sqrt{\frac{\log n}{m}}\tau\right). \end{equation} Choosing $\delta = O(\tau)$ in \eqref{eq2} and substituting into \eqref{the1}, we conclude \begin{equation*} \|G(\widehat{z}) - \widetilde{x}^*\| \leq \mathcal{O}\left(\left(\frac{\log n}{m}\right)^{1/4} + \sqrt{\tau}\right). \end{equation*} \end{proof} Obviously, $\tau$ measures the approximation error between the target $x^*$ and the generator $G$. If we assume that $\tau$ is smaller than $\mathcal{O}((\frac{\log n}{m})^{1/2})$, Theorem \ref{thel1} shows that, under this approximate low generative dimension prior, our proposed least squares decoder (\ref{ls1})-(\ref{ls2}) can achieve an estimation error $\mathcal{O}((\log n/m)^{1/4})$ provided that the number of samples satisfies $m \geq \mathcal{O}(\max \{\log n, k\log \frac{L}{\tau}\})$. Similar results have been established for 1-bit CS under the sparsity prior $\|x^*\|_0\leq s$ in the literature. For example, \cite{PlanVershynincpam:2013} proposed a linear programming decoder \begin{equation*} x_{\mathrm{lp}}\in \arg\min_{x\in \mathbb{R}^n} \|x\|_1 \quad \mathrm{s.t.} \quad y \odot A x\geq 0, \quad \|A x\|_1 = m, \end{equation*} in the noiseless setting without sign flips. It has been proved in \cite{PlanVershynincpam:2013} that $$\|\frac{x_{\mathrm{lp}}}{\|x_{\mathrm{lp}}\|} - x^*\|\leq \mathcal{{O}}((\frac{s\log n}{m})^{1/5}),$$ provided that $m=\mathcal{O}(s\log^2(n/s))$. Later, in \cite{PlanVershynin:2013}, another convex decoder \begin{equation*} x_{\mathrm{cv}} \in \arg\min_{x\in \mathbb{R}^n} -\langle y, A x\rangle/m \quad \mathrm{s.t.} \quad \|x\|_1 \leq s, \quad \|x\| \leq 1, \end{equation*} was shown to achieve an estimation error bound $$\|\frac{x_{\mathrm{cv}}}{\|x_{\mathrm{cv}}\|} - x^*\|\leq \mathcal{{O}}((\frac{s\log n}{m})^{1/4}).$$ Although the order of the estimation error proved in Theorem \ref{thel1} does not depend on the Lipschitz constant of the generator $G$, which is usually exponential in the depth of the neural network \cite{bora2017compressed}, it is sub-optimal.
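These sparsity-prior decoders admit simple iterative implementations. For instance BIHT \cite{JacquesLaska:2013}, which we later use as a baseline in Section 3, can be sketched as follows (illustrative Python/NumPy code; the step size and iteration budget are arbitrary choices, not tuned recommendations):

\begin{verbatim}
import numpy as np

def biht(A, y, s, iters=100, step=None):
    # Binary Iterative Hard Thresholding (Jacques et al.):
    # gradient-like step on the one-sided loss, then keep the
    # s largest-magnitude entries; only the direction of x is
    # identifiable, so the output is normalized.
    m, n = A.shape
    step = 1.0 / m if step is None else step
    x = np.zeros(n)
    for _ in range(iters):
        g = x + step * A.T @ (y - np.sign(A @ x))
        g[np.argsort(np.abs(g))[:-s]] = 0.0   # hard threshold to s terms
        x = g
    nrm = np.linalg.norm(x)
    return x / nrm if nrm > 0 else x
\end{verbatim}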
Next we improve the estimation error bound by using the tool of the local (Gaussian) mean width. The definition of the local mean width is given below; it can also be found in \cite{PlanVershynin:2016,PlanVershynin:2017}. \begin{definition}\label{gmw} Let $S\subseteq\mathbb{R}^n$. The local mean width of $S$ is a function of scale $t \geq 0$ defined as $$ \omega_{t}(S)=\mathbb{E}_{g\sim \mathcal{N}(\mathrm{0},\mathbf{I})} \left[\sup _{x \in S \cap t \mathcal{B}_2^{n}(1)}\langle x, g\rangle\right]. $$ \end{definition} \begin{theorem}\label{theorem2} Given an $L$-Lipschitz generator satisfying $G(\mathcal{B}_2^k(r))\subset \mathcal{S}_2^{n-1}$, assume the 1-bit CS model \eqref{setup} holds with $x^* \in \mathcal{G}_{k,\tau}(r)$ and $m \geq \mathcal{O}\left(\max\{k\log\frac{Lrn}{k},\log n\}\right)$. Then, with high probability, the least squares decoder defined in (\ref{ls2})-(\ref{ls1}) satisfies \begin{equation*} \|\hat{x}-c{x^*}\|_2\leq \mathcal{O}\left(\sqrt{\frac{k}{m}\log\frac{rLn}{k\gamma}}\right) +\mathcal{O}(\frac{\tau n}{m}), \end{equation*} for $\gamma=\max\{\tau,\frac{k}{m}\log \frac{Lrn}{k}+\sqrt{\frac{\log n}{m}}\}$. If the approximation error satisfies $\tau = \mathcal{O} (\frac{\sqrt{mk\log (Ln)}}{n})$ and $r = \mathcal{O}(1)$, then we have \begin{equation*} \|\hat{x}-c{x^*}\|_2\leq \mathcal{O}\left(\sqrt{\frac{k}{m}\log(Ln)}\right). \end{equation*} \end{theorem} We first do some preparatory work for the proof. As in the proof of Theorem \ref{thel1}, let $\overline{z}\in \mathcal{B}_{2}^{k}(r)$ satisfy $\|G(\overline{z})-\widetilde{x^*}\| \leq \tau$. By the triangle inequality, as in \eqref{the1}, we have \begin{equation}\label{eqth21} \|G(\widehat{z})-\widetilde{x^*}\| \leq \|G(\widehat{z})- G(\overline{z})\|+ \tau. \end{equation} Let $h = G(\hat{z})-G(\overline{z})$, $\gamma=\max\{\tau,\frac{k}{m}\log \frac{Lrn}{k}+\sqrt{\frac{\log n}{m}}\}$, $S = G(\mathcal{B}_2^k(r))$, and let $D_{\gamma}(S,G(\overline{z}))$ be the tangent cone defined by $$ D_{\gamma}(S,G(\overline{z})) = \{t u: t>0, u = G(z)-G(\overline{z}), \|u\|>\gamma\}.$$ If $\|h\|\leq \gamma$ the theorem follows trivially from (\ref{eqth21}); otherwise $\|h\|> \gamma$ and $h\in D_{\gamma}(S,G(\overline{z}))$. Let $$\mathcal{D} = D_{\gamma}(S,G(\overline{z})) \cap \mathcal{S}_{2}^{n-1}.$$ We need the following lemmas to proceed with the proof. \begin{lemma}\label{pv} With probability at least $0.99$, both \begin{equation}\label{eqth25} \inf _{v \in \mathcal{D}} \frac{1}{\sqrt{m}}\|A v\|_{2} \geq C_0 \end{equation} and \begin{equation}\label{eqth26} \sup _{v \in \mathcal{D}}\frac{1}{m}\left\langle v, A^{T} (y -A\widetilde{x^*})\right\rangle \leq C\frac{\omega_1(\mathcal{D})}{\sqrt{m}} \end{equation} hold, where $\omega_1(\mathcal{D})$ is the local (Gaussian) mean width of $\mathcal{D}$ given in Definition \ref{gmw}. \end{lemma} \begin{proof} The results can be found in the proof of Theorem 1.4 in \cite{PlanVershynin:2016}. \end{proof} \begin{lemma}\label{gw} $$\omega_1(\mathcal{D})= \mathcal{O}\left(\sqrt{k\log(\frac{rLn}{k\gamma})}\right).$$ \end{lemma} \begin{proof} Recall that $$D_{\gamma}(S,G(\overline{z})) = \{t u: t>0, u = G(z)-G(\overline{z}), \|u\|>\gamma\}$$ and $$\mathcal{D} = D_{\gamma}(S,G(\overline{z})) \cap \mathcal{S}_{2}^{n-1}.$$ Then $\mathcal{D}=\{\frac{ G(z)-G(\overline{z})}{\| G(z)-G(\overline{z})\|}: z\in \mathcal{B}_2^k(r), \| G(z)-G(\overline{z})\|>\gamma\}.$ Let $\mathcal{U}$ be a $\frac{\epsilon\gamma}{2L}$-net on $\mathcal{B}_2^k(r)$ satisfying $$\log|\mathcal{U}|\leq k\log(\frac{8Lr}{\gamma\epsilon}),$$ which can be obtained by Lemma \ref{cov}.
Then $\mathcal{C} = \{\frac{G(u)-G(\overline{z})}{\|G(u)-G(\overline{z})\|}:u \in \mathcal{U} \ \ \mathrm{and} \ \ \|G(u)-G(\overline{z})\|\geq \gamma\}$ is an $\epsilon$-net of $\mathcal{D}$. Indeed, let $\frac{a}{\|a\|}$ with $a = G(z)-G(\overline{z})$ be an arbitrary element of $\mathcal{D}$, and let $u\in \mathcal{U}$ be such that $\|u-z\|\leq \frac{\epsilon\gamma}{2L}$. Let $b= G(u)-G(\overline{z})$; then $\frac{b}{\|b\|}\in \mathcal{C}$ and \begin{align*} & \left\| \frac{a}{\|a\|}-\frac{b}{\|b\|}\right\| = \left\|\frac{a\|b\|-b\|a\|}{\|a\|\|b\|}\right\| \\ & \leq \frac{\|a-b\|\,\|b\|+\|b\|\,\big|\|b\|-\|a\|\big|}{\|a\|\|b\|}\\ & \leq 2\frac{\|a-b\|}{\|a\|} \leq 2L\frac{\frac{\epsilon\gamma}{2L}}{\gamma}=\epsilon, \end{align*} where in the last inequality we use the facts that $G$ is $L$-Lipschitz and $\|a\|\geq \gamma.$ By Massart's finite class lemma in \cite{boucheron2013concentration}, the local Gaussian width of $\mathcal{C}$ satisfies \begin{equation}\label{gf} \omega_1(\mathcal{C})\leq \sqrt{2k\log(\frac{16Lr}{\gamma\epsilon})}. \end{equation} Since $\forall x \in \mathcal{D}$ there exists $\tilde{x} \in \mathcal{C}$ such that $\|x-\tilde{x}\|\leq \epsilon$, we have, for $g \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, \begin{equation*} \langle g, x\rangle \leq \langle g, \tilde{x}\rangle + \langle g, x-\tilde{x}\rangle \leq \langle g, \tilde{x}\rangle + \epsilon \|g\|. \end{equation*} The above display, the definition of the local Gaussian mean width and the fact $\mathcal{D}\subset \mathcal{B}_2^{n}(1)$ imply \begin{align*} \omega_1(\mathcal{D})&= \mathbb{E}[\sup_{x\in \mathcal{D}}\langle g, x\rangle]\leq \mathbb{E} [\sup_{\tilde{x} \in \mathcal{C}}\langle g, \tilde{x}\rangle + \epsilon \|g\|]\\ & \leq \omega_1(\mathcal{C})+\sqrt{n}\epsilon \\ &\leq \sqrt{2k\log(\frac{16Lr}{\gamma\epsilon})} + \sqrt{n}\epsilon, \end{align*} where the second inequality follows from $\mathbb{E}[\|g\|] \leq \sqrt{n}$, and the third inequality uses \eqref{gf}. The proof is finished by setting $\epsilon = \sqrt{\frac{k}{n}}$. \end{proof} \begin{lemma}\label{spectralnorm} Let $a_i\in \mathbb{R}^{n}, i = 1,\ldots,m$, be i.i.d. samples with mean $0$ and covariance matrix $\Sigma$, and denote $\Sigma_m = \sum_{i=1}^{m} a_ia_i^{T}/m$. Then for any $u \geq 0$, $$ \left\|\Sigma_{m}-\Sigma\right\| \leq \mathcal{O}\left(\sqrt{\frac{n+u}{m}}+\frac{n+u}{m}\right)\|\Sigma\| $$ with probability at least $1-2 e^{-u}$. \end{lemma} \begin{proof} See Exercise 4.7.3 in \cite{vershynin2018high}. \end{proof} Now we can move to the proof of Theorem \ref{theorem2}. \begin{proof} Similarly to \eqref{eql} in the proof of Theorem \ref{thel1}, by \eqref{eqth25} in Lemma \ref{pv} and the triangle inequality, we have with probability at least $0.99$ that \begin{equation}\label{eq23} \begin{array}{ll} C_0 \|h\|^2 \leq \frac{1}{m}\|Ah\|^2 &\leq 2\langle h, \frac{1}{m}A^T(y -AG(\overline{z}))\rangle\\[1.5ex] &\leq 2|\langle h, \frac{1}{m}A^T(y -A\widetilde{x^*}) \rangle| + 2|\langle h, \frac{1}{m}A^T A(\widetilde{x^*}-G(\overline{z})) \rangle|. \end{array} \end{equation} We now bound the two terms in \eqref{eq23}. For the first term, let $v = \frac{h}{\|h\|} = \frac{G(\widehat{z}) - G(\overline{z})}{\|G(\widehat{z}) - G(\overline{z})\|}$, hence $v\in \mathcal{D}$. Then by \eqref{eqth26} in Lemma \ref{pv} and Lemma \ref{gw}, we obtain with probability at least $0.99$ that \begin{equation}\label{eq30} |\langle h, \frac{1}{m}A^T(y -A\widetilde{x^*}) \rangle| \leq \mathcal{O}\left(\sqrt{\frac{k\log(rLn/(k\gamma))}{m}}\right) \|h\|.
\end{equation} For the second term in \eqref{eq23}, applying the Cauchy-Schwarz inequality and the spectral norm estimate for random matrices in Lemma \ref{spectralnorm}, we get, with probability at least $1-e^{-n}$, that \begin{align}\label{eq31} |\langle h, \frac{1}{m}A^T A(\widetilde{x^*}-G(\overline{z})) \rangle| &\leq \|h\| \frac{1}{m}\|A^T(A\widetilde{x^*} - AG(\overline{z}))\| \nonumber\\ &\leq (\|A^TA/m-\Sigma\|_2+\|\Sigma\|) \|\widetilde{x^*}-G(\overline{z})\| \|h\| \nonumber \\ &\leq \mathcal{O}(\sqrt{\frac{2n}{m}}+\frac{2n}{m}+1)\|\Sigma\|\tau\|h\|\\ & \leq \mathcal{O}(\tau \frac{n}{m})\|h\|. \end{align} Combining \eqref{eq23}, \eqref{eq30} and \eqref{eq31}, we get $$\|h\|\leq \mathcal{O}(\sqrt{\frac{k\log(rLn/(k\gamma))}{m}}) +\mathcal{O}(\frac{\tau n}{m}).$$ Moreover, if the approximation error satisfies $\tau = \mathcal{O} (\frac{\sqrt{mk\log (Ln)}}{n})$ and $r = \mathcal{O}(1)$, the above inequality reduces to \begin{equation*} \|\hat{x}-c{x^*}\|_2\leq \mathcal{O}\left(\sqrt{\frac{k}{m}\log(Ln)}\right), \end{equation*} which completes the proof. \end{proof} Assuming the Lipschitz constant $L$ is larger than $n$ (which usually holds for deep neural network generators), the estimation error $\mathcal{O}(\sqrt{\frac{k\log L}{m}})$ and the sample complexity $m \geq \mathcal{O}(k\log L) $ proved in Theorem \ref{theorem2} are sharp even in standard compressed sensing with a generative prior \cite{liu2020information}. Under a generative prior, \cite{liu2020sample} proposed the estimator $\hat{x} = G(\hat{z})$ with $\hat{z} \in \{z: y=\mathrm{sign}(AG(z))\}$ in the setting where the rows of $A$ are i.i.d. sampled from $\mathcal{N}(\textbf{0},\mathbf{I})$. The sample complexity obtained in \cite{liu2020sample} is also $\mathcal{O}(k\log L)$. \cite{qiu2020robust} proposed unconstrained empirical risk minimization for recovery in 1-bit CS in the scenario where the rows of $A$ are i.i.d. sampled from sub-exponential distributions and the generator $G$ is restricted to be a $d$-layer ReLU network. The sample complexity derived in \cite{qiu2020robust} is $\mathcal{O}(kd\log n)$. Some works on generative priors assume that the target signals can be exactly generated by a generator $G$, i.e., $x^* \in \mathcal{G}_{k,\tau}(r)$ with $\tau=0$, see e.g. \cite{liu2020generalized,qiu2020robust}. As mentioned in Theorem \ref{theorem2}, we can relax this assumption by requiring only that the target signal $x^*$ can be generated by $G$ approximately, i.e., $x^* \in \mathcal{G}_{k,\tau}(r)$ with \begin{equation}\label{eq:tau} \tau = \mathcal{O} (\frac{\sqrt{mk\log (Ln)}}{n}). \end{equation} That natural signals/images with low intrinsic dimension can be represented approximately by neural networks has been empirically verified in \cite{goodfellow14,kingma14,rezende2015variational}. Next, we verify the assumption \eqref{eq:tau} by constructing a generator ${G}$ with properly chosen depth and width, based on recent approximation results for deep neural networks \cite{shen2019nonlinear,vershynin2020memory,vardi2021optimal,huang2021error} utilizing the bit extraction techniques of \cite{bartlett1999almost,bartlett2019nearly}.
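At the heart of these bit extraction arguments is the elementary fact that a number in $[0,1)$ is recovered from its first $\ell$ binary digits up to an error of at most $2^{-\ell}$. The following plain-Python sketch illustrates what the ReLU networks constructed below compute (it is, of course, not itself a ReLU network):

\begin{verbatim}
def to_bits(x, ell):
    # first ell binary digits of x in [0,1): x ~ sum_j 2^{-j} b_j
    bits = []
    for _ in range(ell):
        x *= 2.0
        b = int(x >= 1.0)
        bits.append(b)
        x -= b
    return bits

def truncate(x, ell):
    # T_ell x = sum_{j <= ell} 2^{-j} b_j, so |x - T_ell x| <= 2^{-ell}
    return sum(2.0 ** -(j + 1) * b for j, b in enumerate(to_bits(x, ell)))

assert abs(0.7231 - truncate(0.7231, 12)) <= 2.0 ** -12
\end{verbatim}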
To this end, we recall the definition of the Minkowski dimension, which is used to measure the intrinsic dimension of target signals living in a large ambient space. \begin{definition}\label{Minkowski dimensions} \textnormal{ The upper and the lower Minkowski dimensions of a set $A \subseteq \mathbb{R}^n$ are defined respectively as \begin{align*} \overline{\dim}_M(A) := \limsup_{\epsilon\to 0} \frac{\log |N_{\epsilon}|}{-\log \epsilon}, \\ \underline{\dim}_M(A) := \liminf_{\epsilon\to 0} \frac{\log |N_{\epsilon}|}{-\log \epsilon}, \end{align*} where $|N_{\epsilon}|$ denotes the minimal cardinality of an $\epsilon$-net $N_{\epsilon}$ of $A$. If $\overline{\dim}_M(A) = \underline{\dim}_M(A) = \dim_M(A)$, then $\dim_M(A)$ is called the \emph{Minkowski dimension} of the set $A$. } \end{definition} The Minkowski dimension measures how the number of elements in a minimal $\epsilon$-net $N_{\epsilon}$ of $A$ grows as the radius of the covering balls tends to zero. We collect some useful properties of the Minkowski dimension from \cite{falconer2004fractal}. \begin{proposition}\cite{falconer2004fractal}\label{pmd} $\dim_M(A) = \bar{n}$ if and only if $\forall \gamma >0$, $$\epsilon^{-(\bar{n}-\gamma)}\leq |N_{\epsilon}| \leq \epsilon^{-(\bar{n}+\gamma)}$$ holds when $\epsilon$ is small enough. Furthermore, $\dim_M(A) \leq \bar{n}$ implies that $\forall \epsilon>0$ there exists an $\epsilon$-net $N_{\epsilon}$ of $A$ such that $|N_{\epsilon}| \leq c\epsilon^{-\bar{n}}$, where $c$ is a finite number. \end{proposition} The next three lemmas present the approximation ability of deep neural networks. \begin{lemma}\label{fit1} For any $\mathcal{W},\ell\in \mathbb{N}$, given $\mathcal{W}^{2} \ell$ samples $\left(z_{i}, y_{i}\right), i=1, \ldots, \mathcal{W}^{2} \ell$, with distinct $z_{i} \in \mathbb{R}^{k}$ and $y_{i} = \sum_{j=1}^{\ell}2^{-j}b_{i,j}$, $b_{i,j}\in\{0,1\}$, there exists a ReLU network $G_1$ with width $4 \mathcal{W}+4$ and depth $\ell+2$ such that $G_1\left(z_{i}\right)=y_{i}$ for $i=1, \ldots, \mathcal{W}^{2} \ell.$ \begin{proof} This lemma follows directly from Lemmas $2.1$ and $2.2$ in \cite{shen2019nonlinear}. \end{proof} \end{lemma} \begin{lemma}\label{bitext} For any $\ell \in \mathbb{N}$, there exists a ReLU network $G_2$ with width $8$ and depth $2\ell$ such that $G_2(x, j)=b_{j}$ for any $x=\sum_{j=1}^{\ell}2^{-j}b_j$ with $b_{j} \in\{0,1\}$ and $j=1,2, \ldots, \ell$. \end{lemma} \begin{proof} This lemma follows from Lemma 5.7 in \cite{huang2021error}. \end{proof} \begin{lemma}\label{intmul} Let $\mathcal{W}\in \mathbb{N}$. Given any $\mathcal{W}^{2} \ell^2$ points $\{\left(z_{i}, b_{i,j}\right): i=1, \ldots, \mathcal{W}^{2} \ell,\ j=1,\ldots,\ell\}$, where the $z_{i} \in \mathbb{R}^{k}$ are distinct and $b_{i,j} \in \{0,1\}$, there exists a ReLU network $G_3$ with width $4 \mathcal{W}+6$ and depth $3 \ell+1$ such that $G_3\left(z_{i},j\right)=b_{i,j}$ for $i=1, \ldots, \mathcal{W}^{2} \ell,\ j=1,\ldots,\ell$. \end{lemma} \begin{proof} $\forall i=1, \ldots, \mathcal{W}^{2} \ell$, let $y_{i}= \sum_{j=1}^{\ell}2^{-j}b_{i,j} \in[0,1].$ By Lemma \ref{fit1} there exists a network $G_{1}$ with width $4 \mathcal{W}+4$ and depth $\ell+2$ such that $G_{1}\left(z_{i}\right)=y_{i}$ for $i=1, \ldots, \mathcal{W}^{2} \ell$. By Lemma \ref{bitext}, there exists a network $G_{2}$ with width 8 and depth $2\ell$ such that $G_{2}\left(y_{i}, j\right)=b_{i, j}$ for any $i=1, \ldots, \mathcal{W}^{2} \ell$ and $j=1, \ldots, \ell$. Therefore, the function $G_3(\cdot, j)=G_{2}\left(G_{1}(\cdot), j\right)$, implemented by a ReLU network with width $ 4\mathcal{W}+6$ and depth $3 \ell+1$, satisfies our requirement.
\end{proof} \begin{theorem}\label{thapp} Assume the target signals $x^*\in A^*\subseteq [0,1]^n$ with $\dim_M(A^*) = k$. Then $\forall \tau \in (0,1)$ there exists a generator network $G:\mathbb{R}^{k}\rightarrow \mathbb{R}^n$ with depth $3\ell+2$ and width $(4\lceil \sqrt{sn/\ell}\rceil+6)n$ such that $x^*\in \mathcal{G}_{k,\tau,2}(1), \forall x^*\in A^*$, where $\ell = \lceil\log_2(\frac{2n}{\tau})\rceil+1, \quad s = \mathcal{O}(\tau^{-k}).$ \end{theorem} \begin{proof} Let $\epsilon = \tau/2$. Since the target signals $x^*$ are contained in $A^*$ with $\dim_M(A^*) = k$, there exists an $\epsilon$-net $N_{\epsilon} = \{o^{*}_i\}_{i=1}^{s}$ of $A^*$ with $s\leq c\epsilon^{-k}$ by Proposition \ref{pmd}. For any $o^{*}_i \in N_{\epsilon}$, let the binary representation of $ o^{*}_i$ be $o^{*}_i=\sum_{j=1}^{\infty} 2^{-j} \tilde{o}^{*}_{i,j}$ with $\tilde{o}^{*}_{i,j}\in\mathbb{R}^n$ whose entries are in $\{0,1\}$. Let $\ell = \lceil\log_2(\frac{n}{\epsilon})\rceil+1$ and let the truncation of $o^{*}_i$ be $T_{\ell}o^{*}_i = \sum_{j=1}^{\ell} 2^{-j} \tilde{o}^{*}_{i,j}$; then \begin{equation}\label{tc} \|o^{*}_i-T_{\ell}o^{*}_i\|\leq \epsilon, \forall i =1,...,s. \end{equation} By the construction of $N_{\epsilon}$, (\ref{tc}) and the triangle inequality, $\{T_{\ell}o^{*}_i\}_{i=1}^s$ is a $\tau$-net of $A^*$. Let $e= (1,0,0,...0)^T\in\mathbb{R}^s$, $\mathcal{W} = \lceil \sqrt{sn/\ell}\rceil$, let $z_i$ be the $i$-th element of $\{e, e/2,...,e/(ns)\}$, and let $b_{i,j} = G_2(T_{\ell}o^{*}_i,j)$, $i=1,...,sn,\ j=1,...,\ell$. By Lemma \ref{intmul}, we have $G_3\left(z_{i},j\right)=b_{i,j}$, $i=1, \ldots, sn,\ j=1,...,\ell$. $\forall x\in \mathbb{R}^k$, define $$G(x)=(\sum_{j=1}^{\ell} 2^{-j}G_3(a_1 x, j),\sum_{j=1}^{\ell} 2^{-j}G_3(a_2x, j),...,\sum_{j=1}^{\ell} 2^{-j}G_3(a_n x,j))^T:\mathbb{R}^k\rightarrow \mathbb{R}^n$$ with $a_1>0, a_2>0,...,a_n>0.$ Let $\theta_1$ and $\theta_2$ be the parameters of the ReLU networks $G_1$ and $G_2$, respectively. Denote $a =(a_1,...,a_n)\in\mathbb{R}^n$ and $\theta=(a,\theta_1,\theta_2)$. Then $G(x)$ is a ReLU network with free parameter $\theta$, depth $3\ell+2$ and width $(4\lceil \sqrt{sn/\ell}\rceil+6)n$. We use $G_{\theta}(x)$ to emphasize the dependence of $G$ on $\theta$. For $i=1,2,...,s$, let $a^{(i)} = (\frac{1}{(i-1)n+1},\frac{1}{(i-1)n+2}, ..., \frac{1}{(i-1)n+n})^T \in\mathbb{R}^n$ and $\theta^{(i)} = (a^{(i)},\theta_1,\theta_2)$; by construction, we have $G_{\theta^{(i)}} (e) = T_{\ell}o^{*}_{i}$. \end{proof}
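The Minkowski dimension assumption on $A^*$ can also be probed on data: one computes covering numbers on a grid of scales and fits the slope of $\log|N_\epsilon|$ against $-\log\epsilon$. The sketch below (illustrative Python/NumPy code on a toy one-dimensional curve embedded in $\mathbb{R}^{10}$; note that a greedy net only upper bounds the minimal covering number, and the fitted slope is approximate at finite scales) returns a slope close to one:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
t = rng.uniform(0.0, 1.0, size=5000)          # toy set: a curve in R^10
X = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]
             + [t ** (i + 1) for i in range(8)], axis=1)

def covering_number(X, eps):
    centers = []                               # greedy eps-net
    for x in X:
        if all(np.linalg.norm(x - c) > eps for c in centers):
            centers.append(x)
    return len(centers)

eps_grid = np.array([0.4, 0.2, 0.1, 0.05])
Ns = [covering_number(X, e) for e in eps_grid]
slope = np.polyfit(-np.log(eps_grid), np.log(Ns), 1)[0]
print(Ns, slope)                               # slope ~ 1 for a curve
\end{verbatim}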
\section{Numerical Experiments} \subsection{Experiments setting} The rows of the matrix $A$ are i.i.d. random vectors sampled from the multivariate normal distribution $\mathcal{N}(\mathbf{0},\Sigma)$ with $\Sigma_{jk}=\nu^{|j-k|}$, $1\leq j,k\leq n$, and $\nu =0.3$ in our tests. The elements of $\epsilon$ are generated from $\mathcal{N}(\mathbf{0},\sigma^2\mathbf{I}_m)$ with $\sigma=0.1$ in our examples. $\eta$ has independent coordinates with $\mathbb{P}\{\eta_i=1\}=1-\mathbb{P}\{\eta_i=-1\}=q\neq\frac{1}{2}$, with different values of $q$ specified in each example. The generative model $G$ in our experiments is a pretrained variational autoencoder (VAE) model\footnote{We use the pre-trained generative model of (Bora et al., 2017) available at https://github.com/AshishBora/csgm.}. The MNIST dataset \cite{lecun1998gradient}, consisting of $60000$ handwritten images of size 28$\times$28, is used in our tests. For this dataset, we set the VAE model with a latent dimension $k=20$. Input to the VAE is a vectorized binary image of input dimension $784$. The encoder and decoder are both fully connected networks with two hidden layers, i.e., the encoder and decoder are of size $784-500-500-20$ and $20-500-500-784$, respectively. To avoid the norm constraint $\|z\|_2 \leq r$ in the least squares decoder (\ref{ls1})-(\ref{ls2}), we use its Lagrangian form as follows: \begin{equation}\label{LS generative model} \min_{z} \frac{1}{2m}\|y - AG(z)\|^2 + \lambda\|z\|^2, \end{equation} where the regularization parameter $\lambda$ is chosen as $0.001$ for all the experiments. We do $10$ random restarts with $1000$ steps per restart and choose the best estimate. The reconstruction error is calculated over $10$ images by averaging the per-pixel error in terms of the $\ell_2$ norm. \subsection{Experiment Results} We compare our results with two state-of-the-art algorithms: BIHT \cite{JacquesLaska:2013} and the generative-prior-based algorithm VAE \cite{liu2020sample}. The least squares decoder with VAE in our paper is denoted by LS-VAE. Figures $1-4$ indicate that, with or without sign flips in the measurements, generative-prior-based methods attain more accurate reconstructions than BIHT. Additionally, when sign flips are added, Figures $3$ and $4$ show that LS-VAE attains the more accurate reconstruction. In Figure $5$, we plot the reconstruction error for different numbers of measurements (from $50$ to $300$). VAE and LS-VAE both have smaller reconstruction errors, with LS-VAE slightly better. Moreover, beyond $200$ measurements the reconstruction error saturates for the generative-prior-based methods, because their output is limited by the representation error \cite{bora2017compressed}. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.4]{100-03-biht.pdf} \setlength{\abovecaptionskip}{1pt} \caption{Original images and reconstructions by BIHT, VAE and LS-VAE (from top to bottom row) with 100 measurements.} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.4]{300-03-biht.pdf} \setlength{\abovecaptionskip}{1pt} \caption{Original images and reconstructions by BIHT, VAE and LS-VAE (from top to bottom row) with 300 measurements.} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.42]{100-003-03-biht.pdf} \setlength{\abovecaptionskip}{1pt} \caption{Original images and reconstructions by BIHT, VAE and LS-VAE (from top to bottom row) with 100 measurements and 3\% sign flips.} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.4]{300-003-03-biht.pdf} \setlength{\abovecaptionskip}{1pt} \caption{Original images and reconstructions by BIHT, VAE and LS-VAE (from top to bottom row) with 300 measurements and 3\% sign flips.} \end{center} \end{figure} \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.5]{mnist_00001_00_0001_to300_nolasso_l2.pdf} \includegraphics[scale=0.5]{mnist_00001_003_0001_to300_nolasso_l2.pdf} \setlength{\abovecaptionskip}{1pt} \caption{Pixel-wise reconstruction error as the number of measurements varies. Error bars indicate 95\% confidence intervals. The results with no sign flips and with 3\% sign flips are shown in the left and right panels, respectively.} \end{center} \end{figure}
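For completeness, the optimization loop behind \eqref{LS generative model} can be sketched as follows (PyTorch-style Python; \texttt{G} stands for any differentiable generator such as the pretrained VAE decoder, and the tensors \texttt{A} and \texttt{y} are assumed to be given; the restart and step counts mirror the description above, while the learning rate is an illustrative choice):

\begin{verbatim}
import torch

def ls_decode(G, A, y, k, lam=1e-3, restarts=10, steps=1000, lr=1e-2):
    # minimize ||y - A G(z)||^2/(2m) + lam ||z||^2 over z,
    # keeping the best of several random restarts
    m = A.shape[0]
    best, best_x = float("inf"), None
    for _ in range(restarts):
        z = torch.randn(k, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((y - A @ G(z)) ** 2).sum() / (2 * m) \
                   + lam * (z ** 2).sum()
            loss.backward()
            opt.step()
        with torch.no_grad():
            final = ((y - A @ G(z)) ** 2).sum() / (2 * m) \
                    + lam * (z ** 2).sum()
        if final.item() < best:
            best, best_x = final.item(), G(z).detach()
    return best_x
\end{verbatim}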
\section{Conclusion} We have presented a least squares decoder that explores the low generative intrinsic dimension structure of the target for 1-bit compressive sensing with possible sign flips. Under the assumption that the target signals can be approximately generated via an $L$-Lipschitz generator $G: \mathbb{R}^k\rightarrow\mathbb{R}^{n}, k\ll n$, we prove that, up to a constant $c$, with high probability, the least squares decoder achieves the sharp estimation error $\mathcal{O} (\sqrt{\frac{k\log (Ln)}{m}})$ as long as $m \geq O( k \log (Ln))$. We verify that the (approximate) deep generative prior holds if the target signals have low intrinsic dimension, by constructing a ReLU network with properly chosen depth and width. Extensive numerical simulations and comparisons with state-of-the-art methods demonstrate that the least squares decoder is robust to noise and sign flips, which supports our theory. We have only considered the analysis of the constrained least squares decoder; the analysis of the regularized least squares decoder is left for future work. \section*{Acknowledgements} Y. Jiao is supported in part by the National Science Foundation of China under Grant 11871474 and by the research fund of KLATASDSMOE of China. X. Lu is partially supported by the National Key Research and Development Program of China (No.2018YFC1314600), the National Science Foundation of China (No. 11871385), and the Natural Science Foundation of Hubei Province (No. 2019CFA007). \bibliographystyle{plain}
\section{Introduction} \label{sec:1} In this paper we study the mass loss and the mass history of star clusters in a tidal field, based on a grid of $N$-body simulations. The purpose of this study is three-fold: (a) to understand and quantitatively describe the different effects that are responsible for mass loss, (b) to study the interplay between the different mass loss mechanisms and (c) to develop a method for predicting the mass history of individual clusters of different initial conditions and in different environments. This information is needed if one wants to analyse observed star cluster systems in different galaxies. Therefore we describe in this paper in detail the different mass loss effects; how much each one contributes to the mass loss rate; how it depends on cluster parameters and environment and how these effects determine the mass history and total lifetime of a cluster. In particular we will point out the importance of the loss of stars that is induced by stellar evolution. This mass loss is proportional to the evolutionary mass loss and therefore adds to the mass loss rates of clusters at young ages. We will show that both the evolution-induced mass loss and the relaxation-driven mass loss start slowly with a delay time on the order of a few crossing times at the tidal radius. We will also show that the mass loss rate after core collapse is about a factor two higher than before core collapse, and that the dependence of the relaxation-driven mass loss on mass is different before and after core collapse. The mass of star clusters decreases during their lifetime, until they are finally completely dissolved. The stars that are lost from clusters add to the population of field stars. The mass loss is due to stellar evolution and to several dynamical effects such as two-body relaxation, tidal stripping and shocks. These effects have all been extensively studied individually. However, to understand and describe the combination and interplay of the effects one has to rely on dynamical simulations. The effects of stellar evolution on star clusters can be studied by means of stellar evolution tracks for a large range of masses and metallicities (e.g. Anders \& Fritze-v. Alvensleben 2003; Bruzual \& Charlot 2003; Fioc \& Rocca-Volmerange 1997; Leitherer et al. 1999; Maraston 2005). The dynamical effects of cluster evolution have been described in a large number of theoretical studies starting with Ambartsumian (1938), Spitzer (1940), Chandrasekhar (1943) and a series of papers by King, e.g. King (1958). This was followed by the seminal works of Spitzer (1958 and 1987) and by many other studies, e.g. Chernoff \& Weinberg (1990); Gnedin \& Ostriker (1997); Aarseth (1999); Fukushige \& Heggie (2000). The first $N$-body simulations of clusters were done by von Hoerner (1960). For a review of early \mbox{$N$-body}\ simulations of star clusters, see Aarseth \& Lecar (1973). The recent advancement of computational power, in particular the development of the $GRAPE$-computers (Makino et al. 2003) and the use of Graphics Processing Units (GPU) has allowed the improvement and verification of these theoretical models by means of direct $N$-body simulations (Vesperini \& Heggie 1997; Portegies Zwart et al. 1998; Baumgardt \& Makino 2003, hereafter BM03; Gieles \& Baumgardt 2008). 
For the purpose of the present study the following results are particularly important: \\ (i) The realization that mass loss by tidal effects does not only scale with the half-mass relaxation time (as was assumed in earlier studies), but with a combination of the half-mass relaxation time and the crossing time (Fukushige \& Heggie 2000; Baumgardt 2001; BM03). This implies that the lifetime due to evaporation in a tidal field does not scale linearly with the cluster mass $M$, but with $M^{\gamma}$ with $\gamma \simeq 0.6$ to 0.7. \\ (ii) The realization that mass loss by shocks due to the passage of spiral arms and giant molecular clouds scales with the density $M/r^3$ of the clusters (Spitzer 1958, Gnedin \& Ostriker 1997). Adopting the observed mean mass-radius relation of $r \propto M^{0.1}$ for clusters in spiral galaxies (Larsen 2004, Scheepmaker et al. 2007) then also results in a mass loss rate that scales approximately as $M^{\gamma}$, with $\gamma$ similar to the value for evaporation in a tidal field (Gieles et al. 2006, 2007). This is in agreement with empirical determinations of $\gamma \simeq 0.6$ from studies of cluster samples in different galaxies (Boutloukos \& Lamers 2003; Gieles et al. 2005; Gieles 2009).\\ (iii) A grid of cluster evolution models with different initial masses, different initial concentration factors and in different Galactic orbits by means of $N$-body simulations (BM03) allows a study of the interplay between stellar evolution and dynamical mass loss, which is not easily done by theoretical studies. In particular it shows how the mass loss depends on mass, age and external conditions, and how the stellar mass function evolves during the life of the cluster. In this paper we will use a grid of \mbox{$N$-body}\ simulations of Roche-lobe filling models (BM03), supplemented with a new grid for Roche-lobe underfilling models, of Galactic clusters of different initial mass, different initial concentrations and in different orbits to describe the process of mass loss from clusters and the interplay between the different effects. We also derive a method for calculating the mass loss and mass history for clusters of different metallicity and in different environments. This results in an improvement of the analytical description of the mass history of clusters that was based on a combination of stellar evolution and dynamical effects (Lamers et al. 2005). The paper is arranged as follows. In Sect. 2 we describe the mass loss processes of star clusters: stellar evolution and dynamical effects. In Sect. 3 we describe the results of the $N$-body simulations of BM03 used in this study. Sect. 4 deals with the mass loss due to stellar evolution, i.e. both the direct mass loss and the evolution-induced loss of stars. In Sections 5 and 6 we describe the relaxation-driven mass loss respectively before and after core collapse. Section 7 deals with the mass evolution of clusters in elliptical orbits around the galaxy and Sect. 8 deals with initially Roche-lobe underfilling clusters. In Sect. 9 we study the relation between the total age of a cluster and the initial parameters. In Sect. 10 we predict the mass loss history of clusters and its main contributions. Sections 11 and 12 contain a discussion and the summary plus conclusions of this study. In Appendix A we present a recipe to predict the mass history of star clusters in different environments and with different metallicities.
In Appendix B we tabulate numerical coefficients to calculate the mass loss of clusters by stellar evolution. \section{Mass loss processes} \label{sec:2} Clusters lose mass by stellar evolution and by dynamical effects, such as two-body relaxation, tidal stripping of stars in a cluster that is immersed in a steady tidal field, and shocks. The mass loss by stellar evolution is in the form of gas ejected by stellar winds and by supernovae, but also in the form of compact remnants that may be ejected if they get a kick velocity at birth. Mass loss by dynamical effects is always in the form of stars. Throughout this paper we will refer to these two effects respectively as ``mass loss by stellar evolution'' and ``dissolution'', either in a steady potential field or due to tidal perturbation (shocks). \subsection{Mass loss by stellar evolution} \label{sec:2.1} The mass fraction that is lost by stellar evolution depends on the metallicity and on the adopted stellar initial mass function. We have calculated these for clusters with a Kroupa IMF, using the evolutionary calculations of Hurley et al. (2000), assuming no dynamical mass loss. The data are provided by Pols (2007, Private Communication). The various contributions to the evolutionary mass loss for (non-dissolving) clusters with metallicities of Z=0.0004, 0.001, 0.004, 0.008 and 0.02 can be expressed with very high accuracy (better than $\sim$ 1\%) by 3rd order polynomials as a function of time. We have calculated these fit formulae for models with a Kroupa (2001) IMF in the range of 0.1 to 100 \mbox{$M_{\odot}$}. These models have an initial mean stellar mass of 0.638 \mbox{$M_{\odot}$}. The fit formulae for clusters are listed in Appendix B for the following parameters:\\ (i) the remaining mass fraction $\mu(t)= M(t)/\mbox{$M_{\rm i}$}$,\\ (ii) the mass fractions of black holes $\mu_{\rm BH}=M_{\rm BH}/\mbox{$M_{\rm i}$}$, neutron stars $\mu_{\rm NS}$ and white dwarfs $\mu_{\rm WD}$,\\ (iii) the mean mass of all stars $<m>$ and of the black holes $<m>_{\rm BH}$, neutron stars $<m>_{\rm NS}$ and white dwarfs $<m>_{\rm WD}$, \\ (iv) the luminosity $L(t)/\mbox{$L_{\odot}$}$ of a cluster with an initial mass of 1 \mbox{$M_{\odot}$}. The mass fraction that is lost by winds and supernova ejecta is $1-\mu(t)$. If compact remnants are ejected with a kick velocity then the remaining mass fraction due to stellar evolution is \begin{equation} \mbox{$\mu_{\rm ev}(t)$} \equiv \mu(t)- \mbox{$f_{\rm kick}^{\rm BH}$} \mbox{$\mu_{\rm BH}(t)$} - \mbox{$f_{\rm kick}^{\rm NS}$} \mbox{$\mu_{\rm NS}(t)$} - \mbox{$f_{\rm kick}^{\rm WD}$} \mbox{$\mu_{\rm WD}(t)$} \label{eq:muevt} \end{equation} where \mbox{$f_{\rm kick}^{\rm BH}$}, \mbox{$f_{\rm kick}^{\rm NS}$}\ and \mbox{$f_{\rm kick}^{\rm WD}$}\ are the fractions of these stellar remnants that are ejected out of the cluster by their kick velocity. If all BHs are kicked out then $\mbox{$f_{\rm kick}^{\rm BH}$}=1$ and if all WDs are retained then $\mbox{$f_{\rm kick}^{\rm WD}$}=0$. The fraction of the {\it luminous mass} that is left by stellar evolution is \begin{equation} \mbox{$\mu^{\rm ev}_{\rm lum}(t)$} = \mu(t) - \mbox{$\mu_{\rm BH}(t)$} - \mbox{$\mu_{\rm NS}(t)$} - \mbox{$\mu_{\rm WD}(t)$}. \label{eq:mulumevt} \end{equation} All fractions $\mu$ are expressed relative to the {\it initial} cluster mass \mbox{$M_{\rm i}$}.
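For reference, Eq. \ref{eq:muevt} is straightforward to evaluate in code. In the sketch below (Python), the polynomial coefficients are placeholders for the tabulated values of Appendix B (they are not reproduced here) and, for illustration, the fits are taken as polynomials in $\log_{10}$ of the age; Appendix B specifies the exact fitting variable per metallicity:

\begin{verbatim}
import numpy as np

# placeholder 3rd-order fit coefficients (highest power first);
# substitute the Appendix B values for the chosen metallicity
POLY = {"mu":    [0.0, 0.0, 0.0, 1.0],
        "mu_BH": [0.0, 0.0, 0.0, 0.0],
        "mu_NS": [0.0, 0.0, 0.0, 0.0],
        "mu_WD": [0.0, 0.0, 0.0, 0.0]}

def mu_ev(t_myr, f_kick_bh=1.0, f_kick_ns=1.0, f_kick_wd=0.0):
    # remaining mass fraction mu_ev(t) of Sect. 2.1: winds/SN ejecta
    # plus the compact remnants removed by their kick velocities
    x = np.log10(t_myr)
    f = {key: np.polyval(c, x) for key, c in POLY.items()}
    return (f["mu"] - f_kick_bh * f["mu_BH"]
            - f_kick_ns * f["mu_NS"] - f_kick_wd * f["mu_WD"])
\end{verbatim}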
The mass loss rate of a cluster due to stellar evolution can now be expressed as \begin{equation} \left(\frac{{\rm d}\mbox{$M$}}{{\rm d}t}\right)_{\rm ev} = M_{\rm lum}(t) \cdot \frac{{\rm d}\mbox{$\mu_{\rm ev}(t)$}}{{\rm d}t} \label{eq:dmdtev} \end{equation} which is negative, since $\mbox{$\mu_{\rm ev}$}$ decreases with time. This expression is strictly valid for the early phases of the cluster lifetime, before the preferential loss of low mass stars by dynamical effects has changed the shape of the mass function. Since stellar evolution dominates the mass loss only in the early phase of the cluster's lifetime, Eq. \ref{eq:dmdtev} is a good approximation. (For a description of evolutionary mass loss in a cluster with preferential loss of low mass stars see Kruijssen \& Lamers 2008, Kruijssen 2009 and Trenti et al. 2010.) In the description of the mass loss of the cluster models studied by \mbox{$N$-body}\ simulations in this paper, the effect of the changing mass function due to evolution and the preferential loss of low mass stars is properly taken into account, as it is in the output of the simulations. \subsection{Mass loss by dynamical effects or ``dissolution''} \label{sec:2.2} The time-dependent mass loss by dissolution can be described by $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} = - M/t_{\rm dis}$, where {\it $t_{\rm dis}\equiv -({\rm d} \ln (M)/{\rm d} t)^{-1}$ is the dissolution time scale} that depends on the actual cluster mass and on the environment of the cluster. Let us assume that we can describe $t_{\rm dis}$ as a power-law function of mass, as $t_{\rm dis} = t_0 (M/\mbox{$M_{\odot}$})^{\gamma}$, with the constant $t_0$ being the {\it dissolution parameter} (which is the hypothetical dissolution time scale of a cluster of 1 \mbox{$M_{\odot}$}). The change in the cluster mass due to dissolution is then described by\footnote{ Throughout the rest of this paper all masses \mbox{$M_{\rm i}$}, $M$ and $m$ are in units of \mbox{$M_{\odot}$}\ and all ages are in Myrs.} \begin{equation} \left(\frac{{\rm d}M}{{\rm d}t}\right)_{\rm dis} = - \frac{M(t)}{t_{\rm dis}(t)} = - \frac{M(t)^{1-\gamma}}{t_0} \label{eq:dmdtdis} \end{equation} We stress that the dissolution time scale \mbox{$t_{\rm dis}$}\ is not the same as the {\it total lifetime of the cluster}, \mbox{$t_{\rm tot}$}, although these are related. Integration of Eq. \ref{eq:dmdtdis} shows that $\mbox{$t_{\rm tot}$} = \mbox{$t_0M_{\rm i}^{\gamma}$} / \gamma$ in the absence of stellar evolution, where $\mbox{$M_{\rm i}$}$ is the initial mass. In reality stellar evolution also removes part of the cluster mass. This implies that this simple estimate of \mbox{$t_{\rm tot}$}\ overestimates the real cluster lifetime.
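This behaviour is easily verified numerically; the sketch below (Python, with illustrative parameter values) integrates Eq. \ref{eq:dmdtdis} with a simple Euler scheme and recovers the analytic lifetime $\mbox{$t_0M_{\rm i}^{\gamma}$}/\gamma$ for dissolution-only evolution:

\begin{verbatim}
t0, gamma, M_i = 20.0, 0.65, 1.0e4     # Myr, -, Msun (illustrative)

t_tot = t0 * M_i ** gamma / gamma      # analytic: t0 Mi^gamma / gamma
M, t = M_i, 0.0
dt = t_tot / 1.0e5
while M > 1.0:                         # Euler integration
    M -= dt * M ** (1.0 - gamma) / t0  # dM/dt = -M^(1-gamma)/t0
    t += dt
print(t, t_tot)                        # numerical ~ analytic lifetime
\end{verbatim}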
Theoretical considerations suggest that $\gamma \simeq 0.65$ to 0.85. This follows from the following dynamical arguments. \subsubsection{Dissolution in a steady tidal field} \label{sec:2.2.1} Spitzer (1987) has argued that a fraction of the stars $\xi$ escapes each $\mbox{$t_{\rm rh}$}$, such that \begin{equation} \left(\frac{{\rm d}N}{{\rm d}t}\right)_{\rm dis} = -\frac{\xi N}{\mbox{$t_{\rm rh}$}} \label{eq:dmdtspitzer} \end{equation} where $\mbox{$t_{\rm rh}$} \propto (N^{1/2}/ \ln \Lambda) \mbox{$r_{\rm h}$}^{3/2}~ \mbox{$\overline{m}$} ^{-1/2}$ is the half-mass relaxation time, \mbox{$r_{\rm h}$}\ is the half-mass radius, $\mbox{$\overline{m}$} $ is the mean stellar mass, $N=M/\mbox{$\overline{m}$} $ is the number of stars and $\ln \Lambda$ is the Coulomb logarithm. The value of $\xi$ is larger for Roche-lobe filling clusters than for clusters in isolation. In analytical studies of cluster dissolution a single value for $\xi$ is usually assumed, implying that the cluster lifetime is a constant times $\mbox{$t_{\rm rh}$}$ (see e.g. Spitzer 1987). However, recent theoretical studies by Lee (2002) and Gieles \& Baumgardt (2008) have shown that $\xi$ is a strong function of the Roche-lobe filling factor $\mbox{$r_{\rm h}$}/\mbox{$r_{\rm J}$}$ (where $\mbox{$r_{\rm J}$}$ is the Jacobi radius, i.e. the Roche-lobe radius for clusters). These authors found for clusters with $\mbox{$r_{\rm h}$}/\mbox{$r_{\rm J}$} \geq 0.05$ that $\xi$ scales roughly as $(\mbox{$r_{\rm h}$}/\mbox{$r_{\rm J}$})^{3/2}$. Since $\mbox{$r_{\rm J}$}\propto M^{1/3}\omega^{-2/3}$, where $\omega$ is the angular frequency of the cluster orbit, $\xi\propto t_{\rm cross}\times \omega$. Here $t_{\rm cross}$ is the mean crossing time of stars in a cluster: $t_{\rm cross} \propto \mbox{$r_{\rm h}$}^{3/2}/ \sqrt{GM}$. This dependence of $\xi$ on $\mbox{$r_{\rm h}$}^{3/2}$ cancels the $\mbox{$r_{\rm h}$}^{3/2}$ dependence of $\mbox{$t_{\rm rh}$}$, such that the radius becomes an unimportant parameter in \mbox{${\rm d}N/{\rm d}t$}. This can be understood intuitively as follows: for a smaller (larger) radius, relaxation becomes more (less) important, while the escape criterion due to the tidal field becomes less (more) important. On top of this, Baumgardt (2001) and BM03 showed that the dissolution time scale does not scale linearly with $\mbox{$t_{\rm rh}$}$, but rather with a combination of $\mbox{$t_{\rm rh}$}$ and the crossing time, $t_{\rm cross}$, because even unbound stars need time to leave a cluster (see Fukushige \& Heggie 2000 for details). The relevant time scale is \begin{eqnarray} t_{\rm dyn} &\propto& \mbox{$t_{\rm rh}$}^{x} t_{\rm cross}^{1-x}\\ &\propto& \left(\frac{N}{\ln\Lambda}\right)^x\,t_{\rm cross} \label{eq:treltcr} \end{eqnarray} This implies that the dissolution time scale in terms of the number of stars is \begin{eqnarray} \mbox{$t_{\rm dis}$}^N~&\equiv& ~ -\frac{N}{{\rm d}N/{\rm d}t}~ = ~\frac{t_{\rm dyn}}{\xi} \\ &\propto& \left(\frac{N}{\ln\Lambda}\right)^x\,\omega^{-1} \label{eq:tdisN} \end{eqnarray} where $\mbox{$t_{\rm dis}$}^N$ is the dissolution time scale if dissolution is expressed in terms of $N$ instead of $M$. In the range of $ 10^4 < N < 10^6$ we can approximate $\Lambda\simeq 0.02 N$ (Giersz \& Heggie 1994) and $N/\ln \Lambda \propto N^{0.80}$, and so $\mbox{$t_{\rm dis}$} ^N \propto N^{p}$ with $p =0.80 x$.\footnote{ Using the total lifetime as an indicator of the dynamical time BM03 found that $x=0.75$ for Roche-lobe filling models with an initial concentration factor of the density King-profile $W_0=5$ and $x=0.82$ for the more centrally concentrated $W_0=7$ models. This would imply $p \simeq 0.60$ and 0.66 for Roche-lobe filling models of $W_0=5$ and 7 respectively.} In this paper we describe the dissolution time as a function of $M$ instead of $N$. Comparing the two expressions $\mbox{$t_{\rm dis}$} \propto M^\gamma$ and $\mbox{$t_{\rm dis}$} ^N \propto N^p $, we see that $\gamma \ne p$ if the mean stellar mass \mbox{$\overline{m}$}\ changes during the cluster's lifetime. If \mbox{$\overline{m}$}\ decreases as a function of time, i.e. increases as a function of $M(t)$, then $\gamma<p$, whereas $\gamma > p$ if \mbox{$\overline{m}$}\ increases with time.
We will see below that after an initial phase, dominated by stellar evolution, \mbox{$\overline{m}$}\ increases with time due to the preferential loss of low mass stars by tidal stripping. So the values of $\gamma$ are expected to be slightly higher than the values of $p$. \subsubsection{Dissolution due to shocks} \label{sec:2.2.2} Clusters can also be destroyed by shocks (e.g. Ostriker et al. 1972, Spitzer 1987, Chernoff \& Weinberg 1990, Gnedin \& Ostriker 1997) due to encounters with spiral arms or giant molecular clouds in the disk of a galaxy, by disk shocking for clusters in orbits inclined with respect to the Galactic plane and by bulge shocking for clusters in highly elliptical orbits. The mass loss rate due to shocks depends on the cluster half-mass radius, \mbox{$r_{\rm h}$}, as $\mbox{$({\rm d}M/{\rm d}t)$} \propto M/\mbox{$\rho_{\rm h}$}\propto \mbox{$r_{\rm h}$}^3$, with $\mbox{$\rho_{\rm h}$}$ the density within \mbox{$r_{\rm h}$}. This implies that the mass loss time scale depends on the cluster properties as $\mbox{$t_{\rm dis}$} \equiv -M/\mbox{$({\rm d}M/{\rm d}t)$} \propto M/\mbox{$r_{\rm h}$}^3$, which is proportional to the cluster density. The observed mean mass-radius relation of clusters is not well defined, but Larsen (2004) and Scheepmaker et al. (2007) find a mean relation of $\mbox{$r_{\rm h}$} \propto M^{\lambda} $ with $\lambda \simeq 0.13$. So the time scale for mass loss due to shocks is $\mbox{$t_{\rm dis}$} \propto M^{0.61}$ for a constant mean stellar mass (Gieles et al. 2006, 2007). This dependence is almost the same as that for tidal dissolution. \subsubsection{The expected values of $\mbox{$t_0$}$ and $\gamma$} \label{sec:2.2.3} Based on the arguments of the previous subsections we expect that the combined mass loss by tidal dissolution and shocks can be described by a function of the form \begin{equation} \left( \frac{{\rm d}N}{{\rm d}t}\right)_{\rm dis} = -\frac{N}{\mbox{$t_{\rm dis}$}^N} = -\frac{N^{1-p}}{\mbox{$t_0$}^N} ~~~{\rm or} ~~~ \left( \frac{{\rm d}M}{{\rm d}t}\right)_{\rm dis} = - \frac{M^{1-\gamma}}{t_0} \label{eq:dndtdmdt} \end{equation} with $p \simeq 0.65$ and $\gamma \mathrel{\copy\simgreatbox} p$. We will use the second expression with the value of $\gamma$ derived from the \mbox{$N$-body}\ simulations of BM03. In an environment where cluster dissolution is only due to stellar evolution, internal dynamical effects and tidal stripping, \mbox{$t_0$}\ depends on the potential field in which the cluster moves. If clusters move in elliptical orbits in a logarithmic potential field, i.e. with a constant galactic rotation velocity $v_{\rm Gal}$, the dissolution time is reduced by a factor $1-\epsilon$, compared to a circular orbit at $R_A$, where the eccentricity $\epsilon = (R_A-R_P)/(R_A+R_P)$ and $R_A$ and $R_P$ are the apogalactic and perigalactic distances respectively. This implies that we expect the value of $\mbox{$t_0$}$ to vary as \begin{equation} \mbox{$t_0$} = \mbox{$t_{\rm ref}^N$} \times \left(\frac{1-\epsilon}{\mbox{$\overline{m}$}^{\gamma}}\right) \left( \frac{\mbox{$R_{\rm Gal}$}}{8.5 {\rm kpc}}\right) \left(\frac{\mbox{$v_{\rm Gal}$}}{220 {\rm \mbox{${\rm km~s}^{-1}$}}}\right)^{-1} \label{eq:tnrefdef} \end{equation} where \mbox{$t_{\rm ref}^N$}\ is a constant, whose value will be derived from the \mbox{$N$-body}\ simulations of BM03.
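(A small helper that packages Eq. \ref{eq:tnrefdef} is sketched below in Python; the calibration constant \mbox{$t_{\rm ref}^N$}\ is left as a placeholder until its value is derived from the BM03 models, and the mean stellar mass defaults to the initial Kroupa value of Sect. 2.1.)

\begin{verbatim}
def t0_dissolution(t_ref_N, ecc, R_gal_kpc, v_gal_kms,
                   mbar=0.638, gamma=0.65):
    # dissolution parameter t0 of Sect. 2.2.3; t_ref_N must be
    # calibrated on the BM03 N-body models (placeholder here)
    return (t_ref_N * (1.0 - ecc) / mbar ** gamma
            * (R_gal_kpc / 8.5) * (220.0 / v_gal_kms))

# e.g. a circular solar-circle orbit:
t0 = t0_dissolution(t_ref_N=1.0, ecc=0.0,
                    R_gal_kpc=8.5, v_gal_kms=220.0)
\end{verbatim}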
The factors $\mbox{$R_{\rm Gal}$}/8.5{\rm kpc}$ and $\mbox{$v_{\rm Gal}$} / 220\, {\rm km~s}^{-1}$ provide a scaling for calculating the dissolution of clusters at different galactocentric distances in other galaxies with constant rotation velocity. The factor $1/ \mbox{$\overline{m}$}^\gamma$ is a result of the conversion of $N$ to $M$ if the mean stellar mass \mbox{$\overline{m}$}\ is about constant. We will see below that this yields a very good description of the mass loss rate and $M(t)$ for all models of BM03, if \mbox{$\overline{m}$}\ is chosen appropriately.\footnote {If other processes, such as encounters with GMCs or spiral density waves, are important, then the value of \mbox{$t_0$}\ will be smaller than predicted by Eq. \ref{eq:tnrefdef}. Wielen (1985) and Lamers \& Gieles (2006) found that clusters in the solar neighbourhood are mainly destroyed by encounters with GMCs which reduces \mbox{$t_0$}\ by a factor 4 compared to Eq. \ref{eq:tnrefdef}. Kruijssen \& Mieske (2009) have derived the values of \mbox{$t_0$}\ for a number of galactic globular clusters in elliptical orbits, assuming $\gamma=0.70$. } \section{The models of Roche-lobe filling clusters} \label{sec:3} \begin{table*} \caption{The N-body models of BM03 used in this study} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{r r r r r r r r | r r r r r | r r r r r r r} \multicolumn{8}{l}{Input parameters} & \multicolumn{5}{l}{Timescales} & \multicolumn{7}{l}{Output parameters} \\ $\#$ & Mass & nr & $W_0$ & $R_{\rm Gal}$ & Orbit & $\mbox{$r_{\rm t}$}$ & $\mbox{$r_{\rm h}$}$ & $t_{\rm 1\%}$ & $t_{\rm cc}$ & $t_{\rm rh}$ & $t_{\rm cr}(r_{\rm h})$ & $t_{\rm cr}(r_{\rm t})$ & $\gamma$ & $t_0$ & $\mbox{$t_0M_{\rm i}^{\gamma}$}$ & $t_{\rm del}$ & $f_{\rm ind}^{\rm max}$ & $\mbox{$t_0^{\rm cc}$}$ & $j_{\rm cc}$ \\ & $M_{\odot}$ & stars & & kpc & & pc & pc & Gyr & Gyr & Gyr & Myr & Myr & & Myr & Gyr & Myr & & Myr & \\ \hline 1 & 71952 & 128k & 5& 15& circ & 89.6 &16.75 & 45.3 & 36.42 & 7.20 &15.24 & 133 & 0.65 & 40.0 & 57.4 & 400 & 1.1 & 16.1 & 1.60 \\ 2 & 35915 & 64k & 5& 15& circ & 71.0 &13.28 & 26.9 & 22.51 & 3.88 &15.24 & 133 & 0.65 & 42.0 & 38.4 & 400 & 1.0 & 17.2 & 1.61 \\ 3 & 18205 & 32k & 5& 15& circ & 56.7 &10.59 & 19.8 & 14.24 & 2.13 &15.24 & 133 & 0.65 & 41.2 & 24.2 & 400 & 1.0 & 20.0 & 1.40 \\ 4 & 8808 & 16k & 5& 15& circ & 44.5 & 8.32 & 13.4 & 8.72 & 1.13 &15.24 & 133 & 0.65 & 37.9 & 13.7 & 400 & 0.7 & 20.4 & 1.29 \\ 5 & 4489 & 8k & 5& 15& circ & 35.5 & 6.64 & 9.0 & 5.92 & 0.63 &15.24 & 133 & 0.65 & 36.0 & 8.5 & 400 & 0.7 & 19.0 & 1.37 \\ \hline 6 & 71236 & 128k & 5& 8.5& circ & 61.1 &11.43 & 26.5 & 21.34 & 4.05 & 8.63 & 76 & 0.65 & 21.5 & 32.1 & 228 & 0.8 & 8.5 & 1.70 \\ 7 & 36334 & 64k & 5& 8.5& circ & 48.8 & 9.13 & 17.2 & 13.91 & 2.22 & 8.63 & 76 & 0.65 & 22.0 & 20.3 & 228 & 0.8 & 10.4 & 1.40 \\ 8 & 18408 & 32k & 5& 8.5& circ & 39.0 & 7.28 & 11.1 & 8.41 & 1.22 & 8.63 & 76 & 0.65 & 21.7 & 12.8 & 228 & 0.8 & 9.9 & 1.50 \\ 9 & 9003 & 16k & 5& 8.5& circ & 30.7 & 5.74 & 7.5 & 5.06 & 0.65 & 8.63 & 76 & 0.65 & 20.5 & 7.4 & 228 & 0.7 & 10.7 & 1.34 \\ 10 & 4497 & 8k & 5& 8.5& circ & 24.3 & 4.55 & 4.9 & 3.30 & 0.36 & 8.63 & 76 & 0.65 & 20.0 & 4.7 & 228 & 0.8 & 10.2 & 1.42 \\ \hline 11 & 71218 & 128k & 5& 2.8& circ & 29.4 & 5.50 & 9.3 & 7.66 & 1.35 & 2.87 & 25 & 0.65 & 7.5 & 10.7 & 75 & 0.7 & 3.0 & 1.62 \\ 12 & 35863 & 64k & 5& 2.8& circ & 23.4 & 4.37 & 5.9 & 4.63 & 0.73 & 2.87 & 25 & 0.65 & 6.7 & 5.8 & 75 & 0.7 & 3.3 & 1.36 \\ 13 & 18274 & 32k & 5& 2.8& circ & 18.7 & 3.49 & 3.6 & 2.85 & 0.40 & 2.87 & 25 & 0.65 & 6.0 & 
3.5 & 75 & 0.6 & 3.3 & 1.26 \\ 14 & 9024 & 16k & 5& 2.8& circ & 14.8 & 2.76 & 2.3 & 1.58 & 0.22 & 2.87 & 25 & 0.65 & 5.3 & 2.0 & 75 & 0.4 & 3.2 & 1.16 \\ 15 & 4442 & 8k & 5& 2.8& circ & 11.7 & 2.18 & 1.3 & 0.85 & 0.12 & 2.87 & 25 & 0.65 & 4.4 & 1.0 & 75 & 0.4 & 2.8 & 1.13 \\ \hline 16 & 71699 & 128k & 7& 8.5& circ & 61.3 & 7.11 & 28.5 & 12.62 & 1.99 & 4.22 & 76 & 0.80 & 6.4 & 46.1 & 228 & 0.8 & 11.5 & 1.52 \\ 17 & 35611 & 64k & 7& 8.5& circ & 48.5 & 5.63 & 17.2 & 7.87 & 1.07 & 4.22 & 76 & 0.80 & 6.5 & 28.2 & 228 & 0.8 & 10.5 & 1.57 \\ 18 & 18013 & 32k & 7& 8.5& circ & 38.7 & 4.48 & 11.2 & 4.87 & 0.58 & 4.22 & 76 & 0.80 & 6.5 & 15.7 & 228 & 0.8 & 10.5 & 1.43 \\ 19 & 8928 & 16k & 7& 8.5& circ & 30.6 & 3.55 & 6.9 & 2.89 & 0.32 & 4.22 & 76 & 0.80 & 6.0 & 8.5 & 228 & 0.8 & 11.0 & 1.21 \\ 20 & 4402 & 8k & 7& 8.5& circ & 24.2 & 2.80 & 4.4 & 1.67 & 0.17 & 4.22 & 76 & 0.80 & 5.5 & 4.3 & 228 & 0.8 & 10.1 & 1.14 \\ \hline 21 & 17981 & 32k & 5& 8.5& e0.2 & 29.5 & 5.51 & 9.0 & 6.34 & 0.80 & 5.76 & 50 & 0.65 & 14.5 & 9.0 & 150 & 0.2 & 8.4 & 1.24: \\ 22 & 18300 & 32k & 5& 8.5& e0.3 & 25.7 & 4.81 & 7.8 & 5.21 & 0.65 & 4.65 & 41 & 0.65 & 12.0 & 7.1 & 120 & 0.1 & 7.9 & 1.02: \\ 23 & 17966 & 32k & 5& 8.5& e0.5 & 18.6 & 3.47 & 5.7 & 3.61 & 0.40 & 2.88 & 25 & 0.65 & 8.8 & 5.3 & 75 & 0.0 & 5.0 & 1.23: \\ 24 & 17957 & 32k & 5& 8.5& e0.7 & 12.2 & 2.27 & 3.6 & 2.09 & 0.21 & 1.52 & 13 & 0.65 & 5.9 & 3.4 & 39 & 0.0 & 3.0 & 1.29: \\ 25 & 18026 & 32k & 5& 8.5& e0.8 & 8.9 & 1.67 & 2.8 & 1.46 & 0.13 & 0.96 & 8 & 0.65 & 4.5 & 2.6 & 24 & 0.0 & 2.3 & 1.28: \\ \hline \end{tabular} } Left section: model parameters; Middle section: cluster time scales; Right section: fit parameters.\\ First three blocks: clusters with $W_0=5$ in circular orbits at $R_{\rm Gal}$= 15, 8.5 and 2.8 kpc. Fourth block: clusters with $W_0=7$ in circular orbits at $R_{\rm Gal}$=8.5 kpc. Fifth block: clusters with $W_0=5$ in elliptical orbits with apogalactic distance of $R_A$= 8.5 kpc and a perigalactic distance of $R_A(1-\epsilon)/(1+\epsilon)$. The values of $r_{\rm h}$ and $r_{\rm t}$ apply to the perigalacticon. Clusters with $W_0=5$ or 7 have $\gamma=0.65$ and 0.80 respectively, before core collapse. The number of stars is given in units of $1k=1024$. \label{tbl:BM03models} \end{table*} In this paper we study the results of $N$-body simulations, in order to understand the way clusters evolve due to dynamical and evolutionary effects. We use the models of Roche-lobe filling clusters from BM03 for comparison with our predictions, supplemented with a few models of initially Roche-lobe underfilling clusters. Out of the initial 33 models we have selected 25 representative BM03 cluster models. These are chosen because they allow the study of the effects of the different parameters of the models, i.e. initial mass and initial concentration, and of the cluster orbits, i.e. Galactocentric distance and eccentricity. (We found that the information derived from these models also applies to the other 8 models.) The models span a range of dissolution times between 1.5 and 50 Gyr. The selected models are listed in Table \ref{tbl:BM03models}. \subsection{Parameters of the cluster models} \label{sec:3.1} The models are divided into five blocks, separated by horizontal lines in Table \ref{tbl:BM03models}. The first three blocks contain models of Roche-lobe filling clusters of different masses with an initial concentration factor $W_0=5$, in circular orbits at galactocentric distances of $\mbox{$R_{\rm Gal}$}=15$, 8.5 and 2.83 kpc. 
The fourth block contains Roche-lobe filling cluster models in circular orbits and with different initial masses, but with a more concentrated initial density distribution, with a King profile of $W_0=7$. These will be used to study the effect of the initial concentration on the cluster evolution, by comparing them with the results of the $W_0=5$ models. The fifth block contains Roche-lobe filling cluster models with $W_0=5$ in elliptical orbits with various eccentricities. All models have a Kroupa stellar initial mass function (IMF), with a mass range of 0.15 to 15 \mbox{$M_{\odot}$}, an initial mean stellar mass of 0.547 \mbox{$M_{\odot}$}\ and a metallicity of $Z=0.001$. The clusters have no initial binaries, but binaries do form during the dynamical evolution, mainly in the high density central region during core collapse. Neutron stars and white dwarfs are retained in the cluster when they are formed (no kick velocities) but may be lost later by dynamical effects. Black holes are not considered in the BM03 models, because of the adopted upper mass limit of 15 \mbox{$M_{\odot}$}. The clusters in elliptical orbits (nrs 21 to 25) have about the same initial mass and the same apogalactic distance of $R_A=8.5$ kpc, but different elliptical orbits with eccentricities $0.2 \le \epsilon \le 0.8$. This implies perigalactic distances of $R_P=R_A (1-\epsilon)/(1+\epsilon)$ between $0.667\,R_A$ and $0.111\,R_A$. For these models the mass loss rates are strongly variable with time: at perigalacticon the rates are much higher than at apogalacticon. The initial values of the tidal radius, half-mass radius etc. in Table \ref{tbl:BM03models} refer to the values at perigalacticon. The data of each model in Table \ref{tbl:BM03models} are given in three groups, separated by vertical lines. The left group gives the initial model data: model nr, initial mass, initial nr of stars, $W_0$, apogalactic distance, type of orbit with eccentricity, tidal radius \mbox{$r_{\rm t}$}\ and half-mass radius \mbox{$r_{\rm h}$}. The middle group gives the various time scales of the models: the time $\mbox{$t_{\rm 1\%}$}$ when $M(t)=0.01 \mbox{$M_{\rm i}$}$, the core-collapse time $t_{\rm cc}$, the initial half mass relaxation time \mbox{$t_{\rm rh}$}, the half mass crossing time \mbox{$t_{\rm cr}(r_{\rm h})$}\ and the initial crossing time at the tidal radius \mbox{$t_{\rm cr}(r_{\rm t})$}. The right hand group gives the data that describe the mass loss rates of the models: \mbox{$t_0$}\ and $\mbox{$t_0$} \mbox{$M_{\rm i}^{\gamma}$}$ (which is a proxy for the expected total lifetime). The values of the delay time \mbox{$t_{\rm delay}$}\ and \mbox{$f_{\rm ind}^{\rm max}$}\ together describe how the clusters react dynamically to mass loss by stellar evolution (see Sect. \ref{sec:4}). The last two columns give the values of $\mbox{$t_0^{\rm cc}$}$, which describes the mass loss rate after core collapse, and \mbox{$j_{\rm cc}$}, which describes the increase in mass loss due to core collapse. The determination of these parameters is described below. \subsection{The mass loss rates of the cluster models} \label{sec:3.2} BM03 define a star to be lost from a cluster if it is outside the Jacobi radius $\mbox{$r_{\rm J}$}$ of the cluster. Stars with a velocity $v>v_{\rm esc}$ but $r<\mbox{$r_{\rm J}$}$ are still considered cluster members. On the other hand, the mass lost by stellar evolution (i.e. by winds and supernovae) is assumed to leave the cluster immediately. We have derived the mass loss rates of the N-body models of BM03.
From the output of these model calculations we can separate the mass loss that is due to stellar evolution from the contribution by dynamical effects. {\it There is an important difference between these two mass loss rates. Mass loss by stellar evolution is instantaneous and independent of the structure and orbit of the cluster. On the other hand, mass loss by dynamical effects always proceeds on a slow time scale and needs time to build up.} We will see this in the results. \subsection{Three phases of mass loss} \label{sec:3.3} A study of the mass loss rates of the BM03 models shows that three mass loss phases can be recognized. This is depicted in Fig. \ref{fig:phases} for two models (nrs 6, 12), which shows the variation of \mbox{$({\rm d}M/{\rm d}t)$}\ as a function of $M$ and $t$. In the first phase (A) mass loss is dominated by stellar evolution and so the mass loss rate drops steeply with time. In the second phase (B) mass loss is dominated by dynamical effects and the mass loss rate behaves approximately as a power law of $\mbox{$({\rm d}M/{\rm d}t)$} \propto M^{1-\gamma}$ (see Sects. \ref{sec:2.2.1} and \ref{sec:2.2.2}). The third phase (C) is after core collapse. Mass loss is also dominated by dynamical effects, but the mass loss rate is higher than before core collapse. This has been noticed before in the mass evolution of cluster models, e.g. by Baumgardt (2001). The separation between the three regions is not as strict as suggested in Fig. \ref{fig:phases} because the different effects overlap near the boundaries (see below). In the next three sections we discuss the three mass loss phases and how they depend on the cluster parameters and the environment. \begin{figure*} \centerline{\psfig{figure=lamers-fig1a.ps,width=6.0cm}\hspace{-0.3cm} \psfig{figure=lamers-fig1b.ps,width=6.0cm}} \vspace{-0.3cm} \centerline{\psfig{figure=lamers-fig1c.ps,width=6.0cm}\hspace{-0.3cm} \psfig{figure=lamers-fig1d.ps,width=6.0cm}} \caption[] {The three phases of mass loss in two \mbox{$N$-body}\ models of BM03: A = dominated by stellar evolution, B = dominated by dissolution, C = dominated by dissolution after core collapse. The line separating phases A and B is when the evolutionary mass loss has dropped to 10\% of the total mass loss. The line separating phases B and C indicates the core collapse time. The figure shows $dM/dt$ versus $M$ (left) and $t$ (right) in logarithmic units. The full upper line is the total mass loss rate; the dashed line is the mass loss by stellar evolution; the full smooth lines are the mass loss by dynamical effects, assumed to scale as a power law of $M$, before and after core collapse. The models are defined by a vector containing: model nr, nr of stars, total lifetime (in Gyr), concentration parameter $W_0$, $R_G$ (in kpc), and orbit.} \label{fig:phases} \end{figure*} \section{Direct and induced mass loss by stellar evolution} \label{sec:4} Mass loss during the early history of clusters (phase A in Fig. \ref{fig:phases}) is dominated by stellar evolution. At very early stages only stellar evolution contributes to the mass loss. But very soon thereafter, the mass loss rate \mbox{$({\rm d}M/{\rm d}t)$}\ of all models is higher than \mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$}. This is seen best in the right hand panels of Fig.
\ref{fig:phases}, where the mass loss rates are plotted versus time: the difference between the total mass loss rate (top lines) and the mass loss by evolution (dotted line) is much larger than the mass loss by dissolution (almost straight line), which will be discussed below. {\it This shows that during the early phases, when the mass loss is dominated by stellar evolution, Roche-lobe filling clusters lose an extra amount of mass (in the form of stars) by dynamical effects, induced by the mass loss due to stellar evolution.} This evolution-induced mass loss is due to the fact that the cluster radius expands and the tidal radius shrinks due to evolutionary mass loss. For adiabatic models, we expect that the evolution-induced mass loss rate, \mbox{$({\rm d}M/{\rm d}t)_{\rm ind}^{\rm ev}$}, will be about equal to the mass loss rate by stellar evolution, \mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$}. This can be understood as follows. If a cluster loses a fraction $\delta \ll 1$ of its mass $M_0$ on a time scale longer than its crossing time, the cluster will adiabatically expand such that its radius, relative to its initial radius, is $r/r_0=M_0/M\approx1+\delta$. At the same time, the mass loss causes the Jacobi radius to shrink: $r_J/r_{J0}=(M/M_0)^{1/3}\approx1-\delta/3$. The mass in the shell between $1-\delta/3<r/r_{J0}<1+\delta$ is consequently unbound since it is outside the new Jacobi radius. For a logarithmic potential, the density at $r_{J0}$, $\rho(r_{J0})$, is 6 times lower than the mean density within $r_{J0}$: $\bar{\rho_{J0}}=3M_0/(4\pi r_{J0}^3)$. The mass $\Delta M$ that is in the unbound shell is thus $\Delta M=4\pi r_{J0}^2\rho_{J0}\Delta r$, with $\Delta r=4\delta r_{J0}/3$, such that $\Delta M=(2/3)\delta M_0$. In fact, $\Delta M$ will be slightly higher because we have adopted the lower limit for the density in the outer layers of the cluster. So we may expect that the evolutionary mass loss induces about an equal rate of evolution-induced mass loss, $\mbox{$({\rm d}M/{\rm d}t)_{\rm ind}^{\rm ev}$} = \mbox{$f_{\rm ind}$} \times \mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$}$ with $\mbox{$f_{\rm ind}$} \simeq 1$. A study of all models shows two deviations from this simple expectation.\\ (i) The value of \mbox{$f_{\rm ind}$}\ is only about unity for models for which $\mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$} \gg \mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$}$. This is the case for models with a very long lifetime of $\mbox{$t_{\rm tot}$} > 25$ Gyr. If the mass loss by dissolution in the early lifetime of the cluster cannot be ignored, the evolution-induced mass loss rate is smaller. Therefore $\mbox{$f_{\rm ind}$}$ will be smaller than unity.\\ (ii) The evolution-induced mass loss does not start at $t=0$ but needs time to build up. We can expect that the time scale for this build-up will be of the order of the crossing time at the tidal radius, because this is the time scale on which stars can leave the cluster by passing the tidal radius due to the reduction of the depth of the potential well. \begin{figure} \centerline{\epsfig{figure=lamers-fig2a.ps,width=8.0cm}} \centerline{\epsfig{figure=lamers-fig2b.ps,width=8.0cm}} \caption[]{The function \mbox{$f_{\rm delay}$}\ for two N-body models with very different delay time scales. The smooth line is the fit of Eq. \ref{eq:fdelay}. The parameters are given in the figure.
} \label{fig:fdelay} \end{figure} A study of all the cluster models shows that we can describe the evolution-induced mass loss rate by \begin{equation} \mbox{$\left(\frac{{\rm d}M}{{\rm d}t}\right)_{\rm ind}^{\rm ev}$} = \mbox{$f_{\rm ind}$} (t) \times \mbox{$\left(\frac{{\rm d}M}{{\rm d}t}\right)_{\rm ev}$}, \label{eq:dmdtdynev} \end{equation} with \begin{equation} \mbox{$f_{\rm ind}$} (t)= \mbox{$f_{\rm ind}^{\rm max}$} \times \mbox{$f_{\rm delay}$} (t). \label{eq:find} \end{equation} We assume that the growth of the evolution-induced mass loss approaches its maximum value as an exponential function of the delay time scale, \begin{equation} \mbox{$f_{\rm delay}$} (t)= 1 - \exp \{ -(t/t_{\rm delay}) \}, \label{eq:fdelay} \end{equation} where the delay time scale is expected to depend on the crossing time at the tidal radius, so \begin{equation} t_{\rm delay} = n_{\rm delay} \times t_{\rm cr}(r_{\rm t}). \label{eq:tdelay} \end{equation} Eq. \ref{eq:find} describes the increase of $\mbox{$f_{\rm ind}$} (t)$ from 0 to \mbox{$f_{\rm ind}^{\rm max}$}\ on a time scale $t_{\rm delay}$. Figure \ref{fig:fdelay} shows the function \mbox{$f_{\rm ind}$}\ for two models. The figure shows that the exponential expression describes the delay function quite well. The values of \mbox{$n_{\rm delay}$}\ turn out to be about 3.0 for all Roche-lobe filling models. Therefore we have adopted this value for all models (see Table \ref{tbl:BM03models}). The values of \mbox{$f_{\rm ind}^{\rm max}$}\ listed in Table \ref{tbl:BM03models} show that $\mbox{$f_{\rm ind}^{\rm max}$} =1$ only for the clusters with very long lifetimes, larger than about 25 Gyr. For clusters with shorter lifetimes \mbox{$f_{\rm ind}^{\rm max}$}\ does not reach this value, because the contribution by dissolution helps to restore the equilibrium that was destroyed by the fast mass loss due to stellar evolution. Therefore we expect the value of \mbox{$f_{\rm ind}^{\rm max}$}\ to depend on the ratio between the evolutionary mass loss, $\mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$} = M \mbox{$({\rm d} \mu/{\rm d}t)_{\rm ev}$}$, and the mass loss by dissolution, $-M/\mbox{$t_{\rm dis}$}$ with $\mbox{$t_{\rm dis}$}=\mbox{$t_0$} M^{\gamma}$. The smaller the ratio $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$}\ / \mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$}$, the larger \mbox{$f_{\rm ind}^{\rm max}$}, with a maximum of $\mbox{$f_{\rm ind}^{\rm max}$} \simeq 1$. Fig. \ref{fig:findmax} shows the values of \mbox{$f_{\rm ind}^{\rm max}$}\ as a function of \mbox{$t_0M_{\rm i}^{\gamma}$}. (The determination of the values of \mbox{$t_0$}\ and $\gamma$ is described in Sect. \ref{sec:5}.) The results can be represented by the relation \begin{equation} \mbox{$f_{\rm ind}^{\rm max}$} = -0.86+ 0.40 \times \log (\mbox{$t_0M_{\rm i}^{\gamma}$})~~{\rm for}~\log(\mbox{$t_0M_{\rm i}^{\gamma}$}) \ge 2.15 \label{eq:findmax} \end{equation} and $\mbox{$f_{\rm ind}^{\rm max}$}=0$ if $\log(\mbox{$t_0M_{\rm i}^{\gamma}$})<2.15$, with the ages in Myr. \begin{figure} \centerline{\epsfig{figure=lamers-fig3.ps, width=09.0cm}} \caption[]{The values of \mbox{$f_{\rm ind}^{\rm max}$}\ versus $\log \mbox{$t_0M_{\rm i}^{\gamma}$}$ for all Roche-lobe filling cluster models. For models in circular orbits the values of \mbox{$f_{\rm ind}^{\rm max}$}\ decrease with decreasing \mbox{$t_0M_{\rm i}^{\gamma}$}, i.e. with increasing mass loss by dissolution, as expected. The dashed line shows the approximate relation which is given by Eq. \ref{eq:findmax}.
Clusters in highly eccentric orbits have smaller values of \mbox{$f_{\rm ind}^{\rm max}$} (see Sect. \ref{sec:7}).} \label{fig:findmax} \end{figure} \section{Dissolution before core collapse} \label{sec:5} We first consider the dynamical mass loss in phase B, i.e. before core collapse, which is expected to vary as $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} \sim M^{1-\gamma}$ (Sect. \ref{sec:2.2}). This mass loss is due to two-body relaxation (dissolution) plus the induced loss of stars due to the expansion of the cluster and the shrinking of the tidal radius. In order to derive the value of $\gamma$ we study the dependence of $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$}$ on $M(t)$ for all models of clusters in circular orbits. For the determination of the mass loss by dissolution we have corrected the total mass loss rate for both the evolutionary and the evolution-induced mass loss rate. \subsection{The empirical value of $\gamma$ before core collapse} \label{sec:5.1} The upper left panel of Fig. \ref{fig:gamma} shows the mass loss rates as a function of mass for models 1 to 15, i.e. for clusters with $W_0=5$. For each model the mass loss rates are measured in the mass interval where the evolutionary mass loss is small, $\mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$} < 0.1 \mbox{$({\rm d}M/{\rm d}t)$}$, up to core collapse. So each model occupies a limited region in the $M(t)$-range. The mass loss rates are corrected for evolutionary and evolution-induced mass loss. The mass loss rates are normalized to $\mbox{$R_{\rm Gal}$} = 8.5$ kpc and to the mean stellar mass, to make the curves overlap. This is possible because we expect from Eqs. \ref{eq:dndtdmdt} and \ref{eq:tnrefdef} that \begin{equation} \left(\frac{{\rm d}M}{{\rm d}t}\right)\left( \frac{\mbox{$R_{\rm Gal}$}}{8.5\,{\rm kpc}}\right) \left( \frac{\mbox{$\overline{m}$}}{\mbox{$\overline{m}$}_i}\right)^{-\gamma} \equiv \left(\frac{{\rm d}M}{{\rm d}t}\right)^{\rm norm} \propto M^{1-\gamma} \label{eq:dmdtnorm} \end{equation} where $\mbox{$\overline{m}$}$ is the mean stellar mass and $\mbox{$\overline{m}$}_i$ is the initial mean stellar mass. We see that the curves nicely overlap. We have fitted a straight line through the data with a slope $1-\gamma = 0.35\pm 0.02$. This implies that $\gamma = 0.65 \pm 0.02$. (The appearance of $\gamma$ in the term $(\mbox{$\overline{m}$}/\mbox{$\overline{m}$}_i)^{\gamma} \simeq 1$ implies that we had to derive the value of $\gamma$ in two iterations.) We will adopt $\gamma=0.65$ for the $W_0=5$ models in the rest of the paper. This agrees with observations of star clusters in M51 (Gieles 2009) and with the \mbox{$N$-body}\ simulations of clusters without stellar evolution by Gieles \& Baumgardt (2008). The middle left panel shows the relation for the normalized mass loss rates for the models with $W_0=7$. Each model occupies only a small part in this diagram because the pre-core-collapse time of these models is short: only about half as long as that of the $W_0=5$ models. Nevertheless, we see that there is a clear power law dependence on $M$ with a slope of $1-\gamma = 0.20 \pm 0.04$, which implies $\gamma=0.80 \pm 0.04$. We adopt $\gamma=0.80$ for the pre-core-collapse phase of the $W_0=7$ models. This value is higher than for the $W_0=5$ models, as expected (see Sect. \ref{sec:2.2.1}). \begin{figure*} \centerline{\hspace{+3.0cm}\epsfig{figure=lamers-fig4a.ps, width=12cm}\hspace{-4.0cm} \epsfig{figure=lamers-fig4b.ps,width=12.0cm}} \caption[]{The dependence of the mass loss rate by dissolution on $M(t)$.
The figure shows the values of $\log -\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}^{\rm norm}$}$, defined by Eq. \ref{eq:dmdtnorm}, versus $\log(M)$, before (left) and after (right) core collapse for the models with $W_0=5$ (top), $W_0=7$ (middle) and Roche-lobe underfilling models (bottom). Notice the power law dependence of \mbox{$({\rm d}M/{\rm d}t)_{\rm dis}^{\rm norm}$}\ on $M$. The dashed lines show the best-fit linear relation. Before core collapse these relations have a slope of $1-\gamma=0.35$, 0.20 and 0.20 respectively for $W_0=5$, $W_0=7$ and Roche-lobe underfilling models. After core collapse the relation is expressed by a double power law with $1-\mbox{$\gamma_{\rm cc}$}=0.30$ if $M>10^3$ \mbox{$M_{\odot}$}\ and $1-\mbox{$\gamma_{\rm cc}$}=0.60$ if $M<10^3$ \mbox{$M_{\odot}$}. The dotted line in the lower right hand panel shows the shape of the expected relation with the Coulomb logarithm taken into account if the mean stellar mass is $\mbox{$\overline{m}$}=0.5$ \mbox{$M_{\odot}$}\ (see Sect. \ref{sec:6.2}). } \label{fig:gamma} \end{figure*} \subsection{The dissolution parameters \mbox{$t_0$}\ and \mbox{$t_{\rm ref}^N$} } \label{sec:5.2} We have fitted the mass loss rates of the BM03 models in phase B, i.e. the pre-core-collapse phase, with a function $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} =- M^{1-\gamma}/\mbox{$t_0$}$, with $\gamma=0.65$ and 0.80 for the models with $W_0=5$ and 7 respectively. The values of the mean stellar mass in that phase are also derived from the details of the BM03 models. The values of $t_0$ are listed in Table \ref{tbl:BM03models}. We see that they are approximately constant for the $W_0=5$ cluster models at the same value of $\mbox{$R_{\rm Gal}$}$, and that for clusters at different galactocentric distances $\mbox{$t_0$} \propto \mbox{$R_{\rm Gal}$}$, as expected (Eq. \ref{eq:tnrefdef}). Fig. \ref{fig:tnref} (top panel) shows the relation between $\mbox{$t_0$}$ and $\mbox{$t_0M_{\rm i}^{\gamma}$}$, which is a proxy for the total lifetime of the cluster, for all models. For each value of \mbox{$R_{\rm Gal}$}\ the values of \mbox{$t_0$}\ decrease slightly with decreasing $\mbox{$t_0M_{\rm i}^{\gamma}$}$. This is because \mbox{$t_0$}\ is expected to depend on $\mbox{$\overline{m}$}^{-\gamma}$ (Eq. \ref{eq:tnrefdef}) and the mean mass in the pre-core-collapse phase depends on \mbox{$t_0M_{\rm i}^{\gamma}$}. The vertical offset between the relations for different values of \mbox{$R_{\rm Gal}$}\ is because \mbox{$t_0$}\ is also expected to be proportional to \mbox{$R_{\rm Gal}$}\ (Eq. \ref{eq:tnrefdef}). The difference in the values of \mbox{$t_0$}\ between models of $W_0=5$ and 7 at the same \mbox{$R_{\rm Gal}$}\ is due to the difference in the value of $\gamma$. In fact, for clusters with $\mbox{$M_{\rm i}$} =10^4 \mbox{$M_{\odot}$}$ the dissolution time scale $\mbox{$t_{\rm dis}$}=\mbox{$t_0$} 10^{4\gamma}$ of a $W_0=5$ cluster is about the same as that for a $W_0=7$ cluster. The values of \mbox{$t_0$}\ are measured after the stellar evolution-dominated phase but before core collapse. This is when $M(t)/\mbox{$M_{\rm i}$} \equiv \mu$ is approximately between 0.4 and 0.15, with a mean value of $\langle\mu\rangle \simeq 0.25$. The middle panel of Fig. \ref{fig:tnref} shows the values of \mbox{$\overline{m}$}\ in the pre-collapse phase at $\mu=0.25$ derived from the BM03 models.
These values of \mbox{$\overline{m}$}\ are approximately \begin{eqnarray} \log \mbox{$\overline{m}$} & = & +0.184 - 0.121 \times \log ~\mbox{$t_0$} \mbox{$M_{\rm i}$} ^{0.65}~~{\rm for}~W_0=5 \nonumber \\ \log \mbox{$\overline{m}$} & = & +0.090 - 0.094 \times \log ~\mbox{$t_0$} \mbox{$M_{\rm i}$} ^{0.80}~~{\rm for}~W_0=7 \label{eq:mmeanprcc} \end{eqnarray} These two relations are shown in the figure. The increase of \mbox{$\overline{m}$}\ towards shorter lifetimes arises because, at a given fraction of the lifetime, stellar evolution has removed fewer massive stars in a short-lived cluster, so the maximum stellar mass is higher than in a cluster with a longer lifetime (see also Sect. \ref{sec:5.3}). The lower panel of Fig. \ref{fig:tnref} shows the value of $\mbox{$t_0$} \mbox{$\overline{m}$}^{\gamma} (\mbox{$R_{\rm Gal}$} /8.5)^{-1} (\mbox{$v_{\rm Gal}$}/220)(1-\epsilon)^{-1}$, which is expected to be constant, \mbox{$t_{\rm ref}^N$}\ (see Eq. \ref{eq:tnrefdef}). We see that the values are indeed about constant, with $\log (\mbox{$t_{\rm ref}^N$}/ {\rm Myr}) = 1.125 \pm 0.016$ for $W_0=5$ models and $0.550\pm 0.015$ for $W_0=7$ models. For clusters with total ages less than about 3 Gyr, the values are slightly smaller. This is due to the fact that these clusters still contain massive stars over most of their lifetime, and massive stars are effective in kicking out lower mass stars (BM03). \begin{figure*} \centerline{\hspace{+3.0cm}\epsfig{figure=lamers-fig5a.ps,width=12.0cm}\hspace{-4.0cm} \epsfig{figure=lamers-fig5b.ps,width=12.0cm}} \caption[]{ The parameters of the models 1 to 20 of clusters in circular orbits before core collapse (left) and after core collapse (right). Top: The relation between $\mbox{$t_0$} M_i^\gamma$, which is a proxy for the total lifetime of a cluster, and \mbox{$t_0$}\ (left) or \mbox{$t_0^{\rm cc}$} (right). Dashed and dotted lines show the relations for clusters with $W_0=5$ or 7 respectively for the same value of \mbox{$R_{\rm Gal}$}. Middle: The mean stellar mass after the evolution dominated phase and before core collapse, i.e. at $\mu \simeq 0.25$ (left), or at core collapse (right). Bottom: The resulting values of $\mbox{$t_{\rm ref}^N$}$ (left) or $\mbox{$t_{\rm ref,cc}^N$}$ (right) as a function of $\mbox{$t_0$} M_i^\gamma$. The mean values of $\mbox{$t_{\rm ref}^N$} = 13.3$ Myr and $\mbox{$t_{\rm ref,cc}^N$}= 7.2$ Myr for $W_0=5$ models and $\mbox{$t_{\rm ref}^N$}=3.5$ Myr and $\mbox{$t_{\rm ref,cc}^N$} = 6.2$ Myr for $W_0=7$ models are indicated. } \label{fig:tnref} \end{figure*} \subsection{The evolution of the mean stellar mass} \label{sec:5.3} The time variation of \mbox{$\overline{m}$}\ for the different models is shown in Fig. \ref{fig:mmean}. The top panel shows the time evolution of \mbox{$\overline{m}$}\ (plotted in terms of $\mu$) for all models with $W_0=5$ and 7 in circular orbits. The lower panel shows the mean mass as a function of time in terms of $t/\mbox{$t_{\rm 1\%}$}$. The mean stellar mass initially decreases due to the loss of high mass stars by stellar evolution, but then increases again with age due to the preferential loss of low mass stars by dynamical effects. The minimum value is reached for all models at $t \simeq 0.2\mbox{$t_{\rm 1\%}$}$. This is the time when the cluster is fully mass segregated, after which the low mass stars in the outer part of the cluster are lost preferentially (BM03). For models with a short total dissolution time the minimum mass is higher than for models with a long lifetime.
Notice that after mass segregation \mbox{$\overline{m}$}\ increases as a power law of $\mu$, with $\mbox{$\overline{m}$} \propto \mu^{-\delta}$ and $\delta \simeq 0.30$, for all BM03 models, including the ones not shown here. \begin{figure} \centerline{\epsfig{figure=lamers-fig6a.ps,width=7.8cm}} \centerline{\epsfig{figure=lamers-fig6b.ps,width=7.8cm}} \caption[]{ Top: The mean stellar mass of $W_0=5$ and $W_0=7$ clusters as a function of the remaining mass fraction for models 1 to 20. The shorter the lifetime of a cluster, the higher the line in this figure. The vertical dotted lines indicate approximately the range where \mbox{$t_0$}\ was determined in the pre-core collapse phase, with the central value indicated by a full line. The diamonds indicate the minimum values. The mean mass at core collapse is indicated by an asterisk or a cross for models with $W_0=5$ and $W_0=7$ respectively. Bottom: The mean stellar mass as a function of the fraction of the total cluster lifetime. For all models the minimum is reached at $t \simeq 0.15 \mbox{$t_{\rm 1\%}$}$. } \label{fig:mmean} \end{figure} The middle panel of Fig. \ref{fig:tnref} shows that the mean stellar mass of the $W_0=7$ models is slightly larger than for the less concentrated $W_0=5$ models. This is due to the fact that core collapse occurs earlier in the more concentrated $W_0=7$ models, so the value of \mbox{$t_0$}\ in the pre-core-collapse phase refers to an earlier time than in the $W_0=5$ models. Fig. \ref{fig:mmean} shows that at these earlier times \mbox{$\overline{m}$}\ is still closer to its initial value, so it is larger in the pre-core-collapse phase of the $W_0=7$ models. \subsection{The start of the dissolution process} \label{sec:5.4} We have derived the mass loss by dissolution for the BM03 models from the output files of these models. The data show that the dissolution does not start at $t=0$ but needs time to develop, just like the evolution-induced mass loss needs time to build up. This can be seen in Fig. \ref{fig:phases}, which shows that at very early times the total mass loss rate is equal to the evolutionary mass loss, without any contribution by dynamical effects. The time needed to develop the dynamical dissolution depends on the crossing time at the tidal radius and is expected to behave in the same way as the growth of the evolution-induced mass loss rate. Therefore we can describe the dissolution before core collapse as \begin{equation} \left(\frac{{\rm d}M}{{\rm d}t}\right) = - \mbox{$f_{\rm delay}$} \times \frac{M^{1 - \gamma} }{\mbox{$t_0$}} \label{eq:dmdtdis+fdelay} \end{equation} with \mbox{$f_{\rm delay}$}\ given by Eq. \ref{eq:fdelay}. The delay time of the dissolution is a result of the initial conditions of the cluster models, which start without stars with velocities above the escape velocity. This might not be realistic. If clusters form in a collapsing cloud and go through a phase of violent relaxation, the tail of the Maxwellian velocity distribution is initially filled and dissolution will start immediately. \section{Dissolution after core collapse} \label{sec:6} We now consider the dissolution in phase C, i.e. after core collapse. Fig. \ref{fig:phases} shows that the mass loss rate increases by a small factor at about \mbox{$t_{\rm cc}$}\ and that the slope $1-\gamma$ of the $\log -\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$}$ vs $\log M$ relation is different from that in phase B.
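As a minimal numerical sketch of the pre-collapse recipe of Sect. \ref{sec:5.4} (Eqs. \ref{eq:fdelay} and \ref{eq:dmdtdis+fdelay}), before turning to phase C in detail; the parameters are taken from model 6 of Table \ref{tbl:BM03models}, and the evolutionary and evolution-induced terms of Sect. \ref{sec:4} are deliberately omitted:
\begin{verbatim}
# Sketch: integrate the pre-core-collapse dissolution law
#   dM/dt = -f_delay(t) * M^(1-gamma) / t0,
# with f_delay = 1 - exp(-t/t_delay).  gamma, t0 and t_delay are
# those of model 6 of Table 1.  Stellar evolution and the post-
# collapse enhancement are left out, so the resulting lifetime
# exceeds the tabulated t_1% (26.5 Gyr) of this model.
import math

gamma, t0, t_delay = 0.65, 21.5, 228.0   # t0, t_delay in Myr
M_i = 7.12e4                             # initial mass in Msun
M, t, dt = M_i, 0.0, 1.0                 # time step of 1 Myr

while M > 0.01 * M_i:
    f_delay = 1.0 - math.exp(-t / t_delay)
    M -= f_delay * M**(1.0 - gamma) / t0 * dt
    t += dt
print(f"1% of the initial mass left after ~{t/1e3:.0f} Gyr")
\end{verbatim}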
\subsection{The time of core collapse, \mbox{$t_{\rm cc}$}\ } \label{sec:6.1} The core collapse times, listed in Table \ref{tbl:BM03models}, are derived empirically from the \mbox{$N$-body}\ calculations. Theory predicts that core collapse occurs after about a fixed number of central relaxation times, $t_{\rm rc}$ (e.g. Spitzer 1987, p.95). However, as the cluster evolves, it loses mass and expands, so $t_{\rm rc}$ changes continuously. Therefore we search for an empirical expression of \mbox{$t_{\rm cc}$}\ in terms of the {\it initial} values of \mbox{$t_{\rm rh}$}\ and \mbox{$R_{\rm Gal}$}\ (see Fig. \ref{fig:tcc}). A linear regression analysis shows that we can approximate \mbox{$t_{\rm cc}$}\ of all Roche-lobe filling cluster models, including those with $W_0=7$ and elliptical orbits, to an accuracy better than about 0.05 dex by the following relation \begin{equation} \log~(\mbox{$t_{\rm cc}$}) = 1.228 + 0.872 \times \log(\mbox{$t_{\rm rh}$}) \label{eq:tcctidal} \end{equation} \begin{figure} \epsfig{figure=lamers-fig7.ps,width=9.0cm} \caption[]{ The relation between the initial half mass relaxation time, $\mbox{$t_{\rm rh}$}$, and the core collapse time $\mbox{$t_{\rm cc}$}$ for Roche-lobe filling clusters, including those in elliptical orbits. } \label{fig:tcc} \end{figure} \subsection{The value of $\gamma$ after core collapse} \label{sec:6.2} Core collapse changes the density distribution of the stars in the clusters, so that it becomes independent of the initial distribution. Therefore we expect the values of $\gamma$ in phase C to be the same for all the models. The right hand panels of Fig. \ref{fig:gamma} show the normalized dissolution rate, given by Eq. \ref{eq:dmdtnorm}, after core collapse for cluster models with $W_0=5$ (top) and $W_0=7$ (middle). Each model occupies a certain mass range, starting at core collapse down to about $M(t)=10^2~ \mbox{$M_{\odot}$}$. Notice that, apart from a small vertical offset, the sets of models show very similar lines with a slight curvature, in the sense that the slope gets steeper towards lower mass. The empirical relations can be fitted very well with a broken power-law relation \begin{equation} \left(\frac{{\rm d}M}{{\rm d}t}\right)^{\rm postcc} = -\frac{M(t)^{1-\mbox{$\gamma_{\rm cc}$}}}{ t_0^{\rm cc}} \label{eq:dmdtdispostcc} \end{equation} with $\mbox{$\gamma_{\rm cc}$}=0.70$ for $M(t)>10^3$ \mbox{$M_{\odot}$}\ and $\mbox{$\gamma_{\rm cc2}$}=0.40$, where the subscript 2 refers to the mass loss at $M(t)<10^3 \mbox{$M_{\odot}$}$. These values are the same for $W_0=5$ and 7 clusters, because core collapse results in a redistribution of the density profile which becomes nearly independent of that in the pre-collapse phase. Continuity of the mass loss rate at $M(t)=10^3 \mbox{$M_{\odot}$}$ requires that $\mbox{$t_0^{\rm cc2}$} = \mbox{$t_0^{\rm cc}$} \times 10^{3(\mbox{$\gamma_{\rm cc}$} - \mbox{$\gamma_{\rm cc2}$})} =10^{0.90} ~ \mbox{$t_0^{\rm cc}$}$. The steepening of the slopes in Fig. \ref{fig:gamma} is the result of the changes in the Coulomb logarithm towards smaller numbers of stars (Eq. \ref{eq:tdisN}). This can be shown as follows. Adopting for simplicity a mean stellar mass of $\mbox{$\overline{m}$} \simeq 0.5$ \mbox{$M_{\odot}$}, we predict (Eq. \ref{eq:tdisN}) that $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} \simeq \mbox{$\overline{m}$} \mbox{$({\rm d}N/{\rm d}t)$} \simeq \mbox{$\overline{m}$}~N/ \mbox{$t_{\rm dis}$} \sim M^{1-x} (\ln(0.02M/\mbox{$\overline{m}$}))^x$ (Giersz \& Heggie 1996), with $x \simeq 0.75$ (see Sect. \ref{sec:2.2.1}).
The variation of $\log \mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$}$ with $\log M$, predicted with this approximation, is shown in the lower part of Fig. \ref{fig:gamma} by a dotted line with an arbitrary vertical offset. The shape of this predicted line is very similar to the one derived empirically. \subsection{The values of \mbox{$t_0^{\rm cc}$}\ and \mbox{$t_{\rm ref,cc}^N$}\ after core collapse} \label{sec:6.3} The top right part of Fig. \ref{fig:tnref} shows the values of \mbox{$t_0^{\rm cc}$}\ as a function of \mbox{$t_0M_{\rm i}^{\gamma}$}, which is a proxy for the lifetime of the cluster. The pattern is the same as on the left side of this figure, i.e. before core collapse. The mean stellar mass at core collapse is shown in the middle right part of the figure. We find that we can approximate \begin{eqnarray} \log \mbox{$\overline{m}$}_{\rm cc} & = & +0.200 - 0.0984 \times \log ~\mbox{$t_0$} \mbox{$M_{\rm i}$}^{0.65}~~~~~~{\rm for}~~ W_0=5 \nonumber \\ \log \mbox{$\overline{m}$}_{\rm cc} & = & +0.075 - 0.0984 \times \log ~\mbox{$t_0$} \mbox{$M_{\rm i}$}^{0.80}~~~~~~{\rm for}~~ W_0=7 \label{eq:mmeanpostcc} \end{eqnarray} The smaller mean mass of the $W_0=7$ models at core collapse is due to the fact that core collapse occurs much earlier than for the less concentrated $W_0=5$ models, so that the preferential loss of low mass stars had a smaller effect (see Fig. \ref{fig:mmean}). The lower right part of Fig. \ref{fig:tnref} shows the derived values of $\mbox{$t_{\rm ref,cc}^N$}$. The mean values are $\log (\mbox{$t_{\rm ref,cc}^N$})= 0.864 \pm 0.044$ and $0.796 \pm 0.017$ for the models with $W_0=5$ and 7 respectively. The scatter in \mbox{$t_{\rm ref,cc}^N$}\ is larger than that of \mbox{$t_{\rm ref}^N$}\ in Fig. \ref{fig:tnref} because of the larger noise in the \mbox{$({\rm d}M/{\rm d}t)$}\ vs $M$ relation due to the smaller numbers of stars (see Fig. \ref{fig:phases}). We see that \mbox{$t_{\rm ref,cc}^N$}\ is slightly different for $W_0=5$ and 7 models. This can be understood because inside a cluster the relaxation time increases with radius, so the outer cluster parts still keep some memory of the initial distribution by the time the center has gone into collapse. \section{Clusters in elliptical orbits} \label{sec:7} Models 21 to 25 are for clusters in elliptical orbits with an apogalactic distance of 8.5 kpc and various eccentricities. These can be compared with the otherwise similar model nr 8 in a circular orbit. Fig. \ref{fig:elliptical} shows the mass loss of models 8, 21 and 23, with $\epsilon=0.0$, 0.2 and 0.5 respectively. Since the total lifetime of the clusters scales approximately as $(1-\epsilon)$, the lifetimes of the clusters are very different (see Table \ref{tbl:BM03models}). The evolution can be described by the same three phases that we found for all other models: a stellar evolution dominated phase (A), a dissolution dominated phase before core collapse (B) and the phase after core collapse (C). The mass loss rates in all phases are variable with a periodicity of the orbital period. The mass loss rate is highest at perigalacticon and the amplitude of the variations increases with increasing ellipticity. Especially after core collapse the amplitude increases drastically. This is due to the expansion of the outer layers of the cluster as a reaction to the core collapse. The stars in the outer layers are then more susceptible to the periodically changing tidal field.
Only the models with $\epsilon=0$ and $0.2$ show a jump in the mass loss rate at the time of core collapse. For clusters in more eccentric orbits no clear jump is observed at core collapse, but the mass loss rate does increase compared to the simple power-law extrapolation of phase B. The straight full lines in Fig. \ref{fig:elliptical} show the values of $\gamma$ defined by $-\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} = M^{1-\gamma}/ \mbox{$t_0$}$. Both the values of $\gamma$ and of \mbox{$t_0$}\ describe the ``time averaged'' mass loss rate as a function of the mass. These values were derived from a study of the $M(t)$ history of these models. The values of \mbox{$t_0$}, \mbox{$n_{\rm delay}$}, \mbox{$f_{\rm ind}^{\rm max}$}\ and \mbox{$t_0^{\rm cc}$}\ are listed in Table \ref{tbl:BM03models}. Fig. \ref{fig:tnref-ecc} shows the values of \mbox{$t_0$}, \mbox{$\overline{m}$}\ and \mbox{$t_{\rm ref}^N$}\ of the clusters in elliptical orbits (models 21 to 25), before and after core collapse. We have added the data of model 8, which has the same initial mass, $W_0$ and the same \mbox{$R_{\rm Gal}$}, but a circular orbit. The values are compared with those of clusters with $W_0=5$ at $\mbox{$R_{\rm Gal}$}=8.5$ kpc but with different masses (dashed lines), taken from Fig. \ref{fig:tnref}. Notice that the models in eccentric orbits have very similar characteristics compared to those in circular orbits, if we correct for the effect of eccentricity by a factor $(1-\epsilon)^{-1}$ in \mbox{$t_0$}\ and \mbox{$t_0^{\rm cc}$}. BM03 already concluded that the total lifetime of clusters is proportional to $(1-\epsilon)$. We find here that the ``orbital averaged'' mass loss rates before and after core collapse both scale with $(1-\epsilon)^{-1}$. The main difference between the cluster models in circular and elliptical orbits is in the mean stellar mass at core collapse (compare the middle panels of Figs. \ref{fig:tnref} and \ref{fig:tnref-ecc}). For clusters in circular orbits \mbox{$\overline{m}_{\rm cc}$}\ increases towards shorter lifetimes, whereas \mbox{$\overline{m}_{\rm cc}$}\ is about constant for clusters with the same apogalactic radius but different ellipticities. This is because the BM03 models are Roche-lobe filling at perigalacticon, so both \mbox{$t_{\rm rh}$}\ and \mbox{$t_{\rm cc}$}\ decrease steeply with increasing eccentricity. The combination of a short lifetime (i.e. a higher maximum star mass at \mbox{$t_{\rm cc}$}) and a smaller ratio of $\mbox{$t_{\rm cc}$} / \mbox{$t_0M_{\rm i}^{\gamma}$}$ (i.e. fewer low mass stars are lost after mass segregation) results in \mbox{$\overline{m}_{\rm cc}$}\ being almost independent of $\epsilon$. This is illustrated in Fig. \ref{fig:mmeancc-ecc}, which shows the variation of $\mbox{$\overline{m}$}$ with $\mu$ and the mean mass at core collapse, which is almost constant. The core collapse time of the clusters in elliptical orbits, shown in Fig. \ref{fig:tcc}, is longer than predicted by Eq. \ref{eq:tcctidal}, by about 20 percent. This is because the values of \mbox{$t_{\rm rh}$}\ listed in Table \ref{tbl:BM03models} refer to perigalacticon, where \mbox{$t_{\rm rh}$}\ is smaller than its orbital-averaged value.
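A minimal sketch of these eccentricity scalings (the circular-orbit value of \mbox{$t_0$}\ is that of model 8; the agreement with models 21 to 25 of Table \ref{tbl:BM03models} is at the level of tens of percent):
\begin{verbatim}
# Sketch: orbit-averaged dissolution constant and perigalactic
# distance for an elliptical orbit with apogalacticon R_A.
# Per the text, t0 (and t0_cc) scale with (1 - eps); the value
# t0 = 21.7 Myr is the circular-orbit model 8 of Table 1.
def elliptical(t0_circ, R_A, eps):
    t0 = t0_circ * (1.0 - eps)               # orbit-averaged t0
    R_P = R_A * (1.0 - eps) / (1.0 + eps)    # perigalactic distance
    return t0, R_P

for eps in (0.2, 0.5, 0.8):
    t0, R_P = elliptical(21.7, 8.5, eps)
    print(f"eps = {eps}: t0 ~ {t0:.1f} Myr, R_P = {R_P:.2f} kpc")
# Compare the fitted values of models 21, 23 and 25 in Table 1:
# t0 = 14.5, 8.8 and 4.5 Myr.
\end{verbatim}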
\begin{figure} \centerline{\epsfig{figure=lamers-fig8a.ps,width=7.0cm}} \vspace{-0.3cm} \vspace{-0.3cm} \centerline{\epsfig{figure=lamers-fig8b.ps,width=7.0cm}} \vspace{-0.3cm} \vspace{-0.3cm} \centerline{\epsfig{figure=lamers-fig8c.ps,width=7.0cm}} \caption[] {The mass loss rates of clusters in orbits with eccentricities of 0 (upper), 0.20 (middle) and 0.5 (bottom). A = dominated by stellar evolution, B = dominated by dissolution, C = dominated by dissolution after core collapse. The line separating phases B and C indicates the core collapse time. The figure shows $\mbox{$({\rm d}M/{\rm d}t)$}$ versus $M$ in logarithmic units. The full upper line is the total mass loss rate; the dashed line is the mass loss by stellar evolution; the full smooth lines are the mass loss by dynamical effects, assumed to scale as a power law of $M$, before and after core collapse. The models are defined by a vector containing: model nr, nr of stars, total lifetime (in Gyr), concentration parameter $W_0$, $R_G$ (in kpc), and orbit. } \label{fig:elliptical} \end{figure} \begin{figure*} \centerline{\hspace{+3.0cm}\epsfig{figure=lamers-fig9a.ps,width=12.0cm} \hspace{-4.0cm} \epsfig{figure=lamers-fig9b.ps,width=12.0cm}} \caption[]{The parameters of the models of clusters in elliptical orbits, nrs 21 to 25, combined with model 8 of the same mass but with $\epsilon=0$, before (left) and after (right) core collapse. In all panels the right data point is for $\epsilon=0$ and the left one is for $\epsilon=0.8$. Top: The relation between $\mbox{$t_0$} M_i^\gamma$ and \mbox{$t_0$}\ before (left) or \mbox{$t_0^{\rm cc}$}\ (right) after core collapse. The dashed lines show the relation for clusters in circular orbits at $\mbox{$R_{\rm Gal}$} = 8.5$ kpc (models 6 to 10) for comparison (Fig. \ref{fig:tnref}). Middle: The mean stellar mass before (left) and at core collapse (right), again compared to that of models 6 to 10. Bottom: The resulting values of $\mbox{$t_{\rm ref}^N$}$ (left) and $\mbox{$t_{\rm ref,cc}^N$}$ (right) as a function of $\mbox{$t_0$} M_i^\gamma$. The mean values of $\mbox{$t_{\rm ref}^N$} = 13.3$ Myr and $\mbox{$t_{\rm ref,cc}^N$} = 7.2$ Myr are the same as for models 6 to 10 with circular orbits. } \label{fig:tnref-ecc} \end{figure*} \begin{figure} \epsfig{figure=lamers-fig10.ps,width=8.0cm} \caption[]{ The variation of the mean stellar mass as a function of $\mu$ for clusters in elliptical orbits. Lowest curve: $\epsilon=0.2$, upper curve: $\epsilon=0.8$. The wiggles are the result of the variation in the mass of the cluster within the variable tidal radius. Diamonds indicate the minimum values and squares indicate the moment of core collapse. Notice that the mean mass at core collapse is about the same for all models. } \label{fig:mmeancc-ecc} \end{figure} \section{Initially Roche-lobe underfilling cluster models} \label{sec:8} \subsection{The parameters of the initially Roche-lobe underfilling models} \label{sec:uf-table} Because we are also interested in the mass history of initially highly concentrated clusters, i.e. with a half-mass radius much smaller than their tidal radius, we have supplemented the set of Roche-lobe filling cluster models with \mbox{$N$-body}\ simulations of a series of clusters that start severely Roche-lobe underfilling. These are for clusters with an initial mass in the range of 10400 to 84000 \mbox{$M_{\odot}$}, in circular orbits at $\mbox{$R_{\rm Gal}$}=8.5$ kpc and with an initial density distribution described by a King parameter of $W_0=5$.
The metallicity is $Z=0.001$. These cluster models have an initial half-mass radius of $\mbox{$r_{\rm h}$}=1$, 2 or 4 pc. The stellar IMF of these clusters is different from that of the Roche-lobe filling models. They have a Kroupa mass function in the range of 0.10 to 100 \mbox{$M_{\odot}$}, with an initial mean stellar mass of 0.623 \mbox{$M_{\odot}$}. In these models 10\% of the formed neutron stars and black holes are retained in the cluster. The models were calculated for this study. The parameters of the models are listed in Table \ref{tbl:models-uf09}. For the study of the mass loss by dissolution we define an ``underfilling factor'' $\mathfrak{F}$: the ratio of the initial half-mass radius of the cluster model, \mbox{$r_{\rm h}$}, to the half-mass radius $r_{\rm h}^{\rm rf}$ of a Roche-lobe filling cluster with the same King parameter $W_0$, \begin{equation} \mathfrak{F}_{\rm W_0} \equiv \frac{\mbox{$r_{\rm h}$}}{r_{\rm h}^{\rm rf}} = \frac{(\mbox{$r_{\rm h}$}/\mbox{$r_{\rm t}$})_{\rm W_0}}{(\mbox{$r_{\rm h}$}/\mbox{$r_{\rm t}$})^{\rm rf}_{\rm W_0}} = \frac{r_{\rm t}^{\rm W_0}}{r_J} \label{eq:uffactor} \end{equation} where $r_J= r_{\rm t}^{\rm rf}$ is the Jacobi radius, i.e. the tidal radius of a Roche-lobe filling cluster, and $r_{\rm t}^{\rm W_0}$ is the end of the density profile of a cluster with fixed values of $\mbox{$r_{\rm h}$}$ and $W_0$. Note that $\mathfrak{F}=1$ for Roche-lobe filling clusters and $\mathfrak{F}<1$ for Roche-lobe underfilling clusters. In this expression $(\mbox{$r_{\rm h}$} / \mbox{$r_{\rm t}$})^{\rm rf}_5=0.187$ and $(\mbox{$r_{\rm h}$} / \mbox{$r_{\rm t}$})^{\rm rf}_7=0.116$ are the ratios for a Roche-lobe filling cluster with an initial density distribution of $W_0=5$ and 7 respectively. (The same definition of $\mathfrak{F}$ was used by Gieles \& Baumgardt (2008) in their theoretical study of the mass loss of Roche-lobe underfilling clusters.) The values of $\mathfrak{F}_5$ are listed in Table \ref{tbl:models-uf09}. We will show below that for the description of the dissolution of the Roche-lobe underfilling models the parameter $\mathfrak{F}_7 = 1.612 \mathfrak{F}_5$ is more important than $\mathfrak{F}_5$. \begin{table*} \caption{The N-body models of initially Roche-lobe underfilling clusters.
The left block contains the model parameters, the middle block the cluster time scales, and the right block our fit parameters.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{r r r r r r r r | r r r r r | r r r r r r r} \multicolumn{8}{l}{Input parameters} & \multicolumn{5}{l}{Timescales} & \multicolumn{7}{l}{Output parameters} \\ $\#$ & Mass & nr & $W_0$ & $R_{\rm Gal}$ & $r_t$ & $r_{\rm h}$ & $\mathfrak{F}_5$& $t_{\rm 1\%}$ & $t_{\rm cc}$ & $t_{\rm rh}$ & $t_{\rm cr}(r_{\rm h})$ & $t_{\rm cr}(r_{\rm t})$ & $\gamma$ & $t_0$ & $\mbox{$t_0M_{\rm i}^{\gamma}$}$ & $t_{\rm del}$ & $f_{\rm ind}^{\rm max}$ & $\mbox{$t_0^{\rm cc}$}$ & $j_{\rm cc}$ \\ & $M_{\odot}$ & stars & & kpc & pc & pc & & Gyr & Gyr & Gyr & Myr & Myr & & Myr & Gyr & Myr & & Myr & \\ \hline uf1 & 10831 & 16k & 5& 8.5& 32.6 & 1.00 & 0.164 & 7.22 & 3.00: & 0.045 & 0.57 & 75.5 & 0.80 & 5.1 & 8.6 & 50 & 0.0 & 10.5 & 1.09 \\ uf2 & 10426 & 16k & 5& 8.5& 32.2 & 2.00 & 0.332 & 7.59 & 3.50: & 0.127 & 1.65 & 75.5 & 0.80 & 6.2 & 10.2 & 100 & 0.0 & 10.4 & 1.34 \\ uf3 & 10589 & 16k & 5& 8.5& 32.4 & 4.00 & 0.660 & 5.89 & 5.30: & 0.361 & 4.64 & 75.5 & 0.80 & 5.0 & 8.3 & 100 & 0.2 & 8.4: & 1.15\\ uf4 & 21193 & 32k & 5& 8.5& 40.8 & 1.00 & 0.131 &11.42 & 3.36 & 0.058 & 0.41 & 75.5 & 0.80 & 5.0 & 14.5 & 200 & 0.0 & 11.1 & 1.16 \\ uf5 & 21095 & 32k & 5& 8.5& 40.7 & 2.00 & 0.263 &13.40 & 7.00 & 0.165 & 1.16 & 75.5 & 0.80 & 6.0 & 17.3 & 350 & 0.0 & 12.8 & 1.11 \\ uf6 & 20973 & 32k & 5& 8.5& 40.7 & 4.00 & 0.526 &12.75 & 9.30 & 0.466 & 3.29 & 75.5 & 0.80 & 6.0 & 17.2 & 200 & 0.1 & 10.6 & 1.24 \\ uf7 & 41465 & 64k & 5& 8.5& 51.0 & 1.00 & 0.105 &17.79 & 3.73 & 0.076 & 0.29 & 75.5 & 0.80 & 5.0 & 24.7 & 300 & 0.0 & 10.8 & 1.18 \\ uf8 & 40816 & 64k & 5& 8.5& 50.8 & 2.00 & 0.211 &20.76 & 8.32 & 0.212 & 0.83 & 75.5 & 0.80 & 6.5 & 31.7 & 800 & 0.0 & 11.8 & 1.44 \\ uf9 & 42114 & 64k & 5& 8.5& 51.3 & 4.00 & 0.417 &21.18 &12.65 & 0.608 & 2.32 & 75.5 & 0.80 & 6.0 & 30.0 & 800 & 0.0 & 11.5 & 1.31 \\ uf10 & 83853 & 128k & 5& 8.5& 64.5 & 2.00 & 0.166 &34.77 &10.07 & 0.282 & 0.58 & 75.5 & 0.80 & 7.0 & 60.8 & 1200 & 0.0 & 13.7 & 1.51 \\ uf11 & 83700 & 128k & 5& 8.5& 64.5 & 4.00 & 0.332 &36.58 &18.40 & 0.796 & 1.65 & 75.5 & 0.80 & 7.2 & 62.4 & 1500 & 0.0 & 11.9 & 1.67 \\ \end{tabular} } Number of stars: $1k=1024$. \label{tbl:models-uf09} \end{table*} \subsection{Dissolution before core collapse} \label{sec:8.2} The mass loss of the Roche-lobe underfilling models can be described in the same way as for the Roche-lobe filling models. The resulting fitting parameters are listed in the right hand block of Table \ref{tbl:models-uf09}. The first phase is dominated by stellar evolution. However, in this case there is no evolution-induced mass loss. This is because the clusters are initially well within their tidal limit. So the mass loss by stellar evolution does produce an expansion of the radius, but this expansion does not immediately reach the tidal radius. This is reflected in the values of $\mbox{$f_{\rm ind}^{\rm max}$} =0.0$ for all models, except uf3 and uf6. These two are the least Roche-lobe underfilling models, with $\mathfrak{F}_5=0.66$ and 0.53 respectively. The value of \mbox{$f_{\rm ind}^{\rm max}$}\ can then be described in a similar way as for Roche-lobe filling clusters, Eq.
\ref{eq:findmax}, but with a correction term that depends on the underfilling factor, \begin{equation} \mbox{$f_{\rm ind}^{\rm max}$} = -0.86+ 0.40 \times \log (\mbox{$t_0M_{\rm i}^{\gamma}$})+2.75 \times \log \mathfrak{F}_5 \label{eq:findmax-uf} \end{equation} with a maximum of 1, a minimum of 0, and $\gamma=0.80$ (see below). The dissolution before core collapse can be expressed by a power law approximation of $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}^{\rm norm}$}$ versus $M$, with $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}^{\rm norm}$}$ defined by Eq. \ref{eq:dmdtnorm}. The lower left panel of Fig. \ref{fig:gamma} shows the relation between $\log(\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}^{\rm norm}$})$ and $\log(M)$. Because the core collapse time of the Roche-lobe underfilling models is short, each model contributes only to a small part of the mass range. We found that $1- \mbox{$\gamma_{\rm uf}$}=0.20 \pm 0.05$ gives a good fit to these plots, so we adopted $\mbox{$\gamma_{\rm uf}$}=0.80$. We note that this value is the same as for the Roche-lobe filling clusters with $W_0=7$, whereas the initially Roche-lobe underfilling clusters have an initial concentration of $W_0=5$. However, a study of the expansion of severely Roche-lobe underfilling clusters by means of \mbox{$N$-body}\ simulations has shown that the initial expansion due to mass loss redistributes the density close to that of a $W_0=7$ model, in agreement with our derived value of $\mbox{$\gamma_{\rm uf}$}=0.80$. The tendency of clusters to evolve to a $W_0 \simeq 7$ model was noticed by Portegies Zwart et al. (1998). In initially strongly concentrated clusters, $W_0>7$, dynamical friction quickly drives the massive stars to the center where they will lose mass due to stellar evolution. This results in an expansion of the core and a less steep density profile, so $W_0$ decreases. On the other hand, in clusters with a less steep initial concentration, $W_0 \le 5$, dynamical friction is less efficient and the massive stars lose mass by stellar evolution before they reach the center. So the cluster expands more homogeneously due to evolutionary mass loss. After the massive stars have undergone stellar evolution, dynamical effects take over and the core shrinks due to the approaching core collapse, so the density distribution becomes more concentrated and $W_0$ increases. Once the cluster has expanded to about the tidal radius, the dissolution is very similar to that of a Roche-lobe filling cluster with $W_0=7$. This is reflected in the values of \mbox{$t_0^{\rm uf}$}, which range from 5.0 to 7.2 Myr, whereas the values of \mbox{$t_0$}\ for the comparable Roche-lobe filling models, nrs 16 to 19, range from 6.0 to 6.5 Myr. Part of the difference is due to differences in the mean stellar mass, because the two sets of models have different IMFs. Dissolution of the Roche-lobe underfilling models needs more time to get started than for Roche-lobe filling models, because the clusters first have to expand to the tidal limit. This is a slow process that occurs on the relaxation time scale. Fig. \ref{fig:tdelay-uf} shows the ratio $\mbox{$t_{\rm delay}$} / \mbox{$t_{\rm rh}$}$ as a function of the Roche-lobe underfilling factor $\mathfrak{F}_5$, for different values of \mbox{$r_{\rm h}$}. The values of $\mbox{$t_{\rm delay}$} /\mbox{$t_{\rm rh}$}$ range from about 0 to 4 for the models considered here.
The figure shows that we can approximate \begin{equation} \mbox{$t_{\rm delay}$} ~ \simeq~ 4.31\times 10^{-3}~ (\mathfrak{F}_5)^{-1.989} ~ \mbox{$t_{\rm rh}$}^{1.605} \label{eq:tdelaytrh} \end{equation} with all ages in Myr. This equation is valid for $ -1.0 < \log( \mathfrak{F}_5) < -0.20$. For Roche-lobe filling clusters with $ \log( \mathfrak{F}_5) > -0.20$ the delay time does not scale with \mbox{$t_{\rm rh}$}\ but with the crossing time at the tidal radius, and $\mbox{$t_{\rm delay}$} \simeq 3.0~ \mbox{$t_{\rm cr}(r_{\rm t})$}$ (Sect. \ref{sec:4}). So, for strongly Roche-lobe underfilling models the delay time scales with the initial value of $\mbox{$t_{\rm rh}$}$, but if the underfilling factor approaches $\mathfrak{F}=1$ the delay time is much shorter and scales with $\mbox{$t_{\rm cr}(r_{\rm t})$}$. The dissolution before core collapse can now be expressed by Eq. \ref{eq:dmdtdis+fdelay} with \mbox{$f_{\rm delay}$}\ given by Eq. \ref{eq:fdelay} and \mbox{$t_{\rm delay}$}\ approximated by Eq. \ref{eq:tdelaytrh}. \begin{figure} \centerline{\epsfig{figure=lamers-fig11.ps, width=8.0cm}} \caption[] {The ratio between the delay time $\mbox{$t_{\rm delay}$}$ for the establishment of dissolution and the half mass relaxation time \mbox{$t_{\rm rh}$}\ for initially Roche-lobe underfilling clusters. The dashed lines show the fit of Eq. \ref{eq:tdelaytrh}.} \label{fig:tdelay-uf} \end{figure} Fig. \ref{fig:tnref-uf} (left) shows the values of \mbox{$t_0^{\rm uf}$}, the mean mass before core collapse \mbox{$\overline{m}$}, and the resulting values of \mbox{$t_{\rm ref,uf}^N$}\ as a function of \mbox{$t_0M_{\rm i}^{\gamma}$}. We used different symbols for different initial half mass radii. \begin{figure*} \centerline{\hspace{+3.0cm}\epsfig{figure=lamers-fig12a.ps,width=12.0cm} \hspace{-4.0cm} \epsfig{figure=lamers-fig12b.ps,width=12.0cm}} \caption[]{The parameters of the models of initially Roche-lobe underfilling clusters, nrs uf1 to uf11, before (left) and after core collapse (right). Top: The relation between $\mbox{$t_0M_{\rm i}^{\gamma}$}$ and \mbox{$t_0^{\rm uf}$}\ (left) or \mbox{$t_0^{\rm cc, uf}$}\ (right), i.e. after core collapse. Middle: The mean stellar mass before (left) and at core collapse (right). The dashed lines indicate mean relations of the different values of \mbox{$r_{\rm h}$}. Bottom: The resulting values of $\mbox{$t_{\rm ref,uf}^N$}$ (left) and $\mbox{$t_{\rm ref,cc,uf}^N$}$ (right) as a function of $\mbox{$t_0$} M_i^\gamma$. The mean relations are given in Sects. \ref{sec:8.2} and \ref{sec:8.3}. } \label{fig:tnref-uf} \end{figure*} The mean stellar mass before core collapse depends on both $\mbox{$t_0M_{\rm i}^{\gamma}$}$ and $\mathfrak{F}$, and not only on $\mbox{$t_0M_{\rm i}^{\gamma}$}$ as is the case for the Roche-lobe filling clusters. This is because the core collapse time $\mbox{$t_{\rm cc}$}$ depends on $\mathfrak{F}$, so the amount of mass that the cluster has lost before core collapse also depends on both \mbox{$t_0M_{\rm i}^{\gamma}$}\ and \mbox{$t_{\rm rh}$}. We find that we can express the mean stellar mass before core collapse, i.e. between $t(\mu=0.25)$ and \mbox{$t_{\rm cc}$}, as \begin{equation} \log \mbox{$\overline{m}$} ~=~ 0.139 -0.0984 \log (\mbox{$t_0M_{\rm i}^{\gamma}$}) + 0.101 \log (\mathfrak{F}_7) \label{eq:mmean-precc-uf} \end{equation} We have expressed \mbox{$\overline{m}$}\ in terms of $\mathfrak{F}_7$ (with $\mathfrak{F}_7=1.612 \mathfrak{F}_5$, Eq.
\ref{eq:uffactor}) instead of the initial value of $\mathfrak{F}_5$ because the initial expansion of the clusters redistributes the density to about a $W_0=7$ model (see above). The relation has about the same dependence on \mbox{$t_0M_{\rm i}^{\gamma}$}\ as in the case of Roche-lobe filling clusters (Eq. \ref{eq:mmeanprcc}). The lower left part of Fig. \ref{fig:tnref-uf} shows the resulting values of \mbox{$t_{\rm ref,uf}^N$}. For Roche-lobe filling clusters we found one value of $\mbox{$t_{\rm ref}^N$}=3.5$ Myr ($\log \mbox{$t_{\rm ref}^N$} =0.544$) for all models of $W_0=7$ (see Fig. \ref{fig:tnref}). The Roche-lobe underfilling models show a large range in \mbox{$t_{\rm ref,uf}^N$}, indicating that the underfilling factor plays a role for small values of $\mathfrak{F}_7$. We found that we can approximate \begin{eqnarray} \log (\mbox{$t_{\rm ref,uf}^N$})& =& 0.544 ~~~{\rm if}~ \log(\mathfrak{F}_7)>-0.50 \nonumber \\ & =& 0.544 + 0.65 \times \{\log(\mathfrak{F}_7) +0.50\} \nonumber \\ & & ~~{\rm if}~ \log(\mathfrak{F}_7)<-0.50 \end{eqnarray} This shows that there is a smooth transition between the dissolution parameters for Roche-lobe filling and underfilling clusters. The fits are shown as three partially overlapping dashed lines for models of \mbox{$r_{\rm h}$}=1, 2 and 4 pc in the lower part of Fig. \ref{fig:tnref-uf}. \subsection{Dissolution after core collapse} \label{sec:8.3} The time of core collapse is expected to scale with the half mass relaxation time. Fig. \ref{fig:tcc-uf} shows $\mbox{$t_{\rm cc}$}$ as a function of \mbox{$t_{\rm rh}$}\ and the initial Roche-lobe underfilling factor $\mbox{$r_{\rm h}$} / \mbox{$r_{\rm t}$}$. In Sect. \ref{sec:6.1} we found that $\mbox{$t_{\rm cc}$} \propto \mbox{$t_{\rm rh}$}^{0.872}$ for Roche-lobe filling clusters. We find that for Roche-lobe underfilling clusters the dependence of $\mbox{$t_{\rm cc}$}$ on $\mbox{$t_{\rm rh}$}$ can be described by the same power law, but with an additional dependence on the underfilling factor, \begin{equation} \log (\mbox{$t_{\rm cc}$}) ~\simeq~ 1.505 ~+~ 0.872 \log(\mbox{$t_{\rm rh}$}) ~-~ 0.513 \log (\mathfrak{F}_5) \label{eq:tcc-uf} \end{equation} (In this case $\mathfrak{F}_5$ is the crucial parameter, rather than $\mathfrak{F}_7$, because the initial relaxation time \mbox{$t_{\rm rh}$}\ is defined for the initial density distribution with $W_0=5$.) The smaller the underfilling factor, the larger \mbox{$t_{\rm cc}$}\ for a given value of \mbox{$t_{\rm rh}$}. This is because clusters that are initially strongly Roche-lobe underfilling expand more strongly, so $\mbox{$t_{\rm rh}$} (t)$ increases more rapidly with time, which results in a larger ratio of $\mbox{$t_{\rm cc}$} / \mbox{$t_{\rm rh}$}$. In the limit of $\mathfrak{F}_5=1$, i.e. for Roche-lobe filling clusters, Eq. \ref{eq:tcc-uf} predicts that $ \log (\mbox{$t_{\rm cc}$}) \simeq 1.50 + 0.872 \log(\mbox{$t_{\rm rh}$})$. This can be compared with the value for Roche-lobe filling clusters of $ \log (\mbox{$t_{\rm cc}$}) \simeq 1.23 + 0.872 \log(\mbox{$t_{\rm rh}$})$ (Eq. \ref{eq:tcctidal}). So Roche-lobe underfilling clusters need about twice as many ``initial'' relaxation times to go into core collapse as Roche-lobe filling clusters. This is because of their stronger expansion and the fact that they do not push stars over the tidal boundary while evolving towards core collapse.
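For convenience, the fit relations Eqs. \ref{eq:tdelaytrh} and \ref{eq:tcc-uf} can be evaluated with the following minimal Python sketch, which assumes all times are expressed in Myr; the example uses model uf5 from Table \ref{tbl:models-uf09}, and the fits reproduce the tabulated values only to within their scatter.
\begin{verbatim}
import numpy as np

def t_delay(t_rh, F5, t_cr_rt):
    # Eq. (tdelaytrh) for strongly underfilling clusters
    # (-1.0 < log F5 < -0.20); for (nearly) Roche-lobe filling
    # clusters the delay scales with the crossing time at r_t.
    if np.log10(F5) < -0.20:
        return 4.31e-3 * F5**-1.989 * t_rh**1.605
    return 3.0 * t_cr_rt

def t_cc_uf(t_rh, F5):
    # Eq. (tcc-uf): core-collapse time of underfilling clusters.
    return 10.0**(1.505 + 0.872 * np.log10(t_rh)
                  - 0.513 * np.log10(F5))

# Model uf5: t_rh = 165 Myr, F5 = 0.263, t_cr(r_t) = 75.5 Myr;
# tabulated t_del = 350 Myr and t_cc = 7.0 Gyr.
print(t_delay(165.0, 0.263, 75.5))  # ~2.2e2 Myr
print(t_cc_uf(165.0, 0.263))        # ~5.4e3 Myr
\end{verbatim}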
\begin{figure} \epsfig{figure=lamers-fig13.ps, width=8.0cm} \caption[] {The core collapse time of the Roche-lobe underfilling models as a function of the initial half mass relaxation time and the underfilling factor $\mathfrak{F}_5$. } \label{fig:tcc-uf} \end{figure} A study of the relation between $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}^{\rm norm}$}$ and $M(t)$ of the various Roche-lobe underfilling models shows that the dissolution after core collapse can be described by the same values of $\mbox{$\gamma_{\rm cc}$}$ as for the Roche-lobe filling cluster models, i.e.\ $\mbox{$\gamma_{\rm cc}$}=0.70$ if $M(t)>10^3$ \mbox{$M_{\odot}$}\ and $\mbox{$\gamma_{\rm cc2}$}=0.40$ if $M(t)<10^3$ \mbox{$M_{\odot}$}\ (see lower right panel of Fig. \ref{fig:gamma}). This is due to the fact that core collapse results in a redistribution of the density in the cluster and erases the memory of the pre-collapse phase. The values of $\mbox{$t_0^{\rm cc}$}$ of the Roche-lobe underfilling models are listed in Table \ref{tbl:models-uf09}. They are plotted versus \mbox{$t_0M_{\rm i}^{\gamma}$}\ in the right-hand panel of Fig. \ref{fig:tnref-uf}, together with the mean stellar mass at core collapse and the resulting values of $\mbox{$t_{\rm ref,cc}^N$}$. The dissolution parameter after core collapse, when $M(t)<10^3 \mbox{$M_{\odot}$}$, is $\mbox{$t_0^{\rm cc2}$} = \mbox{$t_0^{\rm cc}$} \times 10^{3(\mbox{$\gamma_{\rm cc}$} - \mbox{$\gamma_{\rm cc2}$})} =10^{0.90} ~ \mbox{$t_0^{\rm cc}$}$. The mean mass at core collapse can be approximated by \begin{equation} \log(\mbox{$\overline{m}_{\rm cc}$}) = 0.178 -0.0984 \log(\mbox{$t_0M_{\rm i}^{\gamma}$})+0.207 \log(\mathfrak{F}_7) \label{eq:mmeancc-uf} \end{equation} The relation has the same slope as that of the Roche-lobe filling clusters after core collapse (Eq. \ref{eq:mmeanpostcc}), but the constant 0.178 for clusters with $\mathfrak{F}_7=1$ is different from the 0.075 of Roche-lobe filling clusters with $W_0=7$, which indicates a higher mean mass at core collapse. This is mainly due to differences in the IMF of the two sets of models. Fig. \ref{fig:tnref-uf}b shows that $\mbox{$t_{\rm ref,cc}^N$}= \mbox{$t_0^{\rm cc}$} \times \mbox{$\overline{m}_{\rm cc}$}^{\gamma}$ with $\gamma=0.70$ is approximately constant for these models, but with significant scatter. This shows that there is a small residual effect of the initial underfilling factor on the dissolution after core collapse. We found that we can approximate \begin{equation} \log(\mbox{$t_{\rm ref,cc}^N$}) = 0.875 + 0.127 \times \log(\mathfrak{F}_7) \label{eq:tnrefccuf} \end{equation} The constant $0.875$ is slightly larger than the value of $0.796$ for Roche-lobe filling models of $W_0=7$. Fig. \ref{fig:mmeanuf} shows the history of the mean stellar mass in the Roche-lobe underfilling cluster models as a function of $\mu$. The trends are approximately the same as in Fig. \ref{fig:mmean}, i.e. an initial decrease due to the loss of massive stars by stellar evolution, followed by an increase of \mbox{$\overline{m}$}\ after mass segregation has been established and low mass stars are lost preferentially. The difference is due to the different initial stellar IMFs, which have an initial $\mbox{$\overline{m}$}=0.623$ \mbox{$M_{\odot}$}\ for the Roche-lobe underfilling models instead of 0.547 \mbox{$M_{\odot}$}\ for the Roche-lobe filling models. Moreover, in the Roche-lobe underfilling models 90$\%$ of the black holes and neutron stars are ejected, whereas they were all retained in the Roche-lobe filling models.
Both effects influence the evolution of $\mbox{$\overline{m}$}$ (Kruijssen 2009). The decrease of \mbox{$\overline{m}$}\ is stronger than for the Roche-lobe filling models because the IMF of the Roche-lobe underfilling models reaches up to 100 \mbox{$M_{\odot}$}, whereas the IMF of the other models reaches only up to 15 \mbox{$M_{\odot}$}. We see that \mbox{$\overline{m}$}\ of the Roche-lobe underfilling models reaches its minimum at the same value of $t \simeq 0.15 t_{\rm tot}$ as the other models. This shows that full mass segregation is reached at the same fraction of the total lifetime, independent of the initial radius. \begin{figure} \epsfig{figure=lamers-fig14.ps,width=9.0cm} \caption[] {The evolution of the mean stellar mass in the Roche-lobe underfilling clusters as a function of $\mu$. The diamonds indicate the moment when $t=0.15 \mbox{$t_{\rm tot}$}$, at which time the mean mass reaches its minimum value due to mass segregation.} \label{fig:mmeanuf} \end{figure} \section{The relation between the cluster lifetime and \mbox{$t_0M_{\rm i}^{\gamma}$} } \label{sec:9} The lifetime of clusters depends on the time scales for mass loss by stellar evolution and on the dissolution constants \mbox{$t_0$}\ and \mbox{$t_0^{\rm cc}$}\ before and after core collapse. Stellar evolution dominates the mass loss only during the first part of the cluster lifetime and typically removes about 20 to $40\%$ of the cluster mass, after which dissolution takes over. Since the dissolution timescales before and after core collapse both depend on the strength of the tidal field in which the cluster moves, we may expect that the cluster lifetime depends largely on $\mbox{$t_0$} \mbox{$M_{\rm i}$} ^\gamma$. This is confirmed in Fig. \ref{fig:tone-tmig}, which shows a very tight relation between $\mbox{$t_0$} \times \mbox{$M_{\rm i}$}^\gamma$ and \mbox{$t_{\rm 1\%}$}. For clusters with an initial density concentration described by $W_0=5$ and $W_0=7$ in circular and elliptical orbits we find \begin{eqnarray} \log ( \mbox{$t_{\rm 1\%}$})& = & 0.518 + 0.864 \times \log ( \mbox{$t_0$} \mbox{$M_{\rm i}$}^{0.65})~~{\rm if}~W_0=5 \nonumber \\ & = & 0.797 + 0.778 \times \log ( \mbox{$t_0$} \mbox{$M_{\rm i}$}^{0.80})~~{\rm if}~ W_0=7 \label{eq:tone-tmig} \end{eqnarray} The relation between $\mbox{$t_0M_{\rm i}^{\gamma}$}$ (with $\gamma=0.80$) and \mbox{$t_{\rm 1\%}$}\ for the Roche-lobe underfilling clusters is indistinguishable from that of the Roche-lobe filling clusters of $W_0=7$. The tight correlations show that \mbox{$t_0M_{\rm i}^{\gamma}$}\ can be used as an accurate indicator of the lifetime of a cluster. Equation \ref{eq:tone-tmig} might suggest that the lifetime of a cluster is proportional to $\mbox{$M_{\rm i}$}^{0.56}$ for $W_0=5$ clusters and $\mbox{$M_{\rm i}$}^{0.62}$ for $W_0=7$ clusters. However, we recall that $\mbox{$t_0$}$ is proportional to $\mbox{$\overline{m}$}^{- \gamma}$ (Eq. \ref{eq:tnrefdef}) and $\mbox{$\overline{m}$} \propto (\mbox{$t_0M_{\rm i}^{\gamma}$})^{-0.121}$ and $(\mbox{$t_0M_{\rm i}^{\gamma}$})^{-0.094}$ (Eq. \ref{eq:mmeanprcc}) for $W_0=5$ and 7, respectively. This implies that $\mbox{$t_{\rm 1\%}$} \propto \mbox{$M_{\rm i}$}^{0.61}$ and $\mbox{$M_{\rm i}$}^{0.67}$ for $W_0=5$ and 7, in agreement with the values of the indices 0.62 and 0.67 derived by BM03.
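A minimal Python sketch of Eq. \ref{eq:tone-tmig} may be useful here; expressing \mbox{$t_0$}\ in Myr and \mbox{$M_{\rm i}$}\ in \mbox{$M_{\odot}$}\ is our assumed unit convention, which reproduces the tabulated lifetimes to within the scatter of the fit.
\begin{verbatim}
import numpy as np

def t_1pct(t0, M_i, W0=7):
    # Total lifetime t_1% in Myr from Eq. (tone-tmig); the W0 = 7
    # branch also applies to initially underfilling clusters.
    if W0 == 5:
        return 10.0**(0.518 + 0.864 * np.log10(t0 * M_i**0.65))
    return 10.0**(0.797 + 0.778 * np.log10(t0 * M_i**0.80))

# Model uf5 (t0 = 6.0 Myr, M_i ~ 2.1e4 Msun): tabulated t_1% = 13.4 Gyr
print(t_1pct(6.0, 21095.0) / 1.0e3)   # ~12 Gyr
\end{verbatim}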
\begin{figure} \centerline{\epsfig{figure=lamers-fig15.ps,width=9.5cm}} \caption[] {The relation between $\mbox{$t_0$} \times \mbox{$M_{\rm i}$}^\gamma$ and \mbox{$t_{\rm 1\%}$}\ for all Roche-lobe filling and underfilling cluster models in circular and elliptical orbits. The tight relation can be described by two linear equations (\ref{eq:tone-tmig}) for clusters with $W_0=5$ (full line) and $W_0=7$ (dashed line).} \label{fig:tone-tmig} \end{figure} \section{The predicted mass history of star clusters} \label{sec:10} In the previous sections we have described the interplay between the different mass loss processes of star clusters. Based on these results we derived a recipe for calculating the mass evolution of star clusters in different environments. The recipe is described in Appendix A. \subsection{The contribution of different effects to the mass loss} \label{sec:10.1} The mass loss from star clusters is due to several effects: stellar evolution, evolution-induced loss of stars, and dissolution (relaxation-driven mass loss) before and after core collapse. Fig. \ref{fig:contributions} shows the contributions of these different effects for two characteristic models: $\#$15, which has a lifetime of $\mbox{$t_{\rm 1\%}$}=1.3$ Gyr, and $\#$2, with $\mbox{$t_{\rm 1\%}$}=26.9$ Gyr. The two models show that clusters with a long lifetime ($>20$ Gyr) lose about 35\% of their mass by stellar evolution, 15\% by induced mass loss, and the remaining 50\% by dissolution. Clusters with a short lifetime ($<5$ Gyr) lose more than 60\% by dissolution, less than about 30\% by stellar evolution, and less than 10\% by induced mass loss. This is because the short lifetime is the ``result'' of a strong mass loss by dissolution, which does not leave much time for the cluster to lose a large fraction of its mass by stellar evolution. \begin{figure} \centerline{\hspace{+3.5cm}\epsfig{figure=lamers-fig16a.ps, width=8.0cm}\hspace{-3.5cm} \epsfig{figure=lamers-fig16b.ps, width=8.0cm}} \caption[]{The contribution of different mechanisms to the mass loss of two cluster models with a short lifetime (model 15, left; $\mbox{$t_{\rm 1\%}$}=1.3$ Gyr) and a long lifetime (model 2, right; $\mbox{$t_{\rm 1\%}$}=26.9$ Gyr). The induced mass loss is small for clusters with a short lifetime.} \label{fig:contributions} \end{figure} \subsection{Predicted M(t) of the total mass of clusters} \label{sec:10.2} We have calculated the $M(t)$ history of all cluster models listed in Table \ref{tbl:BM03models} with the recipe described in Appendix A. A subset of the results is shown in Fig. \ref{fig:M(t)-comparison}. The sample shown contains models with $W_0=5$ with a large (64k or 128k) and small (8k) number of stars at $\mbox{$R_{\rm Gal}$}=15$, 8.5 and 2.63 kpc, respectively; two models with $W_0=7$ (128k and 8k); two models in elliptical orbits ($\epsilon=0.2$ and 0.5); and five Roche-lobe underfilling models with different numbers of stars and initial half mass radii. The agreement is good for all models of clusters in circular and elliptical orbits, with initial concentrations $W_0=5$ and 7, and for the Roche-lobe underfilling clusters, including the ones not shown here. For cluster models in the original BM03 sample that are not discussed in this paper, the agreement is equally good. The different models have different shapes of $M(t)/\mbox{$M_{\rm i}$}$ versus $t/\mbox{$t_{\rm 1\%}$}$.
Clusters with a long lifetime ($\mbox{$t_{\rm 1\%}$} > 20$ Gyr, high \mbox{$M_{\rm i}$}) show a strong drop in mass during the first 5\% of their life, due to stellar evolution and induced mass loss, followed by a more gentle decrease. Clusters with a short lifetime ($\mbox{$t_{\rm 1\%}$} < 10$ Gyr, low \mbox{$M_{\rm i}$}) show a more gradual concave shape. All models show a bump in the $M(t)$-plot near the core collapse time: the mass loss rate is about twice as high after core collapse as before. The shapes of the $M(t)$ relations are all convex with various degrees of curvature. Only those cluster models for which core collapse occurs about halfway through their lifetime show a more or less linear mass history (e.g. models 16, uf9 and uf10). \begin{figure*} \centerline{\epsfig{figure=lamers-fig17a.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17b.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17c.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17d.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17e.ps,width=7.5cm}\hspace{-4.4cm}} \vspace{-0.3cm} \centerline{\epsfig{figure=lamers-fig17f.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17g.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17h.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17i.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17j.ps,width=7.5cm}\hspace{-4.4cm}} \vspace{-0.3cm} \centerline{\epsfig{figure=lamers-fig17k.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17l.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17m.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17n.ps,width=7.5cm}\hspace{-4.4cm} \epsfig{figure=lamers-fig17o.ps,width=7.5cm}\hspace{-4.4cm}} \caption[]{The $M(t)$ history of a representative set of cluster models: 6 models in circular orbits with $W_0=5$, two with $W_0=7$, two in elliptical orbits, and 5 Roche-lobe underfilling models. The models of the Roche-lobe filling clusters are specified by a vector containing: model nr, number of stars, $W_0$, \mbox{$R_{\rm Gal}$}\ in kpc and orbit. The models of the Roche-lobe underfilling clusters are specified by: model nr, number of stars, $W_0$, \mbox{$R_{\rm Gal}$}\ in kpc and $\mbox{$r_{\rm h}$}$ in pc. Full lines: derived from \mbox{$N$-body}\ simulations of BM03 or from the sample of Roche-lobe underfilling models presented here. Dotted line: predicted with the parameters listed in Tab. \ref{tbl:BM03models}. Dashed lines: predicted by the method described in Appendix A. The upper dash-dotted line shows the fraction of the mass that is lost directly by stellar evolution. The vertical tickmarks indicate the time of core collapse from the models (full) and calculated with Eqs. \ref{eq:tcctidal} and \ref{eq:tcc-uf} (dashed).} \label{fig:M(t)-comparison} \end{figure*} \section{Discussion} \label{sec:discussion} We have shown how the different mass loss effects of star clusters interact in the determination of their mass history. We have also derived a recipe for calculating the mass loss history of star clusters in different environments and with different metallicities and stellar initial mass functions. This study is based on the \mbox{$N$-body}\ simulations by BM03, supplemented with newer $N$-body simulations of Roche-lobe underfilling clusters, so the results are dependent on the characteristics of these models.
The \mbox{$N$-body}\ models that we used are relatively simple: the clusters start in virial equilibrium, without primordial mass segregation, without primordial binaries and with stars in isotropic orbits. The study of these simple models, which are valid after the gas expulsion phase, is a first step towards understanding the complicated interplay between the various dynamical effects in clusters. The models can be refined later when observational evidence shows which assumptions have to be improved. We discuss the major assumptions of the models, how they may have influenced our results, and how they can be taken into account in the recipe for computing $M(t)$.\\ (a) The BM03 models are Roche-lobe filling. This is a good assumption for open clusters and globular clusters which are close to the galactic centre or have very large half-mass radii. However, the majority of globular clusters probably formed with half-mass radii around 1 pc and therefore started strongly Roche-lobe underfilling (Baumgardt, Kroupa \& Parmentier 2008b). Some of these are still underfilling at present (Baumgardt et al. 2010). (b) The models that we used do not have initial mass segregation. The question of initial mass segregation is still open. Baumgardt et al. (2008a) found that the present overall mass function of most globular clusters can be explained without invoking initial mass segregation, but some clusters require initial mass segregation to explain their present mass function. Observations of young clusters, age $<$50 Myr, show evidence for mass segregation, e.g. Brandl et al. (1996) for R136; Hillenbrand and Hartmann (1998) for the Orion Nebula Cluster; McCrady et al. (2005) for M82-F. However, de Grijs et al. (2002) argued that this does not necessarily imply {\it initial} mass segregation, because the timescale for the dynamical segregation of high mass stars in young massive clusters may be very short. Dynamical mass segregation has been taken into account in the models we used in this study. (c) The models have no primordial binaries, but only dynamically formed binaries. It is expected that real clusters contain a large fraction of initial binaries (Elson et al. 1998, Hut et al. 1992, Hu et al. 2006, Sommariva et al. 2009). However, only hard binaries influence the cluster dynamics, as they can heat the cluster and prevent core collapse (Hut et al. 1992). K\"upper et al. (2008) have shown that the escape rate is hardly affected by binaries. (d) Stellar remnants are initially retained in the BM03 cluster models, i.e. they are not ejected with a kick velocity. The new Roche-lobe underfilling models have a 10\% retention factor of black holes and neutron stars. The retention of the remnants implies that the model clusters at older ages contain a large fraction of neutron stars, which are more massive than the average stellar mass. Clusters with a large fraction of massive remnants will dissolve faster due to the higher average stellar mass. Also the depletion rate of low mass stars could be different (Kruijssen 2009). (e) The effects of bulge shocks are included in the models of clusters in eccentric orbits. However, disk-shocking is not included in our models. Vesperini and Heggie (1997) have studied the effect of disk-shocking. Based on their results (see their Fig. 21) we conclude that the effect of disk-shocks on decreasing the lifetime of clusters is small, especially for clusters beyond the solar circle and for massive clusters.
Our models also do not include the effects of shocks by spiral density waves or passing GMCs. The latter effect is thought to be the main destruction mechanism for clusters in the Galactic plane (e.g. Lamers \& Gieles 2006) and probably also in GMC-rich interacting galaxies (Gieles et al. 2008). These effects will increase the dissolution compared to that of the BM03 models. However, they can easily be accounted for in the recipe that we derived for calculating $M(t)$, by simply adopting a smaller value of the dissolution parameter \mbox{$t_0$}. This correction is justified because shocks will remove stars from the outer regions of the cluster in approximately the same way as the tidal field. (f) The models are calculated for a given stellar initial mass function (a Kroupa mass function of $0.15<m<15$ \mbox{$M_{\odot}$}\ for the BM03 models and a Kroupa mass function of $0.10 < m < 100$ \mbox{$M_{\odot}$}\ for the new Roche-lobe underfilling models) and for a given metallicity of $Z=0.001$. The mass loss by stellar evolution depends on these assumptions. However, the recipe that we derived allows the choice of different metallicities and different IMFs, by applying the approximate formulae that describe the mass loss by stellar evolution and the formation of remnants for a grid of metallicities listed in Appendix B. (g) We assumed that the clusters move in a spherical logarithmic potential with a constant rotation speed. This implies that we may have underestimated the effect of disk-shocking, which is important for clusters in disk galaxies. A study of the effects of a non-spherical halo and the resulting non-circular orbits with disk-shocking has to be postponed to future studies. (The use of GPUs for the computations of cluster dynamics will allow a significant expansion of the parameter space of cluster models.) \section{Summary and Conclusions} \label{sec:conclusions} Based on \mbox{$N$-body}\ simulations by BM03 of the evolution of Roche-lobe filling star clusters of different initial concentrations and in different orbits in the Galaxy, and on a new sample of Roche-lobe underfilling clusters in circular orbits, we have studied the interplay between the different mass loss effects: mass loss by stellar evolution, loss of stars induced by stellar evolution, and dynamical mass loss (referred to as ``dissolution'') before and after core collapse. At young ages stellar evolution is the dominant effect. The fast (adiabatic) evolutionary mass loss results in a simultaneous expansion of the cluster and a shrinking of its tidal (Jacobi) radius, so the outer cluster layers become unbound. This {\it evolution-induced mass loss} contributes to the overall mass loss if the cluster is deeply immersed in the tidal field, i.e. if the cluster is initially filling its tidal radius. The evolution-induced mass loss rate is proportional to the mass loss rate by stellar evolution, but it is smaller for clusters whose mass loss rate by dissolution exceeds that by stellar evolution. This is for instance the case for low mass clusters or for clusters in orbits close to the Galactic center. The \mbox{$N$-body}\ models show that the induced mass loss does not start immediately, but that it needs time to build up. This build-up can be described by an exponential function (Eqs. \ref{eq:find} and \ref{eq:fdelay}) with a delay time scale of a few (typically 3) crossing times at the tidal radius.
For Roche-lobe underfilling clusters the delay time scale is much longer, of the order of a few half mass relaxation times, because the cluster first has to expand to the tidal radius. The actual value depends on the initial Roche-lobe underfilling factor. As the evolution-induced mass loss rate needs time to get going, the {\it total amount of evolution-induced mass loss} is considerably smaller, typically 10 to 50\%\ of the total amount of mass lost by stellar evolution (see Fig. \ref{fig:contributions}). The mass loss of the cluster models by dissolution needs time to build up, just like the evolution-induced mass loss. However, this is a consequence of the initial conditions of the cluster model, and the start of the dissolution might be very different in real clusters (Sect. 5.3). We have shown from both theory and the model simulations that the dissolution rate depends on the environment of the clusters and can be described accurately by a formula of the type $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} = -M(t)^{1-\gamma}/\mbox{$t_0$}$, with $M$ and $t$ in units of solar mass and Myr. The value of the dissolution parameter $\mbox{$t_0$}$ depends on the environment, e.g. the Galactic potential, the orbit, and shocks by spiral arms and passing GMCs. We have derived expressions for estimating $t_0$ for clusters in galaxies where tidal evaporation is the main dissolution effect. This value depends on the Galactic potential (i.e. the galactic rotation velocity), the orbit of the cluster, the initial concentration characterized by $W_0$, and on the evolution of the mean stellar mass. We have derived an expression for the mean stellar mass during the pre-core collapse phase and the post-core collapse phase for various initial mass functions and metallicities. For clusters in an environment where shock-heating by encounters with spiral arms or GMCs is important, the value of \mbox{$t_0$}\ can be estimated using the descriptions by Gieles et al. (2006, 2007). The slope of the stellar mass function depends mainly on the remaining mass fraction $\mu=M(t)/\mbox{$M_{\rm i}$}$ of the clusters and hardly on the initial parameters, such as mass, mass function, concentration factor and strength of the tidal field (e.g. Vesperini \& Heggie, 1997; BM03; Trenti et al. 2010). This effect can also be seen in our results in the evolution of the mean stellar mass of a cluster. The data in Figs. \ref{fig:mmean}, \ref{fig:mmeancc-ecc} and \ref{fig:mmeanuf} show a very similar evolution of $\mbox{$\overline{m}$}$ as a function of $\mu$ in almost all models. The main difference is that clusters with short lifetimes have an offset of \mbox{$\overline{m}$}\ to higher values. This is because the mean stellar mass not only depends on the slope of the mass function, but also on the mass of the most massive stars that have survived stellar evolution. This upper mass depends on the age of the cluster and not on its mass fraction. The details of the \mbox{$N$-body}\ simulations have shown that $\gamma = 0.65$ for clusters with an initial density distribution of a King-profile with $W_0=5$ and $\gamma=0.80$ if $W_0=7$. The difference in $\gamma$ is due to the fact that the dissolution timescale depends on both the half-mass relaxation time and the crossing time. Initially Roche-lobe underfilling clusters quickly expand due to mass loss by stellar evolution and reach a density distribution of approximately $W_0=7$, so their dissolution is also described by $\gamma=0.80$.
These values of $\gamma$ apply to the pre-core collapse phase\footnote{ We point out that this formula describes the time-dependent mass loss rate per cluster. It is different from the formula that was derived by BM03 (their Eq. 7) to describe the dependence between the total lifetime of a cluster and its initial mass.}. We note that our Roche-lobe underfilling models have half-mass relaxation times between 40 and 800 Myr. The central relaxation times are about 10 times shorter, i.e. 4 to 80 Myr, but still longer than the evolution time of the most massive stars. If the initial radius is smaller than in our models, e.g. $\le$ 0.5 pc, the core relaxation time may be shorter than the evolution time and the cluster concentration might decrease rather than increase due to stellar evolution. The \mbox{$N$-body}\ simulations showed that cluster dissolution does not start right away, but that it also needs time to get going. We find that this build-up can be described by the same exponential function with the same time scale as the evolution-induced mass loss (Eq. \ref{eq:fdelay}). The core collapse time \mbox{$t_{\rm cc}$}\ of the models can be expressed in terms of the initial half mass relaxation time \mbox{$t_{\rm rh}$}\ by a simple relation that depends on the underfilling factor $\mathfrak{F}$ (Eq. \ref{eq:uffactor}). For a Roche-lobe filling cluster of $\mbox{$M_{\rm i}$} = 10^5~ \mbox{$M_{\odot}$}$ in a circular orbit, $\mbox{$t_{\rm cc}$} \simeq 5~ \mbox{$t_{\rm rh}$} (t=0)$. The mass loss rate by dissolution increases at core collapse by about a factor of 2, depending on the model, and has a different mass dependence after core collapse than before: $\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} = -M(t)^{1-\mbox{$\gamma_{\rm cc}$}}/ t_0^{\rm cc}$ with $\mbox{$\gamma_{\rm cc}$}=0.70$ for all models. This is independent of the initial density distribution, because this is erased by the core collapse. When the mass of the cluster decreases to $M(t) \le 10^3~ \mbox{$M_{\odot}$}$ the mass loss dependence changes to $\mbox{$\gamma_{\rm cc2}$}=0.40$. This is due to the variation of the Coulomb logarithm in the dependence of the relaxation time on the number of stars $N$ in the cluster. We derived an expression for the dissolution parameter after core collapse, $t_0^{\rm cc}$ (Sect. \ref{sec:6.3}). We have derived simple expressions for the parameters that describe the evolution-induced mass loss and the dissolution, in terms of the initial cluster parameters (\mbox{$M_{\rm i}$}, $W_0$) and the orbit (\mbox{$R_{\rm Gal}$}\ and eccentricity). We also derived parameters that describe the mass loss by stellar evolution for different stellar IMFs and metallicities. With these parameters we can describe the different mass loss effects throughout the lifetime of a cluster. By integrating $ (dM/dt)_{\rm tot} = \mbox{$({\rm d}M/{\rm d}t)_{\rm ev}$} + \mbox{$({\rm d}M/{\rm d}t)_{\rm ind}^{\rm ev}$} +\mbox{$({\rm d}M/{\rm d}t)_{\rm dis}$} $, starting from the initial mass \mbox{$M_{\rm i}$}, we can calculate the mass loss histories of clusters. For this purpose we describe a simple recipe for calculating $M(t)$ in Appendix A, which provides a summary of the equations. The resulting mass histories are compared with those derived from the \mbox{$N$-body}\ simulations. Some of the characteristic results are shown in Fig. \ref{fig:M(t)-comparison}. The agreement is very good, within a few percent of the initial mass. The agreement is equally good for the cluster models of BM03 that were not used in this paper.
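As a cross-check of any implementation of this recipe, note that in the pure-dissolution limit (no stellar evolution, no induced loss, no delay phase) the rate equation integrates in closed form to $M(t)=\big(\mbox{$M_{\rm i}$}^\gamma-\gamma\,t/\mbox{$t_0$}\big)^{1/\gamma}$. A minimal Euler-integration sketch along these lines follows; it is not the full Appendix A recipe: the stellar-evolution and induced terms are left as a zeroed placeholder and the delay factors are omitted. Times are in Myr and masses in \mbox{$M_{\odot}$}.
\begin{verbatim}
import numpy as np

def dmdt_dis(M, t, t0, t0_cc, t_cc, gamma=0.80):
    # Dissolution rate in Msun/Myr.  Before core collapse:
    # -M^(1-gamma)/t0.  After core collapse: gamma_cc = 0.70, and
    # below 10^3 Msun gamma_cc2 = 0.40 with t0_cc2 = t0_cc*10^0.90,
    # which keeps the rate continuous at the break mass.
    if t < t_cc:
        g, tau = gamma, t0
    elif M > 1.0e3:
        g, tau = 0.70, t0_cc
    else:
        g, tau = 0.40, t0_cc * 10.0**0.90
    return -M**(1.0 - g) / tau

def evolve_cluster(M_i, t0, t0_cc, t_cc, gamma=0.80, dt=1.0,
                   dmdt_ev=lambda t, M: 0.0):
    # Forward-Euler integration of (dM/dt)_tot down to t_1%.
    # dmdt_ev stands in for the stellar-evolution and induced
    # terms of Appendix A; here it is a placeholder set to zero.
    t, M = 0.0, float(M_i)
    while M > 0.01 * M_i:
        M += (dmdt_ev(t, M) + dmdt_dis(M, t, t0, t0_cc, t_cc, gamma)) * dt
        t += dt
    return t, M   # (t_1%, remaining mass)

# Pure dissolution without core collapse: the closed form
# M(t) = (M_i^gamma - gamma*t/t0)^(1/gamma) gives t_1% ~ 21.1 Gyr
print(evolve_cluster(M_i=2.0e4, t0=6.0, t0_cc=6.0, t_cc=np.inf))
\end{verbatim}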
The method presented here describes the variation of the total mass, i.e. stars plus remnants, of a cluster with age. To derive the luminous mass, one has to correct the total mass for the contribution by remnants. In the calculations of BM03 the newly formed remnants were retained in the cluster (no kick velocity was assumed). In the method that we present here, the kick fractions of black holes, neutron stars and white dwarfs can be specified as free parameters. In later phases part of the remnants can be lost by dissolution. At late ages remnant neutron stars and black holes are the most massive objects in the cluster. They will sink to the center and are not likely to be lost by dynamical effects. The results of this paper and the methods can be used to predict the mass histories of star clusters with different stellar IMFs and different metallicities in different environments. This can then be used to predict the evolution of the mass function of cluster systems. \section*{Acknowledgments} We thank Onno Pols for providing us with updated evolutionary calculations and Diederik Kruijssen for his comments on the manuscript. Simon Portegies Zwart has given important advice. HJGLM and HB thank ESO for Visiting Scientist Fellowships during a few months in 2008 and 2009 in Garching and Santiago, when this study was performed. This research was supported by the DFG cluster of excellence Origin and Structure of the Universe (www.universe-cluster.de). \bibliographystyle{mn2e}
\section{Introduction} As computational capacity increases, end-to-end deep learning models have been employed in autonomous driving systems, which may raise new safety concerns. Autonomous driving systems are normally divided into sub-tasks: localization and mapping, perception, assessment, planning and decision making, vehicle control, and human-machine interface \cite{Yurtsever2020}. Recently, end-to-end driving, which maps inputs directly to steering commands, started to rise as an alternative to modular systems. ALVINN was the earliest attempt at end-to-end driving, where a 3-layer fully connected network was trained to produce the direction a vehicle should follow \cite{NIPS1988_812b4ba2}. There are also applications of end-to-end off-road driving \cite{NIPS2005_fdf1bc56}. More recently, researchers built a convolutional neural network to map raw pixels from a single front-facing camera directly to steering commands \cite{bojarski2016end}. End-to-end learning may lead to better performance and smaller systems, but such models can be fooled by adversarial attacks that add imperceptible perturbations to inputs.
\begin{figure}[H] \centering \includegraphics[scale=0.215]{adversarial-driving.png} \caption{Adversarial Driving: The behavior of an end-to-end autonomous driving model can be manipulated by adding imperceptible perturbations to the input image.} \label{fig:adv_drv} \end{figure} Existing adversarial attacks can be categorized into white-box, gray-box, and black-box attacks \cite{REN2020346}. In white-box attacks, the adversaries have full knowledge of their target model, including model architecture and parameters \cite{goodfellow2014explaining}. In gray-box attacks, the adversaries only have access to the structure of the target model. In black-box attacks, the adversaries can only gather information about the model through querying. Current research on adversarial attacks mainly focuses on classification tasks, while the effect of these attacks on regression tasks remains rarely explored. The contributions of this paper are summarized as follows: \begin{itemize} \item We introduce adversarial driving: the first online attack against an end-to-end regression model for autonomous driving (Figure \ref{fig:adv_drv}). \item We devise two white-box attacks that can be mounted in real-time: one produces the perturbation for each frame, while the other generates a universal perturbation that can be applied to all frames. \item The attacking system is open-sourced and is extensible to include more attacks for further research\footnote{Code: \href{https://github.com/wuhanstudio/adversarial-driving}{https://github.com/wuhanstudio/adversarial-driving}}. \end{itemize} \clearpage \section{Adversarial Driving} \subsection{Adversarial Attacks on Regression Tasks} Adversarial attacks on classification tasks have been widely studied in the last decade, but relatively few works focus on regression tasks. For classification tasks, an attack is effective if the prediction differs from the ground truth; for regression tasks, by contrast, an admissible prediction may lie anywhere within a range. For instance, a predicted house price can fluctuate within a reasonable range. Taking the actual value as the ground truth \cite{nguyen2018adversarial}, we can use the Root Mean Square Error (RMSE) to measure the effectiveness of attacks. An effective attack should produce a higher RMSE loss than random noise \cite{msml}. For autonomous driving, which is a regression task, current research focuses on asynchronous offline attacks: the driving record is split into static images and corresponding steering angles, the attack is applied to each static image, and an overall success rate is reported \cite{Deng2020}. However, many traffic incidents are caused by minor mistakes at a critical point, so some stealth attacks with low overall success rates could still be perilous. On the other hand, similar to human drivers, driving models can react to adversarial attacks; some attacks may be neutralized by the model's reactions if they are applied synchronously. To investigate those stealth attacks and the driving model's reactions to them, we perform online attacks, which means the attack is applied while the vehicle is navigating. It is risky to perform online attacks against real-world autonomous driving systems, so our adversarial driving system employs a self-driving simulator.
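To make the effectiveness criterion above concrete, here is a minimal sketch comparing the RMSE induced by an attack with that induced by random noise; the steering values are hypothetical.
\begin{verbatim}
import numpy as np

def rmse(y_adv, y_clean):
    # RMSE between predictions under perturbation and the model's
    # clean predictions, which we take as the ground truth.
    y_adv, y_clean = np.asarray(y_adv), np.asarray(y_clean)
    return float(np.sqrt(np.mean((y_adv - y_clean) ** 2)))

# Hypothetical steering angles in [-1, 1]:
clean    = np.array([0.02, -0.01, 0.05])
noisy    = np.array([0.03, -0.02, 0.04])   # under random noise
attacked = np.array([0.51,  0.48, 0.55])   # under an attack
print(rmse(noisy, clean), rmse(attacked, clean))
# the attack is effective if the second RMSE exceeds the first
\end{verbatim}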
\subsection{Adversarial Driving: Online Attack } Previous offline attacks usually rely on the ground truth; for example, the fast gradient sign method (FGSM) linearizes the cost function around the current value of $\theta$ and obtains an optimal max-norm constrained perturbation \cite{goodfellow2014explaining}: \begin{equation} \eta=\epsilon sign(\nabla_{x}J(\theta, x, y)) \end{equation} However, for a navigating autonomous driving system, the cost $J(\theta, x, y)$ cannot be calculated because there is no ground-truth steering command. Even the same experienced human driver can take different actions under the same circumstances: the steering command needed to drive safely is not unique. Thus we need to define a suitable ground truth for our attacks. Our attack methods rest on several assumptions: \begin{itemize} \item Our attacks are online, without pre-labeled ground truth. \item The driving model is accurate enough that we can take its output as the ground truth. If the model itself were fallacious, there would be no need to attack it, since the model would fail the driving task on its own. \item An attack is successful if the deviation under attack is greater than the deviation under random noise. \end{itemize} Following these assumptions, we devise two different kinds of attack that can manipulate the behavior of the end-to-end autonomous driving system. \subsubsection{Fast Gradient Sign Method on Regression (FGSMr)} First, we introduce a white-box attack that calculates the perturbation at each timestep. This attack can push the vehicle in the desired direction (either left or right). A neural network is denoted as $f(\cdot)$ with input $x$ and prediction $f(x)$. Attacking a regression model can be treated as a binary targeted attack: we can either increase or decrease the prediction. \begin{equation} \eta=\epsilon sign(\nabla_{x}\pm f(x)) \end{equation} Instead of linearizing the cost function, we linearize the output of the model directly to manipulate the behavior of the driving model. Linearizing $f(x)$ will increase the output, while linearizing $-f(x)$ will decrease the output. Our steering command ranges from -1 to 1 (from left to right). If we linearize $f(x)$, the predicted steering command will increase, and thus the attack will push the vehicle to the right side. Similarly, we can attack the vehicle to the left side by linearizing $-f(x)$. \subsubsection{Universal Adversarial Perturbation on Regression (UAPr)} Second, we introduce another white-box attack that calculates a single universal perturbation for all timesteps. The attack consists of two procedures, learning and execution. During the learning procedure, for each frame, we generate the universal perturbation by linearizing the output of the model, and find the minimum perturbation that changes the sign of the prediction to the desired direction \cite{moosavi2016deepfool}: \begin{equation} \Delta \eta \leftarrow \frac{\nabla_{x} (\pm f(x))}{||\omega||_{2}} \end{equation} Then we project the overall perturbation on the $l_p$ ball centered at 0 and of radius $\xi$ to ensure that the constraint $||{\eta}^{'}||_{p} \leq \xi$ is satisfied \cite{moosavi2017universal}: \begin{equation} P_{p, \xi}(\eta) = \text{arg }\underset{{\eta}^{'}}{\text{min }}||\eta-\eta^{'}||_{2} \text{ subject to } ||{\eta}^{'}||_{p} \leq \xi \end{equation} The generated universal perturbation can then be applied to the input image at every timestep during the execution procedure.
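A minimal PyTorch sketch of the two attacks follows; it assumes a differentiable driving model mapping an image tensor to a scalar steering command, and the pixel range used for clamping is an assumption of the sketch rather than a property of any particular simulator.
\begin{verbatim}
import torch

def fgsmr(model, x, epsilon=1.0, direction=+1, pixel_range=(0.0, 255.0)):
    # FGSMr: linearize +/- f(x) itself, so no label is needed.
    # direction = +1 increases the prediction (pushes the vehicle
    # right); direction = -1 decreases it (pushes it left).
    x = x.clone().detach().requires_grad_(True)
    (direction * model(x)).sum().backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(*pixel_range).detach()

def project_l2(eta, xi):
    # Projection step of UAPr for p = 2: keep the accumulated
    # universal perturbation inside the l2 ball of radius xi.
    norm = eta.norm(p=2)
    return eta if norm <= xi else eta * (xi / norm)
\end{verbatim}
In the learning procedure of UAPr, the per-frame steps $\Delta\eta$ are accumulated into $\eta$, which is re-projected with \texttt{project\_l2} after each frame.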
\subsection{System Architecture} \begin{figure}[H] \centering \includegraphics[scale=0.27]{structure.png} \caption{Adversarial Driving: System Architecture} \label{fig:architecture} \end{figure} The adversarial driving system consists of three key components: the simulator, the server, and the front end. \textbf{Simulator}: The self-driving simulator utilizes Unity3D, a game engine. The simulator connects to a WebSocket server; once connected, it publishes the image of each frame and accepts steering commands from the server. \textbf{Server}: The WebSocket server accepts connections from the simulator and sends back the control commands. It also publishes the generated adversarial images to the front end and receives attack commands from the web browser. \textbf{Front end}: The front end is a website where the attacker can choose different kinds of attacks and monitor the status of the simulator. \subsection{Evaluation} We perform several attacks against the NVIDIA end-to-end self-driving model \cite{bojarski2016end}. The steering command ranges from -1 to 1 (from left to right). The average absolute and relative deviations of the steering angle under 1000 attacks are listed in Table \ref{tab:result}. \begin{table}[H] \centering \begin{tabular}{ccr} \hline Attack & Abs. dev. & Rel. dev.\\ \hline Random Noises & 0.01 & 69\% \\ FGSMr-Left ($\epsilon=1$) & 0.51 & 952\% \\ FGSMr-Right ($\epsilon=1$) & 0.52 & 2405\% \\ UAPr-no-Left ($p=2, \xi=10$) & 0.10 & 240\% \\ UAPr-no-Right ($p=2, \xi=10$) & 0.11 & 378\% \\ \hline \end{tabular} \caption{Evaluations of Adversarial Driving Attacks} \label{tab:result} \end{table} \textbf{FGSMr is a strong attack}. Once under attack, the vehicle runs off the road within several seconds. The absolute deviations are similar for the two attack directions. \textbf{UAPr is a stealth attack}. Its absolute deviation is larger than that of random noise, indicating that it is effective, while much smaller than that of FGSMr. The attack is stealthy because the deviation of the steering command is slight while still making the vehicle hard to control, which could lead to incidents at certain critical points. Moreover, since the same perturbation is applied to all frames, UAPr is much faster than FGSMr. In conclusion, we devise one strong attack (FGSMr) and one stealth attack (UAPr). A strong attack deviates the vehicle within several seconds, while a stealth attack could cause incidents at certain critical points. As a result, the end-to-end driving model is vulnerable to adversarial attacks. \section{Conclusion and Future Work} In this research, we devise two white-box targeted attacks against end-to-end autonomous driving systems. The behavior of the driving model can be manipulated by adding perturbations to the input image. Our research demonstrates that an autonomous driving system, which performs a regression task, is also vulnerable to adversarial attacks. Further research could investigate the effect of black-box attacks against end-to-end autonomous driving systems. It is also possible that modular systems with inputs from multiple sensors are vulnerable to adversarial attacks. This research may raise concerns over applications of end-to-end models in safety-critical systems. \bibliographystyle{named}
\section{#1} \setcounter{equation}{0}} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \theoremstyle{definition} \newtheorem{remark}[theorem]{Remark} \theoremstyle{definition} \newtheorem{definition}[theorem]{Definition} \theoremstyle{definition} \newtheorem{assumption}[theorem]{Assumption} \newcommand{{\int\hspace*{-4.3mm}\diagup}}{{\int\hspace*{-4.3mm}\diagup}} \makeatletter \def\dashint{\operatorname% {\,\,\text{\bf-}\kern-.98em\DOTSI\intop\ilimits@\!\!}} \makeatother \newcommand{\WO}[2]{\overset{\scriptscriptstyle0}{W}\,\!^{#1}_{#2}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \def\textit{\textbf{c}}{\textit{\textbf{c}}} \def\textit{\textbf{u}}{\textit{\textbf{u}}} \def\textit{\textbf{v}}{\textit{\textbf{v}}} \def\textit{\txfextbf{w}}{\textit{\txfextbf{w}}} \def\textit{\textbf{f}}{\textit{\textbf{f}}} \def\textit{\textbf{g}}{\textit{\textbf{g}}} \def\textit{\textbf{h}}{\textit{\textbf{h}}} \def\textit{\textbf{P}}{\textit{\textbf{P}}} \def\textit{\textbf{\phi}}{\textit{\textbf{\phi}}} \def\\det{\text{det}} \def\tilde{\mathcal{L}_0^\sigma}{\tilde{\mathcal{L}_0^\sigma}} \def\hat{\mathcal{L}_0^\sigma}{\hat{\mathcal{L}_0^\sigma}} \def\alpha'+\sigma{\alpha'+\sigma} \def\alpha'/\sigma{\alpha'/\sigma} \defa{a} \defb{b} \defc{c} \def{\sf A}{{\sf A}} \def{\sf B}{{\sf B}} \def{\sf M}{{\sf M}} \def{\sf S}{{\sf S}} \def\mathrm{i}{\mathrm{i}} \def\.5{\frac{1}{2}} \def\mathbb{A}{\mathbb{A}} \def\mathbb{O}{\mathbb{O}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{E}{\mathbb{E}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{H}{\mathbb{H}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{C}{\mathbb{C}} \def\tilde{G}{\tilde{G}} \def\textsl{\textbf{a}}{\textsl{\textbf{a}}} \def\textsl{\textbf{x}}{\textsl{\textbf{x}}} \def\textsl{\textbf{y}}{\textsl{\textbf{y}}} \def\textsl{\textbf{z}}{\textsl{\textbf{z}}} \def\textsl{\textbf{w}}{\textsl{\textbf{w}}} \def\mathfrak{L}{\mathfrak{L}} \def\mathfrak{B}{\mathfrak{B}} \def\mathfrak{O}{\mathfrak{O}} \def\mathfrak{R}{\mathfrak{R}} \def\mathfrak{S}{\mathfrak{S}} \def\mathfrak{T}{\mathfrak{T}} \def\mathfrak{q}{\mathfrak{q}} \def\text{Re}\,{\text{Re}\,} \def\text{Im}\,{\text{Im}\,} \def\mathcal{A}{\mathcal{A}} \def\mathcal{B}{\mathcal{B}} \def\mathcal{C}{\mathcal{C}} \def\mathcal{D}{\mathcal{D}} \def\mathcal{E}{\mathcal{E}} \def\mathcal{F}{\mathcal{F}} \def\mathcal{G}{\mathcal{G}} \def\mathcal{H}{\mathcal{H}} \def\mathcal{P}{\mathcal{P}} \def\mathcal{M}{\mathcal{M}} \def\mathcal{O}{\mathcal{O}} \def\mathcal{Q}{\mathcal{Q}} \def\mathcal{R}{\mathcal{R}} \def\mathcal{S}{\mathcal{S}} \def\mathcal{T}{\mathcal{T}} \def\mathcal{L}{\mathcal{L}} \def\mathcal{U}{\mathcal{U}} \def\mathcal{I}{\mathcal{I}} \newcommand\frC{\mathfrak{C}} \def\bar{P}{\bar{P}} \newcommand{\RN}[1]{% \textup{\uppercase\expandafter{\romannumeral#1}}% } \newcommand{\ip}[1]{\left\langle#1\right\rangle} \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\norm}[1]{\lVert#1\rVert} \newcommand{\Norm}[1]{\left\lVert#1\right\rVert} \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\tri}[1]{|\|#1|\|} \newcommand{\operatorname{div}}{\operatorname{div}} \newcommand{\text{dist}}{\text{dist}} \newcommand{\operatornamewithlimits{argmin}}{\operatornamewithlimits{argmin}} \renewcommand{\epsilon}{\varepsilon} \begin{document} \title[Nonlocal elliptic equations]{Dini estimates for nonlocal fully nonlinear elliptic equations} \author[H. Dong]{Hongjie Dong} \address[H. 
Dong]{Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI 02912, USA} \email{Hongjie\[email protected]} \thanks{H. Dong was partially supported by the NSF under agreements DMS-1056737 and DMS-1600593.} \author[H. Zhang]{Hong Zhang} \address[H. Zhang]{Division of Applied Mathematics, Brown University, 182 George Street, Providence, RI 02912, USA} \email{Hong\[email protected]} \thanks{H. Zhang was partially supported by the NSF under agreement DMS-1056737.} \begin{abstract} We obtain Dini type estimates for a class of concave fully nonlinear nonlocal elliptic equations of order $\sigma\in (0,2)$ with rough and non-symmetric kernels. The proof is based on a novel application of Campanato's approach and a refined $C^{\sigma+\alpha}$ estimate in \cite{DZ16}. \end{abstract} \maketitle \section{Introduction and main results} The paper is a continuation of our previous work \cite{DZ16}, where we studied Schauder estimates for concave fully nonlinear nonlocal elliptic and parabolic equations. In particular, when the kernels are translation invariant and the data are merely bounded and measurable, we proved the $C^{\sigma}$ estimate, which is very different from the classical theory for second-order elliptic and parabolic equations. In this paper, we consider concave fully nonlinear nonlocal elliptic equations with Dini continuous coefficients and nonhomogeneous terms, and establish a $C^\sigma$ estimate under these assumptions. The study of classical elliptic equations with Dini continuous coefficients and data has a long history. Burch \cite{Burch78} first considered divergence type linear elliptic equations with Dini continuous coefficients and data, and estimated the modulus of continuity of the derivatives of solutions. The corresponding result for concave fully nonlinear elliptic equations was obtained by Kovats \cite{Kovats97}, which generalized a previous result by Safonov \cite{Saf88}. Wang \cite{Wang06} studied linear non-divergence type elliptic and parabolic equations with Dini continuous coefficients and data, and gave a simple proof to estimate the modulus of continuity of the second-order derivatives of solutions. See also \cite{Lieb87,Sp81,Bao02,DG02,MM11,Y.Li2016} and the references therein. Recently, there has been extensive work on the regularity theory for nonlocal elliptic and parabolic equations. For example, $C^\alpha$ estimates, $C^{1,\alpha}$ estimates, Evans-Krylov type theorems, and Schauder estimates were established in the past decade. See, for instance, \cite{CS09,CS11,DK11,DK13,KL13,CD14,CK15, JX15,JX152,JS15,Mou16}, and the references therein. In particular, Mou \cite{Mou16} investigated a class of concave fully nonlinear nonlocal elliptic equations with smooth symmetric kernels, and obtained the $C^{\sigma}$ estimate under a slightly stronger assumption than the usual Dini continuity on the coefficients and data. The author implemented a recursive Evans-Krylov theorem, which was first studied by Jin and Xiong \cite{JX15}, as well as a perturbation type argument. In this paper, by using a novel perturbation type argument, we relax the regularity assumption to mere Dini continuity and also remove the symmetry and smoothness assumptions on the kernels.
To be more specific, we are interested in fully nonlinear nonlocal elliptic equations in the form \begin{equation} \label{eq 1} \inf_{\beta\in \mathcal{A}}(L_{\beta}u+f_{\beta})=0, \end{equation} where $\mathcal{A}$ is an index set and for each $\beta\in \mathcal{A}$, $$ L_{\beta}u=\int_{\mathbb{R}^d} \delta u(x,y)K_{\beta}(x,y)\,dy, $$ \begin{align*} \delta u(x,y)= \begin{cases} u(x+y)-u(x)-y\cdot Du(x)\quad&\text{for}\,\, \sigma\in(1,2),\\ u(x+y)-u(x)-y\cdot Du(x)\chi_{B_1}\quad&\text{for}\,\, \sigma=1,\\ u(x+y)-u(x)\quad &\text{for}\,\, \sigma\in (0,1), \end{cases} \end{align*} and $$ K_{\beta}(x,y)=a_\beta(x,y)|y|^{-d-\sigma}. $$ This type of nonlocal operator was first investigated by Komatsu \cite{Komatsu84}, Mikulevi$\check{\text{c}}$ius and Pragarauskas \cite{MP92,MP14}, and later by Dong and Kim \cite{DK11,DK13}, and Schwab and Silvestre \cite{SS16}, to name a few. We assume that $a(\cdot,\cdot)\in [\lambda, \Lambda]$ for some ellipticity constants $0<\lambda\le \Lambda$, and is merely measurable with respect to the $y$ variable. When $\sigma=1$, we additionally assume that \begin{equation} \int_{S_r}yK_{\beta}(x,y)\,ds=0,\label{eq10.58} \end{equation} for any $r>0$, where $S_r$ is the sphere of radius $r$ centered at the origin. We say that a function $f$ is Dini continuous if its modulus of continuity $\omega_f$ is a Dini function, i.e., $$ \int_0^1 \omega_f(r)/r\,dr<\infty. $$ The following theorem is our main result. \begin{theorem}\label{thm 1} Let $\sigma\in (0,2)$, $0<\lambda\le \Lambda<\infty$, and $\mathcal{A}$ be an index set. Assume for each $\beta\in \mathcal{A}$, $K_{\beta}$ satisfies \eqref{eq10.58} when $\sigma=1$, and \begin{align*} &\big|a_{\beta}(x,y)-a_{\beta}(x',y)\big| \le \Lambda\omega_a(|x-x'|),\\ &|f_{\beta}(x)-f_{\beta}(x')|\le \omega_f(|x-x'|),\quad \sup_{\beta\in \mathcal{A}}\|f_{\beta}\|_{L_\infty(B_1)}<\infty, \end{align*} where $\omega_a$ and $\omega_f$ are Dini functions. Suppose $u\in C^{\sigma^+}(B_1)$ is a solution of \eqref{eq 1} in $B_1$ and is Dini continuous in $\mathbb{R}^d$. Then we have the a priori estimate \begin{equation}\label{eq12.17} [u]_{\sigma;B_{1/2}}\le C\|u\|_{L_\infty}+C\sup_\beta\|f_\beta\|_{L_\infty(B_1)} +C\sum_{j=1}^\infty\big(\omega_u(2^{-j})+\omega_f(2^{-j})\big), \end{equation} where $C>0$ is a constant depending only on $d$, $\sigma$, $\lambda$, $\Lambda$, and $\omega_a$. Moreover, when $\sigma\neq 1$, we have $$ \sup_{x_0\in B_{1/2}} [u]_{\sigma;B_r(x_0)}\to 0 \quad\text{as}\quad r\to 0 $$ with a decay rate depending only on $d$, $\sigma$, $\lambda$, $\Lambda$, $\omega_a$, $\omega_f$, $\omega_u$, and $\sup_{\beta\in \mathcal{A}}\|f_\beta\|_{L_\infty(B_1)}$. When $\sigma=1$, $Du$ is uniformly continuous in $B_{1/2}$ with a modulus of continuity controlled by the quantities above. \end{theorem} Here for simplicity we assume $u\in C^{\sigma^+}(B_1)$, which means that $u\in C^{\sigma+\varepsilon}(B_1)$ for some $\varepsilon>0$. This condition is only needed for $L_\beta u$ to be well defined, and may be replaced by other weaker conditions. \begin{remark} By a careful inspection of the proofs below, one can see that the estimates above in fact only depend on $d$, $\sigma$, $\lambda$, $\Lambda$, $\sup_{\beta\in \mathcal{A}}\|f_\beta\|_{L_\infty(B_1)}$, the modulus of continuity $\omega_f$ of $f_\beta$ in $B_1$, $\omega_a(r)$, $\omega_u(r)$ for $r\in (0,1)$, and $\|u\|_{L_{1,w}}$, where the weight $w=w(x)$ is equal to $(1+|x|)^{-d-\sigma}$. In particular, $u$ does not need to be globally bounded in $\mathbb{R}^d$.
\end{remark} Roughly speaking, the proof can be divided into two steps: We first show that Theorem \ref{thm 1} holds when the equation is satisfied in the whole space; Then we implement a localization argument to treat the general case. In Step one, our proof is based on a refined $C^{\sigma+\alpha}$ estimate in our previous paper \cite{DZ16} and a new perturbation type argument, as the standard perturbation techniques do not seem to work here. The novelty of this method is that instead of estimating $C^\sigma$ semi-norm of the solution, we construct and bound certain semi-norms of the solution, see Lemmas \ref{lem2.3} and \ref{lem2.4}. When $\sigma<1$, such semi-norm is defined as a series of lower-order H\"older semi-norms of $u$. This is in the spirit of Campanato's approach first developed in \cite{Ca66}. Heuristically, in order for the nonlocal operator to be well defined, the solution needs to be smoother than $C^\sigma$. To resolve this problem, we divide the integral domain into annuli, which allows us to use a lower-order semi-norm to estimate the integral in each annulus. The series of lower-order semi-norms, which turns out to be slightly stronger than the $C^\sigma$ semi-norm, further implies that $$ [u]_{\sigma;B_r(x_0)}\rightarrow 0\quad\text{as} \quad r\rightarrow 0 $$ uniformly in $x_0$. In particular, when $\sigma = 1$ we are able to estimate the modulus of continuity of the gradient of solutions. The proof of the case when $\sigma\ge 1$ is more difficult than that of the case when $\sigma<1$. This is mainly due to the fact that the series of lower-order H\"older semi-norms of the solution itself is no longer sufficient to estimate the $C^\sigma$ norm. Therefore, we need to subtract a polynomial from the solution in the construction of the semi-norm. In some sense, the polynomial should be taken to minimize the series. It turns out that when $\sigma>1$, up to a constant we can choose the polynomial to be the first-order Taylor's expansion of the solution. The case $\sigma= 1$ is particularly challenging since the polynomial needs to be selected carefully, for which an additional mollification argument is applied. The organization of this paper is as follows. In the next section, we introduce some notation and preliminary results that are necessary in the proof of our main theorem. Some of these results might be of independent interest. In section 3, we first prove a global version of Theorem \ref{thm 1} and then localize the result to obtain Theorem \ref{thm 1}. \section{Preliminaries} We will frequently use the following identity \begin{align} \label{eq10.22} &2^j\big(u(x+2^{-j}\ell)-u(x)\big)-\big(u(x+\ell)-u(x)\big)\nonumber\\ &=\sum_{k=1}^j 2^{k-1}\big(2u(x+2^{-k}\ell)-u(x+2^{-k+1}\ell)-u(x)\big), \end{align} which holds for any $\ell\in \mathbb{R}^d$ and nonnegative integer $j$. Denote $\mathcal{P}_1$ to be the set of first-order polynomials of $x$. \begin{lemma} \label{lem2.3} Let $\alpha\in (0,\sigma)$ be a constant. (i) When $\sigma\in (0,1)$, we have \begin{equation} \label{eq10.08} [u]_{\sigma}\le C\sup_{r>0}\sup_{x_0\in \mathbb{R}^{d}} r^{\alpha-\sigma}[u]_{\Lambda^\alpha(B_r(x_0))} \le C\sup_{r>0}\sup_{x_0\in \mathbb{R}^{d}} r^{\alpha-\sigma}[u]_{\alpha;B_r(x_0)} \end{equation} where $C>0$ is a constant depending only on $d$, $\alpha$, and $\sigma$. 
(ii) When $\sigma\in (1,2)$, we have \begin{equation} \label{eq10.09} [u]_{\sigma}\le C\sup_{r>0}\sup_{x_0\in \mathbb{R}^{d}} r^{\alpha-\sigma}[u]_{\Lambda^\alpha(B_r(x_0))} \le C\sup_{r>0}\sup_{x_0\in \mathbb{R}^{d}} r^{\alpha-\sigma}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_r(x_0)} \end{equation} where $C>0$ is a constant depending only on $d$, $\alpha$, and $\sigma$. (iii) When $\sigma=1$, we have \begin{align} \|Du\|_{L_\infty}&\le C\sum_{k=0}^\infty\sup_{x_0\in \mathbb{R}^{d}} 2^{-k(\alpha-1)}[u]_{\Lambda^{\alpha}(B_{2^{-k}}(x_0))} +C\sup_{\substack{x,x'\in \mathbb{R}^d\\|x-x'|=1}}|u(x)-u(x')|\nonumber\\ \label{eq10.10} &\le C\sum_{k=0}^\infty\sup_{x_0\in \mathbb{R}^{d}} 2^{-k(\alpha-1)}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k}}(x_0)}+C\sup_{\substack{x,x'\in \mathbb{R}^d\\|x-x'|=1}}|u(x)-u(x')| \end{align} where $C>0$ is a constant depending only on $d$ and $\alpha$. Moreover, we can estimate the modulus of continuity of $Du$ by the remainder of the summation on the right-hand side of \eqref{eq10.10}. \end{lemma} \begin{proof} First we consider the case when $\sigma\in (0,1)$. Let $x,x'\in \mathbb{R}^d$ be two different points. Denote $h=|x-x'|$. Since $$ u(x')-u(x)=\frac 1 2 \big(u(2x'-x)-u(x)\big)-\frac 1 2\big(u(2x'-x)-2u(x')+u(x)\big), $$ we get \begin{align*} &h^{-\sigma}|u(x')-u(x)|\\ &\le 2^{\sigma-1} (2h)^{-\sigma}\big(u(2x'-x)-u(x)\big)+h^{-\sigma}|u(2x'-x)-2u(x')+u(x)|\\ &\le 2^{\sigma-1} (2h)^{-\sigma}\big(u(2x'-x)-u(x)\big)+\sup_{x\in \mathbb{R}^{d}} h^{\alpha-\sigma}[u]_{\Lambda^\alpha(B_h(x))}. \end{align*} Taking the supremum with respect to $x$ and $x'$ on both sides, we get $$ [u]_\sigma\le 2^{\sigma-1}[u]_\sigma+\sup_{x\in \mathbb{R}^{d}} h^{\alpha-\sigma}[u]_{\Lambda^\alpha(B_h(x))}, $$ which together with the triangle inequality gives \eqref{eq10.08}. For $\sigma\in (1,2)$, let $\ell\in\mathbb{R}^d$ be a unit vector and $\varepsilon\in (0,1/16)$ be a small constant to be specified later. For any two distinct points $x,x'\in \mathbb{R}^d$, we denote $h=|x-x'|$. By the triangle inequality, \begin{equation} \label{eq8.43} h^{1-\sigma}|D_\ell u(x)-D_\ell u(x')| \le I_1+I_2+I_3, \end{equation} where \begin{align*} I_1&=h^{1-\sigma}|D_\ell u(x)-(\varepsilon h)^{-1}(u(x+\varepsilon h\ell)-u(x))|,\\ I_2&=h^{1-\sigma}|D_\ell u(x')-(\varepsilon h)^{-1}(u(x'+\varepsilon h\ell)-u(x'))|,\\ I_3&=h^{1-\sigma}(\varepsilon h)^{-1}|(u(x+\varepsilon h\ell)-u(x))-(u(x'+\varepsilon h\ell)-u(x'))|. \end{align*} By the mean value theorem, \begin{equation} \label{eq3.02b} I_1+I_2\le 2\varepsilon^{\sigma-1} [Du]_{\sigma}. \end{equation} Now we choose and fix a $\varepsilon$ sufficiently small depending only on $\sigma$ such that $2\varepsilon^{\sigma-1}\le 1/2$. Using the triangle inequality, we have \begin{align*} I_3 \le Ch^{-\sigma}\big(|u(x+\varepsilon h\ell)+u(x')-2u(\bar x)|+|u(x'+\varepsilon h\ell)+u(x)-2u(\bar x)|\big), \end{align*} where $\bar x=(x+\varepsilon h\ell+x')/2$. Thus, \begin{equation} \label{eq3.03b} I_3\le Ch^{\alpha-\sigma}[u]_{\Lambda^\alpha(B_{h}(\bar x))}. \end{equation} Combining \eqref{eq8.43}, \eqref{eq3.02b}, and \eqref{eq3.03b}, we get \eqref{eq10.09} as before. Finally, we treat the case when $\sigma=1$. It follows from \eqref{eq10.22} that \begin{align*} 2^j\big|u(x+2^{-j}\ell)-u(x)\big|\le 2|u(x+\ell)-u(x)|+\sum_{k=1}^j 2^{-k(\alpha-1)}[u]_{\Lambda^\alpha(B_{2^{-k}}(x+2^{-k}\ell))}. \end{align*} Taking $j\to \infty$, we obtain the desired inequality. For the continuity estimate, let $\ell\in\mathbb{R}^d$ be a unit vector. 
Assume that $|x-x'|\in [2^{-i-1},2^{-i})$ for some positive integer $i$. From \eqref{eq10.22}, for any $j\ge i+1$, \begin{align*} &2^j\big(u(x+2^{-j}\ell)-u(x)\big)-2^i\big(u(x+2^{-i}\ell)-u(x)\big)\\ &=\sum_{k=i+1}^j 2^{k-1}\big(2u(x+2^{-k}\ell)-u(x+2^{-k+1}\ell)-u(x)\big) \end{align*} and a similar identity holds with $x'$ in place of $x$. Then we have \begin{align*} &|D_\ell u(x)-D_\ell u(y)|=\lim_{j\to \infty}\Big|2^j\big(u(x+2^{-j}\ell)-u(x)\big) -2^j\big(u(x'+2^{-j}\ell)-u(x')\big)\Big|\\ &\le \Big|2^i\big(u(x+2^{-i}\ell)-u(x)\big) -2^i\big(u(x'+2^{-i}\ell)-u(x')\big)\Big|\\ &\quad + \sum_{k=i+1}^\infty\sup_{x_0\in \mathbb{R}^{d}} 2^{-k(\alpha-1)}[u]_{\Lambda^\alpha(B_{2^{-k}}(x_0))}. \end{align*} By the triangle inequality, the first term on the right-hand side is bounded by $$ 2^i|u(x+2^{-i}\ell)-2u(\bar x)+u(x')|+ 2^i|u(x'+2^{-i}\ell)-2u(\bar x)+u(x)| $$ with $\bar x=(x+2^{-i}+x')/2$, which is further bounded by $$ 2^{1+i(1-\alpha)}[u]_{\Lambda^\alpha(B_{2^{-i}}(\bar x))}. $$ Therefore, $$ |D_\ell u(x)-D_\ell u(y)|\le C\sum_{k=i}^\infty\sup_{x_0\in \mathbb{R}^{d}} 2^{-k(\alpha-1)}[u]_{\Lambda^\alpha(B_{2^{-k}}(x_0))}, $$ which converges to $0$ as $i\to \infty$ uniformly with respect to $\ell$. The lemma is proved. \end{proof} The following lemma will be used to estimate the error term in the freezing coefficient argument. \begin{lemma} \label{lem2.4} Let $\alpha\in (0,1)$ and $\sigma\in (1,2)$ be constants. Then for any $u\in C^1$, we have \begin{equation} \label{eq8.32} \sum_{k=0}^\infty 2^{k(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d} [u-P_{x_0}u]_{\alpha;B_{2^{-k}(x_0)}} \le C\sum_{k=0}^\infty 2^{k(\sigma-\alpha)} \sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k}(x_0)}} \end{equation} and \begin{equation} \label{eq10.32} \sum_{k=0}^\infty 2^{k\sigma} \sup_{x_0\in \mathbb{R}^d}\|u-P_{x_0}u\|_{L_\infty(B_{2^{-k}(x_0)})} \le C\sum_{k=0}^\infty 2^{k(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d} [u]_{\Lambda^{\alpha}(B_{2^{-k}(x_0)})}, \end{equation} where $P_{x_0}u$ is the first-order Taylor expansion of $u$ at $x_0$, and $C>0$ is a constant depending only on $d$, $\alpha$, and $\sigma$. \end{lemma} \begin{proof} Denote $$ b_k:=2^{k(\sigma-\alpha)} \sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k}}(x_0)}. $$ Then for any $x_0\in \mathbb{R}^d$ and each $k=0,1,\ldots$, there exists $p_k\in \mathcal{P}_1$ such that $$ [u-p_k]_{\alpha;B_{2^{-k}}(x_0)}\le 2b_k2^{-k(\sigma-\alpha)}. $$ By the triangle inequality, for $k\ge 1$ we have \begin{equation} \label{eq8.45} [p_{k-1}-p_k]_{\alpha;B_{2^{-k}}(x_0)}\le 2b_k2^{-k(\sigma-\alpha)} +2b_{k-1}2^{-(k-1)(\sigma-\alpha)}. \end{equation} It is easily seen that \begin{equation*} [p_{k-1}-p_k]_{\alpha;B_{2^{-k}}(x_0)}=|\nabla p_{k-1}-\nabla p_k|2^{-(k-1)(1-\alpha)}, \end{equation*} which together with \eqref{eq8.45} implies that \begin{equation} \label{eq8.47} |\nabla p_{k-1}-\nabla p_k|\le C(b_k+b_{k-1})2^{-k(\sigma-1)}. \end{equation} Since $\sum_0^k b_k<\infty$, from \eqref{eq8.47} we see that $\{\nabla p_k\}$ is a Cauchy sequence in $\mathbb{R}^d$. Let $q=q(x_0)\in \mathbb{R}^d$ be its limit, which clearly satisfies for each $k\ge 0$, $$ |q-\nabla p_k|\le C\sum_{j=k}^\infty 2^{-j(\sigma-1)}b_j. 
$$ By the triangle inequality, we get \begin{align} \label{eq9.08} &[u-q\cdot x]_{\alpha;B_{2^{-k}}(x_0)} \le [u-p_k]_{\alpha;B_{2^{-k}}(x_0)}+[p_k-q\cdot x]_{\alpha;B_{2^{-k}}(x_0)}\nonumber\\ &\le C2^{-k(1-\alpha)}\sum_{j=k}^\infty 2^{-j(\sigma-1)}b_j\le C2^{-k(\sigma-\alpha)}, \end{align} which implies that $$ \|u-u(x_0)-q\cdot (x-x_0)\|_{L_\infty(B_{2^{-k}}(x_0))}\le C2^{-k\sigma}, $$ and thus $q=\nabla u(x_0)$. It then follows \eqref{eq9.08} that \begin{align*} &\sum_{k=0}^\infty 2^{k(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d} [u-P_{x_0}u]_{\alpha;B_{2^{-k}}(x_0)}\le C\sum_{k=0}^\infty 2^{k(\sigma-1)} \sum_{j=k}^\infty 2^{-j(\sigma-1)}b_j\\ &=C\sum_{j=0}^\infty 2^{-j(\sigma-1)}b_j \sum_{k=0}^j 2^{k(\sigma-1)}\le C\sum_{j=0}^\infty b_j. \end{align*} This completes the proof of \eqref{eq8.32}. Next we show \eqref{eq10.32}. For any $x\in B_{2^{-k}}$, it follows from \eqref{eq10.22} that for $j\ge 1$, \begin{align*} &u(x)-u(0)-2^j\big(u(2^{-j}x)-u(0)\big)\nonumber\\ &=\sum_{i=0}^{j-1} 2^{i}\big(u(2^{-i}x)+u(0)-2u(2^{-i-1}x)\big). \end{align*} Sending $j\to \infty$, we obtain \begin{align*} &\big|u(x)-u(0)-x\cdot \nabla u(0)\big|\le \sum_{i=0}^\infty 2^{i}\big|u(2^{-i}x)+u(0)-2u(2^{-i-1}x)\big|\\ &\le 2^{-\alpha}\sum_{i=0}^\infty 2^{i-(i+k)\alpha}[u]_{\Lambda^{\alpha}(B_{2^{-(k+i)}})} =2^{-\alpha}\sum_{i=k}^\infty 2^{i-k-i\alpha}[u]_{\Lambda^{\alpha}(B_{2^{-i}})}, \end{align*} where we shifted the index in the last equality. Therefore, by shifting the coordinates and sum in $k$, we have \begin{align*} &\sum_{k=0}^\infty 2^{k\sigma}\sup_{x_0\in \mathbb{R}^d}\|u-P_{x_0}u\|_{L_\infty(B_{2^{-k}})(x_0)}\\ &\le C \sum_{k=0}^\infty 2^{k(\sigma-1)}\sum_{i=k}^\infty 2^{i(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\Lambda^{\alpha}(B_{2^{-i}}(x_0))}\\ &= C\sum_{i=0}^\infty 2^{i(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\Lambda^{\alpha}(B_{2^{-i}}(x_0))} \sum_{k=0}^i 2^{k(\sigma-1)}\\ &\le C\sum_{i=0}^\infty 2^{i(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\Lambda^{\alpha}(B_{2^{-i}}(x_0))}, \end{align*} where we switched the order of the summations in the second equality and in the last inequality we used the condition that $\sigma>1$. The lemma is proved. \end{proof} Let $\zeta\in C_0^\infty(B_{1})$ be a nonnegative radial function with unit integral. For $R>0$, we define the mollification of a function $u$ by $$ u^{(R)}(x)=\int_{\mathbb{R}^{d}} u(x-Ry)\zeta(y)\,dy. $$ The next lemmas will be used in the estimate of $M_j$ in Proposition \ref{prop1.1} when $\sigma= 1$. \begin{lemma} \label{lem2.1} Let $\beta\in (0,1]$, $\alpha\in (0,1+\beta)$, and $0<R\le R_1<\infty$. Then for any $u\in \Lambda^\alpha(B_{2R_1})$, we have \begin{equation} \label{eq11.15} [Du^{(R)}]_{\beta;B_{R_1}}\le C(d,\beta,\alpha) R^{\alpha-1-\beta}[u]_{\Lambda^\alpha(B_{2R_1})}. \end{equation} \end{lemma} \begin{proof} We begin by estimating $\|D_{\ell}^2u\|_{0;B_{R_1}}$ for a fixed unit vector $\ell\in \mathbb{R}^d$. Because $D_{\ell}^2\zeta$ is even with respect to $x$ and has zero integral, using integration by parts we have for any $x\in B_{R_1}$, \begin{align*} &|D_{\ell}^2u^{(R)}(x)|=R^{-2}\Big|\int_{\mathbb{R}^{d}} u(x-Ry)D_{\ell}^2\zeta(y)\,dy\Big|\\ &=\frac {R^{-2}}2\Big|\int_{\mathbb{R}^{d}} \big(u(x-Ry)+u(x+Ry)-2u(x)\big)D_{\ell}^2\zeta(y)\,dy\Big|\\ &\le CR^{\alpha-2}[u]_{\Lambda^\alpha(B_{2R_1})}\int_{\mathbb{R}^{d}} |y|^\alpha D_{\ell}^2\zeta(y)\,dy\le CR^{\alpha-2}[u]_{\Lambda^\alpha(B_{2R_1})}. 
\end{align*} Using the identity, $2D_{ij}u=2D_{\ell}^2u-D_i^2 u-D_j^2 u$, where $\ell=(e_i+e_j)/\sqrt 2$, we obtain the desired inequality \eqref{eq11.15} when $\beta=1$. Next we consider the case when $\beta\in (0,1)$. We follow the proof of Lemma \ref{lem2.3}. Let $\ell\in\mathbb{R}^d$ be a unit vector, and $\varepsilon\in (0,1/16)$ be a small constant to be specified later. For any two distinct points $x,x'\in B_{R_1}$, let $h=|x-x'|(<2R_1)$. It is easily seen that there exist two points $y\in B_{\varepsilon h}(x)\cap B_{R_1}$ and $y'\in B_{\varepsilon h}(x')\cap B_{R_1}$ such that $$ y+\varepsilon h\ell\in B_{\varepsilon h}(x)\cap B_{R_1},\quad y'+\varepsilon h\ell\in B_{\varepsilon h}(x')\cap B_{R_1}. $$ By the triangle inequality, $$ h^{-\beta}|D_\ell u^{(R)}(x)-D_\ell u^{(R)}(x')| \le I_1+I_2+I_3, $$ where \begin{align*} I_1&=h^{-\beta}|D_\ell u^{(R)}(x)-(\varepsilon h)^{-1}(u^{(R)}(y+\varepsilon h\ell)-u^{(R)}(y))|,\\ I_2&=h^{-\beta}|D_\ell u^{(R)}(x')-(\varepsilon h)^{-1}(u^{(R)}(y'+\varepsilon h\ell)-u^{(R)}(y'))|,\\ I_3&=h^{-\beta}(\varepsilon h)^{-1}|(u^{(R)}(y+\varepsilon h\ell)-u^{(R)}(y))-(u^{(R)}(y'+\varepsilon h\ell)-u^{(R)}(y'))|. \end{align*} By the mean value theorem, \begin{equation} \label{eq3.02} I_1+I_2\le 2\varepsilon^\beta [Du^{(R)}]_{\beta;B_{R_1}}. \end{equation} Now we choose $\varepsilon$ depending only on $d$ and $\beta$ such that $2\varepsilon^\beta\le 1/2$. To estimate $I_3$, we consider two cases. If $h>R$, by the triangle inequality, we have \begin{align*} I_3 &\le Ch^{-1-\beta}\big(|u^{(R)}(y+\varepsilon h\ell)+u^{(R)}(y')-2u^{(R)}(\bar y)|\\ &\quad+|u^{(R)}(y'+\varepsilon h\ell)+u^{(R)}(y)-2u^{(R)}(\bar y)|\big), \end{align*} where $\bar y=(y+\varepsilon h\ell+y')/2$. Then by the Minkowski inequality, \begin{equation} \label{eq3.03} I_3\le Ch^{\alpha-1-\beta}[u^{(R)}]_{\Lambda^\alpha(B_{R_1})}\le CR^{\alpha-1-\beta}[u]_{\Lambda^\alpha(B_{2R_1})}. \end{equation} On the other hand, if $h\in (0,R)$, by the mean value theorem and \eqref{eq11.15} with $\beta=1$, \begin{equation} \label{eq3.04} I_3 \le Ch^{1-\beta}[D u^{(R)}]_{1;B_{R_1}}\le Ch^{1-\beta}R^{\alpha-2}[u]_{\Lambda^\alpha(B_{2R_1})}\le CR^{\alpha-1-\beta}[u]_{\Lambda^\alpha(B_{2R_1})}. \end{equation} Combining \eqref{eq3.02}, \eqref{eq3.03}, and \eqref{eq3.04}, we obtain $$ h^{-\beta}|D_\ell u^{(R)}(x)-D_\ell u^{(R)}(x')|\le \frac 1 2 [Du^{(R)}]_{\beta;B_{R_1}}+CR^{\alpha-1-\beta}[u]_{\Lambda^\alpha(B_{2R_1})}. $$ Taking the supremum of the left-hand side above with respect to unit vector $\ell\in \mathbb{R}^d$ and $x,x'\in B_{R_1}$, we immediately get \eqref{eq11.15}. The lemma is proved. \end{proof} \begin{lemma} \label{lem2.2} Let $\alpha\in (0,1)$, $\beta\in (0,1)$, and $R>0$ be constants. Let $p=p(x)$ be the first-order Taylor expansion of $u^{(R)}$ at the origin and $\tilde u=u-p$. Then for any integer $j\ge 0$, we have \begin{align} \label{eq3.51} &\|\tilde u\|_{L_\infty(B_{2^{j+1} R})}\le C2^{j(1+\beta)}R^\alpha[u]_{\Lambda^\alpha(B_{2^{j+2} R})},\\ &\sup_{\substack{x,x'\in B_{2^jR}\\0<|x-x'|<2R}} \frac{|\tilde u(x)-\tilde u(x')|}{|x-x'|^\alpha} \le C2^{j\beta}[u]_{\Lambda^\alpha(B_{2^{j+2} R})}, \label{eq 12.62b} \end{align} where $C>0$ is a constant depending only on $d$, $\beta$, and $\alpha$. 
\end{lemma} \begin{proof} Since $\zeta\in C_0^\infty(B_1)$ is radial and has unit integral, we have for any $x\in B_{2^{j+1} R}$, \begin{align} &\big|u^{(R)}(x)-u(x)\big|\nonumber\\ &=\big|\frac 1 2 \int_{\mathbb{R}^d} \big(u(x+Ry)-u(x-Ry) -2u(x)\big)\zeta(y)\,dy\Big|\le CR^\alpha[u]_{\Lambda^\alpha(B_{2^{j+2}R})}. \label{eq10.31b} \end{align} By the mean value theorem and Lemma \ref{lem2.1}, for any $x\in B_{2^{j+1}R}$, \begin{equation*} \big|u^{(R)}(x)-p(x)\big| \le C(2^{j+1}R)^{1+\beta}[u^{(R)}]_{1+\beta;B_{2^{j+1}R}} \le C2^{j(1+\beta)}R^\alpha[u]_{\Lambda^\alpha(B_{2^{j+2}R})}, \end{equation*} which together with \eqref{eq10.31b} implies \eqref{eq3.51}. Next we show \eqref{eq 12.62b}. For any two distinct points $x,x'\in B_{2^j R}$ satisfying $0<|x-x'|<2R$, denote $h=|x-x'|(<2R)$. Let $k$ be the largest nonnegative integer such that $2^k (x'-x)+x\in B_{2^{j+1}R}$. Clearly, \begin{equation} \label{eq4.26} 2^k h\in (2^{j-1}R,2^{j+2}R). \end{equation} It follows from \eqref{eq10.22} that \begin{align} \label{eq10.12} &\tilde u(x')-\tilde u(x)= 2^{-k} \big(\tilde u(2^k (x'-x)+x)-\tilde u(x)\big)\nonumber\\ &\quad+\sum_{i=0}^{k-1} 2^{-i-1} \big(2\tilde u(2^i (x'-x)+x)-\tilde u(x)-\tilde u(2^{i+1} (x'-x)+x)\big). \end{align} By \eqref{eq4.26}, \eqref{eq10.12}, and \eqref{eq3.51}, we obtain \begin{align*} h^{-\alpha}|\tilde u(x')-\tilde u(x)| &\le 2^{-k+1}h^{-\alpha}\|\tilde u\|_{L_\infty(B_{2^{j+1} R})}+C[u]_{\Lambda^\alpha(B_{2^{j+1} R})}\\ &\le C2^{-j}R^{-1} h^{1-\alpha}\cdot 2^{j(1+\beta)}R^\alpha[u]_{\Lambda^\alpha(B_{2^{j+2} R})} +C[u]_{\Lambda^\alpha(B_{2^{j+1} R})}\\ &\le C2^{j\beta}[u]_{\Lambda^\alpha(B_{2^{j+2} R})}, \end{align*} where we used $h<2R$ in the last inequality. The lemma is proved. \end{proof} \section{Proofs} The following proposition is a further refinement of \cite[Corollary 4.6]{DZ16}. \begin{proposition}\label{prop1.1} Let $\sigma\in (0,2)$ and $0<\lambda\le \Lambda$. Assume that for any $\beta\in\mathcal{A}$, $K_{\beta}$ only depends on $y$. There is a constant $\hat{\alpha}\in (0,1)$ depending on $d,\sigma,\lambda$, and $\Lambda$ so that the following holds. Let $\alpha\in (0,\hat{\alpha})$. Suppose $u\in C^{\sigma+\alpha}(B_1)\cap C^{\alpha}(\mathbb{R}^d)$ is a solution of \begin{equation*} \inf_{\beta\in \mathcal{A}}(L_{\beta} u+f_{\beta})=0\quad \text{in}\,\, B_1 \end{equation*} Then, \begin{equation*} [u]_{\alpha+\sigma;B_{1/2}}\le C\sum_{j=1}^\infty 2^{-j\sigma}M_j +C\sup_{\beta}[f_{\beta}]_{\alpha; B_{1}}, \end{equation*} where $$ M_j=\sup_{x,x'\in B_{2^j},0<|x-x'|<2}\frac{|u(x)-u(x')|}{|x-x'|^\alpha}. $$ \end{proposition} \begin{proof} This follows from the proof of \cite[Corollary 4.6]{DZ16} by observing that in the estimate of $[h_{\beta}]_{\alpha;B_1}$, the term $[u]_{\alpha; B_{2^{j}}}$ can be replaced by $M_j$. Moreover, by replacing $u$ by $u-u(0)$, we see that $$ \|u\|_{\alpha;B_2}\le C[u]_{\alpha;B_2}. $$ The lemma is proved. \end{proof} \begin{proposition}\label{prop3.2} Suppose that \eqref{eq 1} is satisfied in $\mathbb{R}^d$. Then under the conditions of Theorem \ref{thm 1}, we have \begin{equation} \label{eq8.16} [u]_\sigma\le C\|u\|_{L_\infty}+C \sum_{k=1}^\infty \omega_f(2^{-k}), \end{equation} where $C>0$ is a constant depending only on $d$, $\lambda$, $\Lambda$, $\omega_a$, and $\sigma$. 
\end{proposition} \begin{proof} {\bf Case 1: $\sigma\in (0,1)$.} For $k\in \mathbb{N}$, let $v$ be the solution of \begin{align}\label{eq 12.181} \begin{cases} \inf_{\beta\in\mathcal{A}}\big(L_{\beta}(0)v+f_{\beta}(0)\big)=0\quad &\text{in}\,\, B_{2^{-k}}\\ v=u\quad &\text{in}\,\,B_{2^{-k}}^c \end{cases}, \end{align} where $L_{\beta}(0)$ is the operator with kernel $K_{\beta}(0,y)$. Then by Proposition \ref{prop1.1} with scaling, we have \begin{align} &[v]_{\alpha+\sigma;B_{2^{-k-1}}}\le C\sum_{j=1}^\infty 2^{(k-j)\sigma}M_j+C2^{k\sigma}[v]_{\alpha;B_{2^{-k}}} \nonumber\\ &\le C\sum_{j=1}^k 2^{(k-j)\sigma}M_j+C[u]_{\alpha}+C2^{k\sigma}[v]_{\alpha;B_{2^{-k}}}, \label{eq1.04} \end{align} where $\alpha\in (0,\hat\alpha)$ satisfying $\sigma+\alpha<1$ and $$ M_j=\sup_{x,x'\in B_{2^{j-k}},0<|x-x'|<2^{-k+1}}\frac{|u(x)-u(x')|}{|x-x'|^\alpha}. $$ Let $k_0,k_1\ge 1$ be integers to be specified. From \eqref{eq1.04}, we get \begin{align} [v]_{\alpha;B_{2^{-k-k_0}}}\le C2^{-(k+k_0)\sigma}\sum_{j=1}^k 2^{(k-j)\sigma}M_j+C2^{-(k+k_0)\sigma}[u]_{\alpha} +C2^{-k_0\sigma}[v]_{\alpha;B_{2^{-k}}}. \label{eq1.04b} \end{align} Next, $w:=u-v$ satisfies \begin{equation} \label{eq4.25} \begin{cases} \mathcal{M}^+w\ge -C_k\quad &\text{in}\,\, B_{2^{-k}},\\ \mathcal{M}^-w\le C_k\quad &\text{in}\,\, B_{2^{-k}},\\ w=0\quad &\text{in}\,\, B_{2^{-k}}^c, \end{cases} \end{equation} where $$ C_k=\sup_{\beta\in \mathcal{A}}\|f_{\beta}-f_{\beta}(0)+(L_{\beta}-L_{\beta}(0))u\|_{L_\infty(B_{2^{-k}})}. $$ It is easily seen that \begin{align*} C_k&\le \omega_f(2^{-k})+C\omega_a(2^{-k})\int_{\mathbb{R}^d}|u(x+y)-u(x)||y|^{-d-\sigma}\,dy\\ &\le \omega_f(2^{-k})+C\omega_a(2^{-k})\Big(\sup_{x_0\in B_{2^{-k}}}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\Big). \end{align*} Then by the H\"older estimate \cite[Lemma 2.5]{DZ16}, we have \begin{align} \label{eq3.05} &[w]_{\alpha;B_{2^{-k}}}\le C2^{-k(\sigma-\alpha)}C_k\nonumber\\ &\le C2^{-k(\sigma-\alpha)}\Big(\omega_f(2^{-k}) +\omega_a(2^{-k})\big(\sup_{x_0\in B_{2^{-k}}}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\big)\Big). \end{align} Combining \eqref{eq1.04b} and \eqref{eq3.05} yields \begin{align} \label{eq3.25} &2^{(k+k_0)(\sigma-\alpha)}[u]_{\alpha;B_{2^{-k-k_0}}}\nonumber\\ &\le C2^{-(k+k_0)\alpha}\sum_{j=1}^k 2^{(k-j)\sigma}[u]_{\alpha;B_{2^{j-k}}}+C2^{-(k+k_0)\alpha}[u]_{\alpha} +C2^{-k_0\alpha}2^{k(\sigma-\alpha)}[u]_{\alpha;B_{2^{-k}}}\nonumber\\ &\,\,+C2^{k_0(\sigma-\alpha)}\Big(\omega_f(2^{-k}) +\omega_a(2^{-k})\big(\sup_{x_0\in B_{2^{-k}}}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\big)\Big). \end{align} Shifting the coordinates, from \eqref{eq3.25} we ge \begin{align} \label{eq3.25b} &2^{(k+k_0)(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-k-k_0}}(x_0)}\nonumber\\ &\le C2^{-(k+k_0)\alpha}\sup_{x_0\in \mathbb{R}^d}\sum_{j=1}^k 2^{(k-j)\sigma}[u]_{\alpha;B_{2^{j-k}}(x_0)} +C2^{-(k+k_0)\alpha}[u]_{\alpha}\nonumber\\ &\quad +C2^{-k_0\alpha}2^{k(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-k}(x_0)}} +C2^{k_0(\sigma-\alpha)}\Big(\omega_f(2^{-k})\nonumber\\ &\quad +\omega_a(2^{-k})(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty})\Big). 
\end{align} We take the summation of \eqref{eq3.25b} in $k=k_1,k_1+1,\ldots$ to obtain \begin{align*} &\sum_{k=k_1}^\infty 2^{(k+k_0)(\sigma-\alpha)} \sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-k-k_0}}(x_0)}\\ &\le C\sum_{k=k_1}^\infty 2^{-(k+k_0)\alpha}\Big(\sup_{x_0\in \mathbb{R}^d}\sum_{j=1}^k 2^{(k-j)\sigma}[u]_{\alpha;B_{2^{j-k}}(x_0)}\Big) +C2^{-(k_1+k_0)\alpha}[u]_{\alpha}\\ &\quad+C2^{-k_0\alpha}\sum_{k=k_1}^\infty2^{k(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-k}(x_0)}}+C2^{k_0(\sigma-\alpha)} \sum_{k=k_1}^\infty\Big(\omega_f(2^{-k}) \\ &\quad+\omega_a(2^{-k})\big(\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} \sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\big)\Big), \end{align*} which by switching the order of summations is further bounded by \begin{align*} &C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)}\\ &\quad +C2^{-(k_1+k_0)\alpha}[u]_{\alpha}+C2^{k_0(\sigma-\alpha)} \sum_{k=k_1}^\infty \omega_f(2^{-k})\\ &\quad+C2^{k_0(\sigma-\alpha)}\sum_{k=k_1}^\infty\omega_a(2^{-k}) \cdot\Big(\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d} [u]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\Big). \end{align*} The bound above together with the obvious inequality $$ \sum_{j=0}^{k_1+k_0-1}2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)} \le C2^{(k_1+k_0)(\sigma-\alpha)}[u]_\alpha, $$ implies that \begin{align*} &\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)}\le C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)} \\ &\quad +C2^{(k_1+k_0)(\sigma-\alpha)}[u]_{\alpha}+C2^{k_0(\sigma-\alpha)} \sum_{k=k_1}^\infty \omega_f(2^{-k})\\ &\quad +C2^{k_0(\sigma-\alpha)}\sum_{k=k_1}^\infty\omega_a(2^{-k})\cdot \Big(\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} \sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)} +C\|u\|_{L_\infty}\Big). \end{align*} By first choosing $k_0$ sufficiently large and then $k_1$ sufficiently large, we get $$ \sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u]_{\alpha;B_{2^{-j}}(x_0)}\le C\|u\|_{\alpha}+C \sum_{k=1}^\infty \omega_f(2^{-k}), $$ which together with Lemma \ref{lem2.3} (i) and the interpolation inequality gives \eqref{eq8.16}. {\bf Case 2: $\sigma\in (1,2)$.} For $k\in \mathbb{N}$, let $v_M$ be the solution of \begin{align*} \begin{cases} \inf_{\beta\in\mathcal{A}}\big(L_{\beta}(0)v_M+f_{\beta}(0)\big)=0\quad &\text{in}\,\, B_{2^{-k}}\\ v_M=g_M\quad &\text{in}\,\,B_{2^{-k}}^c \end{cases}, \end{align*} where $M\ge 2\|u-p_0\|_{L_\infty(B_{2^{-k}})}$ is a constant to be specified later, \begin{align*} g_M = \max\big(\min(u-p_0,M),-M\big), \end{align*} and $p_0$ is the first-order Taylor's expansion of $u$ at the origin. By Proposition \ref{prop1.1}, instead of \eqref{eq1.04}, we have \begin{align} &[v_M]_{\alpha+\sigma;B_{2^{-k-1}}}\le C\sum_{j=0}^\infty 2^{(k-j)\sigma}M_j+C2^{k\sigma}[v_M]_{\alpha;B_{2^{-k}}} \nonumber\\ &\le C\sum_{j=0}^k 2^{(k-j)\sigma}M_j+C\|Du\|_{L_\infty}+C2^{k\sigma}[v_M]_{\alpha;B_{2^{-k}}}, \label{eq4.22} \end{align} where $\alpha\in (0,\hat\alpha)$ and $$ M_j=\sup_{x,x'\in B_{2^{j-k}},0<|x-x'|<2^{-k+1}}\frac{|u(x)-p_0(x)-u(x')+p_0(x')|}{|x-x'|^\alpha}. 
$$ From \eqref{eq4.22} and the mean value formula, \begin{align*} &\|v_M-p_1\|_{L_\infty(B_{2^{-k-k_0}})}\le C2^{-(k+k_0)(\sigma+\alpha)}\sum_{j=0}^k 2^{(k-j)\sigma}M_j\\ &\quad+C2^{-(k+k_0)(\sigma+\alpha)}\|Du\|_{L_\infty} +C2^{-k\alpha-k_0(\sigma+\alpha)}[v_M]_{\alpha;B_{2^{-k}}}, \end{align*} where $p_1$ is the first-order Taylor's expansion of $v_M$ at the origin. The above inequality, \eqref{eq4.22}, and the interpolation inequality imply \begin{align} &[v_M-p_1]_{\alpha;B_{2^{-k-k_0}}}\le C2^{-(k+k_0)\sigma}\sum_{j=0}^k 2^{(k-j)\sigma}M_j\nonumber\\ &\quad+C2^{-(k+k_0)\sigma}\|Du\|_{L_\infty}+ C2^{-k_0\sigma}[v_M]_{\alpha;B_{2^{-k}}}, \label{eq1.04bb} \end{align} Next $w_M:=g_M-v_M$ satisfies \begin{equation*} \begin{cases} \mathcal{M}^+w_M\ge h_M-C_k\quad &\text{in}\,\, B_{2^{-k}},\\ \mathcal{M}^-w_M\le \hat{h}_M+C_k\quad &\text{in}\,\, B_{2^{-k}},\\ w_M=0\quad &\text{in}\,\, B_{2^{-k}}^c, \end{cases} \end{equation*} where \begin{equation*} h_M :=\mathcal{M}^-(g_M-(u-p_0)),\quad \hat{h}_M:=\mathcal{M}^+(g_M-(u-p_0)). \end{equation*} By the dominated convergence theorem, it is easy to see that \begin{equation*} \|h_M\|_{L_\infty(B_{2^{-k}})},\,\, \|\hat{h}_M\|_{L_\infty(B_{2^{-k}})}\rightarrow 0\quad \text{as}\quad M\rightarrow \infty. \end{equation*} By the same argument as in the previous case, $$ C_k\le \omega_f(2^{-k})+C\omega_a(2^{-k})\Big(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}+\|Du\|_{L_\infty}\Big). $$ Thus similar to \eqref{eq3.05}, choosing $M$ sufficiently large so that \begin{equation*} \|h_M\|_{L_\infty(B_{2^{-k}})},\,\, \|\hat{h}_M\|_{L_\infty(B_{2^{-k}})}\le C_k/2, \end{equation*} we have \begin{align} \label{eq3.05bb} [w_M]_{\alpha;B_{2^{-k}}} &\le C2^{-k(\sigma-\alpha)}\Big(\omega_f(2^{-k}) +\omega_a(2^{-k})\|Du\|_{L_\infty}\nonumber\\ &\quad+\omega_a(2^{-k})\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}\Big). \end{align} Combining \eqref{eq1.04bb} and \eqref{eq3.05bb}, similar to \eqref{eq3.25b}, we obtain \begin{align} \label{eq3.25bb} &2^{(k+k_0)(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k-k_0}}(x_0)}\nonumber\\ &\le C2^{-(k+k_0)\alpha}\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^k 2^{(k-j)\sigma}[u-P_{x_0}u]_{\alpha;B_{2^{j-k}}(x_0)} +C2^{-(k+k_0)\alpha}\|Du\|_{L_\infty} \nonumber\\ &\quad +C2^{-k_0\alpha}2^{k(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u-P_{x_0}]_{\alpha;B_{2^{-k}(x_0)}}+C2^{k_0(\sigma-\alpha)} \Big(\omega_f(2^{-k}) \nonumber\\ &\quad +\omega_a(2^{-k})(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}+\|Du\|_{L_\infty})\Big). 
\end{align} Using \eqref{eq3.25bb}, as before we get \begin{align} \label{eq9.06} &\sum_{k=k_1}^\infty 2^{(k+k_0)(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k-k_0}}(x_0)}\nonumber\\ &\le C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}\nonumber\\ &\quad+C2^{-(k_1+k_0)\alpha}\|u\|_{1}+C2^{k_0(\sigma-\alpha)} \sum_{k=k_1}^\infty \omega_f(2^{-k})\nonumber\\ &\quad+C2^{k_0(\sigma-\alpha)}\sum_{k=k_1}^\infty\omega_a(2^{-k})\cdot\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}, \end{align} and \begin{align*} &\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}\\ &\le C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}[u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}\\ &\quad+C2^{(k_1+k_0)(\sigma-\alpha)}\|u\|_{1}+C2^{k_0(\sigma-\alpha)} \sum_{k=k_1}^\infty \omega_f(2^{-k})\\ &\quad+C2^{k_0(\sigma-\alpha)}\sum_{k=k_1}^\infty\omega_a(2^{-k})\cdot\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(\sigma-\alpha)} [u-P_{x_0}u]_{\alpha;B_{2^{-j}}(x_0)}. \end{align*} By choosing $k_0$ and $k_1$ sufficiently large and applying Lemma \ref{lem2.4}, we obtain \begin{equation} \label{eq12.03} \sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}\le C\|u\|_{1}+C\sum_{k=1}^\infty \omega_f(2^{-k}). \end{equation} Finally, by Lemma \ref{lem2.3} (ii) and the interpolation inequality, we get \eqref{eq8.16}. {\bf Case 3: $\sigma=1$.} We proceed as in the previous case, but instead take $p_0$ to be the first-order Taylor's expansion of the mollification $u^{(2^{-k})}$ at the origin. We also assume that the solution $v$ to \eqref{eq 12.181} exists without carrying out another approximation argument. By Proposition \ref{prop1.1} and Lemma \ref{lem2.2} with $\beta=\alpha/2$, \begin{align} &[v]_{\alpha+1;B_{2^{-k-1}}}\le C\sum_{j=0}^\infty 2^{k-j}M_j+C2^k[v]_{\alpha;B_{2^{-k}}}\nonumber\\ &\le C\sum_{j=0}^\infty 2^{k-j+j\alpha/2}[u]_{\Lambda^\alpha(B_{2^{j-k}})} +C2^k[v]_{\alpha;B_{2^{-k}}}\nonumber\\ &\le C\sum_{j=0}^k 2^{k-j+j\alpha/2}[u]_{\Lambda^\alpha(B_{2^{j-k}})} +C2^{k\alpha/2}[u]_{\alpha}+C2^k[v]_{\alpha;B_{2^{-k}}}. \label{eq4.22c} \end{align} From \eqref{eq4.22c} and the interpolation inequality, we obtain \begin{align} &[v-p_1]_{\alpha;B_{2^{-k-k_0}}}\nonumber\\ &\le C2^{-(k+k_0)}\sum_{j=0}^k 2^{k-j+j\alpha/2}[u]_{\Lambda^\alpha(B_{2^{j-k}})} +C2^{-(k+k_0)+k\alpha/2}[u]_{\alpha}+C2^{-k_0}[v]_{\alpha;B_{2^{-k}}}\nonumber\\ &\le C2^{-(k+k_0)}\sum_{j=0}^k 2^{k-j+j\alpha/2}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{j-k}}}\nonumber\\ &\quad+C2^{-(k+k_0)+k\alpha/2}[u]_{\alpha}+C2^{-k_0}[v]_{\alpha;B_{2^{-k}}}, \label{eq1.04c} \end{align} where $p_1$ is the first-order Taylor's expansion of $v$ at the origin. Next $w:=u-p_0-v$ satisfies \eqref{eq4.25}, where by the cancellation property \eqref{eq10.58}, $$ C_k\le \omega_f(2^{-k})+C\omega_a(2^{-k})\Big(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(1-\alpha)}\inf_{p\in \mathcal{P}_1} [u-p]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\Big). 
$$ Therefore, similar to \eqref{eq3.05}, we have \begin{align} \label{eq3.05c} [w]_{\alpha;B_{2^{-k}}} &\le C2^{-k(1-\alpha)}\Big(\omega_f(2^{-k})\nonumber\\ &\quad+\omega_a(2^{-k})\big(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(1-\alpha)} \inf_{p\in \mathcal{P}_1} [u-p]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\big)\Big). \end{align} Notice that from \eqref{eq 12.62b} and the triangle inequality \begin{align*} &[v]_{\alpha;B_{2^{-k}}} \le [w]_{\alpha;B_{2^{-k}}}+ [u-p_0]_{\alpha;B_{2^{-k}}}\\ &\le [w]_{\alpha;B_{2^{-k}}}+C[u]_{\Lambda^\alpha(B_{2^{-k+2}})} \le [w]_{\alpha;B_{2^{-k}}}+C\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k+2}}}. \end{align*} Similar to \eqref{eq3.25b}, combining \eqref{eq1.04c}, \eqref{eq3.05c}, and the inequality above, we obtain \begin{align*} &2^{(k+k_0)(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k-k_0}}(x_0)}\\ &\le C2^{-(k+k_0)\alpha}\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^k 2^{k-j+j\alpha/2}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{j-k}}(x_0)}+C2^{-(k/2+k_0)\alpha}[u]_{\alpha}\nonumber\\ &\quad + C2^{-k_0\alpha+(k-2)(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k+2}(x_0)}}+C2^{k_0(1-\alpha)}\Big(\omega_f(2^{-k}) \\ &\quad +\omega_a(2^{-k})\big(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(1-\alpha)} \inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}\big)\Big), \end{align*} which by summing in $k=k_1,k_1+1,\ldots$ implies that \begin{align*} &\sum_{k=k_1}^\infty 2^{(k+k_0)(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-k-k_0}}(x_0)}\\ &\le C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}\\ &\quad+C2^{-(k/2+k_0)\alpha}[u]_{\alpha}+C2^{k_0(1-\alpha)} \sum_{k=k_1}^\infty\omega_f(2^{-k}) +C2^{k_0(1-\alpha)}\sum_{k=k_1}^\infty\omega_a(2^{-k})\\ &\qquad\cdot(\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^\infty 2^{j(1-\alpha)} \inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}), \end{align*} where for the first term on the right-hand side, we switched the order of summations to get \begin{align*} &\sum_{k=k_1}^\infty2^{-(k+k_0)\alpha}\sup_{x_0\in \mathbb{R}^d}\sum_{j=0}^k 2^{k-j+j\alpha/2}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{j-k}}(x_0)}\\ &\le \sum_{k=0}^\infty2^{-(k+k_0)\alpha}\sum_{j=0}^k 2^{j+(k-j)\alpha/2} \sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}\\ &=2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(1-\alpha/2)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}\sum_{k=j}^\infty 2^{-k\alpha/2}\\ &\le C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}. \end{align*} Therefore, \begin{align*} &\sum_{j=0}^\infty 2^{j(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}\\ &\le C2^{-k_0\alpha}\sum_{j=0}^\infty 2^{j(1-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)} \\ &\quad+C2^{(k_1+k_0)(1-\alpha)}[u]_{\alpha}+C2^{k_0(1-\alpha)} \sum_{k=k_1}^\infty\omega_f(2^{-k}) +C2^{k_0(1-\alpha)}\sum_{k=k_1}^\infty\omega_a(2^{-k})\\ &\qquad\cdot(\sum_{j=0}^\infty 2^{j(1-\alpha)}\sup_{x_0\in \mathbb{R}^d} \inf_{p\in \mathcal{P}_1}[u-p]_{\alpha;B_{2^{-j}}(x_0)}+\|u\|_{L_\infty}), \end{align*} Finally, to get \eqref{eq8.16} it suffices to choose $k_0$ and $k_1$ sufficiently large and apply Lemma \ref{lem2.3} (iii). 
\end{proof} Next we employ a localization argument as in \cite{DZ16}. \begin{proof}[Proof of Theorem \ref{thm 1}] Since the proof of the case when $\sigma\in(0,1)$ is almost the same as $\sigma\in(1,2)$ and actually simpler, we only present the latter and sketch the proof of the case when $\sigma = 1$ in the end. {\bf The case when $\sigma\in(1,2)$.} We divide the proof into three steps. {\em Step 1.} For $k=1,2,\ldots$, denote $B^k := B_{1-2^{-k}}$. Let $\eta_k\in C_0^\infty(B^{k+1})$ be a sequence of nonnegative smooth cutoff functions satisfying $\eta\equiv 1$ in $B^{k}$, $|\eta|\le 1$ in $B^{k+1}$, and $\|D^i\eta_k\|_{L_\infty}\le C2^{ki}$ for each $i\ge 0$. Set $v_k := u\eta_k\in C^{\sigma+}$. A simple calculation reveals that \begin{equation*} \inf_{\beta\in \mathcal{A}}(L_\beta v_k-h_{k\beta}+\eta_k f_\beta)=0\quad \text{in}\,\, \mathbb{R}^d, \end{equation*} where \begin{equation*} h_{k\beta}=h_{k\beta}(x) = \int_{\mathbb{R}^d}\frac{\xi_k(x,y)a_\beta(x,y)}{|y|^{d+\sigma}}\,dy \end{equation*} and \begin{equation*} \xi_k(x,y) = u(x+y)(\eta_k(x+y)-\eta_k(x))-y\cdot D\eta_k(x)u(x). \end{equation*} Obviously, $\eta_k f_\beta$ is a Dini continuous function in $\mathbb{R}^d$ and \begin{align*} &|\eta_k(x)f_\beta(x)-\eta_k(x')f_\beta(x')|\\ &\le \|\eta_k\|_{L_\infty}\omega_f(|x-x'|)+\|f_\beta\|_{L_\infty(B_1)}\|D\eta_k\|_{L_\infty}|x-x'|\\ &\le \omega_f(|x-x'|)+C2^{k}\|f_\beta\|_{L_\infty(B_1)}|x-x'|, \end{align*} where $C$ only depends on $d$. {\em Step 2.} We first estimate the $L_\infty$ norm of $h_{k\beta}$. By the fundamental theorem of calculus, \begin{align*} \xi_k(x,y) = y\cdot\int_{0}^1 u(x+y)D\eta_k(x+ty)-u(x)D\eta_k(x)\,dt. \end{align*} For $|y|\ge 2^{-k-3}$, $|\xi_k(x,y)|\le C2^{k}|y|\|u\|_{L_\infty}$. For $|y|<2^{-k-3}$, we can further write \begin{equation*} \xi_k(x,y) = y\cdot\int_{0}^1(u(x+y)-u(x))D\eta_k(x+ty)+u(x)(D\eta_k(x+ty)-D\eta_k(x))\,dt, \end{equation*} where the second term on the right-hand side is bounded by $C2^{2k}|y|^2|u(x)|$. To estimate the first term, we consider two cases: when $|x|\ge1-2^{-k-2}$, because $|y|<2^{-k-3}$, $\xi_k(x,y)\equiv 0$; when $|x|<1-2^{-k-2}$, we have \begin{equation*} \Big|y\cdot\int_0^1(u(x+y)-u(x))D\eta_k(x+ty)\,dt\Big|\le C2^{k}|y|^2\|Du\|_{L_\infty(B^{k+3})}. \end{equation*} \begin{comment} For the case $7/8\le |x|<8/9$, there are two situations: first when $[x,x+y]\subset B_1$, then estimate is the same as the case $x\in B_{7/8}$; second if there exists $t_0\in (0,1)$ so that $|x+t_0y| = 1$, then \begin{align*} &y\int_0^1(u(x+y)-u(x))D\eta(x+ty)\,dt \\ &= y\int_0^1(u(x+y)-u(x))(D\eta(x+ty)-D\eta(x+t_0y))\,dt\le C|y|^2\|u\|_{L_\infty}. \end{align*} \end{comment} Hence for $|y|<2^{-k-3}$, \begin{equation*} |\xi_k(x,y)|\le C|y|^2\big(2^{2k}|u(x)|+2^{k}\|Du\|_{L_\infty(B^{k+3})}\big). \end{equation*} Combining with the case when $|y|>2^{-k-3}$, we see that \begin{equation} \label{eq11.26} \|h_{k\beta}\|_{L_\infty}\le C2^{\sigma k}\big(\|u\|_{L_\infty}+\|Du\|_{L_\infty(B^{k+3})}\big). \end{equation} Next we estimate the modulus of continuity of $h_{k\beta}$. By the triangle inequality, \begin{align} \label{eq11.39} &|h_{k\beta}(x)-h_{k\beta}(x')|\nonumber \\ &\le \int_{\mathbb{R}^d}\frac{|(\xi_k(x,y)-\xi_k(x',y))a_\beta(x,y)|}{|y|^{d+\sigma}} +\frac{|\xi_k(x',y)(a_\beta(x,y)-a_\beta(x',y))|}{|y|^{d+\sigma}}\,dy\nonumber\\ &:= {\rm I}+{\rm II}. 
\end{align} Similar to \eqref{eq11.26}, by the estimates of $|\xi_k(x,y)|$ above, we have \begin{equation} \label{eq11.40} {\rm II}\le C2^{\sigma k}\big(\|u\|_{L_\infty}+\|Du\|_{L_\infty(B^{k+3})}\big)\omega_a(|x-x'|), \end{equation} where $C$ depends on $d$, $\sigma$, and $\Lambda$. For $I$, by the fundamental theorem of calculus, \begin{align*} &\xi_k(x,y)-\xi_k(x',y) = y\cdot\int_0^1\Big(u(x+y)D\eta_k(x+ty)-u(x)D\eta_k(x)\\ &\quad -u(x'+y)D\eta_k(x',x+ty)+u(x')D\eta_k(x')\Big)\,dt. \end{align*} When $|y|\ge2^{-k-3}$, similar to the estimate of $\xi_k(x,y)$, it follows that \begin{equation} \label{eq11.41} |\xi_k(x,y)-\xi_k(x',y)|\le C|y|\big(2^{k}\omega_u(|x-x'|)+2^{2k}\|u\|_{L_\infty}|x-x'|\big). \end{equation} The case when $|y|<2^{-k-3}$ is a bit more delicate. First, by the fundamental theorem of calculus, \begin{align*} &|\xi_k(x,y)-\xi_k(x',y)|\\ &\le |y|\int_0^1|(u(x+y)-u(x))D_k\eta(x+ty)-(u(x'+y)-u(x'))D\eta_k(x'+ty)|\,dt\\ &\quad +|y|^2\int_0^1\int_0^1|u(x)D^2\eta_k(x+tsy)-u(x')D^2\eta_k(x'+tsy)|\,dt\,ds := {\rm III}+{\rm IV}. \end{align*} It is easily seen that \begin{align*} {\rm IV}\le C|y|^2(2^{2k}\omega_u(|x-x'|)+2^{3k}\|u\|_{L^\infty}|x-x'|). \end{align*} Next we bound ${\rm III}$ by considering four cases. When $x,x' \in (B^{k+2})^c$, we have ${\rm III}\equiv 0$. When $x,x'\in B^{k+2}$, \begin{align*} {\rm III} &\le |y|^2 \int_0^1\int_0^1 |Du(x+sy)D\eta_k(x+ty)-Du(x'+sy)D\eta_k(x'+ty)|\,ds\,dt\\ &\le C|y|^2\big(2^{k}[u]_{1+\alpha;B^{k+3}}|x-x'|^\alpha +2^{2k}\|Du\|_{L_\infty(B^{k+3})}|x-x'|\big), \end{align*} where we choose $\alpha = \frac{\sigma-1}{2}.$ When $x\in B^{k+2}$ and $x'\in (B^{k+2})^c$, \begin{align*} {\rm III}& = |y|\int_0^1|(u(x+y)-u(x))D\eta_k(x+ty)|\,dt \\ &\le |y|^2\int_0^1\int_0^1|Du(x+sy)(D\eta_k(x+ty)-D\eta_k(x'+ty))|\,ds\,dt\\ &\le C|y|^22^{2k}\|Du\|_{L_\infty(B^{k+3})}|x-x'|. \end{align*} The last case is similar. In conclusion, we obtain \begin{align*} {\rm III}\le C|y|^2\big(2^{k}[u]_{1+\alpha;B^{k+3}}|x-x'|^\alpha +2^{2k}\|Du\|_{L_\infty(B^{k+3})}|x-x'|\big). \end{align*} Combining the estimates of ${\rm III}, {\rm IV}$, and \eqref{eq11.41}, we obtain \begin{align} \label{eq11.42} {\rm I} &\le C2^{k(\sigma+1)}\big(\omega_u(|x-x'|)+[u]_{1+\alpha;B^{k+3}}|x-x'|^\alpha\nonumber\\ &\quad +(\|Du\|_{L_\infty(B^{k+3})}+\|u\|_{L_\infty})|x-x'|\big). \end{align} By combining \eqref{eq11.39}, \eqref{eq11.40}, and \eqref{eq11.42}, we obtain \begin{align*} |h_{k\beta}(x)-h_{k\beta}(x')|&\le \omega_h(|x-x'|) \end{align*} where \begin{align} \label{eq9.32} &\omega_h(r) := C2^{\sigma k}\big(\|u\|_{L_\infty}+\|Du\|_{L_\infty(B^{k+3})}\big)\omega_a(r)\nonumber\\ &\quad +C2^{k(\sigma+1)}\big(\omega_u(r)+[u]_{1+\alpha;B^{k+3}}r^\alpha +(\|Du\|_{L_\infty(B^{k+3})}+\|u\|_{L_\infty})r\big) \end{align} is a Dini function. {\em Step 3.} We apply Proposition \ref{prop3.2} to $v_k$ to obtain \begin{align*} [v_k]_\sigma&\le C\|v_k\|_{L_\infty}+C\sum_{j=1}^\infty \big(\omega_h(2^{-j})+\omega_f(2^{-j})\big)+C2^k \sup_\beta\|f_\beta\|_{L_\infty(B_1)}\\ &\le C\|v_k\|_{L_\infty}+C2^{k(\sigma+1)} \big([u]_{1+\alpha;B^{k+3}}+\|Du\|_{L_\infty(B^{k+3})}+\|u\|_{L_\infty}\big)\\ &\quad +C\sum_{j=1}^\infty\big(2^{k(\sigma+1)}\omega_u(2^{-j}) +\omega_f(2^{-j})\big)+C2^k\sup_\beta\|f_\beta\|_{L_\infty(B_1)}, \end{align*} where $C$ depends on $d$, $\lambda$, $\Lambda$, $\sigma$, and $\omega_a$, but independent of $k$. 
Since $\eta_k\equiv 1$ in $B^k$, it follows that \begin{align} \nonumber [u]_{\sigma;B^{k}}&\le C2^{k(\sigma+1)}\|u\|_{L_\infty}+C2^{k(\sigma+1)} \big([u]_{1+\alpha;B^{k+3}}+\|Du\|_{L_\infty(B^{k+3})}\big)\\ \label{eq12.141} &\quad +C_0\sum_{j=1}^\infty\big(2^{k(\sigma+1)}\omega_u(2^{-j}) +\omega_f(2^{-j})\big) +C2^k\sup_\beta\|f_\beta\|_{L_\infty(B_1)}. \end{align} By the interpolation inequality, for any $\epsilon\in(0,1)$ \begin{equation} \label{eq12.142} [u]_{1+\alpha;B^{k+3}}+\|Du\|_{L_\infty(B^{k+3})} \le \epsilon[u]_{\sigma;B^{k+3}}+C\epsilon^{-\frac{1+\alpha}{\sigma-(1+\alpha)}}\|u\|_{L_\infty}. \end{equation} Recall that $\alpha = \frac{\sigma-1}{2}$ and denote $$ N := \frac{1+\alpha}{\sigma-(1+\alpha)} = \frac{\sigma+1}{\sigma-1}(>3). $$ Combining \eqref{eq12.141} and \eqref{eq12.142} with $\epsilon = C_0^{-1}2^{-3k-12N-1}$, we obtain \begin{align*} &[u]_{\sigma;B^k}\le C2^{3k+(3k+12N)N}\|u\|_{L_\infty}+ 2^{-12N-1}[u]_{\sigma;B^{k+3}}\\ &\quad +C2^k\sup_\beta\|f_\beta\|_{L_\infty(B_1)}+C \sum_{j=1}^\infty\big(2^{3k}\omega_u(2^{-j})+\omega_f(2^{-j})\big). \end{align*} Then we multiply $2^{-4kN}$ to both sides of the inequality above and get \begin{align*} &2^{-4kN}[u]_{\sigma;B^k}\le C2^{3k-kN}\|u\|_{L_\infty}+2^{-4N(k+3)-1}[u]_{\sigma;B^{k+3}}\\ &\quad +C2^{-4kN+k}\sup_\beta\|f_\beta\|_{L_\infty(B_1)}+C2^{-kN} \sum_{j=1}^\infty\big(\omega_u(2^{-j})+\omega_f(2^{-j})\big). \end{align*} We sum up the both sides of the inequality above and obtain \begin{align*} &\sum_{k=1}^\infty2^{-4kN}[u]_{\sigma;B^k}\le C\sum_{k=1}^{\infty}2^{3k-kN}\|u\|_{L_\infty} +\frac{1}{2}\sum_{k=4}^\infty2^{-4kN}[u]_{\sigma;B^k}\\ &\quad +C\sum_{k=1}^\infty2^{-4kN+k}\sup_\beta\|f_\beta\|_{L_\infty(B_1)} +C \sum_{j=1}^\infty\big(\omega_u(2^{-j})+\omega_f(2^{-j})\big), \end{align*} which further implies that \begin{align*} \sum_{k=1}^\infty2^{-4kN}[u]_{\sigma;B^k}\le C\|u\|_{L_\infty}+C\sup_\beta\|f_\beta\|_{L_\infty(B_1)} +C\sum_{j=1}^\infty\big(\omega_u(2^{-j})+\omega_f(2^{-j})\big), \end{align*} where $C$ depends on $d$, $\lambda$, $\Lambda$, $\sigma$, and $\omega_a$. In particular, when $k=4$, we deduce \begin{equation} \label{eq12.09} [u]_{\sigma;B^{4}}\le C\|u\|_{L_\infty}+C\sup_\beta\|f_\beta\|_{L_\infty(B_1)} +C\sum_{j=1}^\infty\big(\omega_u(2^{-j})+\omega_f(2^{-j})\big), \end{equation} which apparently implies \eqref{eq12.17}. Finally, since $\|v_1\|_{1}$ is bounded by the right-hand side \eqref{eq12.09}, from \eqref{eq12.03}, we see that \begin{align*} \sum_{j=0}^\infty 2^{j(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[v_1-p]_{\alpha;B_{2^{-j}}(x_0)}\le C. \end{align*} This and \eqref{eq9.06} with $u$ replaced by $v_1$ and $f_\beta$ replaced by $\eta_1f_\beta-h_{1\beta}$ give \begin{align*} &\sum_{j=k_1}^\infty 2^{(j+k_0)(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[v_{1}-p]_{\alpha;B_{2^{-j-k_0}}(x_0)}\\ &\le C2^{-k_0\alpha}+C2^{k_0(\sigma-\alpha)} \sum_{j=k_1}^\infty \big(\omega_f(2^{-j})+\omega_a(2^{-j})+\omega_u(2^{-j})+2^{-j\alpha}\big), \end{align*} Here we also used Lemma \ref{lem2.4} and \eqref{eq9.32} with $k=1$. 
Therefore, for any small $\varepsilon>0$, we can find $k_0$ sufficiently large then $k_1$ sufficiently large, depending only on $C$, $\sigma$, $\alpha$, $\omega_f$, $\omega_a$, $\omega_f$, and $\omega_u$, such that $$ \sum_{j=k_1}^\infty 2^{(j+k_0)(\sigma-\alpha)}\sup_{x_0\in \mathbb{R}^d}\inf_{p\in \mathcal{P}_1}[v_1-p]_{\alpha;B_{2^{-j-k_0}}(x_0)}<\varepsilon, $$ which, together with the fact that $v_1 = u$ in $B_{1/2}$ and the proof of Lemma \ref{lem2.3} (ii), indicates that $$ \sup_{x_0\in B_{1/2}} [u]_{\sigma;B_r(x_0)}\to 0 \quad\text{as}\quad r\to 0 $$ with a decay rate depending only on $d$, $\lambda$, $\Lambda$, $\omega_a$, $\omega_f$, $\omega_u$, $\sup_{\beta\in \mathcal{A}}\|f_\beta\|_{L_\infty(B_1)}$, and $\sigma$. Hence, the proof of the case when $\sigma\in (1,2)$ is completed. {\bf The case when $\sigma = 1$.} The proof is very similar to the case when $\sigma\in (1,2)$ and we only provide a sketch here. We use the same notation as in the previous case \begin{equation*} h_{k\beta}(x) = \int_{\mathbb{R}^d}\frac{\xi_k(x,y) a_\beta(x,y)}{|y|^{d+1}}\,dy, \end{equation*} where $$ \xi_k(x,y) := u(x+y)(\eta_k(x+y)-\eta_k(x))-u(x)y\cdot D\eta_k(x)\chi_{B_1}. $$ It is easy to see that when $|y|\ge2^{-k-3}$, $$|\xi_k(x,y)|\le C2^k|y|\|u\|_{L_\infty}.$$ On the other hand, when $|y|<2^{-k-3}$, \begin{align*} |\xi_k(x,y)| &\le |y|\int_0^1|u(x+y)D\eta_k(x+ty)-u(x)D\eta_k(x)|\,dt\\ &\le C2^{k}|y|w_u(|y|)+C2^{2k}|y|^2|u(x)|. \end{align*} Therefore, \begin{align*} \|h_{k\beta}\|_{L_\infty}\le C2^k\Big(\|u\|_{L_\infty}+\int_0^1\frac{w_u(r)}{r}\,dr\Big). \end{align*} Next we estimate the modulus of continuity of $h_{k\beta}$ and proceed as in the case when $\sigma\in (1,2)$. Indeed, it is easily seen that \begin{equation*} {\rm II} \le C2^k\Big(\|u\|_{L_\infty} +\int_0^1\frac{\omega_u(r)}{r}\,dr\Big)\,\omega_a(|x-x'|). \end{equation*} To estimate ${\rm I}$, we write \begin{align*} \xi_k(x,y)-\xi_k(x',y) = u(x+y)(\eta_k(x+y)-\eta_k(x))-u(x)y\cdot D\eta_k(x)\chi_{B_1}\\ -u(x'+y)(\eta_k(x'+y)-\eta_k(x'))+u(x')y\cdot D\eta_k(x')\chi_{B_1}. \end{align*} Obviously, when $|y|\ge2^{-k-3}$ \begin{align} \label{eq2.03} |\xi_k(x,y)-\xi_k(x',y)|\le C2^{2k}|y|\big(\|u\|_{L_\infty}|x-x'| + \omega_u(|x-x'|)\big). \end{align} When $|y|<2^{-k-3}$, we have $\chi_{B_1}(y) = 1$. Thus similar to the first case, \begin{align*} &|\xi_k(x,y)-\xi_k(x',y)|\\ &\le |y|\int_0^1|(u(x+y)-u(x))D\eta_k(x+ty)-(u(x'+y)-u(x'))D\eta_k(x'+ty)|\,dt\\ &\quad +|y|^2\int_0^1\int_0^1|u(x)D^2\eta_k(x+tsy)-u(x')D^2\eta_k(x'+tsy)|\,dt\,ds := {\rm III}+{\rm IV}. \end{align*} Clearly, \begin{align*} {\rm IV}\le C2^{3k}|y|^2(\omega_u(|x-x'|)+\|u\|_{L_\infty}|x-x'|). \end{align*} When $x,x' \in (B^{k+2})^c$, we have ${\rm III}\equiv 0$. When $x,x'\in B^{k+2}$, by the triangle inequality, \begin{align*} {\rm III}&\le |y|\int_{0}^1|u(x+y)-u(x)-(u(x'+y)-u(x'))||D\eta_k(x+ty)|\,dt\\ &\quad +|y|\int_0^1|u(x'+y)-u(x')||D\eta_k(x+ty)-D\eta_k(x'+ty)|\,dt\\ &\le C2^{k}|y|^{1+\gamma}|x-x'|^{\zeta}[u]_{\zeta+\gamma;B^{k+3}} +C2^{2k}|y|\omega_u(|y|)|x-x'|, \end{align*} where $C$ depends on $d$, and $\zeta+\gamma<1$. Here we used the inequality \begin{align*} |u(x+y)-u(x)-(u(x'+y)-u(x'))|\le 2[u]_{\gamma+\zeta}|x-x'|^\zeta|y|^\gamma. \end{align*} Set $\gamma=\zeta = 1/4$. When $x\in B^{k+2}$ and $x'\in (B^{k+2})^c$, \begin{align*} {\rm III}&=|y|\int_0^1|(u(x+y)-u(x))D\eta_k(x+ty)|\,dt\\ &=|y|\int_0^1|(u(x+y)-u(x))(D\eta_k(x+ty)-D\eta_k(x'+ty))|\,dt\\ &\le C2^{2k}|y|\omega_u(|y|)|x-x'|. 
\end{align*} The case when $x'\in B^{k+2}$ and $x\in (B^{k+2})^c$ is similar. Then with the estimates of ${\rm III}$ and ${\rm IV}$ above, we obtain that when $|y|<2^{-k-3}$, \begin{align*} |\xi_k(x,y)-\xi_k(x',y)|\le C2^{3k}|y|^2\big(\omega_u(|x-x'|)+\|u\|_{L_\infty}|x-x'|\big)\\ + C2^k|y|^{5/4}|x-x'|^{1/4}[u]_{1/2;B^{k+3}}+C2^{2k}|y|\omega_u(|y|)|x-x'|, \end{align*} which, combining with \eqref{eq2.03} for the case when $|y|\ge 2^{-k-3}$, further implies that \begin{align*} {\rm I}\le &C2^{2k}\Big(\omega_u(|x-x'|)+\|u\|_{L_\infty}|x-x'|\\ &\quad+[u]_{1/2;B^{k+3}}|x-x'|^{1/4} +|x-x'|\int_0^1\frac{w_u(r)}{r}\,dr\Big), \end{align*} where $C$ depends on $d$ and $\Lambda$. Hence, we obtain the estimate of the modulus of continuity of $h_{k\beta}(x)$: \begin{align*} \omega_h(r)= C2^{2k}\Big(\omega_u(r)+[u]_{1/2;B^{k+3}}r^{1/4} +\big(\|u\|_{L_\infty}+\int_0^1\frac{\omega_u(r)}{r}\,dr\big) \big(r+\omega_a(r)\big)\Big). \end{align*} The rest of the proof is the same as the previous case. \end{proof} \bibliographystyle{plain} \def$'${$'$}
1,116,691,498,733
arxiv
\section{\label{sec:level1}Introduction} MDS matrices, especially over finite fields of characteristic two, are widely used in cryptography for constructing block ciphers due to its diffusion properties. For an extensive survey, we consult Gupta et. al. \cite{gupta}. Two of many criteria of a good MDS matrix are being involutory and having few different elements. Junod and Vaudenay \cite{junod} have found some lower bounds (with examples) on the numbers of different entries in MDS matrices over finite fields of characteristic two. In this paper, we extend this result to involutory MDS matrices. This paper is organized as follows: second section introduces MDS matrices and some relevant results. Third section concerns the lower bounds (with examples) of the number of different entries in an involutory MDS matrices with order 1, 2, and 3 over finite fields of characteristic two. Fourth section concerns lower bounds (with examples) of the number of different entries in an involutory MDS matrices with order 4 over finite fields of characteristic two. The fifth section summarizes the results of this paper. \section{Definition and Lemma} Let $\mathbb{F}_{2^m}$ be the finite field with $2^m$ elements, and $m$ be a positive integer. We refer to $n\times n$ matrices as matrices of order $n$. A square matrix $A$ is involutory if $A^2=I$. Denote $A_{ij}$ as the entry of row $i$ and column $j$ in a matrix $A$. \begin{definition}\cite{gupta} A matrix $A$ is \textit{MDS} (Maximum Distance Separable) if every square submatrices of $A$ are nonsingular. \end{definition} The following lemma, and its corollaries, play an important part in this paper. \begin{lemma}\cite{gupta} For any permutation matrices (with correct sizes) $P$ and $Q$, if $A$ is MDS, $PAQ$ and $A^T$ is also MDS. \end{lemma} Based on this, and owing to the fact that $P^{-1}$ is also a permutation matrix, the following corollaries are obvious. \begin{corollary} \label{paap} A matrix $A$ is involutory MDS if and only if $PAP^{-1}$ is involutory MDS, with $P$ being a permutation matrix. \end{corollary} \begin{corollary}\label{transpose} A matrix $A$ is involutory and MDS if and only if $A^T$ is involutory and MDS. \end{corollary} \section{Matrices of Order One, Two, and Three} Obviously, there are at least one different element in an involutory MDS matrices of order 1, and $\begin{pmatrix} 1 \end{pmatrix}^2=\begin{pmatrix} 1 \end{pmatrix}$ For order two, obviously $\begin{pmatrix} a&a\\a&a \end{pmatrix}$ is not MDS - hence, any MDS matrices need at least two different elements. Now, for any $a\in \mathbb{F}_{2^m}, a\notin \{0,1\}$, $\begin{pmatrix} a&a+1\\a+1&a \end{pmatrix}^2=\begin{pmatrix} 1&0\\0&1 \end{pmatrix}$. Hence, at least two different elements are needed in an involutory MDS matrices. Furthermore, any involutory MDS matrices over $\mathbb{F}_{2^m}$ with order two and exactly two different elements is of the form $\begin{pmatrix} a&a+1\\a+1&a \end{pmatrix}$ - hence, there are exactly $2^m-2$ matrices in this case. The case of order three is more involved. First, any MDS matrices in this order has at least two different elements. Suppose there was an involutory MDS matrix which have exactly two different elements, denoted by $a$ and $b$. Combinatorial reasoning gives two possible families of MDS matrices, up to the transformation described in corollary \ref{paap}, - $\begin{pmatrix} a&a&b\\a&b&a\\b&a&a \end{pmatrix}$ and $\begin{pmatrix} a&a&b\\b&a&a\\a&b&a \end{pmatrix}$. 
Choosing appropriate $a$ and $b$ (such that none of the square submatrices are singular) results in an MDS matrix with exactly two different entries. But, were one of these matrices be involutory, by checking each entries, it can be inferred that one of $a$ and $b$ is 0 - a contradiction. Hence, at least three different entries is needed in an involutory MDS matrices over $\mathbb{F}_{2^m}$. Examples of an involutory MDS matrices (over $\mathbb{F}_{2^3}$) with exactly three different entries can be seen at G{\"u}zel et. al \cite{guzel}. \section{Matrices of Order Four} Junod and Vaudenay \cite{junod} have already proved that any MDS matrices over $\mathbb{F}_{2^m}$ with order 4 has at least three different entries, with lower bound attained (as example) from matrices used in AES \cite{junod}. However, this matrix is not involutory. Now suppose $A$ is an involutory MDS matrices with exactly three different entries $a,b,c\in \mathbb{F}_{2^m}$. Three successive claims (and sub-claims) are proven to classify the structure of $A$. \begin{claim}\label{first} $a,b,$ and $c$ appear at most twice in any row, or column, of $A$. \end{claim} \begin{proof} Without loss of generality, and considering corollary \ref{paap} and \ref{transpose}, it is sufficient to prove they appear at most twice in $A$'s first row. Suppose there exists a matrix $A$ that satisfy the initial condition such that its first row contains $a$ more than twice. By pigeonhole principle (applied over second row), $a$ must not appear four times. Hence, $a$ appears exactly thrice. Let another entry in $A$'s first row be $b$. By applying corollary \ref{paap}, there are only two form of $A$'s first row that needs to be considered: $\begin{pmatrix} b&a&a&a \end{pmatrix}$ and $\begin{pmatrix} a&a&a&b \end{pmatrix}$. \paragraph{First Case: $\begin{pmatrix} b&a&a&a \end{pmatrix}$.} We look at submatrix of $A$ made by removing the first column and row of $A$. Each entry in each row of this submatrix is pairwise different, because $A$ is MDS. Hence, for $i=2,3,4$, $\{A_{i2},A_{i3},A_{i4}\}=\{a,b,c\}$ and $A_{i2}+A_{i3}+A_{i4}=a+b+c$. Hence, $\displaystyle \sum_{i=2}^{4}\sum_{j=2}^{4} A_{ij} = 3(a+b+c)=a+b+c$. Meanwhile, considering $(A^2)_{1i}$ for $i=2,3,4$, we get $ba+aA_{2i}+aA_{3i}+aA_{4i}=0 \implies A_{2i}+A_{3i}+A_{4i}=b$. Adding all equation, $\displaystyle \sum_{i=2}^{4}\sum_{j=2}^{4} A_{ij} = b$. Hence, $a+b+c=b\implies a=c$, a contradiction. \paragraph{Second Case:$\begin{pmatrix} a&a&a&b \end{pmatrix}$.} By the same argument as the first case, for $i=2,3,4$, $\{A_{i1},A_{i2},A_{i3}\}=\{a,b,c\}$ and $A_{i1}+A_{i2}+A_{i3}=a+b+c$. Now, for $j=2,3,4$, if $A_{j4}=b$, $A$ has $\begin{pmatrix} a&b\\a&b \end{pmatrix}$ as submatrix - contradicting $A$ being MDS. Hence, $A_{j4}\in \{a,c\}$. By looking at $A_{24}$ and $A_{34}$, there are two cases to be considered: \begin{itemize} \item Let $A_{24}=A_{34}=x$. By considering $(A^2)_{14}$, we get $ab+aA_{24}+aA_{34}+bA_{44}=0 \implies ab=bA_{44} \implies A_{44}=a$. Then, considering $(A^2)_{24}+(A^2)_{34}$, we get $(A_{21}+A_{31})b+x(A_{22}+A_{23}+A_{32}+A_{33})=0$. Because $A_{i1}+A_{i2}+A_{i3}=a+b+c$ for $i=2,3,4$, this equation is equivalent to $(A_{21}+A_{31})b+x[(a+b+c+A_{21})+(a+b+c+A_{31})]=0$ and $(A_{21}+A_{31})(b+x)=0$. Since $x\neq b$, $A_{21}=A_{31}$. But, the submatrix $\begin{pmatrix} A_{21}&x\\A_{31}&x \end{pmatrix}$ is singular, a contradiction. \item $A_{24}\neq A_{34}$. Either $(A_{24},A_{34})=(a,c)$ or $(c,a)$. 
In both cases, considering $(A^2)_{14}$, we get $ab+aa+ac+bA_{44}=0$. Then $A_{44}=a$ results in $a=0$ or $a=c$, and $A_{44}=c$ results in $a=b$ or $a=c$ - a contradiction. \end{itemize} Because all cases lead to contradictions, the initial supposition must be false. Hence, the claim is proven. \end{proof} \begin{claim}\label{second} Each row and column of $A$ must contain all of $a,b,$ and $c$. \end{claim} \begin{proof} By the same argument as in the last claim, it is sufficient to prove that the first row of $A$ contains all of them. Suppose it does not. Without loss of generality (and by the last claim), let $a$ and $b$ each appear twice in $A$'s first row, with $A_{11}=a$. By corollary \ref{paap}, let the first row of $A$ be $\begin{pmatrix} a&b&b&a \end{pmatrix}$. By considering the fourth row (against the first), we get $A_{42}\neq A_{43}$. Next, we exclude some possible values that they can take. \begin{claim} Neither $A_{42}$ nor $A_{43}$ equals $b$. \end{claim} \begin{proof} Suppose the claim is false, and without loss of generality, let $A_{42}=b$. Considering $(A^2)_{12}$, we get $ab+bA_{22}+bA_{32}+ab=0 \implies A_{22}=A_{32}=x$. From claim \ref{first}, $x\neq b$. Now, consider the fourth row. Were $A_{41}$ or $A_{44}$ equal to $a$, $A$ would have a singular submatrix - a contradiction. By the same reasoning, $A_{41}\neq A_{44}$. Hence $(A_{41},A_{44})=(b,c)$ or $(c,b)$, and $A_{41}+A_{44}=b+c$. On the other hand, considering $(A^2)_{42}$, we get $A_{41}b+bx+A_{43}x+bA_{44}=0 \implies b(b+c)+(b+A_{43})x=0$. If $A_{43}=c$, then $x=b$. If $A_{43}=b$, then $b=0$ or $b=c$. Both lead to contradictions - hence, $A_{43}=a$. Considering $(A^2)_{13}$, we get $ab+b(A_{23}+A_{33})+a^2=0$. If $(A_{23},A_{33})$ is a permutation of $(a,b)$, the last equation is equivalent to $a^2+b^2=0\implies a=b$ - a contradiction. If it is a permutation of $(a,c)$, the equation implies $a^2+bc=0$. Now, consider the submatrix of $A$ constructed by taking the first and fourth rows, and the third column together with whichever of the first and fourth columns contains $c$ (depending on which of $A_{41}$ and $A_{44}$ is $c$). This submatrix is a permutation of $\begin{pmatrix} b&a\\a&c \end{pmatrix}$, whose determinant is $bc+a^2=0$; then $A$ would have a singular submatrix - a contradiction. Hence, $(A_{23},A_{33})$ is a permutation of $(b,c)$, and $ab+bc+b^2+a^2=0$. Returning to $(A^2)_{42}$, we now have $b(b+c)+(b+a)x=0$ (since $A_{43}=a$). If $x=c$, this equation is equivalent to $b^2+ac=0$. Combining with the last equation, we get $(a+b)(a+c)=0$, so $a=b$ or $a=c$ - a contradiction. Hence $x=a$. Considering $(A^2)_{43}$, we get $bA_{41}+bA_{23}+aA_{33}+aA_{44}=0$. By the previous observations, either $A_{23}=A_{44}$ or $A_{23}=A_{41}$. Were the former true, then $A_{33}=A_{41}$ and the equation becomes $(a+b)(A_{23}+A_{33})=0$. This implies $A_{23}=A_{33}$ or $a=b$, a contradiction. Hence, $A_{23}=A_{41}=p$ and $A_{33}=A_{44}=q$, for $(p,q)=(b,c)$ or $(c,b)$. Considering $(A^2)_{32}$, we get $a(a+A_{33})+b(A_{31}+A_{34})=0$. Were $A_{31}=A_{34}$, we would get $a=A_{33}$; a contradiction. Hence, they are different. If they are a permutation of $(a,b)$, the equation is equivalent to $a^2+b^2+ab+aA_{33}=0$. But, from the previous paragraph, $ab+bc+b^2+a^2=0$; hence $aA_{33}=bc$, and since $A_{33}\in\{b,c\}$, either choice forces $a=c$ or $a=b$ - a contradiction. If they are a permutation of $(a,c)$, the equation is equivalent to $a^2+ab+bc+aA_{33}=0\implies aA_{33}=b^2 \implies A_{33}=c$ (as $A_{33}=b$ would force $a=b$). But then $a^2+ab+bc+ac=0=(a+b)(a+c)\implies a=b$ or $a=c$, a contradiction. Hence, $(A_{31},A_{34})$ must be a permutation of $(b,c)$. This implies $a^2+b^2+bc+aA_{33}=0$ and $A_{33}=b=q$.
Hence, $p=c$ and $A=\begin{pmatrix} a&b&b&a\\A_{21}&a&c&A_{24}\\A_{31}&a&b&A_{34}\\c&b&a&b \end{pmatrix}$. By analyzing the order-2 submatrices of $A$, $(A_{31},A_{34})=(b,c)$. But from $(A^2)_{33}$ we get $1=b^2+ac+b^2+ac=0$; a contradiction. Hence the initial assumption is false, and this claim is proved. \end{proof} From the claim, $(A_{42},A_{43})$ must be a permutation of $(a,c)$. Without loss of generality, let $(A_{42},A_{43})=(c,a)$. From $(A^2)_{12}$, $ab+b(A_{22}+A_{32})+ac=0$. It can be seen that $A_{22}\neq A_{32}$. Furthermore, if $(A_{22},A_{32})$ were a permutation of $(b,c)$, the equation would be equivalent to $(b+a)(b+c)=0$, so $a=b$ or $b=c$, a contradiction. If they were a permutation of $(a,c)$, the equation would imply $c(a+b)=0$, so $a=b$, also a contradiction. Hence they are a permutation of $(a,b)$, and $b^2=ac$. Now, from $(A^2)_{13}$, $ab+b(A_{23}+A_{33})+a^2=0$. It can be seen that $A_{23}\neq A_{33}$. Furthermore, if $(A_{23},A_{33})$ were a permutation of $(a,b)$, the equation would be equivalent to $a^2+b^2=0\implies a=b$, a contradiction. If it were a permutation of $(b,c)$, the equation would be equivalent to $ab+b^2+bc+a^2=0\implies (a+b)(a+c)=0\implies a=b$ or $a=c$, a contradiction. Hence, they are a permutation of $(a,c)$, and $a^2=bc$. Now consider $\begin{pmatrix} A_{22}&A_{23}\\A_{32}&A_{33} \end{pmatrix}$, a submatrix of $A$. Note that $A_{22}\neq A_{23}$ and $A_{32}\neq A_{33}$ (otherwise, together with the entries $\begin{pmatrix} b&b \end{pmatrix}$ of the first row in columns 2 and 3, a singular $2\times 2$ submatrix would arise); combined with the relations $b^2=ac$ and $a^2=bc$, this forces the determinant of this submatrix to be zero in every remaining case. Hence, $A$ has a singular submatrix - a contradiction. Then, the initial assumption (that the first row of $A$ contains only $a$ and $b$) is false. Hence, the claim is proven. \end{proof} \begin{claim}\label{last} For all $i=1,2,3,4$, $\{A_{i1},A_{i2},A_{i3},A_{i4}\}\setminus\{A_{ii}\}=\{a,b,c\}$ and $\{A_{1i},A_{2i},A_{3i},A_{4i}\}\setminus\{A_{ii}\}=\{a,b,c\}$. \end{claim} \begin{proof} It is sufficient to prove the first equality in the case $i=1$. Suppose the claim is false. Considering the previous claim and corollary \ref{paap}, without loss of generality let the first row of $A$ be $\begin{pmatrix} a&b&b&c \end{pmatrix}$. First we assert that $A_{22}\neq A_{32}$ and $A_{23}\neq A_{33}$. Suppose this assertion is false. Without loss of generality (by corollary \ref{paap}), let $A_{22}=A_{32}=x$. Considering $(A^2)_{12}$, we get $ab+cA_{42}=0$. It can be seen that $A_{42}=c$ (the choices $A_{42}=a$ and $A_{42}=b$ force $b=c$ and $a=c$, respectively). By claim \ref{second}, $x$ is neither $b$ nor $c$; hence it is $a$. Now consider $A$'s third column. It can be seen that $A_{23}\neq A_{33}$; furthermore, neither of them is $a$. Hence, they are a permutation of $(b,c)$, and from claim \ref{second}, $A_{43}=a$. However, looking at $(A^2)_{13}$, $ab+b^2+bc+ca=0\implies (b+a)(b+c)=0\implies b=a$ or $b=c$, a contradiction. Hence the initial assertion is true. For $i=2,3$, consider $(A^2)_{1i}$. We get the equation $ab+bA_{2i}+bA_{3i}+cA_{4i}=0$. From claim \ref{second}, one of $A_{2i},A_{3i},A_{4i}$ is $c$. If neither $A_{2i}$ nor $A_{3i}$ is $c$, then $A_{4i}=c$ and (using $A_{2i}\neq A_{3i}$) it can be established that $ab+b^2+ba+c^2=0\implies b=c$, a contradiction. Hence, one of $A_{2i}$ and $A_{3i}$ is $c$. Assume the other element is $b$. From claim \ref{second}, $A_{4i}$ must then be $a$. However, from the equation above, $ab+b^2+bc+ca=0=(b+a)(b+c) \implies b=a$ or $b=c$, a contradiction. So, for $i=2,3$, $A_{2i}+A_{3i}=a+c$. However, $(ab+bA_{22}+bA_{32}+cA_{42})+(ab+bA_{23}+bA_{33}+cA_{43})=0 \implies c(A_{42}+A_{43})=0\implies A_{42}=A_{43}$. Considering the submatrix $\begin{pmatrix} b&b\\A_{42}&A_{43} \end{pmatrix}$, this contradicts $A$ being MDS.
Hence, the initial assumption is wrong, and the claim is proven. \end{proof} Now, without loss of generality, let the first row of $A$ be $\begin{pmatrix} a&a&b&c \end{pmatrix}$. We first examine the second column. From claim \ref{last}, $(A_{32},A_{42})=(b,c)$ or $(c,b)$. Assume $(A_{32},A_{42})=(b,c)$. By reasoning based on claim \ref{last}, $A_{43}=a$, $A_{23}=c$, $A_{41}=b$, $A_{21}=a$, and $A_{31}=c$. Now, considering $(A^2)_{11}$, we get $1=a^2+a^2+bc+cb=0$, a contradiction. So, $(A_{32},A_{42})=(c,b)$. Considering $(A^2)_{12}$, we get $a^2+aA_{22}+bc+cb=0\implies A_{22}=a$. Next we examine the third row. By claim \ref{last}, because $A_{32}=c$, either $(A_{31},A_{34})=(a,b)$ or $(b,a)$. If $(A_{31},A_{34})=(a,b)$, then by reasoning based on claim \ref{last}, $A_{24}=a$, $A_{23}=c$, $A_{21}=b$, $A_{41}=c$, and $A_{43}=a$. Now, considering $(A^2)_{13}$, $ab+ac+bA_{33}+ac=0 \implies A_{33}=a$. However, considering $(A^2)_{23}$, we get $b^2+ac+ca+a^2=0 \implies b=a$, a contradiction. Hence, $(A_{31},A_{34})=(b,a)$. Considering $(A^2)_{32}$, we get $ab+ac+cA_{33}+ab=0 \implies A_{33}=a$. Considering the submatrix of $A$ formed by the first and second rows and the first and second columns, we get $A_{21}\neq a$. By claim \ref{last}, $A_{21}=c$ and $A_{41}=a$. But, considering $(A^2)_{31}$, we get $0=ab+c^2+ba+a^2 \implies c=a$, a contradiction. Hence, there is no involutory MDS matrix of order four over $\mathbb{F}_{2^m}$ with three or fewer distinct elements. We conclude that for any natural number $m$, any involutory MDS matrix $A$ of order four with entries in $\mathbb{F}_{2^m}$ has at least four distinct elements. An example of an involutory MDS matrix (over $\mathbb{F}_{2^8}$) with exactly four distinct entries is used in the \textit{Anubis} block cipher \cite{anubis}. \section{Conclusion} Any involutory MDS matrix over $\mathbb{F}_{2^m}$ of order three or four needs (respectively) at least three or four distinct elements. This extends the result of Junod and Vaudenay \cite{junod}, which proves that MDS matrices (not necessarily involutory) over $\mathbb{F}_{2^m}$ of orders three and four need at least two and three distinct elements, respectively. \begin{acknowledgments} The author would like to thank Aleams Barra and Intan Muchtadi-Alamsyah for providing valuable suggestions. This research is supported by Hibah Riset Dasar DIKTI 2019. \end{acknowledgments}
1,116,691,498,734
arxiv
\section{Introduction} In simulations of traffic systems, cellular automata (CA) are a common tool \cite{rev1,rev2,rev4,rev3}. A cellular automaton consists of a structure of cells, a set of cell states and a rule of time evolution which transfers the state of a cell and its neighborhood to the state of this cell at the subsequent time moment. In this description states, space and time are discrete, while in real traffic systems at least space and time are inherently continuous. Still, there is a rich variety of CA which make it possible to investigate the properties and phenomena of traffic systems, sometimes reproduced with surprisingly subtle detail. As a rule, traffic CA are classified as single-cell or multi-cell models, according to whether a vehicle occupies a single cell or more cells \cite{rev3}. On the other hand, traffic networks are also parametrized in different ways, as a node can represent a stop, a cross-road or a route \cite{spaces}. Simplified as it is when compared with real systems, the technology of CA suffers from known computational limitations: a more detailed description is paid for by a smaller size of the simulated system.\\ More recently, a modification of CA has been applied where a cell represents a state of the whole considered system \cite{cpl05,gao}. This approach can be seen as an example of the concept of Kripke structures \cite{kri}, where nodes represent states of the whole system and links represent processes leading from one state to another. The obvious drawback of this parametrization is that the number of nodes increases exponentially with the system size. In \cite{cpl05}, this difficulty is evaded by taking into account only the states which appear during the time evolution. The same idea was developed into a technique of time series analysis, known as recurrence networks. Briefly, the rule of time evolution is used to generate new states which are attached as nodes to the simulated network; in this way the signal is characterized in terms of a growing network. For details see \cite{recnets} and references cited therein.\\ Our aim here is to discuss a new cellular automaton designed for modeling jams in traffic systems. The novelty of this automaton is that cells represent sections of road which can be either jammed or passable. A jam can grow at its end and flush at its front; the competition between these two processes depends on the local topology of the traffic network. Our description, inspired by percolation, is more coarse-grained than in other models. According to the classification of traffic models presented in \cite{www}, our model belongs to macroscopic queueing models. Some model elements are reminiscent of the cell transmission model of Daganzo \cite{dag1,dag2}: namely, the rates of inflow and outflow in the cell transmission model are similar to the rates of growth and flushing of a traffic jam, defined below. However, as explained in detail in the next section, only jammed and passable cells are differentiated here, and the flows of vehicles are not identified. The price paid is that a range of dynamic phenomena such as synchronization and density waves is excluded from the modeling. These phenomena, essential at the scale of a road \cite{e1,e2,e3,e4}, can be less important at the scale of a city. Consequently, our approach should be suitable for macroscopic modeling of large traffic systems. \\ Our second aim is to construct the Kripke structure on the basis of the same cellular automaton.
This in turn again limits the size of the system, because the number of states depends exponentially on the system size. We are going to demonstrate that our recent tool, i.e. the symmetry-induced reduction of the network of states \cite{m1,m2}, is useful for partially reducing this computational barrier.\\ In the next section we describe the automaton in general terms and we recapitulate the method of reduction of the network of states, mentioned above. As the direct form of the automaton rules depends on the traffic system under consideration, the exact description of the rules is given in Section 3, together with information on the analyzed systems, both artificial (the square lattice) and real (a small city). The last two sections are devoted to the numerical results and their discussion. \section{The model} \subsection{Cellular automaton} We analyze a simple automaton, where each cell $i$ - a road section - can be either in the state $s_i=0$ or $s_i=1$. The state $s_i=0$ means that fluent motion through a given road section is possible, while the state $s_i=1$ means a traffic jam. As each road section is a part of a larger system, the cell state depends on the state of the roads which one can enter from a given road section. Namely, the probability of traffic jam back-propagation, as well as the probability that a traffic jam is flushed, depends on the number and state of the neighboring road sections. To initialize calculations, one has to assign values to three parameters. Two of them, $w$ and $v$, describe the whole system, and the last one, $p$, is related to the boundaries. Specifically, $w$ is the probability that a traffic jam arises on a given road section due to its presence on the roads directly ahead of the currently considered one (a jam grows behind a jam), $v$ is the probability of a jam flush (a jam behind a passable section gets passable), and $p$ is the probability that a traffic jam appears at a road section at the boundary, but out of the system. The latter parameter describes the system's interaction with the outer world. The parameters $w$ and $v$ can be related to the flows used in \cite{h1,h2} for the discussion of congestion near on-ramps.\\ The probability of a change of the state of a given road section is obtained as the result of an analysis of the state of this section and the state of its neighborhood. We ask for which ranges of the parameters the system is passable in the stationary state. \\ The detailed realization of the model depends on the topology of the traffic network. In Section 3 the exact algorithm is presented together with the presentation of the analyzed systems. \subsection{The state space and its reduction} The automaton defined above can be used for simulations, and the results of these simulations are reported below. The same automaton is used here also to construct the network of states, as in \cite{cpl05,m1,m2}. This network, equivalent to the Kripke structure \cite{kri}, is formed by all possible combinations of the states of roads, which play the role of nodes. Next, an appropriate master equation \cite{vkamp} is constructed, which reflects all the states reachable from the current state. The obtained matrix of transitions between states, i.e. the transfer matrix, is equivalent to the connectivity matrix of our state network. For a given set of model parameters ($p$, $w$ and $v$), the eigenvector of the matrix associated with the eigenvalue 1 serves to calculate the probabilities of particular states in the stationary state.
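For illustration, the following Python sketch builds the transfer matrix of a toy system of $N$ two-state cells, under the assumption of independent parallel cell updates, and extracts the stationary vector by power iteration. The per-cell rule \texttt{flip\_probs} is a hypothetical placeholder standing in for the rules of Section 3.

\begin{verbatim}
# Sketch of the state-network construction; flip_probs() is a
# placeholder rule, not the one from Section 3.
import numpy as np

N = 4                                    # toy system: 2^N = 16 states

def flip_probs(state):
    # hypothetical per-cell flip probabilities, given the full state
    # encoded as an N-bit integer (bit i = state of cell i)
    return [0.2 if (state >> i) & 1 == 0 else 0.4 for i in range(N)]

S = 1 << N
T = np.zeros((S, S))
for s in range(S):
    q = flip_probs(s)
    for t in range(S):
        prob = 1.0
        for i in range(N):               # independent parallel updates
            flipped = ((s ^ t) >> i) & 1
            prob *= q[i] if flipped else 1.0 - q[i]
        T[t, s] = prob                   # column-stochastic by construction

pi = np.full(S, 1.0 / S)                 # power iteration towards the
for _ in range(5000):                    # eigenvector with eigenvalue 1
    pi = T @ pi

# average fraction of passable (0) cells -- cf. the equation below
P0 = sum(pi[s] * (N - bin(s).count("1")) for s in range(S)) / N
print(P0)
\end{verbatim}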
Having these values, one can evaluate how passable the system is under given conditions from the average number of unjammed ($s=0$) cells \begin{equation} P(0)=\dfrac{1}{N}\sum\limits_{i=1}^{2^N}P_in_i(0) \label{e1} \end{equation} where $N$ is the size of the system (the number of considered road sections), $P_i$ is the probability of the $i$-th state and $n_i(0)$ is the number of zeros in the $i$-th state. We note that in this equation we average over the states of the whole network, and not over the states of local cells.\\ As the obtained number of states is large even for moderate systems, we reduce the system size by applying the procedure proposed in our previous papers \cite{m1,m2}. The method of reduction of the system size is based on the symmetry observed in the system, which manifests itself in the fact that the properties of some elements of the system are exactly the same. The starting point is the network of states, and the core of the method is to divide the nodes into classes; the stationary probability of each node in the same class is the same \cite{m1,m2}. To begin, for each node the list of its neighboring nodes is specified, taking into account the weights of particular connections. Provisionally, the class of each state is determined by its degree; the symbol of each state is replaced by the symbol of its class, which discriminates nodes having different numbers of neighbors. At the next stage we examine the lists of neighboring nodes in terms of the class symbols assigned to particular neighbors and the weights of the corresponding ties. If, for nodes assigned the same symbol, the symbols assigned to their neighbors are different, or they are the same but their weights are different, an additional class distinction is introduced. At the end of the algorithm, the classes, i.e. subsets of nodes, are identified which have identical lists of neighbors with respect to the number of neighbors, the symbols assigned to them, and the weights of particular connections \cite{m1,m2}.
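A compact sketch of this class-identification procedure is given below; it assumes the state network is available as a weighted adjacency structure, and iterates the refinement until no class is split further. Since each signature includes the node's current class, the partition can only be refined, so the iteration stops when the number of classes stops growing.

\begin{verbatim}
# Sketch of the symmetry-induced reduction: nodes are grouped into
# classes by iteratively refining signatures built from the current
# class and the (class, weight) pairs of the neighbours.
def reduce_by_symmetry(adj):
    """adj: dict mapping node -> list of (neighbour, weight) pairs."""
    cls = {v: len(nbrs) for v, nbrs in adj.items()}   # provisional: degree
    while True:
        sig = {v: (cls[v], tuple(sorted((cls[u], w) for u, w in nbrs)))
               for v, nbrs in adj.items()}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_cls = {v: ids[sig[v]] for v in adj}
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls                            # stable partition
        cls = new_cls

# toy usage: nodes 1, 2 and 3 are structurally interchangeable
adj = {0: [(1, 0.5), (2, 0.5), (3, 0.5)],
       1: [(0, 1.0)], 2: [(0, 1.0)], 3: [(0, 1.0)]}
print(reduce_by_symmetry(adj))   # nodes 1, 2 and 3 share one class
\end{verbatim}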
\section{Analyzed systems} \subsection{Square lattice} As a reference system we analyze a system of directed roads placed on the edges of a regular square lattice. The lattice is finite, with open boundary conditions. In such a system each road has two in-neighbors and two out-neighbors. As will be explained in detail, the probability of a state change depends only on the state of the out-neighbors. At the boundaries, a road has one or no out-neighbors (none at the corner). On each road, the traffic takes place in one direction only, say upwards or to the right. This setup is borrowed from the Biham-Middleton-Levine automaton \cite{bml}. The algorithm for a change of the state of a road on the square lattice is presented in Fig.\ref{alg1}. \begin{figure} {\footnotesize{ \begin{algorithmic} \IF{$s=0$} \IF{$L_n=0$} \STATE $0\xrightarrow{P=pw}1$ \ELSIF{$L_n=1$} \IF{$s_n=0$} \STATE $0\xrightarrow{P=p\dfrac{w}{2}}1$ \ELSE \STATE $0\xrightarrow{P=p\dfrac{w}{2}+\dfrac{w}{2}}1$ \ENDIF \ELSE \IF{$s_{n1}\ne s_{n2}$} \STATE $0\xrightarrow{P=\dfrac{w}{2}}1$ \ELSIF{$s_{n1}=s_{n2}=1$} \STATE $0\xrightarrow{P=w}1$ \ENDIF \ENDIF \ELSE \IF{$L_n=0$} \STATE $1\xrightarrow{P=v(1-p)}0$ \ELSIF{$L_n=1$} \IF{$s_n=0$} \STATE $1\xrightarrow{P=\dfrac{v}{2}(2-p)}0$ \ELSE \STATE $1\xrightarrow{P=\dfrac{v}{2}(1-p)}0$ \ENDIF \ELSE \IF{$s_{n1}\ne s_{n2}$} \STATE $1\xrightarrow{P=v}0$ \ELSIF{$s_{n1}=s_{n2}=1$} \STATE $1\xrightarrow{P=\dfrac{v}{2}}0$ \ENDIF \ENDIF \ENDIF \end{algorithmic} }} \caption{Algorithm for a change of the state of a road on the square lattice.} \label{alg1} \end{figure} In the above algorithm $s$ is the state of a given road, $L_n$ is the number of roads a given road is connected to, and the quantity $s_n$ refers to the state of the road which is the neighbor of the currently considered one (if the road has two neighbors, their states are marked as $s_{n1}$ and $s_{n2}$, respectively). The probability $P$ of a change of the state depends, as mentioned above, on the state of the considered road and on the state of its neighborhood, which determine how probable the change is. Namely, the transition from jammed to passable (1 to 0) means that the jam on a given road section is flushed by the free motion of vehicles at the jam front. This is possible only if an out-neighboring section is passable. Further, the transition from passable to jammed (0 to 1) is possible only if an out-neighboring section is jammed. In both cases, the transition depends on the state of the out-neighbors; the state of the in-neighbors is not relevant.\\ In Fig.\ref{art} a piece of the system is presented. Each road can be either passable (in our notation, in the state $0$), which is marked with a dashed line, or a traffic jam can be formed on it (the state $1$), which is marked with a solid line. The direction of traffic is indicated on the roads in the state $0$, but the rule is the same for all roads; we keep the down-up and left-right directions. Here we present one of the possible changes of the state of the system. \begin{center} \begin{figure} \includegraphics[width=.4\columnwidth, angle=0]{fig1.pdf} \begin{minipage}{.1\columnwidth} \vspace{-3cm} \centering$\Rightarrow$ \end{minipage} \includegraphics[width=.4\columnwidth, angle=0]{fig2.pdf} \caption{Example of the change of traffic. A dashed line refers to the state $0$ (fluent flow), and a solid line refers to the state $1$ (traffic jam). Arrows indicate the direction of traffic.} \label{art} \end{figure} \end{center}
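A minimal Python sketch of one synchronous update under the rule of Fig.\ref{alg1} is given below. Note that the case of a jammed cell with two passable out-neighbors is not listed in Fig.\ref{alg1}; in this sketch we assume, for illustration only, that such a cell is flushed with probability $v$.

\begin{verbatim}
# Sketch of the square-lattice update rule of Fig. alg1; only the
# out-neighbour states enter the flip probability.
import random

def flip_probability(s, outs, p, w, v):
    """outs: list of out-neighbour states (0, 1 or 2 entries)."""
    L = len(outs)
    if s == 0:                                    # passable -> jammed
        if L == 0:
            return p * w
        if L == 1:
            return p * w / 2 if outs[0] == 0 else p * w / 2 + w / 2
        if outs[0] != outs[1]:
            return w / 2
        return w if outs[0] == 1 else 0.0         # jam grows only behind a jam
    else:                                         # jammed -> passable
        if L == 0:
            return v * (1 - p)
        if L == 1:
            return v / 2 * (2 - p) if outs[0] == 0 else v / 2 * (1 - p)
        if outs[0] != outs[1]:
            return v
        return v / 2 if outs[0] == 1 else v       # both passable: assumed

def step(states, out_nbrs, p, w, v):
    """Synchronous update; states: {road: 0/1}, out_nbrs: {road: [roads]}."""
    return {r: (1 - s if random.random() <
                flip_probability(s, [states[n] for n in out_nbrs[r]], p, w, v)
                else s)
            for r, s in states.items()}
\end{verbatim}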
\subsection{Small city} The method was also applied to the real road network of the small Polish town of Rabka. The structure of roads which matter for traffic was selected - dead ends were removed (Fig.\ref{map}). Each road, if necessary, was divided into sections of approximately equal length. We get $374$ sections. Here the number of neighbors varies from road to road, as results from the town topology. Each section is a two-way street. In this case the algorithm has the form presented in Fig.\ref{alg2}. \begin{figure}[!hptb] \begin{center} \includegraphics[width=.4\textwidth, angle=0]{fig13.pdf} \caption{Road network of Rabka. Roads are divided into $374$ approximately equal sections. The black dots mark the exit/entrance roads.} \label{map} \end{center} \end{figure} \begin{figure} {\footnotesize{ \begin{algorithmic} \IF{$s=0$} \IF{$L_n=0$} \STATE $0\xrightarrow{P=p\dfrac{w}{2}}1$ \ELSE \STATE $0\xrightarrow{P=\dfrac{\sum\limits_{out}s}{k_{out}}\dfrac{w}{2}}1$ \ENDIF \ELSE \IF{$L_n=0$} \STATE $1\xrightarrow{P=v(1-p)}0$ \ELSE \STATE $1\xrightarrow{P=v\sum\limits_{out}(1-s)}0$ \ENDIF \ENDIF \end{algorithmic} }} \caption{Algorithm for a change of the state of a road in a real network.} \label{alg2} \end{figure} In the algorithm presented in Fig.\ref{alg2}, the summation runs over the states of the roads outgoing from a given one, and $k_{out}$ is the number of outgoing roads. \section{Results} \label{results} \subsection{The square lattice} All presented results are time averages in the steady state over $100$ realizations for the square lattice of size $100\times100$. To check that the results do not depend on the initial conditions, we use three options for the initial state: the states of all roads set to $0$, the states of all roads set to $1$, and the state of each road set randomly to $0$ or $1$.\\ The results depend on the values of the model parameters $p$, $w$ and $v$. As the result for the whole system, the percentage of roads in the state $0$ ($\#0[\%]$) is calculated. The higher the number of zeros, the more passable the system is. The results for two different values of the parameter $p$ are presented in Fig.\ref{sq}: for $p=0.1$ and for $p=0.7$. \begin{figure}[!hptb] \begin{center} \subfigure[$p=0.1$]{ \includegraphics[width=.5\textwidth, angle=0]{fig3.pdf} \label{fig:3A} } \subfigure[$p=0.7$]{ \includegraphics[width=.5\textwidth, angle=0]{fig5.pdf} \label{fig:3B} } \caption{Diagram for the square lattice of size $100\times100$ (average in the steady state over $100$ realizations).} \label{sq} \end{center} \end{figure} The increase of the percentage of zeros with the parameter $v$, visible in Fig.\ref{sq}, can be interpreted as an indication of a phase transition. To verify how its sharpness depends on the system size, we calculated the curve $\#0$ vs $v$ for a selected case, $p=0.7$ and $w=0.5$, and various system sizes $N^2$. The results are shown in Fig.\ref{pf}. Indeed, the sharpness increases with $N$, and the curve for $N=200$ is close to a step function.\\ \begin{figure}[!hptb] \begin{center} \includegraphics[width=.4\textwidth, angle=0]{fig14.pdf} \caption{Percentage of zeros as a function of $v$ for square lattices of different sizes $N\times N$, for $p=0.7$ and $w=0.5$ (average in the steady state over $100$ realizations).} \label{pf} \end{center} \end{figure} We also check how the removal of some roads changes the obtained results, to see whether the symmetry of the square lattice is essential. Fig.\ref{fig4} shows the results obtained when $10\%$ of randomly chosen road sections are removed. The removal is done separately for each realization. If, as a consequence of the removal, some part of the lattice is isolated, it is removed as well. The results, shown in Fig.\ref{fig4}, indicate that the phase transition found for the square lattice is observed also in the randomized lattice. The maximal number of zeros in this case is less than 90 percent, because the plot is normalized to the whole square lattice, including the removed links.
\begin{figure}[!hptb] \begin{center} \subfigure[$p=0.1$]{ \includegraphics[width=.5\textwidth, angle=0]{fig4.pdf} \label{fig:4A} } \subfigure[$p=0.7$]{ \includegraphics[width=.5\textwidth, angle=0]{fig6.pdf} \label{fig:4B} } \caption{Diagram for the square lattice of size $100\times100$ (average in the steady state over $100$ realizations), with removal of $10\%$ of randomly chosen road sections.} \label{fig4} \end{center} \end{figure} \subsection{Small city} The results obtained from the simulations of the traffic network of Rabka, formed by $374$ road sections, are presented in Fig.\ref{fig5}. The main difference between this network, as constructed from the map in Fig.\ref{map}, and the square lattice (with removals or not) is that the Rabka network is less connected. There, the road sections often form long chains. Comparing Fig.\ref{fig5} with Figs.\ref{sq} and \ref{fig4}, we see that the consequence of this difference is that the jammed state is less likely. The origin of this result is that jams are created behind jammed road sections; the more in-neighbors these sections have, the more jams appear. Apart from that, the obtained plots are similar to those for the square lattice.\\ \begin{figure}[!hptb] \begin{center} \subfigure[$p=0.1$]{ \includegraphics[width=.5\textwidth, angle=0]{fig7.pdf} \label{fig:5A} } \subfigure[$p=0.7$]{ \includegraphics[width=.5\textwidth, angle=0]{fig8.pdf} \label{fig:5B} } \caption{Diagram for the real network of size $N=374$.} \label{fig5} \end{center} \end{figure} \subsection{A simplified map} \begin{figure}[!hptb] \begin{center} \subfigure[$p=0.1$]{ \includegraphics[width=.5\textwidth, angle=0]{fig9.pdf} \label{fig:6A} } \subfigure[$p=0.7$]{ \includegraphics[width=.5\textwidth, angle=0]{fig10.pdf} \label{fig:6B} } \caption{Diagram for the real reduced network of size $N=18$.} \label{fig6} \end{center} \end{figure} Exact calculations of $P(0)$ can be performed only for systems much smaller than hundreds of road sections. For the sake of comparison of the methods, we simplified the map of Rabka, leaving only nine two-way roads. This leads to a system of $2^{18}$ states. The results of the simulations for this system are shown in Fig.\ref{fig6}. As the system is much simplified, the results for the full (Fig.\ref{fig5}) and reduced (Fig.\ref{fig6}) traffic networks differ substantially for $p=0.1$. Surprisingly, those for $p=0.7$ are quite similar. The same simplified network is solved exactly, via the solution of the $2^{18}=262144$ master equations \cite{vkamp} for the stationary state of the related Kripke structure, for different sets of the model parameters. In this exact method, the parameters $p,v,w$ enter the weights of the links between states, or - equivalently - the rates of the processes which drive the system from one state to another. For each case we can then calculate, in accordance with Eq.\ref{e1}, the mean stationary probability that the road sections are passable. The obtained results are presented in Fig.\ref{fig7}. The same figure shows the solution for the classes of states, as described in \cite{m1,m2}.
In this case, the class identification procedure allows for a reduction of the system size by a factor of about $2.5$, to $102400$ classes.\\ \begin{figure}[!hptb] \begin{center} \subfigure[$p=0.1$]{ \includegraphics[width=.5\textwidth, angle=0]{fig11.pdf} \label{fig:7A} } \subfigure[$p=0.7$]{ \includegraphics[width=.5\textwidth, angle=0]{fig12.pdf} \label{fig:7B} } \caption{Diagram for the real reduced network of size $N=18$, obtained from the analysis of the state space. Here \textit{cl} refers to the network of classes, and \textit{st} to the network of states.} \label{fig7} \end{center} \end{figure} \section{Discussion} The goal of this paper is to describe large traffic networks with a cellular automaton, where the states of road sections are reduced to two: passable and jammed. The coarse-grained character of the new automaton is close in spirit to the percolation effect. The results of our simulations allow us to identify a phase transition between two macroscopic phases, again passable and jammed. Additionally, the calculations are repeated for a much smaller traffic network, constructed by a strong simplification of the map of a small Polish city. These calculations are performed to compare the results with the exact solution for the stationary state, obtained by two equivalent methods. This comparison suggests that the agreement of the simulation with the exact solution is better for more jammed systems, i.e. closer to the phase transition. \\ \\ A drawback of our automaton is that information about the specific local conditions of traffic jams cannot be reproduced. The model captures merely the jam spreading. The parameters $w$ and $p$ depend on the external state and serve as input for the calculations. The parameter $v$ should be calibrated separately for each traffic system. After this calibration, the main result of the model - the probability of the jammed phase - should be reproducible and useful for controlling the traffic phases. The advantage of the model is its simplicity, which allows one to simulate larger traffic systems in real time. \\ {\bf Acknowledgement:} The research is partially supported within the FP7 project SOCIONICAL, No. 231288, by the Polish Ministry of Science and Higher Education and its grants for Scientific Research, and by PL-Grid Infrastructure.
1,116,691,498,735
arxiv
\section{Introduction}\label{sec:introduction} Structural segmentation of concert audio recordings is very useful in music retrieval tasks such as navigation and automatic summarization. It is particularly strongly indicated for Indian classical music, where concerts can extend for hours and commercial audio recordings are rarely annotated, while the performances indeed follow an established structure depending on the genre. \textit{Khayal} vocal music is the single most prominent genre in the Hindustani tradition of Indian classical music. A raga performance in \textit{khayal} has a structure comprised of a number of elements such as the free-form introduction (\textit{alap}), the composition (\textit{bandish}), metered improvisation (also \textit{alap}), rhythmic improvisation (\textit{layakari}) and improvisation involving fast sequences of notes (\textit{taan}) \cite{rao2014overview}. The concert ensemble is made up of the vocalist accompanied by the drone and percussion, and sometimes melodic accompaniment such as the harmonium or \textit{sarangi}. As such, there are no changes in timbral texture, due to the constancy of the instrumentation, and harmony is non-existent. The structural elements mentioned earlier occur to various extents in the performance and in different orders depending on the school (\textit{gharana}). Even to the uninitiated (but attentive) listener, the different concert sections appear clearly contrasting in one or the other of two important dimensions: rhythm and melodic style. Recently, tempo-derived features were used to achieve structural segmentation at the highest time scale on Hindustani instrumental concert audio \cite{PV}. In this work, our focus is on segmenting sections that are melodically salient, i.e. where the sequence of melodic phrases or notes is rendered in a characteristic melodic style known as the \textit{taan}. The notes may be articulated in various ways, including solfege and the syllables of the lyrics. Most common, however, is the \textit{akar} \textit{taan}, rendered using only the vowel /a/ (i.e. as melisma). The sequence of notes is relatively fast-paced and regular, produced as skillfully controlled pitch and energy modulations of the singer's voice similar to vibrato. But unlike the use of vibrato, which ornaments a single pitch position in Western music, the cascading notes of the \textit{taan} sketch elaborate melodic contours like ascents and descents over several semitones. The melodic structure is strictly within the \textit{raga} grammar, while the step-like regularity in timing brings a rhythmic element into the improvisation, in contrast to the (also improvised) \textit{alap} sections. Apart from showcasing the singer's musical skills, one or more \textit{taan} sections typically contribute to the climax of a raga performance and therefore serve as prominent markers musicologically. A broad overview of the methods available for structural segmentation is given in \cite{Paulus}. Since our task involves the detection and segmentation of a specific named section of the concert, we need to invoke both segmentation and supervised classification methods. Musically motivated features and methods are our chosen approach, given their potential for success with limited training data \cite{XSerra}. The challenges to \textit{taan} detection are the polyphonic setting, where we want to focus on the vocal signal, and the design of distinctive features that are artist and concert independent.
Given that pitch modulations are the prime characteristic of \textit{taan}, reliable pitch detection with sufficient time resolution is necessary. Finally, we need to convert the low-level analyses to an annotation that closely matches the musician's labeling of \textit{taan} episodes from a performer's point of view. Towards these goals, we use a vocal source separation algorithm based on predominant-F0 detection \cite{VRaoMelodyExtract}. Features designed to capture the characteristic rapid but regular pitch and energy variations of the voice are presented. A frame-level classification at 1 s granularity is followed by a grouping stage with the goal of emulating the subjective labeling of \textit{taan} by musicians as extended regions that occur at salient positions in the concert. Finally, we also wish to explore the interesting question of whether the hand-designed features can be replaced by learned features obtained via a CNN applied directly to the polyphonic audio spectra. There has been much recent research interest in automatic feature learning for a variety of audio tasks such as genre and artist classification \cite{AndrewCNN}, chord recognition \cite{ChordRecogBello}, onset detection \cite{ImprovedOnsetCNN}, and structural analysis \cite{Ullrich}. In the next section, we describe the characteristics of our audio database. This is followed by a discussion of the proposed melodic style features, and of the classification and segmentation methods. Finally, we present the experiments and evaluation measures, followed by a discussion of the results. \section{Database Description}\label{Database} Our audio database consists of 57 \textit{khayal} vocal concert recordings from commercial CDs, partitioned into two distinct sets of 22 single-artist (Pt. Jasraj) concerts and 35 multi-artist concerts (that do not feature Pt. Jasraj). In both cases a number of different ragas are covered at various tempi. All artists are male. The 22-concert set is treated as the test set, with two different training conditions: artist-specific training via leave-one-song-out cross-validation, and artist-independent training where the test concert artist is not represented at all in the training data of 35 concerts. In order to achieve realistic training times for the CNN classification in the 22-fold cross-validation, the 22 test concert audios were edited to remove an early section of each concert (where \textit{taan} typically does not occur). Eventually we have 3.5 hours of test audio, with the proportion of \textit{taan} in the overall vocal region at 35\%. The labeling of the concert sections was carried out by a musician using the PRAAT interface \cite{Praat}. Fig.~\ref{fig:PraatSpecImg} shows a fragment of labeled audio comprising portions of 3 sections spanning 2.5 minutes of a \textit{khayal} performance. We observe that the single continuous section labeled `\textit{akar taan}' (of duration 85 s) actually comprises a cluster of \textit{taan} segments separated by instrumental or other regular singing segments. A \textit{taan} segment is easily identified in the audio spectrogram, computed with 40 ms Hamming windowing, by the modulated harmonics in the region of the prominent formants (dark region above 800 Hz). Within the labeled \textit{taan} section, the individual \textit{taan} episodes can be as short as 5 s and be separated from each other by as much as 20 s. We observed that the musician labeled \textit{taan} based on the perceived intent of the performer, i.e.
relatively short durations of instrumentals and other vocal styles that occurred sandwiched between \textit{taan} episodes were subsumed by the \textit{taan} label (as in Fig.~\ref{fig:PraatSpecImg}). For the real-world use case, we would like our automatic system to match the musician's labeling of the \textit{taan} sections in the concert. \begin{figure}[t] \centerline{\framebox{ \includegraphics[width=\columnwidth,height=4cm]{TaanEpisodeAABibhas.png}}} \caption{Spectrogram of an episode of \textit{akar taan} flanked by other sections in a concert. The labeled \textit{taan} section shows rapid oscillatory movements of the vocal harmonics, interrupted by short non-\textit{taan} movement in between.} \label{fig:PraatSpecImg} \end{figure} \section{Feature extraction, classification and grouping}\label{sec:FeatExtrClasGrpng} Given our knowledge about \textit{taan} production and observations of the acoustic signal characteristics, it is clear that the presence of strong pitch modulation is among the distinctive traits of the \textit{taan} style of singing. The required audio pre-processing and feature extraction methods are presented in the following. \subsection{Vocal attributes extraction}\label{subsec:VocalAttExtr} The singing voice usually dominates over the other instruments in a vocal concert performance, in terms of its level and continuity over relatively large temporal extents, although the accompaniment of the \textit{tabla} and other pitched instruments such as the drone and harmonium is present. The singing voice regions, or vocal spurts, are identified in the audio track using an available singing voice detection system based on the timbral and periodicity characteristics of the singing voice as opposed to the instrumentation \cite{VRaoContextAware}. The SVM classifier is trained on a few hours of Hindustani vocal music (different from the database used in the present work). Next, a predominant-F0 detector is used to estimate the pitch corresponding to the vocal component at 10 ms intervals \cite{VRaoMelodyExtract}. The pitch detector uses an adaptive analysis window to optimize the time-frequency resolution trade-off in order to track rapid pitch variations. The total harmonic energy in the frequency region below 5 kHz, where the harmonics correspond to the detected pitch, provides an estimate of the vocal energy, also at 10 ms intervals. The purely instrumental regions, as determined by the singing voice detector, are not processed for feature extraction. \subsection{Pitch and energy modulation features}\label{subsec:Features} The melodic style descriptors are computed in the detected vocal regions only. The pitch values are first converted to a cents scale by normalizing with a standard F0 chosen to be 55 Hz. The sampled pitch trajectory within each 1 s analysis frame is mean-subtracted, where the mean refers to the slow trend in the melodic shape. The smooth mean trajectory is obtained by a third-order polynomial fit to the pitch samples in the frame \cite{ChitraSpringer}. The mean-subtracted trajectory is analyzed by a 128-point DFT of a sliding window of 1 s duration at 500 ms hop intervals to find the spectrum peak location and height in the region 1-20 Hz. The peak location is an estimate of the pitch modulation rate. In the case of \textit{taan}-like movements, it is observed to lie in the 5-10 Hz range irrespective of the underlying tempo of the section.
The energy computed from the DFT power spectrum in a neighborhood of $\pm$1.6 Hz (5 bins) around the peak represents the regularity and strength of the pitch modulation. It was also observed that the overall energy of the voice fluctuates with the pitch modulation. This could be a consequence of the physiology of production. Fig.~\ref{fig:PitchEnergyContour} shows the temporal trajectories of the extracted pitch and energy across a region partly comprised of \textit{taan}, where we clearly observe the pitch modulation and the rapid energy fluctuations. There is no apparent correlation between the instantaneous values of pitch and vocal energy. In order to capture the energy fluctuation cue, we use the zero-crossing rate measured from the mean-removed energy contour over a 1 s window duration at a 500 ms hop. \begin{figure}[t] \centerline{\framebox{ \includegraphics[width=\columnwidth]{pitchenergyKMShree1.png}}} \caption{Pitch contour in cents (top) and energy in dB (bottom) of a section of concert audio. Non-\textit{taan} movements are seen in the first half and \textit{taan} after.} \label{fig:PitchEnergyContour} \end{figure} Next, a local averaging of the features is carried out over 5 s windows to obtain smoothed feature trajectories sampled at a 1 s frame rate. The feature values are normalized to zero mean and unit variance across the concert. We thus obtain a 3-dimensional normalized feature vector at a 1 s frame rate in the vocal segments of the audio, which can be used to classify frames into \textit{taan} and non-\textit{taan} categories.
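The feature computation described above can be summarized by the following sketch, which assumes voiced-frame pitch (in cents) and energy tracks sampled at 100 Hz (10 ms hop); the window and hop sizes, the 128-point DFT, the 1-20 Hz search band and the $\pm$2-bin peak neighborhood follow the text, while the function and variable names are illustrative.

\begin{verbatim}
# Sketch of the three melodic-style features per 1 s window, 500 ms hop.
import numpy as np

FS = 100                              # pitch/energy samples per second

def modulation_features(pitch_cents, energy_db):
    feats = []
    win, hop = FS, FS // 2            # 1 s window, 500 ms hop
    freqs = np.arange(128) * FS / 128.0
    band = np.flatnonzero((freqs >= 1.0) & (freqs <= 20.0))
    for start in range(0, len(pitch_cents) - win + 1, hop):
        seg = pitch_cents[start:start + win]
        t = np.arange(win)
        # remove the slow melodic trend with a 3rd-order polynomial fit
        detr = seg - np.polyval(np.polyfit(t, seg, 3), t)
        spec = np.abs(np.fft.fft(detr, 128)) ** 2
        k = band[np.argmax(spec[band])]
        rate = freqs[k]                             # pitch-modulation rate
        strength = spec[max(k - 2, 0):k + 3].sum()  # +/- 1.6 Hz (5 bins)
        e = energy_db[start:start + win]
        e = e - e.mean()
        zcr = np.count_nonzero(np.diff(np.signbit(e)))  # energy fluctuation
        feats.append((rate, strength, zcr))
    return np.asarray(feats)
\end{verbatim}

The 5 s local averaging and the per-concert normalization to zero mean and unit variance are then applied to these raw trajectories, as described above.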
\subsection{Classification and grouping using posteriors}\label{ClassfcntGrpng} A frame-wise classification into \textit{taan} and non-\textit{taan} styles is carried out for all frames in the vocal segments by a trained MLP network. We use a feed-forward architecture with the sigmoid activation function for the hidden layer comprising 300 neurons. Training uses cross-entropy error minimization via the error back-propagation algorithm. Upon classification, the recall and precision of \textit{taan} frame detections with respect to the ground truth can serve to measure the discriminative power of the features. In our use case, however, we seek to label continuous regions of the audio rendered in \textit{taan} style, much as a human annotator would. This requires the grouping of frames based on homogeneity with respect to the \textit{taan} characteristics. Novelty detection based on a self-distance matrix (SDM) is an effective way to find segment boundaries \cite{Paulus}. We use a recently proposed approach of computing the SDM from the posterior probabilities derived from the features, rather than from the features themselves \cite{PV}. The use of the Euclidean distance between vectors comprised of posterior probabilities is found to provide an SDM with enhanced homogeneity, due to the reduced sensitivity to irrelevant local variations. The posteriors are the class posterior probabilities obtained from the MLP classifier for each test input frame. Points of high contrast in the SDM are detected by convolution along the diagonal with a checkerboard kernel whose dimensions depend upon the desired time scale of segmentation. Considering the minimum \textit{taan} episode duration, this is chosen to be 5 s in the interest of obtaining reliable boundaries with minimal missed detections. The resulting novelty function is searched for peaks, representing segment boundaries, using `local peak local neighborhood' thresholding \cite{turnbulsupervised}. Whether a region between two detected boundaries corresponds to a \textit{taan} is determined by a majority vote over the frame-level classifications in that region. Finally, the highest level of grouping is obtained by examining the region of audio separating every two detected \textit{taan} segments. A simple heuristic is set up to mimic the musician's annotation, whereby \textit{taan} episodes separated by non-\textit{taan} vocal activity of duration within 20 s are merged into a single section. The merging is also applied if the separation corresponds to a purely instrumental region of duration within 50 s.
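The following sketch illustrates the posterior-based novelty computation: a Euclidean SDM over the frame posteriors is correlated along its diagonal with a checkerboard kernel of 5 s half-width; the Gaussian taper on the kernel is an assumption of this sketch.

\begin{verbatim}
# Sketch: novelty function from frame posteriors via SDM and a
# checkerboard kernel slid along the diagonal.
import numpy as np

def novelty_from_posteriors(post, half_width=5):
    """post: (n_frames, 2) array of class posteriors at 1 s frame rate."""
    n = len(post)
    sdm = np.linalg.norm(post[:, None, :] - post[None, :, :], axis=-1)
    w = half_width
    idx = np.arange(2 * w) - w + 0.5        # symmetric offsets about centre
    sign = np.sign(np.outer(idx, idx))      # +1 same-side, -1 across
    taper = np.exp(-idx ** 2 / (2.0 * w * w))
    kernel = sign * np.outer(taper, taper)
    nov = np.zeros(n)
    for t in range(w, n - w):               # correlate along the diagonal
        nov[t] = np.sum(kernel * sdm[t - w:t + w, t - w:t + w])
    return nov
\end{verbatim}

Peaks of the returned novelty curve are then picked with the local-neighborhood criterion cited above to yield the candidate segment boundaries.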
\section{Classification with CNN}\label{sec:ClassfcntCNN} Convolutional neural networks (CNNs) are a special case of feed-forward neural networks where connections between neurons are restricted to local regions and connection weights are shared. This greatly reduces the model complexity compared to fully connected networks, allowing them to deal with high-dimensional inputs such as images or spectrogram excerpts. A CNN consists of convolutional layers, pooling layers and fully connected layers. A convolutional layer computes a convolution of the previous layer outputs with fixed-size filter kernels of learnable weights, followed by a non-linear activation function. A convolutional layer consists of multiple such filter kernels, producing an output map for each kernel. Convolutional layers are optionally followed by pooling layers, which spatially downsample the outputs of the previous layer. The final convolutional or pooling layer of the CNN is typically followed by one or more fully connected layers, which reshape the output maps into feature vectors that are finally fed to the output layer. \subsection{CNN Inputs} We use excerpts of the spectrograms of our audio files as the input to the CNN. For each of our audio files, sampled at 8 kHz, we compute the log magnitude spectra using a 1024-point DFT on 40 ms Hamming windowed data segments at 20 ms intervals. We believe that the \textit{taan} section can be sufficiently characterized by the temporal variations of the first 2 to 3 vocal harmonics, which lie within the frequency range of 0-1.5 kHz. Thus, in order to keep the input feature dimensions reasonable, we retain only the first 94 frequency bins of the spectrogram, corresponding to the frequency range of 0-1469 Hz. We then divide the spectrogram into temporal chunks of 1 s corresponding to our frame size (similar to that used in the ground-truth labeling as well as in the hand-crafted feature computation). Thus the inputs to the CNN are 94x50 dimensional matrices. By matching the spectrogram resolution and dimensions to our task, we eliminate the need for multiple channel inputs, as has been the case in a previous audio task \cite{ImprovedOnsetCNN}. To bring the input values within a suitable range, we normalize each frequency band to zero mean and unit variance, using the mean and standard deviation values estimated from the training data \cite{ImprovedOnsetCNN}. \subsection{CNN Architecture} The convolutional neural network used in this work has an architecture similar to that described in \cite{schluterOnsetCNN}; the main difference is that the input spectrogram excerpts in our case use a single time resolution, as opposed to multiple input channels with different time resolutions in \cite{schluterOnsetCNN}. Our network architecture is summarized in Fig.~\ref{fig:CNNArchtctr}. The CNN has five layers in total: two convolutional layers, two pooling layers and a fully connected layer. The first layer of the network is a convolutional layer consisting of 10 7x7 filter kernels, producing 10 output maps of size 88x44 each. This is followed by an average pooling layer which retains the average value of non-overlapping 2x2 cells. This is followed by another convolutional layer of 10 3x3 filters and another 2x2 average pooling layer, giving 10 output maps of size 21x10. These outputs are then reshaped into a 2100-dimensional feature vector and fully connected to a layer of 300 sigmoidal units. The outputs of these 300 units are finally given to a softmax output layer consisting of 2 units corresponding to the two classes being considered. \begin{figure}[t] \centerline{\framebox{ \includegraphics[width=\columnwidth,height=4cm]{CNNNetworkArchitecture.png}}} \caption{The CNN architecture employed} \label{fig:CNNArchtctr} \end{figure}
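For concreteness, a sketch of this architecture in PyTorch is given below. The text does not specify the activation used after the convolutional layers; the sigmoid is assumed here, matching the fully connected layer, and the softmax is folded into the cross-entropy training loss in the usual way.

\begin{verbatim}
# Sketch of the five-layer architecture of Sec. 4.2 (shapes as stated).
import torch.nn as nn

class TaanCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 10, kernel_size=7),   # 1x94x50 -> 10x88x44
            nn.Sigmoid(),                      # conv activation: assumed
            nn.AvgPool2d(2),                   # -> 10x44x22
            nn.Conv2d(10, 10, kernel_size=3),  # -> 10x42x20
            nn.Sigmoid(),
            nn.AvgPool2d(2),                   # -> 10x21x10
            nn.Flatten(),                      # -> 2100-dim feature vector
            nn.Linear(2100, 300),              # fully connected layer
            nn.Sigmoid(),
            nn.Linear(300, 2),                 # softmax applied via the loss
        )

    def forward(self, x):                      # x: (batch, 1, 94, 50)
        return self.net(x)

model = TaanCNN()
loss_fn = nn.CrossEntropyLoss()                # applies log-softmax internally
\end{verbatim}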
\subsection{Training the CNN} The CNN training is carried out in two stages. The CNN is first trained without the fully connected layer, with the outputs of the second pooling layer directly connected to the softmax layer. The outputs of the trained CNN at the second pooling layer are then concatenated into a feature vector for each frame of the training data. These feature vectors are then treated as the training data for a multilayer perceptron network with a single hidden layer of 300 units and a softmax output layer of two units. Finally, the trained CNN and MLP together form the CNN with the fully connected layer. The CNN and MLP are trained using the error back-propagation algorithm, minimizing the cross-entropy error between the softmax outputs and the labels for each 1 s input frame of the training data. Training is carried out for a fixed 900 epochs over the training set of 35 concerts described in section \ref{Database}. An initial learning rate of 0.1 is halved after every 150 epochs. \section{Experiments and Evaluation} Our ideal system would detect and segment \textit{taan} sections similar to a musician's labeling. This high-level task is attempted by the sequence of frame-level automatic classification and higher-level grouping described in section \ref{sec:FeatExtrClasGrpng}. In this section, we present experimental results on the performance of each of the components. Frame-level classification is measured by the detection of \textit{taan} in terms of recall and precision. Artist-dependent and artist-independent training are compared for the classifier based on the hand-crafted features. The same evaluations are carried out with the CNN classifier, where the ``features'' are purely learned during training. The frame-level classification needs frame-level (i.e. 1 s resolution) annotation of \textit{taan} presence or absence. This is required both for the training of the classifiers and for reliable testing. The musician labels are not useful as such for this end, due to the presence of non-\textit{taan} interruptions of significant duration within the musician-labeled \textit{taan} sections, as seen in Fig.~\ref{fig:PraatSpecImg}. Thus, for the development of the frame-level classifier, we need a more fine-grained marking of \textit{taan} segments. Since this is a demanding task to carry out manually, we use a bootstrapped iterative approach where, for each concert audio, a 2-mixture GMM on the melodic style feature vector is fitted to a small amount of hand-labeled data and updated with classified frames across the audio track in each iteration until convergence is achieved \cite{nguyen}. Casual inspection showed that the frame-level labels so obtained were indeed accurate, and these were then used to train and evaluate the frame-level classifiers. The system is also evaluated after grouping, this time in terms of the match between the detected segments and the subjectively labeled \textit{taan} segments for each concert. Measures of performance include the number of correctly retrieved \textit{taan} segments and the number of false alarms. A section is said to be correctly retrieved if there is an overlap of at least 50\% of its duration with a detected segment. Also of interest is the extent of over- or under-segmentation of the correctly detected \textit{taan} sections. Fig.~\ref{fig:EvalMetric} illustrates the different possibilities of mismatch that are observed between the subjective labels and the automatically labeled sections. When a subjectively labeled section is correctly detected, it is observed that the onset and offset boundaries are always within 5 s of the corresponding ground-truth boundaries, indicating the reliability of the posterior-based segmentation. \begin{figure}[t] \centerline{\framebox{ \includegraphics[width=\columnwidth]{GroupingStage2Scenarios1.png}}} \caption{Various scenarios that occur after grouping, viz. (a) false alarm, (b) over-segmentation, (c) exact detection, (d) missed detection, (e) under-segmentation} \label{fig:EvalMetric} \end{figure} \section{Results and Discussion}\label{RnD} As mentioned in section \ref{Database}, our experimental evaluation of the two different frame-level classifier systems is based on (i) a single-artist dataset of 22 concerts trained and tested in leave-one-concert-out cross-validation mode, and (ii) testing on the same 22 concerts but with training on a large dataset in which the given artist is not represented.
We thus have a system that does indeed accurately flag the occurrence of \textit{taan} sections across concerts. Finally of the 106 correct detections, the majority are correctly segmented. Over- and under-segmentations account for a third of the detections. These can possibly be corrected by modifying the heuristics of the highest level of grouping (bridging over gaps) discussed in section \ref{sec:FeatExtrClasGrpng}. Deriving empirical rules regarding high-level segmentation by musicians would ideally require a study over a larger database with more human annotators per concert. \begin{table}[t] \begin{center} \begin{tabular}{|c|l|l|} \hline {\begin{tabular}[c]{@{}c@{}}True Detection\\ (106)\end{tabular}} & Under-Segmentation & 32 \\ \cline{2-3} & Over-Segmentation & 3 \\ \cline{2-3} & Exact Detection & 71 \\ \hline \multicolumn{2}{|l|}{Missed} & 9 \\ \hline \multicolumn{2}{|l|}{False Alarm} & 2 \\ \hline \end{tabular} \end{center} \caption{Segmentation performance after grouping} \label{tab:EvaluationST2} \end{table} \section{Some Insights} While we noted in the previous section that the hand-crafted features perform better than the CNN learned features, it is interesting to look deeper at the distribution of frame-level errors shown in \ref{tab:McNemar}. We note that while the CNN features misclassify more frames in total, there are also a sizeable number of frames that are misclassified by the hand-crafted features but correctly classified by the CNN. This indicates the presence of complementary information and that a combination of classifiers is very likely to yield a performance superior to any one of the systems. \begin{table}[t] \begin{center} \begin{tabular}{llll} & & \multicolumn{2}{c}{CNN} \\ \cline{3-4} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{Correct} & \multicolumn{1}{l|}{Incorrect} \\ \cline{2-4} \multicolumn{1}{l|}{{\begin{tabular}[c]{@{}l@{}}Hand-crafted \\ features\end{tabular}}} & \multicolumn{1}{l|}{Correct} & \multicolumn{1}{l|}{4998} & \multicolumn{1}{l|}{762} \\ \cline{2-4} \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{Incorrect} & \multicolumn{1}{l|}{272} & \multicolumn{1}{l|}{296} \\ \cline{2-4} \end{tabular} \end{center} \caption{Distribution of classification errors} \label{tab:McNemar} \end{table} The hand-crafted features were designed to capture the temporal modulation of the pitch and energy trajectories after suitable normalization steps. This information is, of course, implicitly encoded in the spectrogram via the first several strong harmonics of the vocal source. Our choice of spectrogram parameters at the input of the CNN makes the same information, at least in spatial image form, available to the convolutional layers. We select a few examples to obtain an understanding of the encoding of \textit{taan} and non-\textit{taan} distinctions by the CNN features. In order to study the learned features, we note that the outputs of the second pooling layer finally get concatenated to form the feature vector for classification. Since the second pooling layer is the last layer where the outputs show spatial correspondences with the input spectrogram image, observing the outputs of the second pooling layer could give us insight into what the CNN encodes in each image. \ref{fig:CNNInputOutput} shows the input spectrogram patches for four different frame categories (based on classification achieved by each of the two systems) and the corresponding outputs at the $9^{th}$ channel of the second pooling layer. 
The $9^{th}$ channel was one of the channels with larger connection weights to the fully connected layer, implying that its outputs carried more weight in the classification than those of the other channels. \begin{figure}[t] \centerline{\framebox{ \includegraphics[width=\columnwidth]{FirstfilterLayer2Modified.png}}} \caption{Input spectrograms and $9^{th}$ channel output maps for the $2^{nd}$ pooling layer. (a) Correctly classified as \textit{taan} by both CNN and hand-crafted features (b) Incorrectly classified as non-\textit{taan} by CNN and correctly classified as \textit{taan} by hand-crafted features (c) Correctly classified as non-\textit{taan} by both CNN and hand-crafted features (d) Incorrectly classified as \textit{taan} by CNN but correctly classified as non-\textit{taan} by hand-crafted features} \label{fig:CNNInputOutput} \end{figure} From Figure \ref{fig:CNNInputOutput} we observe that the outputs of the second pooling layer are rather sparse with respect to the input spectrograms, indicating that the high-level features learned by the CNN may be discarding the less relevant parts of the input spectrogram. Here the retained structure seems to correspond to higher energy portions of the spectrogram, such as the vocal harmonics and occasionally other instrumental harmonics and percussion strokes. Figures \ref{fig:CNNInputOutput}(b) and (c) show frames that were classified as non-\textit{taan} by the CNN. The frame in Figure \ref{fig:CNNInputOutput}(c) actually corresponds to a non-\textit{taan} frame characterised by its non-oscillating, almost constant vocal harmonics, which did get captured as horizontal lines. The frame in Figure \ref{fig:CNNInputOutput}(b), however, was actually a \textit{taan} frame, as seen from the oscillating vocal harmonics. However, the oscillations in the first harmonic at about 600 Hz were not prominent enough and got captured as a virtually horizontal structure, leading to the misclassification. Figures \ref{fig:CNNInputOutput}(a) and (d) show frames that were classified as \textit{taan} by the CNN. Figure \ref{fig:CNNInputOutput}(a) was indeed a \textit{taan} frame. The rapid oscillations in its vocal harmonics appear as a scattered pattern in the output map. Figure \ref{fig:CNNInputOutput}(d) represents a common CNN misclassification. This non-\textit{taan} frame has time-varying harmonics, but the time variation is not the regular pitch modulation characteristic of \textit{taan}. The output map shows a breakdown of the harmonic structure, indicating that the precise nature of the time variation is not learned by the CNN features. Rather, the CNN appears to characterize non-\textit{taan} frames, which are marked by the presence of stable or at most slowly varying vocal harmonics, with near-horizontal lines in the output maps, and to label all inputs that do not match these stable characteristics as \textit{taan}. Finally, we also examined cases where the CNN features correctly classified \textit{taan} frames that were missed by the hand-crafted features. These frames had spectrogram images that clearly showed the oscillating harmonics. However, it turned out that pitch tracking errors in these frames led to the loss of this information in the hand-crafted features. This raises the important point that learning from raw audio spectra via the CNN could reduce vulnerability to errors in fixed high-level feature extraction modules such as predominant pitch detection.
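This kind of layer-wise inspection is straightforward to reproduce. The sketch below shows one way to read out the second pooling layer with a forward hook, assuming a PyTorch-style implementation; the stand-in architecture, module indices, and channel count are illustrative and not our actual network.

\begin{verbatim}
import torch
import torch.nn as nn

# Stand-in two-block CNN; the actual architecture differs.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),  # the "second pooling layer" inspected above
)

activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[5].register_forward_hook(save_output("pool2"))

# One spectrogram patch: (batch, channel, frequency bins, frames).
patch = torch.randn(1, 1, 128, 64)
model(patch)

# Output map of one channel of the second pooling layer.
channel_9 = activations["pool2"][0, 9]
print(channel_9.shape)  # torch.Size([32, 16])
\end{verbatim}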
\section{Conclusion} We proposed a system for the segmentation and labeling of a prominent named structural component of the Hindustani vocal concert. The \textit{taan} section is characterized by a melodic style marked by rapid pitch and energy modulation of the singing voice. High-level features that capture this specific modulation from the pitch tracks extracted from the polyphonic audio, combined with novelty-based grouping of frame posteriors, provided high-accuracy \textit{taan} segmentation on our test dataset of concerts. We also investigated the possibility of automatically learning distinctive features for this task, using a CNN applied to raw magnitude spectra computed from the polyphonic audio signal. Although we approached this comparison with a healthy dose of skepticism, the CNN did indeed perform the frame-level classification far better than chance. An inspection of the outputs of the second pooling layer reflected a systematic difference between \textit{taan} and non-\textit{taan} frames. Although non-\textit{taan} frames in which the harmonics varied over time were misclassified as \textit{taan} frames, it is entirely possible that training on a larger dataset with more such instances, as well as using a network with more layers, could improve performance. Finally, the complementary errors of the two classifier systems suggest that classifier combination could bring further improvements in performance. The more general conclusion is that learned features can indeed add value to hand-crafted features in audio retrieval tasks. \section{Acknowledgement} This work received partial funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement 267583 (CompMusic). \bibliographystyle{ieee}
\section*{Acknowledgments} This work was supported in part by the National Science Foundation under Grants No. 10874142 and 90921010, by the National Fundamental Research Program of China through Grant No. 2010CB92304, and by the Fundamental Research Funds for the Central Universities under Grant No. SWJTU09CX078.
\section{Introduction} \input{sections/introduction.tex} \section{Background} \input{sections/background.tex} \section{Methods} \input{sections/methods.tex} \section{Experiments} \input{sections/experiments.tex} \section{Conclusion} \input{sections/discussion.tex} \input{sections/acknowledgments.tex} \section*{Acknowledgments} This work was supported by the Big Data for Genomics and Neuroscience Training Grant 8T32LM012419, NSF TRIPODS Award CCF-1740551, the program ``Learning in Machines and Brains'' of CIFAR, and faculty research awards. \section{Slowly varying functions} \begin{definition}[Slowly varying at infinity / zero] A positive function $L$ is said to be slowly varying at infinity if for any $u > 0$, $$ L(ux) \sim L(x) \ \ \text{ as } x\to\infty.$$ A function $L$ is slowly varying at zero if $L(1/u)$ is slowly varying at infinity. \end{definition} For example, $L(x) = \log x$ is slowly varying at infinity, since $\log(ux)/\log(x) \to 1$ as $x \to \infty$ for any fixed $u > 0$. Slowly varying functions serve an important purpose in the study of long memory processes: they significantly expand the class of autocovariance (slowly varying at infinity) or spectral density (slowly varying at zero) functions that can be described, without changing the relevant asymptotic behavior. In particular, if we define $$ R(u) = u^\rho L(u),$$ with $\rho \in \mathbb{R}$, then $R(au)/R(u) \to a^\rho$ as $u \to \infty$, so that the slowly varying function can be ignored in the limit. Properties of slowly varying functions are also used to show equivalence between various notions of long memory, or to establish conditions under which such notions are equivalent. For example, the following proposition relates the time-domain definition of long memory to the summability of the autocovariance function: \begin{proposition}[Prop. 2.2.1, \cite{pipiras}] Let $L$ be slowly varying at infinity and $p>-1$. Then $$ \gamma_k = L(k) k^p, \ \ k\geq 1$$ implies that $$ \sum_{k=1}^n \gamma_k \sim \frac{L(n)n^{p+1}}{p+1}, \ \ \text{ as } n\to\infty.$$ \end{proposition} The equivalence between the time and spectral domain definitions of long memory can be established under the condition that the slowly varying part is quasi-monotone. \begin{definition}[Quasi-monotone] A slowly varying function $L$ on $[0,\infty)$ is quasi-monotone if it is of bounded variation on any compact interval and if there exists some $\delta >0$ such that $$ \int_0^x u^\delta \vert dL(u) \vert = O(x^\delta L(x)), \ \ \text{ as } x\to\infty.$$ \end{definition} For a proof we refer to \cite{pipiras}, Section 2.2.4. A thorough treatment of slowly varying functions in the context of long memory is available in \cite{pipiras}, Sections 2.1--2.2, and \cite{beran}, Section 1.3. \section{Short memory of common time series models} We first note that in order to show that the class of processes described by a given time series model has short memory, it is sufficient to show \begin{align} \sum_{k=-\infty}^\infty \vert \gamma_X(k) \vert < \infty \label{crit} \end{align} for each process $X_t$ belonging to the parametric family. The property $\sum_k \vert \gamma_X(k) \vert = \infty$ is implied by both the frequency and time domain definitions of long memory for a scalar process, which are themselves equivalent under the condition that the slowly varying part of the spectral density near zero is quasi-monotone. Therefore, establishing \eqref{crit} for a given class of models implies that they do not satisfy the definition of a long memory process.
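This criterion is also easy to probe numerically in the finite-state Markov setting of the first proposition below. The Python sketch that follows computes the exact autocovariance $\gamma_Y(k) = \sum_{i,j} g(i)g(j)\pi_j(p^{(k)}_{ij} - \pi_i)$ for an arbitrary illustrative chain and $g$, and shows its geometric decay; the transition matrix and $g$ are our own toy choices.

\begin{verbatim}
import numpy as np

# An arbitrary irreducible, aperiodic 3-state transition matrix.
# Columns sum to one, matching the convention P[i, j] = p(X_{t+1}=i | X_t=j).
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
g = np.array([1.0, -0.5, 2.0])  # arbitrary g: state space -> reals

# Stationary distribution: the eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

def gamma(k):
    """gamma_Y(k) = sum_{i,j} g(i) g(j) pi_j (p_ij^(k) - pi_i)."""
    Pk = np.linalg.matrix_power(P, k)
    return g @ ((Pk - pi[:, None]) * pi[None, :]) @ g

acov = np.array([gamma(k) for k in range(1, 30)])
# Geometric decay: successive ratios approach a constant below one.
print(np.abs(acov[1:] / acov[:-1]))
print("absolute sum (truncated):", np.abs(acov).sum())
\end{verbatim}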
\begin{proposition} Let $X_t$ be an irreducible and aperiodic Markov chain on a finite state space $\mathcal{X}$ such that its corresponding transition matrix $P$ has distinct eigenvalues. Let $g: \mathcal{X} \to \mathbb{R}$, and define $Y_t = g(X_t)$. Then $Y_t$ is a short memory process. \end{proposition} \begin{proof} Computation of the autocovariance for a finite-state Markov model is classical, but we include it here for completeness. Let $X_t$ be an irreducible and aperiodic Markov chain on the finite space $\mathcal{X} = \{1,...,m\}$, and suppose that the transition matrix $P$ (where $P_{ij} = p(X_{t+1} = i | X_t = j)$) has distinct eigenvalues. Then $X_t$ has a unique stationary distribution, and we denote its elements $p(X_t = i) = \pi_i$. Let $X_0$ have the distribution $\pi$, and define $Y_t = g(X_t)$ for $t \geq 0$ and some $g: \mathcal{X} \to \mathbb{R}$. Note that $Y_t$ is stationary since $X_t$ is stationary. We will show that the scalar process $Y_t$ has short memory. Write the autocovariance \begin{align*} \gamma_Y(k) &= \mathbb{E} Y_k Y_0 - [\mathbb{E} Y_0 ]^2 \\ &= \sum_{i=1}^m \sum_{j=1}^m g(i) g(j) p_{ij}^{(k)}\pi_j - \sum_{i=1}^m \sum_{j=1}^m g(i) g(j) \pi_i \pi_j \\ &= \sum_{i=1}^m \sum_{j=1}^m g(i) g(j) \pi_j \left( p_{ij}^{(k)} - \pi_i \right), \end{align*} where $p_{ij}^{(k)} = p(X_{t+k} = i \vert X_t = j)$. Since $P$ has distinct eigenvalues, it is similar to a diagonal matrix $\Lambda$: $$ P = Q\Lambda Q^{-1},$$ so that $$ P^k = Q\Lambda^k Q^{-1} = \sum_{i=1}^m \lambda_i^k q_i \tilde{q}_i',$$ where $q_i$ denotes the $i^{th}$ column of $Q$, $\tilde{q}_i'$ denotes the $i^{th}$ row of $Q^{-1}$, and $x'$ denotes the transpose of $x$. Furthermore, from the existence of the unique stationary distribution $\pi = (\pi_1 ,..., \pi_m)'$ we have that $$ P \pi = \pi,$$ so that $\lambda_1 = 1$, and since $P$ is a stochastic matrix, the corresponding left eigenvector satisfies $$ \begin{bmatrix} 1 & ... & 1 \end{bmatrix} P = \begin{bmatrix} 1 & ... & 1 \end{bmatrix} .$$ Thus $$ \lambda_1 q_1 \tilde{q}_1' = \pi \begin{bmatrix} 1 & ... & 1 \end{bmatrix} = \begin{bmatrix} \pi_1 & ... & \pi_1 \\ \vdots & \ddots & \vdots \\ \pi_m & ... & \pi_m \end{bmatrix} = \Pi.$$ Then we can write $$ P^k = \Pi + \sum_{i=2}^m \lambda_i^k q_i \tilde{q}_i',$$ and since the chain is irreducible and aperiodic, $| \lambda_i | < 1$ for $i = 2,...,m$. Therefore, $$ | p_{ij}^{(k)} - \pi_i | < C_1 s^k$$ for some $s \in (0,1)$, which from the expression above implies that $$ |\gamma_Y(k)| < C_2 s^k.$$ The absolute convergence of the autocovariance series then follows by comparison with the dominating geometric series. \end{proof} Furthermore, as we next show, neither extending the Markov chain to higher (finite) order nor taking (finite) mixtures of Markov chains is sufficient to obtain a long memory process. We provide a novel proof that the mixture transition distribution (MTD) model \citep{raftery} for high-order Markov chains defines a short memory process under conditions similar to those of the proof above. \begin{proposition} Let $X_t$ be an order-$p$ Markov chain whose transition tensor is parameterized by the MTD model \begin{align} p(X_t = i | X_{t-1} = j_1, ..., X_{t-p} = j_p) = \sum_{\ell = 1}^p \lambda_\ell Q^{(\ell)}_{ij_\ell}, \label{mtd} \end{align} where each $Q^{(\ell)}$ is a column-stochastic matrix, $\lambda_\ell >0$ for each $\ell = 1,...,p$, and $\sum_\ell \lambda_\ell = 1$. Suppose that the state space $\mathcal{X}$ is finite with $|\mathcal{X}| = m$, and we define $Y_t = g(X_t)$ for some $g: \mathcal{X} \to \mathbb{R}$.
Then $Y_t$ is a short memory process. \end{proposition} \begin{proof} In order to write the autocovariance sequence of an MTD process, we must first establish its stationary distribution. Let $Q \in \mathbb{R}^{m^p \times m^p}$ denote the multivariate Markov transition matrix, which has entries \begin{align*} q_{s_{0:p-1}, s'_{1:p}} = p(X_{t} &= s_{0}, ..., X_{t-p+1} = s_{p-1} | X_{t-1} = s'_{1}, ..., X_{t-p} = s'_{p}) \\ &= \begin{cases} \sum_{\ell=1}^p \lambda_\ell Q^{(\ell)}_{s_{0}s'_{\ell}} & \text{ if } s_t = s'_t \text{ for } t= 1,...,p-1 \\ 0 & \text{ otherwise.} \end{cases} \end{align*} We make the following assumptions on $Q$: \begin{itemize} \item $Q$ has distinct eigenvalues \item Each $Q^{(\ell)}$ has strictly positive elements on the diagonal \end{itemize} Each state of $Q$ is reachable from all others, so $Q$ is irreducible. The second assumption above shows that the states corresponding to the $m$ nonzero diagonal elements of $Q$ are aperiodic, and thus $Q$ is aperiodic. The transition matrix $Q$ therefore specifies an ergodic Markov chain and hence has a unique stationary distribution $\xi \in \mathbb{R}^{m^p}$, whose univariate marginal we denote by $\pi \in \mathbb{R}^m$. Let $(X_{-p},...,X_{-1})$ have the distribution $\xi$, and define $X_t$ according to \eqref{mtd} for $t = 0,1,2,...$. Then both $X_t$ and $Y_t = g(X_t)$ are stationary. The autocovariance $\gamma_Y(k)$ can be written as $$ \gamma_Y(k) = \sum_{i=1}^m \sum_{j=1}^m g(i) g(j) \pi_j \left( p_{ij}^{(k)} - \pi_i \right),$$ where $p_{ij}^{(k)} = p(X_{t+k} = i \vert X_t = j)$. Observe that the transition probability $p_{ij}^{(k)}$ can be obtained from the $k$-step multivariate transition matrix $Q^k$ via \begin{align*} p_{ij}^{(k)} &= p(X_{t+k} = i | X_t = j) \\ &= \sum_{s_{1:p-1}} p(X_{t+k} = i, X_{t+k-1:t+k-p+1} = s_{1:p-1} | X_{t} = j) \\ &= \sum_{s_{1:p-1}} \sum_{s'_{1:p-1}} p(X_{t+k} = i, X_{t+k-1:t+k-p+1} = s_{1:p-1} | X_{t} = j, X_{t-1:t-p+1} = s'_{1:p-1}) \, p(X_{t-1:t-p+1} = s'_{1:p-1} \,|\, X_t = j) \\ &= \sum_{s_{1:p-1}} \sum_{s'_{1:p-1}} q_{is_{1:p-1}, js'_{1:p-1}}^{(k)} \, p(X_{t-1} = s'_1, ..., X_{t-p+1} = s'_{p-1} \,|\, X_t = j). \end{align*} We note that the summation over $s_{1:p-1}$ is precisely the marginalization required to obtain $\pi_i$ from $\xi$. Therefore, we can write \begin{align*} |p_{ij}^{(k)} - \pi_i | &= \left\vert \sum_{s_{1:p-1}} \left( \left[ \sum_{s'_{1:p-1}} q_{is_{1:p-1}, js'_{1:p-1}}^{(k)} \, p(X_{t-1} = s'_1, ..., X_{t-p+1} = s'_{p-1} \,|\, X_t = j) \right] - \xi_{is_{1:p-1}}\right) \right\vert \\ &\leq \sum_{s_{1:p-1}} \left\vert \left[ \sum_{s'_{1:p-1}} q_{is_{1:p-1}, js'_{1:p-1}}^{(k)} \, p(X_{t-1} = s'_1, ..., X_{t-p+1} = s'_{p-1} \,|\, X_t = j) \right] - \xi_{is_{1:p-1}} \right\vert. \end{align*} However, each element of $Q^k$ satisfies $$ |q^{(k)}_{s_{0:p-1}, s'_{1:p}} - \xi_{s_{0:p-1}} | < Cs^k$$ for some $s \in (0,1)$, by an argument analogous to the Markov chain case. This implies $$ \left\vert \sum_{s'_{1:p-1}} q_{is_{1:p-1}, js'_{1:p-1}}^{(k)} \, p(X_{t-1} = s'_1, ..., X_{t-p+1} = s'_{p-1} \,|\, X_t = j) - \xi_{is_{1:p-1}} \right\vert < Cs^k,$$ since the sum is a convex combination of elements obeying the same bound.
Therefore, we have \begin{align*} |p_{ij}^{(k)} - \pi_i | &\leq \sum_{s_{1:p-1}} \left\vert \left[ \sum_{s'_{1:p-1}} q_{is_{1:p-1}, js'_{1:p-1}}^{(k)} \, p(X_{t-1} = s'_1, ..., X_{t-p+1} = s'_{p-1} \,|\, X_t = j) \right] - \xi_{is_{1:p-1}} \right\vert \\ &< \sum_{s_{1:p-1}} Cs^k \\ &= \tilde{C} s^k, \end{align*} and hence the MTD model has short memory with exponentially decaying autocovariance. \end{proof} For processes on a real-valued state space, the autoregressive moving average (ARMA) model is a well-known and widely used tool. ARMA models have good approximation properties, as evidenced by the existence of AR and MA orders guaranteeing arbitrarily good approximation to a stationary real-valued stochastic process with continuous spectral density \citep{davis}. Furthermore, ARMA models with nontrivial moving average components are equivalent to autoregressive models of infinite order, suggesting that these models can integrate information over long histories. However, despite these appealing properties, this class of models cannot represent statistical long memory. \begin{proposition} Define the ARMA process $X_t$ by $$ \phi(B)X_t = \theta(B)Z_t,$$ where $Z_t$ is a white noise process with variance $\sigma^2$ and $\phi(z) \neq 0$ for all $z \in \mathbb{C}$ such that $|z| = 1$. Then $X_t$ is a short memory process. \end{proposition} \begin{proof} As in the Markov chain case, the proof is classical but included for completeness. Let $X_t$ be defined as in the statement above. Then $X_t$ has the representation $$ X_t = \sum_{j=-\infty}^\infty \psi_j Z_{t-j},$$ where the coefficients $\psi_j$ are given by $$\theta(z)\phi(z)^{-1} = \psi(z) = \sum_{j=-\infty}^\infty \psi_j z^j,$$ with the above series absolutely convergent on $r^{-1} < |z| < r$ for some $r>1$ (cf. \cite{davis}, Chapter 3). Absolute convergence on this annulus implies that there exist $\epsilon > 0$ and $L < \infty$ such that $$ \sum_{j= -\infty}^\infty \vert \psi_j \vert (1+\epsilon)^{|j|} = L,$$ so that there exists a $K < \infty$ for which $$ \vert \psi_j \vert < \frac{K}{(1+\epsilon)^{|j|}}.$$ The autocovariance can be expressed as $$ \gamma_X(k) = \sigma^2 \sum_{j=-\infty}^\infty \psi_j \psi_{j+|k|},$$ and thus we can write \begin{align*} \vert \gamma_X(k) \vert &= \sigma^2 \left\vert \sum_{j=-\infty}^\infty \psi_j \psi_{j+|k|} \right\vert \\ &\leq \sigma^2 \sum_{j=-\infty}^\infty \vert \psi_j \vert \vert \psi_{j+|k|} \vert \\ &\leq \sigma^2K^2 \sum_{j=-\infty}^\infty \frac{1}{(1+\epsilon)^{|j|}}\frac{1}{(1+\epsilon)^{|j+|k||}}. \end{align*} Splitting the last sum according to the signs of $j$ and $j+|k|$, and using $|j| + |j+|k|| \geq |k|$, it is bounded by $(|k| + C_0)(1+\epsilon)^{-|k|}$ for some constant $C_0$, so that $$ \vert \gamma_X(k) \vert \leq Cs^{|k|}$$ for any $s \in (1/(1+\epsilon),1)$ and a suitable $C < \infty$. Therefore, as with the Markov models, the autocovariance sequence of an ARMA process is not only absolutely summable but dominated by an exponentially decaying sequence. \end{proof} Finally, we show that nonlinear state transitions are not in general sufficient to induce long-range dependence, a point particularly relevant to the analysis of long memory in RNNs. \begin{proposition} Define the scalar nonlinear autoregressive process $$ X_{t+1} = f(X_t) + \varepsilon_t,$$ where $\{\varepsilon_t\}$ is a white noise sequence with positive density with respect to Lebesgue measure and satisfying $\mathbb{E}|\varepsilon_t| <\infty$, while $f: \mathbb{R} \to \mathbb{R}$ is bounded on compact sets and satisfies $$ \sup_{|x| > r} \left| \frac{f(x)}{x} \right| < 1$$ for some $r > 0$.
Then $X_t$ has a unique stationary distribution $\pi$, and the sequence of random variables $\{X_t, t\geq 0\}$ initialized with $X_0 \sim \pi$ is strictly stationary and geometrically ergodic. Furthermore, if $$ \mathbb{E}|X_{t}|^{2+\delta} < \infty$$ for some $\delta >0$, then $\{X_t\}$ is a short memory process. \end{proposition} \begin{proof} The proof proceeds by analysis of $X_t$ as a Markov chain on a general state space $(\mathbb{R},\mathcal{B})$, where $\mathcal{B}$ is the standard Borel sigma algebra on the real line. Define the transition kernel $P(x,B) = P(X_t \in B | X_{t-1} = x)$ for any $x\in \mathbb{R}$ and $B \in \mathcal{B}$. We first establish that $X_t$ is aperiodic. A $d$-cycle is defined by a collection of disjoint sets $\{D_i\}$, $i = 0,...,d-1$, such that \begin{enumerate} \item For $x \in D_i$, $P(x,D_{i+1}) = 1$, $i = 0,...,d-1 \mod d$. \item The set $[\cup_i D_i]^C$ has measure zero. \end{enumerate} The period is defined as the largest $d$ for which $\{X_t\}$ has a $d$-cycle \citep{meyn}. Clearly, however, since $\varepsilon_t$ has positive density with respect to Lebesgue measure, $P(x,D) = 1$ only if $D = \mathbb{R}$ up to null sets. Thus the period is $d=1$, so $\{X_t\}$ is aperiodic. Strict stationarity and geometric ergodicity are established by showing that the aperiodic chain $\{X_t\}$ satisfies a strengthened version of the Tweedie criterion \citep{meyn}, which requires the existence of a measurable non-negative function $g: \mathbb{R} \to \mathbb{R}$, $\epsilon > 0$, $R>1$ and $M < \infty$ such that \begin{align*} R \mathbb{E}[ g(X_{t+1}) | X_t = x] &\leq g(x) - \epsilon, \ \ x \in K^c \\ \mathbb{E}[ g(X_{t+1}) \mathbbm{1}\{X_{t+1} \in K^c\} | X_t = x] &\leq M, \ \ x \in K \end{align*} for some set $K$ satisfying $$ \inf_{x \in K} \sum_{n=1}^m P^n(x,B) > 0.$$ Under the conditions on $f$ and $\varepsilon_t$ assumed above, this criterion is established for the process $X_t$ in \cite{tjostheim} (Thm 4.1), with $g(x) = |x|$. Geometric ergodicity implies that $$ \norm{\lambda P^n - \pi}_{TV} \leq C\rho^n$$ for any initial distribution $\lambda$, with $C < \infty$, $\rho \in (0,1)$, and where $\norm{\cdot}_{TV}$ denotes the total variation distance between measures. A well-known result in the theory of Markov chains \citep{bradley} establishes that geometric ergodicity is equivalent to absolute regularity, which is parameterized by $$ \beta(k) = \sup \frac{1}{2} \sum_{i=1}^I \sum_{j=1}^J | P(A_i \cap B_j) - P(A_i) P(B_j) |,$$ where the supremum is taken over all finite partitions $\{A_1,...,A_I\}$ and $\{B_1,...,B_J\}$ of the sigma fields $\mathcal{A} = \sigma(X_t)$ and $\mathcal{B} = \sigma(X_{t+k})$. In particular, $\beta(k)$ decays at least exponentially fast. Furthermore, for any two sigma fields $\mathcal{A}$ and $\mathcal{B}$, choosing the partitions $\{A, A^c\}$ and $\{B, B^c\}$ for arbitrary $A \in \mathcal{A}$ and $B \in \mathcal{B}$, and noting that the four resulting terms all have the same absolute value, we obtain \begin{align*} \beta(\mathcal{A},\mathcal{B}) &= \sup \frac{1}{2} \sum_{i=1}^I \sum_{j=1}^J | P(A_i \cap B_j) - P(A_i) P(B_j) | \\ &\geq 2\, | P(A \cap B) - P(A)P(B)|, \end{align*} and taking the supremum over $A$ and $B$ gives $\beta(\mathcal{A},\mathcal{B}) \geq 2 \alpha(\mathcal{A},\mathcal{B})$, so that the $\alpha$-mixing coefficient is also bounded by an exponentially decaying sequence. Finally, if $\mathbb{E}|X_t|^{2+\delta} < \infty$ for some $\delta > 0$, then the autocovariance obeys (\cite{ibragimov}, Thm. 17.2.2) $$ |\gamma(k) | = \sigma^{2} |\rho(k) | \leq C\alpha(k)^{\delta / (2+\delta)},$$ which completes the proof.
\end{proof} \section{Gradient of the GSE objective} Recall that the objective function is given by \begin{align*} R(d) &= \log \det \widehat{G}(d) - 2 \sum_{i=1}^p d_i \frac{1}{m} \sum_{j=1}^m \log \lambda_j, \end{align*} with \begin{align*} \widehat{G}(d) &= \frac{1}{m} \sum_{j=1}^m \text{Re} \left[ \Lambda_j(d)^{-1} I_{T,X}(\lambda_j) \Lambda^*_j(d)^{-1} \right] \\ \Lambda_j(d) &= \text{diag}\big(\lambda_j^{-d_k}e^{i(\pi-\lambda_j)d_k/2}\big)_{k=1,\dots,p} \\ I_{T,X}(\lambda_j) &= y_j y_j^*,\ \ y_j = \frac{1}{\sqrt{2\pi T}} \sum_{t = 1}^T x_t e^{- i \lambda_j t}, \ \ \lambda_j = 2\pi j / T. \end{align*} The derivative with respect to the element $d_\ell$ of the long memory vector $d$, for any $\ell = 1,...,p$, is $$ \frac{\partial}{\partial d_\ell} R(d) = \operatorname{\bf Tr}\left[\widehat{G}(d)^{-1} \frac{\partial}{\partial d_\ell} \widehat{G}(d) \right] - \frac{2}{m} \sum_{j=1}^m \log \lambda_j.$$ Note that the Fourier frequencies $\lambda_j$ are strictly positive for $j \geq 1$, so that $\log \lambda_j$ is well defined. For the term $\frac{\partial}{\partial d_\ell} \widehat{G}(d)$, note that the $(h,k)$ element of the matrix $\widehat{G}(d)$ can be written as $$ \frac{1}{m} \sum_{j=1}^m \text{Re}\left[ I(\lambda_j)_{h,k} \exp\left( (d_h + d_k) \log \lambda_j + \frac{i(\pi - \lambda_j)(d_h - d_k)}{2} \right) \right],$$ and therefore the derivative $\frac{\partial}{\partial d_\ell} \widehat{G}(d)$ is given by $$\left( \frac{\partial}{\partial d_\ell} \widehat{G}(d)\right)_{h,k} = \begin{cases} \frac{1}{m} \sum_{j=1}^m \text{Re}\left[ I(\lambda_j)_{\ell, k} c_j^- \exp(c_j^+ d_k) \exp(c_j^- d_\ell) \right] & \text{ for } h = \ell, h \neq k \\ \frac{1}{m} \sum_{j=1}^m \text{Re}\left[ I(\lambda_j)_{h, \ell} c_j^+ \exp(c_j^- d_h) \exp(c_j^+ d_\ell) \right] & \text{ for } k = \ell, h \neq k \\ \frac{1}{m} \sum_{j=1}^m \text{Re}\left[ 2 I(\lambda_j)_{\ell, \ell} \log \lambda_j \exp(2 d_\ell \log \lambda_j)\right] & \text{ for } \ell = h = k \\ 0 & \text{ otherwise} \end{cases},$$ where \begin{align*} c_j^- &= \log \lambda_j - i\left(\frac{\pi - \lambda_j}{2}\right) \\ c_j^+ &= \log \lambda_j + i\left(\frac{\pi - \lambda_j}{2}\right). \end{align*} \section{Bias study for bandwidth parameter} We demonstrate the potential for semiparametric estimation to incur bias when the bandwidth parameter $m$ is set too high relative to the overall length $T$ of the observed sequence. The bias results from the inclusion of periodogram ordinates in the long memory estimator that capture behavior of the spectral density function not local to the origin. \begin{figure}[h!] \includegraphics[width=\textwidth,height=5cm]{./figures/sdf_bias.png} \caption{\small Spectral density function of an ARFIMA(1,$d$,1) process (left) and smoothed estimates of the periodogram for the first coordinate of the embedded Bible text and Bach cello suite (center and right, respectively). Cutoff points associated with four choices of the bandwidth $m$ are plotted as vertical dashed lines; the semiparametric estimate of the long memory for each sequence is essentially a measure of the slope based on the subset of points $(-2 \log\lambda, \log I(\lambda))$ to the \emph{right} of this line.} \label{fig:bias} \end{figure} We give an illustration for univariate time series, which allows us to take advantage of a convenient visual interpretation of the long memory parameter as the slope of $\log I(\lambda_j)$ against $-2\log(\lambda_j)$ as $\lambda_j \to 0$.
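In one dimension the objective above reduces to the local Whittle form $R(d) = \log\big(m^{-1}\sum_{j=1}^m \lambda_j^{2d} I(\lambda_j)\big) - 2d\, m^{-1}\sum_{j=1}^m \log \lambda_j$. The following Python sketch is our own minimal implementation of this univariate estimate, not the code used for the experiments, but it makes the role of the bandwidth $m$ concrete.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m):
    """Univariate semiparametric estimate of d from the first m ordinates."""
    T = len(x)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / T           # Fourier frequencies
    dft = np.fft.fft(x)[1:m + 1] / np.sqrt(2.0 * np.pi * T)
    I = np.abs(dft) ** 2                                  # periodogram I(lambda_j)

    def R(d):
        G = np.mean(lam ** (2.0 * d) * I)
        return np.log(G) - 2.0 * d * np.mean(np.log(lam))

    return minimize_scalar(R, bounds=(-0.49, 0.49), method="bounded").x

# White noise has d = 0; the estimate should be near zero for conservative m.
rng = np.random.default_rng(0)
x = rng.standard_normal(2 ** 14)
print(local_whittle_d(x, m=int(np.sqrt(len(x)))))
\end{verbatim}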
Figure \ref{fig:bias} shows the spectral density function corresponding to three scalar processes: an ARFIMA(1,$d$,1) process with $d = 0.25$, a univariate projection of the embedded text from the King James Bible, and a univariate projection of the embedded Bach cello suite. For the ARFIMA process, the spectral density function can be computed exactly; for the other two sequences, it is estimated by the smoothed periodogram. By marking the cutoff points $-2 \log \lambda_m$ associated with different choices of $m$, we indicate the subset of points $(-2 \log \lambda_j, \log I(\lambda_j))$ to the right of this cutoff used to compute the semiparametric estimate of $d$. In the scalar case, this estimate is essentially the slope of the log SDF as $\lambda$ approaches zero; thus it becomes clear that bias can be introduced when points sufficiently far from the origin are included. On the other hand, choosing $m$ too small introduces the risk of high variance in the estimator; note, for example, that the estimate with $m = T^{0.4}$ for the Bach cello suite would be strongly influenced by a single point just to the left of $-2\log \lambda = 15$. \section{Validation of total memory estimator} We compute the total memory statistic $$ \bar{d} = \mathbbm{1}^T \hat{d}_{\text{GSE}}$$ for simulated fractionally differenced Gaussian white noise sequences of dimension $k = 200$. We simulate four different settings for the long memory parameter: \begin{itemize} \item {\bf Zero}: Each coordinate of $d$ is equal to zero. \item {\bf Constant}: Each coordinate of $d$ is set to the same value, $d = 0.25$. \item {\bf Subset}: 90$\%$ of the coordinates are set to $0$, while the remaining $10\%$ are set to have strong long memory with $d = 0.4$. \item {\bf Range}: The elements of $d$ are drawn from a scaled Beta distribution with support on $(0,0.25)$ and centered at $0.125$. \end{itemize} For each setting, we simulate $n = 100$ sequences and compute the total memory. Results are plotted in Figure \ref{fig:tm_sim}, while in Table \ref{tab:tm_sim} we compare the sample mean and variance of the estimator to the asymptotic values stated in the main paper. \begin{figure}[h!] \includegraphics[width=\textwidth,height=5cm]{./figures/tm_sim.eps} \caption{\small Sample distribution of the total memory estimator $\bar{d}$ in four different simulation settings.} \label{fig:tm_sim} \end{figure} {\renewcommand{\arraystretch}{1.3} \begin{table}[h] \centering \small \captionsetup{size=small} \setlength{\tabcolsep}{5pt} \caption{Comparison of the sample mean and variance for the total memory estimator with the true total memory of the generating process and the asymptotic variance of the total memory estimator (both given in parentheses).} \begin{tabularx}{0.6\textwidth}{c *{1}{C} c} \toprule {\bf Setting} & {\bf Mean} & {\bf Variance} \\ \cmidrule(l){1-3} Zero & $2.82 \times 10^{-4}$ (0.0) & 0.00801 (0.00698) \\ Constant & 0.249 (0.25) & 0.00793 (0.00698)\\ Subset & 0.382 (0.04) & 0.00804 (0.00698)\\ Range & 0.101 (0.1029) & 0.00696 (0.00698)\\ \bottomrule \end{tabularx} \label{tab:tm_sim} \end{table} } In each of these four diverse simulation settings, the total memory estimator accurately recovers the true underlying parameter of the data generating process. \section{Calibration of total memory vs. Wald test in high dimensions} Here we demonstrate that the standard Wald test can be badly miscalibrated in the high-dimensional regime, whereas testing for long memory with the total memory statistic remains well-calibrated.
Recall that, given an estimate $\hat{d}_{\text{GSE}}$ of the multivariate long memory parameter, the Wald statistic for the null hypothesis $\mathcal{H}_0: d = 0$ is computed as $$ t_{\text{Wald}} = \hat{d}_{\text{GSE}}^T(\Omega/m)^{-1}\hat{d}_{\text{GSE}}.$$ This quantity is distributed as a $\chi^2(p)$ random variable under $\mathcal{H}_0$. For the total memory, we compute $$ \bar{d} = \mathbbm{1}^T \hat{d}_{\text{GSE}},$$ and in the main paper we have shown that this quantity is distributed as a $\mathcal{N}(0,\mathbbm{1}^T\Omega\mathbbm{1}/m)$ random variable when the true total memory $\bar{d}_0 = 0$. We simulate $n = 100$ realizations of length $T = 2^{16}$ from a standard Gaussian white noise process (thus $d=0$) of dimension $p = 200$, computing both the Wald and total memory test statistics. In Figure \ref{fig:test_stats}, we plot a comparison of the sample distribution of each test statistic against its asymptotic distribution over a range of values for $m$. For values of $m$ close to $p$, we see that the empirical type-I error of the Wald test is severely inflated relative to the nominal level $\alpha = 0.05$; in other words, the test spuriously rejects the null and claims to find long memory where none exists at a rate much higher than the nominal one. The total memory test, by contrast, largely avoids this issue, even in the case where there are barely more observations than dimensions. \begin{figure}[t!] \centering \includegraphics[width=0.75\textwidth,height=12cm]{./figures/test_calib.eps} \caption{\small Sample distribution of the test statistic over $n = 100$ trials for $m = \sqrt{T} = 256$ (top row), $m = 512$ (middle), and $m = 1280$ (bottom). Empirical type-I errors are computed using the critical value corresponding to a nominal type-I error of $0.05$.} \label{fig:test_stats} \end{figure} Of course, with enough data, the Wald test becomes increasingly well-calibrated, but this condition is not easy to satisfy while maintaining the integrity of the statistical analysis. We have already seen in Appendix C that simply increasing $m$ is not an option for real-world data, as this is likely to induce significant bias. On the other hand, the length $T$ of the observed sequence would have to be enormous, even by machine learning standards, to achieve $m \gg p$ with the reasonable choice $m = \sqrt{T}$ when the dimension $p$ is large. Finally, even if such data were available, we would likely prefer a method that allows valid inference at lower $m$ for computational reasons. \section{Impact of embedding choice on long memory} We evaluate the impact of embedding choice on estimated long memory from two perspectives. First, we include a re-analysis of the Bach cello suite data using the same MFCC features as used for the Miles Davis and Oum Kalthoum recordings. This allows us to report long memory results uniformly across a single choice of embedding, and to evaluate the impact of embedding choice on the long memory analysis across two very different but informative representations of the raw time series. The results (see Table \ref{tab:bach_embed}) show that the Bach data has long memory under both representations, though the average strength as measured by normalized total memory is somewhat variable. {\renewcommand{\arraystretch}{1.3} \begin{table}[h!] \small \captionsetup{size=small} \setlength{\tabcolsep}{3pt} \caption{Long memory of Bach data by choice of embedding.} \begin{tabularx}{\columnwidth}{@{} *{1}{C} *{1}{C} *{1}{C} c @{}} \toprule {\bf Embedding} & {\bf Norm.
total memory} & {\bf p-value} & {\bf Reject $\mathcal{H}_0?$} \\ \hline Mel-frequency cepstral coefficients & 0.308 & 0.003 & \checkmark \\ Convolutional features & 0.0997 & $< 1 \times 10^{-16}$ & \checkmark \\ \bottomrule \end{tabularx} \label{tab:bach_embed} \end{table} } \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth,height=6cm]{./figures/ptb_permuted_tm.eps} \caption{\small Histogram of normalized total memory computed from $n = 100$ permutations of the Penn TreeBank training data.} \label{fig:perm} \end{figure} Second, we consider a ``negative control'' experiment in which we re-estimate the long memory vector for the embedded Penn TreeBank training set after permuting the sequential ordering of the data. This addresses the question of whether our positive result truly captures a sequence-dependent property of the data, or whether it could have been produced spuriously as a consequence of other decisions in the data analysis (including, for example, the choice of embedding). We compute the total memory statistic for $n=100$ random permutations of the Penn TreeBank training data. The results (see Figure \ref{fig:perm}) show that the total memory of the permuted data is concentrated near zero, with a sample mean of $1.90 \times 10^{-5}$ and standard error $0.00136$; a one-sample test of the mean correspondingly fails to reject the null hypothesis $\mathcal{H}_0: \bar{d} = 0$, with $p=0.494$. \section{Classical music features from a MusicNet convolutional model} The reduced version of the MusicNet model of \cite{thickstun2} used to obtain an embedding for the Bach cello suite is derived from the convolutional model implemented in \texttt{musicnet\_module.ipynb}, a PyTorch interface to MusicNet available at \url{https://github.com/jthickstun/pytorch\_musicnet}. We reduce the number of hidden states to $200$ (this corresponds to setting $k=200$ in the notebook), both for computational tractability of the optimization procedure and for consistency with the embedding dimension used in our natural language experiments. The model is trained on the MusicNet training corpus with no further modification of the tutorial notebook. Successful training and an informative feature mapping are indicated by the competitive performance of the model, despite the reduced dimension of the hidden representation, in terms of the average precision of its predictions on the test set (see Table \ref{tab:music}). Results for our trained model (\emph{longmem-embed}) compare favorably to both the short-time Fourier transform (STFT) and commercial software (Melodyne) baselines, while approaching the quality of the fully learned filterbank (Learned filterbank; \citet{thickstun1}) and the state-of-the-art translation-invariant network (Wide-translation-invariant; \citet{thickstun2}). {\renewcommand{\arraystretch}{1.3} \begin{table}[h!] \center \small \captionsetup{size=small} \setlength{\tabcolsep}{3pt} \caption{Performance Comparison for Models of MusicNet Data} \begin{tabularx}{0.4\textwidth}{ c c } \toprule {\bf Model} & {\bf Avg. Precision} \\ \cmidrule(l){1-2} STFT & 60.4 \\ Melodyne & 58.8 \\ \emph{longmem-embed} & 65.1 \\ Learned filterbank & 67.8 \\ Wide-translation-invariant & 77.3 \\ \bottomrule \end{tabularx} \label{tab:music} \end{table} } \subsection{Long memory in language and music} Much of the development of deep recurrent neural networks has been motivated by the goal of finding good representations and models for text and audio data.
Our results in this section confirm that such data can be considered as realizations of long memory processes.\footnote{Code for all results in this section is available at \url{https://github.com/alecgt/RNN_long_memory}} A full summary of results is given in Table \ref{tab:data_res}, and autocovariance partial sums are plotted in Figure \ref{fig:data_longmem}. To facilitate comparison of the estimated long memory across time series of different dimension, we report the normalized total memory $\bar{d}/p = (\mathbbm{1}^T\hat{d}_{\text{GSE}})/p$ in all tables. For all experiments, we test the null hypothesis $$\mathcal{H}_0: \bar{d}_0 = 0$$ against the one-sided alternative of long memory, $$\mathcal{H}_1: \bar{d}_0 > 0.$$ We set the level of the test to $\alpha = 0.05$ and compute the corresponding critical value $c_\alpha$ from the asymptotic distribution of the total memory estimator. Given an estimate of the total memory $\bar{d}(x_{1:T})$, a p-value is computed as $P(\bar{d} > \bar{d}(x_{1:T}) \mid \bar{d}_0 = 0)$; note that a p-value less than $\alpha = 0.05$ corresponds to rejection of the null hypothesis in favor of the long memory alternative. {\renewcommand{\arraystretch}{1.3} \begin{table}[h] \small \captionsetup{size=small} \setlength{\tabcolsep}{3pt} \caption{Total Memory in Natural Language and Music Data.} \begin{tabularx}{\columnwidth}{@{} c *{1}{C} *{1}{C} *{1}{C} c @{}} \toprule & {\bf Data} & {\bf Norm. total memory} & {\bf p-value} & {\bf Reject $\mathcal{H}_0?$} \\ \cmidrule(l){2-5} \multirow{ 4}{*}{\shortstack[l]{Natural \\ language}} & Penn TreeBank & 0.163 & $<$1 $\times 10^{-16}$ & \checkmark \\ & Facebook CBT & 0.0636 & $<$1 $\times 10^{-16}$ & \checkmark \\ & King James Bible & 0.192 & $<$1 $\times 10^{-16}$ & \checkmark \\ \cmidrule(l){2-5} \multirow{ 4}{*}{Music} & J.S. Bach & 0.0997 & $<$1 $\times 10^{-16}$ & \checkmark \\ & Miles Davis & 0.322 & $<$1 $\times 10^{-16}$ & \checkmark \\ & Oum Kalthoum & 0.343 & $<$1 $\times 10^{-16}$ & \checkmark \\ \bottomrule \end{tabularx} \label{tab:data_res} \end{table} } \begin{figure*} \includegraphics[width=\textwidth,height=5cm]{./figures/data_longmem.eps} \caption{\small Partial sum of the autocovariance trace for embedded natural language and music data. \emph{Left}: Natural language data. For clarity we include only the longest of the 98 books in the Facebook bAbI training set. \emph{Right:} Music data. Each of the five tracks from both Miles Davis and Oum Kalthoum is plotted separately, while the Bach cello suite is treated as a single sequence.} \label{fig:data_longmem} \end{figure*} \paragraph{Natural language data.} We evaluate long memory in three different sources of English language text data: the Penn TreeBank training corpus \citep{marcus}, the training set of the Children's Book Test from Facebook's bAbI tasks \citep{weston}, and the King James Bible. The Penn TreeBank corpus and the King James Bible are treated as single sequences, while the Children's Book Test data consist of 98 books, which are treated as separate sequences. We require that each sequence be of length at least $T = 2^{14}$, which ensures that the periodogram can be estimated with reasonable density near the origin. Finally, we use GloVe embeddings \citep{pennington} to convert each sequence of word tokens to an equal-length sequence of real vectors of dimension $k = 200$. The results show significant long memory in each of the text sources, despite their apparent differences.
As might be expected, the children's book text from the Facebook bAbI dataset demonstrates the weakest long-range dependence, as is evident both from the value of the total memory statistic and from the slope of the autocovariance partial sum. \paragraph{Music data.} Modeling and generation of music has recently gained significant visibility in the deep learning community as a challenging set of tasks involving sequence data. As in the natural language experiments, we seek to evaluate long memory in a broad selection of representative data. To this end, we select a complete Bach cello suite consisting of 6 pieces from the MusicNet dataset \citep{thickstun1}, the jazz recordings from Miles Davis' \emph{Kind of Blue}, and a collection of the most popular works of the famous Egyptian singer Oum Kalthoum. For the Bach cello suite, we embed the data from its raw scalar wav file format using a reduced version of a deep convolutional model that has recently achieved near state-of-the-art prediction accuracy on the MusicNet collection of classical music \citep{thickstun2}. Details of the model training, including performance benchmarks, are provided in Appendix H of the Supplement. We are not aware of a prominent deep learning model for either jazz music or vocal performances. Therefore, for the recordings of Miles Davis and Oum Kalthoum, we resort to a standard method and extract mel-frequency cepstral coefficients (MFCC) from the raw wav files at a sample rate of $32000$ Hz \citep{logan}. A study of the impact of embedding choice on estimated long memory, including a long memory analysis of the Bach data under MFCC features, is provided in Appendix G. The results show that long memory appears to be even more strongly represented in music than in text. We find evidence of particularly strong long-range dependence in the recordings of Miles Davis and Oum Kalthoum, consistent with their reputation for repetition and self-reference in their music. Overall, while the results of this section are unlikely to surprise practitioners familiar with the modeling of language and music data, they are scientifically useful for two main reasons: first, they show that our long memory analysis identifies well-known instances of long-range dependence in real-world data; second, they establish quantitative criteria for the successful representation of this dependency structure by RNNs trained on such data. \subsection{Long memory analysis of language model RNNs} We now turn to the question of whether RNNs trained on one of the datasets evaluated above are able to represent the long-range dependencies that we know to be present. We evaluate the criteria for long memory on three different RNN architectures: long short-term memory (LSTM) \citep{hochreiter}, memory cells \citep{levy}, and structurally constrained recurrent networks (SCRN) \citep{mikolov}. Each network is trained on the Penn TreeBank corpus as part of a language model that includes a learned word embedding and a linear decoder of the hidden states; the architecture is identical to the ``small'' LSTM model of \citet{zaremba}, which is preferred for the tractable dimension of its hidden state. Note that our objective is not to achieve state-of-the-art results, but rather to reproduce benchmark performance on a well-known deep learning task. Finally, for comparison, we also include an untrained LSTM in our experiments; the parameters of this model are simply set by random initialization. {\renewcommand{\arraystretch}{1.3} \begin{table}[h!]
\small \captionsetup{size=small} \setlength{\tabcolsep}{3pt} \caption{Language Model Performance by RNN Type} \begin{tabularx}{\columnwidth}{@{} *{1}{C} *{1}{C} @{}} \toprule {\bf Model} & {\bf Test Perplexity} \\ \cmidrule(l){1-2} Zaremba et al. & 114.5 \\ LSTM & 114.5 \\ Memory cell & 119.0 \\ SCRN & 124.3 \\ \bottomrule \end{tabularx} \label{tab:lang} \end{table} } \paragraph{RNN integration of fractionally differenced input.} Having estimated the long memory parameter $d$ corresponding to the Penn TreeBank training data in the previous section, we simulate inputs $\tilde{x}_{1:T}$ with $T = 2^{16}$ by fractional differencing of a standard Gaussian white noise, and evaluate the total memory of the corresponding hidden representation $\Psi(\tilde{x}_{1:T})$ for each RNN. Results from $n=100$ trials are compiled in Table \ref{tab:rnn_pos_res} (standard errors of total memory estimates in parentheses). We test the null hypothesis $\mathcal{H}_0: \bar{d} = 0$ against the one-sided alternative $\mathcal{H}_1: \bar{d} < 0$, which corresponds to the model's failure to represent the full strength of fractional integration observed in the data. {\renewcommand{\arraystretch}{1.3} \begin{table}[h] \small \captionsetup{size=small} \setlength{\tabcolsep}{3pt} \caption{Residual Total Memory in RNN Representations of Fractionally Differenced Input.} \begin{tabularx}{\columnwidth}{@{} c *{1}{C} *{1}{C} c @{}} \toprule {\bf Model} & {\bf Norm. total memory} & {\bf p-value} & {\bf Reject $\mathcal{H}_0?$} \\ \cmidrule(l){1-4} LSTM (trained) & $-8.36 \times 10^{-3}$ (0.00475) &$4.07 \times 10^{-2}$ & \checkmark \\ LSTM (untrained) &$-6.20 \times 10^{-2}$ (0.00387) & $<$1 $\times 10^{-16}$ & \checkmark \\ Memory cell & $-1.18 \times 10^{-2}$ (0.00539) & $1.52 \times 10^{-2}$ & \checkmark \\ SCRN & $-2.62 \times 10^{-2}$ (0.00631) & $3.32 \times 10^{-5}$ & \checkmark \\ \bottomrule \end{tabularx} \label{tab:rnn_pos_res} \end{table} } \paragraph{RNN transformation of white noise.} For a complementary analysis, we evaluate whether the RNNs can impart nontrivial long-range dependency structure to white noise inputs. In this case, the input sequence $z_{1:T}$ is drawn from a standard Gaussian white noise process, and we test the corresponding hidden representation $\Psi(z_{1:T})$ for nonzero total memory. As in the previous experiment, we select $T = 2^{16}$, choose the bandwidth parameter $m = \sqrt{T}$, and simulate $n=100$ trials for each RNN. Results are detailed in Table \ref{tab:rnn_res}. We test $\mathcal{H}_0: \bar{d}_0 = 0$ against $\mathcal{H}_1: \bar{d}_0 > 0$; here, the alternative corresponds to a successful transformation of the white noise input into a long memory hidden state. {\renewcommand{\arraystretch}{1.3} \begin{table}[h] \small \captionsetup{size=small} \setlength{\tabcolsep}{3pt} \caption{Total Memory in RNN Representations of White Noise Input.} \begin{tabularx}{\columnwidth}{@{} c *{1}{C} *{1}{C} c @{}} \toprule {\bf Model} & {\bf Norm.
total memory} & {\bf p-value} & {\bf Reject $\mathcal{H}_0?$} \\ \cmidrule(l){1-4} LSTM (trained) & $-8.59 \times 10^{-4}$ (0.00405) & 0.583 & \text{\sffamily X} \\ LSTM (untrained) &$-4.17 \times 10^{-4}$ (0.00223) & 0.572 & \text{\sffamily X} \\ Memory cell & $-5.96 \times 10^{-4}$ (0.00452) & 0.552 & \text{\sffamily X}\\ SCRN & $2.37 \times 10^{-3}$ (0.00522) & 0.324 & \text{\sffamily X} \\ \bottomrule \end{tabularx} \label{tab:rnn_res} \end{table} } \paragraph{Discussion.} We summarize the main experimental result as follows: there is a statistically well-defined and practically identifiable property, relevant for prediction and broadly represented in language and music data, that is not present, according to two fractional integration criteria, in a collection of RNNs trained to benchmark performance. Tables \ref{tab:rnn_pos_res} and \ref{tab:rnn_res} show that each evaluated RNN fails both criteria for representing the long-range dependency structure of the data on which it was trained. The result holds despite a training protocol that reproduces benchmark performance, and for RNN architectures specifically engineered to alleviate the gradient issues typically implicated in the learning of long-range dependencies. \subsection{Simulated data} We generate sequences of length $N = 2^{14} = 16384$ from two models of scalar long memory processes: fractionally differenced Gaussian white noise (FD) and an order-3 stationary ARFIMA model. Sample sequences are drawn for values of $d$ ranging from $-0.4$ to $0.4$ in increments of $0.1$. We plot the objective that is minimized in the estimation of $d$, as a function of $d$, for the FD and ARFIMA data in Figures \ref{fig:fd_sim} and \ref{fig:arfima_sim}, respectively. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/fracdiff_sim.pdf} \caption{\small Objective $R(d)$ vs. $d$ for realizations of a fractionally differenced process over a range of true $d$ values. The title of each plot indicates the true value of $d$ and the estimated $\hat{d}$ resulting from minimization of $R(d)$.} \label{fig:fd_sim} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/arfima_sim.pdf} \caption{\small Objective $R(d)$ vs. $d$ for realizations of an ARFIMA process over a range of true $d$ values. The title of each plot indicates the true value of $d$ and the estimated $\hat{d}$ resulting from minimization of $R(d)$.} \label{fig:arfima_sim} \end{figure} \newpage \subsection{Real data} \subsubsection{Recurrent models} In this section we plot the objective function against $d$ for the scalar process corresponding to the first hidden node of each of our four recurrent models. We also include plots of the log periodogram against negative log Fourier frequencies near the origin; geometrically, the long memory parameter $d$ is the slope of this curve as $\lambda_j \to 0$. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/recurrent_viz_1d.pdf} \caption{\small Objective $R(d)$ vs. $d$ for the first hidden node in each of our recurrent models.} \label{fig:rviz1} \end{figure} We also include two-dimensional plots of the objective for the process defined by the first two hidden nodes of the recurrent models. As in the one-dimensional case, the minimum appears to be achieved close to the origin. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/recurrent_viz_2d.pdf} \caption{\small Objective $R(d)$ vs.
$d$ for the first two hidden nodes in each of our recurrent models.} \label{fig:rviz2} \end{figure} \newpage \subsubsection{Bach fugues} We include the same plots as above for the scalar process corresponding to the first dimension of the embedded Bach fugues. While the information represented is the same, in this case we note that the estimated long memory parameters are significantly different from zero, which is confirmed by the positive slope of the log-periodogram plots. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/bach_viz.pdf} \caption{\small Objective $R(d)$ vs. $d$ for the first hidden dimension of the embedded Bach fugues.} \label{fig:bach_viz} \end{figure} \newpage \subsubsection{Bible text} Finally, we include a plot of the objective and log-periodogram for the first dimension of the embedded Bible text. Again, the result shows that the minimizing value of $d$ (and therefore the estimated long memory parameter) is positive. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/bible_viz.pdf} \caption{\small Objective $R(d)$ vs. $d$ for the first hidden dimension of the embedded Bible text.} \label{fig:bible_viz} \end{figure} Finally, we include a comparison study that shows the potentially significant effect of bias in semiparametric estimation of long memory. In Figure \ref{fig:bible_bias}, we show the log-periodogram plot over a range of periodogram window sizes. A larger window corresponds to denser estimation of the periodogram. For each window size $N$, the bandwidth of the Gaussian semiparametric estimator is set to $m = \sqrt{N}$. This corresponds to a cutoff indicated by the vertical red line; only periodogram ordinates to the right of this line are used to estimate the long memory. As shown by the estimated $\hat{d}$ for each choice of $N$, this choice can have a significant impact on the result. The impact depends on the shape of the periodogram, which for real-world data is not known a priori. Essentially, bias is introduced when the bandwidth is large enough to capture parts of the periodogram that do not represent local behavior around the origin. We conclude that it is safest to make conservative choices of $m$, and that results should be trusted only if the periodogram can be estimated relatively densely around the origin. \begin{figure}[h!] \centering \includegraphics[width=.95\textwidth]{../../figures/bible_bias.pdf} \caption{\small Log periodogram of the embedded Bible process for window size $N$ and bandwidth parameter $m = \sqrt{N}$. The red line indicates the bandwidth cutoff for estimation of the long memory. Note that although $m$ increases with $N$, the frequency range used for estimation around the origin shrinks as $N$ grows.} \label{fig:bible_bias} \end{figure}
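For reference, the fractionally differenced Gaussian noise used in these simulations can be generated from the truncated MA($\infty$) representation $(1-B)^{-d}\varepsilon_t$, with coefficients $\psi_0 = 1$ and $\psi_j = \psi_{j-1}(j-1+d)/j$. The Python sketch below is a minimal version of this standard procedure, not necessarily the exact simulation code used above.

\begin{verbatim}
import numpy as np

def frac_diff_noise(T, d, rng):
    """Simulate T observations of (1 - B)^{-d} eps_t, eps_t ~ N(0, 1),
    via the truncated MA(infinity) representation."""
    psi = np.empty(T)
    psi[0] = 1.0
    for j in range(1, T):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(2 * T)      # extra noise for a burn-in
    x = np.convolve(eps, psi)[:2 * T]
    return x[T:]                          # discard the burn-in segment

rng = np.random.default_rng(0)
x = frac_diff_noise(2 ** 14, d=0.25, rng=rng)
print(x.shape, x.var())
\end{verbatim}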
\section{Introduction} \label{sec:intro} Drones or small unmanned aerial vehicles (UAVs) are becoming a promising solution for a wide range of civilian applications such as disaster recovery, traffic monitoring, and surveillance. Due to their high degree of freedom and their ability to move autonomously to hard-to-reach areas, they are also emerging in cellular network applications, where they can provide coverage and higher-quality services for users. Drones can be equipped with base station (BS) hardware and act as flying BSs, creating an attractive alternative to conventional roof- or pole-mounted base stations. The concept of the drone base station (DBS) is still in its infancy, and many academic researchers are now actively working in the area. While recent studies \cite{al2014optimal,yaliniz2016efficient} on DBSs mainly focused on finding the optimum hovering locations for the drones so that coverage is maximized, we utilize the flexibility and agility of drones and study DBSs that move continuously over the serving area. DBSs can adapt their directions in order to provide higher service quality for the mobile users, who move randomly within the small cell boundaries. In our previous works \cite{tmc_submitted2017,wowmom_main2017}, we designed drone mobility control algorithms that respect drones' practical limitations \cite{Shanmugavel20101084}, in order to improve the performance of the cellular network. In the network area, divided into multiple small cells, each DBS's mobility was limited to its small cell boundaries. All users in a small cell remained associated with their local DBS at all times, even though, while moving, they might find another DBS with a higher received signal strength. We have shown that letting drones chase users can significantly improve the system performance, especially the packet throughput for cell-edge users. Now, an intriguing question is: what if we further free the drones and allow them to fly over the entire network, instead of over a single cell? Our motivation is to increase the DBS mobility range, thus providing more candidate DBSs for users to connect with. However, the free movement model inherently requires a change of the user association scheme. In more detail, due to the free movement of DBSs, a user may frequently find different DBSs available for communication. Therefore, users should be able to reselect their serving DBS in the network area. We consider two different user association schemes in this paper. We first show that a simple user association scheme based only on the received signal strength does not yield performance benefits in terms of system throughput in the free movement model, because the traffic load is unbalanced in the network, e.g., a DBS may serve many users due to a good geometry condition while another DBS may be idle. To restore the load balance achieved by the restricted movement model, we propose a more advanced user association scheme that jointly considers the signal strength and the load among UAVs. The rest of the paper is structured as follows. The system model is presented in Section \ref{sec:systemmodel}, followed by the performance metrics in Section \ref{sec:performancemetrics}. Our proposed user association schemes are presented in Section \ref{sec:userassoc}. We then review our proposed drone movement algorithm in Section \ref{sec:proposedalg}. In Section \ref{sec:simulation}, the simulation results are presented.
Finally, the conclusion and future work are discussed in Section~\ref{sec:conclusion}. \section{System Model} \label{sec:systemmodel} \subsection{Network Scenario} Assume there is a large network area of size $L(m) \times L(m)$, to be covered by drone base stations flying above the area. The target area is divided into $C$ small cells, each of size $l(m) \times l(m)$. In each small cell area, $U_s$ users move according to the Random Waypoint (RWP) model. In this model, each user selects a random destination within the small cell border, independently of other users, and moves there following a straight trajectory with a constant speed selected randomly from a given range. Upon reaching the destination, users may pause for a while before continuing to move to another destination \cite{rwp1,rwp2}. The total number of users in the network is equal to $U_m = U_s\times C$. Moreover, there are $N$ drone base stations, constantly moving in the network with constant speed $v$ (m/s) at a fixed altitude of $h$ (m). Figure \ref{fig:arch} shows the considered network architecture. Note that deploying drones at the same height with free movement could cause collisions among drones. One alternative to avoid the collision issue is to use a height separation technique. However, with height separation, drones would be deployed over a very wide range of heights, causing performance degradation for the system. As a result, we install all DBSs at the same height and then address the possibility of collision. \begin{figure} \centering \includegraphics[scale=0.35]{arch.pdf} \caption{The network area with multiple mobile users and DBSs} \label{fig:arch} \end{figure} DBSs may be connected to a nearby cell tower with a wireless backhaul link. We further assume that each DBS transmits data to users using a fixed transmission power of $p_{tx}$ (watt) and a total bandwidth $B$ (Hz) with central carrier frequency $f$ (Hz). It is assumed that transmission from a DBS can create interference on mobile users in the serving area up to $\kappa$ meters; the interference beyond $\kappa$ meters is negligible. The \textit{ground distance}, or two-dimensional (2D) distance, between user $u~( u \in [1,2,\dots,U_m])$ and drone $n~(n \in [1,2,\dots,N])$ is defined as the distance between the user and the projection of the drone location onto the ground, denoted by $r_{u,n}$. The \textit{Euclidean distance}, or three-dimensional (3D) distance, between user $u$ and drone $n$ is given by $d_{u,n} = \sqrt{r_{u,n}^2 +h^2}$, where $h$ is the height of the drones. \subsection{Channel Model} \label{sec:channel} In this paper, we consider a practical path loss model incorporating both LoS (Line of Sight) and NLoS (Non Line of Sight) transmissions. More specifically, the path loss function is formulated according to a probabilistic LoS model~\cite{al2014optimal,7194055}, in which the probability of having a LoS connection between a drone and its user depends on the elevation angle of the transmission link. According to~\cite{al2014optimal}, the LoS probability function is expressed as \begin{equation} P^{LoS}(u,n) = \frac{1}{1+\alpha exp(-\beta[\omega -\alpha])}, \label{eq:plos} \end{equation} where $\alpha$ and $\beta$ are environment-dependent constants and $\omega$ equals $arctan(h/r_{u,n})$ in degrees. As a result of (\ref{eq:plos}), the probability of having a NLoS connection can be written as \begin{equation} P^{NLoS}(u,n) = 1 - P^{LoS}(u,n).
\label{eq:pnlos} \end{equation} From (\ref{eq:plos}) and (\ref{eq:pnlos}), the path loss in dB can be modeled as \begin{equation} \eta_{path}(u,n) = A_{path} + 10\gamma_{path}\log_{10}(d_{u,n}), \label{eq:pathloss} \end{equation} where the string variable \textit{``path''} takes the value {``LoS''} or {``NLoS''} for the LoS and NLoS cases, respectively. In addition, $A_{path}$ is the path loss at the reference distance (1 meter) and $\gamma_{path}$ is the path loss exponent, both obtainable from field tests \cite{TR36.828}. \subsection{Traffic Model}\label{sec:traffic_model} The traffic model for each user follows the traffic model recommended by 3GPP \cite{3gpp36814}. In this model, there is a reading time interval between two subsequent data packet requests of a user. The reading time of each data packet is modeled as an exponential distribution with a mean of $\lambda$ (sec). Moreover, the transmission time for each data packet is defined as the time interval between the request time of a data packet and the end of its download, denoted by $\tau$ (sec). All data packets are assumed to have a fixed size of $p$ (MByte). A user is called an \textit{active} user during the transmission time. \subsection{Drone Mobility Control} \label{subsec:dronemobility} All DBSs have the same height; therefore, we consider their mobility in the 2D plane only. Each drone moves \textit{continuously} in the 2D space with a constant linear speed of $v$, and updates its moving direction every $t_{m}$ sec, hereafter called the \textit{Direction Update Interval}. The proposed continuous movement model is thus applicable to all types of drones, with or without rotors. When a drone wants to change its direction while keeping a constant speed, it moves along an arc. More importantly, the maximum possible turning angle $\theta_{max}$ for a drone during a specific time $t_m$ can be obtained as $\theta_{max} = \displaystyle \frac{a_{max} \times t_m}{v}$, where $ a_{max}$ and $v$ are the maximum acceleration and the speed of the drone, respectively \cite{Shanmugavel20101084,Agility1998}. At every $t_m$, the DBS chooses an angle $\theta_n \in [-\theta_{max}, \theta_{max}]$ and completes the turn by the end of the next $t_m$ sec. \section{Performance Metrics} \label{sec:performancemetrics} The main motivation for the proposed model is to improve the system capacity. In this section, we define the metrics required to evaluate the network performance. The received signal power, $S^{path}(u,n)$ (watt), of an active user $u$ associated with drone $n$ can be obtained by \begin{align} \begin{split} S^{path}(u,n)&=\frac{b_u}{B} \times p_{tx} \times 10^{\frac{-\eta_{path}(u,n)}{10}}, \end{split} \label{eq:rcvpower} \end{align} where $b_u~(0 \leq b_u \leq B) $ is the bandwidth allocated to the user. Moreover, the total noise power, $N_u$ (watt), for an active user $u$, including the thermal noise power and the user equipment noise figure, can be represented by \cite{thermalnoise} \begin{equation} N_u = 10^{ \frac{-174+\delta_{ue}}{10}}\times{b_u}\times10^{-3}, \label{eq:noise} \end{equation} where $\delta_{ue}$ (dB) is the user equipment noise figure.
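To make the chain from geometry to received signal concrete, the following Python sketch evaluates \eqref{eq:plos}--\eqref{eq:noise}, together with the LoS/NLoS average used in the next section, for a single user--drone pair. It is illustrative only: the LoS parameters and path loss exponents are the urban values of Table~\ref{tbl:params}, while the reference path losses \texttt{A\_REF} are placeholders, since the paper takes them from the field tests of \cite{TR36.828}.
\begin{verbatim}
import math

ALPHA, BETA = 9.61, 0.16               # urban LoS parameters (parameter table)
GAMMA = {"LoS": 2.09, "NLoS": 3.75}    # path loss exponents (parameter table)
A_REF = {"LoS": 41.1, "NLoS": 33.0}    # placeholder reference losses [dB]

def p_los(r, h):
    # LoS probability: elevation angle omega expressed in degrees
    omega = math.degrees(math.atan2(h, r))
    return 1.0 / (1.0 + ALPHA * math.exp(-BETA * (omega - ALPHA)))

def path_loss_db(r, h, path):
    # Path loss in dB at the 3D distance d = sqrt(r^2 + h^2)
    return A_REF[path] + 10.0 * GAMMA[path] * math.log10(math.hypot(r, h))

def rx_power_w(r, h, path, b_u, B=5e6, p_tx_dbm=24.0):
    # Received power: bandwidth share times the attenuated transmit power
    p_tx = 10 ** ((p_tx_dbm - 30.0) / 10.0)        # dBm -> watt
    return (b_u / B) * p_tx * 10 ** (-path_loss_db(r, h, path) / 10.0)

def noise_power_w(b_u, nf_db=9.0):
    # Thermal noise plus the user equipment noise figure
    return 10 ** ((-174.0 + nf_db) / 10.0) * b_u * 1e-3

# Average spectral efficiency of one user at ground distance 30 m from a
# drone at height 10 m, ignoring interference (SNR in place of SINR)
r, h, b_u = 30.0, 10.0, 1e6
q = p_los(r, h)
se = sum(w * math.log2(1.0 + rx_power_w(r, h, path, b_u) / noise_power_w(b_u))
         for w, path in ((q, "LoS"), (1.0 - q, "NLoS")))
\end{verbatim}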
Accordingly, the \textit{Signal to Noise Ratio (SNR)} and \textit{Signal to Interference plus Noise Ratio (SINR)} of user $u$ associated with drone $n$ can be expressed as: \begin{align} \begin{split} SNR^{path}(u,n)=\frac{S^{path}(u,n)}{N_u}, \end{split} \label{eq:snr} \end{align} \begin{align} \begin{split} SINR^{path}(u,n)&=\frac{S^{path}(u,n)}{I_u+N_u}, \end{split} \label{eq:sinr} \end{align} where $I_u = \big(\sum_{i \in {N}, i \not= n, r_{u,i} \leq \kappa } S^{path}(u,i)\big)$ represents the interference received by user $u$ from neighboring DBSs. Then, the \textit{spectral efficiency (SE)} (bps/Hz) of an active user $u$ associated with drone $n$ can be formulated according to the Shannon Capacity Theorem as~\cite{Book_Proakis} \begin{align} \begin{split} \Phi^{path}(u,n) = \log_2 (1+SINR^{path}(u,n)). \end{split} \label{eq:individualspec} \end{align} Given the probabilistic channel model, the average SE for user $u$ can be expressed as \begin{align} \begin{split} \bar{\Phi}(u,n) =P^{LoS}\times\Phi^{LoS}(u,n) + P^{NLoS}\times\Phi^{NLoS}(u,n). \end{split} \label{eq:averagespec} \end{align} Moreover, the \textit{Throughput} (bps) of a communication link between an active user $u$ and drone $n$ can be formulated as \begin{align} \begin{split} T(u,n) = b_u\times\bar{\Phi}(u,n). \end{split} \label{eq:throughput} \end{align} Additionally, the \textit{Packet Throughput}, the ratio of the successfully transmitted bits of a packet to the time consumed to transmit them, can be expressed as \begin{align} \begin{split} P = p \times \frac{1}{\tau}. \end{split} \label{eq:packetthroughput} \end{align} The packet throughput averaged over all packets downloaded by all users is used as a performance metric. \section{User Association Schemes} \label{sec:userassoc} At any specific time, a set of users is connected to each DBS; however, in the free movement models, users can reselect their serving DBSs frequently. The set of all active users associated with DBS $n$ at a specific time $t$ is denoted by ${\mathcal{Q}}_{n}(t)$. Additionally, the total bandwidth $B$ is shared \textit{equally} among all associated active users of a DBS, and the DBS updates the resource allocation every $t_r$ sec, called the \textit{Resource Allocation Interval}. In the following, the two proposed schemes to control the user association process are described. \subsection{RSS-Based Scheme} In this scheme, a user selects the DBS with the highest \textit{Received Signal Strength} (RSS), and can reselect its serving DBS every $t_r$. There is no limitation on the number of users that can be associated with a specific DBS. Note that each user can independently choose its serving DBS according to the observed RSS, without any additional information from the other users. \subsection{Throughput-Based Scheme} By only taking into account the \textit{RSS}, a large number of users might select one DBS at the same time, thus creating unbalanced loads among DBSs and in turn reducing the system throughput due to the under-utilization of the frequency spectrum. To overcome this problem, we consider a more advanced association scheme, which needs global network knowledge. In this model, a user selects the DBS that can maximize the estimated throughput for the next resource allocation interval. In particular, when a user $u'$ requests a new packet at time $t$, the system throughput under the hypothesis of its association with each candidate DBS in the network area is estimated for time $t'= t +t_r$.
The DBS that gives the highest system throughput will be selected to serve $u'$. To solve this problem, we first define a binary association variable as follows \[ x_i = \begin{dcases*} 1 & if DBS \textit{i} is selected\\ 0 & otherwise \end{dcases*} \] for $i \in [1,2,\dots,N]$. Then, the optimization problem to find the best DBS for user $u'$ can be expressed as \begin{equation} \max\limits_{ x_i \in \{0,1\}}\ \sum_{i=1}^{N} \Big(\sum_{u \in \mathcal{Q}_{i}(t')} T(i,u) \Big) \label{eq:th_assoc_objective} \end{equation} \begin{equation} s.t. \quad \mathcal{Q}_{i}(t') = \mathcal{Q}_{i}(t) \cup \{u' \,|\, x_i = 1\} \quad\quad \label{eq:constraint1} \end{equation} \begin{equation} \sum_{i=1}^{N} x_i =1 \label{eq:constraint2} \end{equation} The first constraint defines the set of associated users of each DBS, depending on whether or not it serves the new request. To make sure that the user is connected to exactly one DBS, the second constraint must be satisfied. \section{DBS Mobility Algorithms} \label{sec:proposedalg} In our previous work \cite{tmc_submitted2017}, we proposed three different DBS mobility algorithms (DMAs) and showed that the one that employs Game Theory to make mobility decisions for the DBSs performed the best. Therefore, in this paper we only consider the Game Theory based DMA. The task of a DMA is to choose turning angles for the DBSs at the start of every $t_m$ interval so as to improve the performance of the system. The DBS will continue to follow the path specified by the turning angle selected at the \textit{start} of the interval for the next $t_m$ seconds. This path cannot be changed in the middle of $t_m$, despite any further changes in the mobile user population and traffic in the system. When there is no user associated with a DBS, it chooses a random direction that keeps the drone within the intended borders. To reduce the complexity of the problem, we discretize all turning options into a finite set $[-\theta_{max},\dots,-2g,-g,0,g,2g,\dots,\theta_{max}]$, where $g=\displaystyle \frac{2\theta_{max}}{G-1}$, with $G$ representing the total number of turning options. Each drone can choose its direction from these $G$ candidates. In the game theory based DMA, the direction selection is formulated as a non-cooperative game played by all serving DBSs in the system. The game is played at the start of each $t_m$ interval, and the decisions leading to the Nash Equilibrium (NE) are adopted by the DBSs to update their directions. A pure NE is a convergence point from which no player has an incentive to deviate by changing its action. Hereafter, we refer to this algorithm as the \textit{GT} DMA. The game is described by ${\mathcal G} = ({\mathcal P},\{{\mathcal A}_p\},u_p)$, where ${\mathcal P} = \{ 1,2,\dots, P\}$ is the set of DBSs with at least one associated active user, acting as players. ${\mathcal A}_p $ is the set of actions ($G$ turning angles) for each DBS, and $u_p$ is the utility function of each DBS. Furthermore, $u_p:{\mathcal A}\rightarrow {\rm I\!R} $ maps any member of the action space, $\theta \in {\mathcal A}$, to a real number. The action space ${\mathcal A}$ is defined as the Cartesian product of the sets of actions of all players (${\mathcal A} = {\mathcal A}_1 \times {\mathcal A}_2 \times \dots \times {\mathcal A}_P $). We denote the utility function of each player as $u_p(\theta_p, \theta_{-p})$, where $\theta_{-p}$ denotes the actions of all players except $p$.
The utility function of each player is defined as the spectral efficiency of that player given the actions of all players, as follows \begin{align} \begin{split} u_p(\theta) = u_p(\theta_p, \theta_{-p}) = \bar{ {\Phi}}(p), \end{split} \label{eq:utilityfunc_def} \end{align} where $ \bar{ {\Phi}}(p) $ is the average SE of the active users associated with DBS $p$. In a non-cooperative game, each player independently tries to find an action that maximizes its own utility; however, its decision is influenced by the actions of the other players: \begin{align} \begin{split} \theta_{p}^{*} = \text{arg}\,\max\limits_{{\theta_{p}\in {\mathcal A}_{p}}}\ u_p(\theta_p, \theta_{-p}) \quad \forall p \in \mathcal{P}. \end{split} \label{eq:maxutfun_game} \end{align} In this algorithm, all drones first select a random direction from their sets of actions. Then, each of them finds its best response given the other players' actions. Finally, after a few iterations they converge to an NE point and move in the selected directions during the next $t_m$ interval (see the sketch below). \section{Evaluation and Simulation Results} \label{sec:simulation} In this section, the performance of the DBS network is evaluated through extensive MATLAB simulations. We refer to the models where drones are free to move in the network with the prefix \textit{Free}. In these models, either the RSS-based or the Throughput-based user association scheme can be employed. On the other hand, the prefix \textit{Restricted} represents the models where users are always associated with their local DBSs, which are restricted to move within their small cell areas. Finally, \textit{HOV} denotes the models where hovering DBSs are deployed over the centers of the small cell areas. \subsection{Simulation Setup} The network area is divided into a 7$\times$7 grid of small cells (49 small cells), each of size $80m \times 80m$. Because of boundary effects, \textit{outer} cells in the simulated network scenario receive less interference than \textit{inner} cells. To obtain unbiased performance results, data is collected only from users in the inner cells. We use the same physical settings for the drones as in our previous works \cite{wowmom_main2017,7848883,tmc_submitted2017}. The drones' speeds vary from 2m/s to 8m/s, with the capability of changing direction every $t_m =1s$. Moreover, the maximum drone acceleration is set to the currently observed value of 4 $m/s^2$ \cite{wowmom_ws2017}, while higher accelerations are expected for future drones. The recommended height of 10m \cite{7842150} is selected for all DBSs in our simulation. The number of users and their traffic model follow the parameters recommended by the 3GPP~\cite{3gpp36814}, and are shown in Table~\ref{tbl:params}. Moreover, to mitigate the randomness of the results, all results have been averaged over 10 independent runs of 800-second simulations.
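Before presenting the results, we give a minimal sketch of the best-response loop behind the \textit{GT} DMA described above. The \texttt{utility} callable stands in for the average-SE evaluation of \eqref{eq:utilityfunc_def}, which in our simulator requires the full channel computation; the cap on the number of rounds is a practical safeguard, since convergence of best-response dynamics to a pure NE is not guaranteed for arbitrary utilities.
\begin{verbatim}
import random

def gt_dma_step(players, actions, utility, max_rounds=50):
    """One direction-selection game, played at the start of a t_m interval.

    players : DBS indices with at least one associated active user.
    actions : the G discretized turning angles in [-theta_max, theta_max].
    utility : utility(p, profile) -> average SE of DBS p under `profile`;
              a stand-in for the simulator's channel computations.
    Returns a turning-angle profile; a fixed point is a pure NE.
    """
    profile = {p: random.choice(actions) for p in players}   # random start
    for _ in range(max_rounds):
        changed = False
        for p in players:
            # best response of player p, holding the other players fixed
            best = max(actions, key=lambda a: utility(p, {**profile, p: a}))
            if best != profile[p]:
                profile[p], changed = best, True
        if not changed:       # nobody wants to deviate: pure NE reached
            return profile
    return profile            # fall back to the last profile if capped
\end{verbatim}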
\begin{table}[] \centering \caption{Definition of parameters and their values} \label{tbl:params} \begin{tabular}{lll} \hline {\bf Symbol} & {\bf Definition} & {\bf Value} \\ \hline \hline $N$ & Number of Drones & 49\\ $C$ & Number of Small Cells & 49 \\ $U_m$ &Total Number of Users in the Area&245\\ $U_s$ &Number of Users in Each Small Cell&5\\ $B$ & Total Bandwidth &5 MHz\\ $h$ & Drone Height &10 m\\ $v$ & Drone Speed &[2, 4, 6, 8] m/s \\ $l$ & Edge Length of a Small Cell &80 m \\ $f$ &Carrier Frequency &2 GHz \\ $p_{tx}$& Drone Transmission Power& 24 dBm \cite{TR36.828}\\ $\lambda$ & Mean Reading Time & 40 sec \\ $\alpha, \beta$ & Environmental Parameters for Urban Area& 9.61, 0.16 \cite{yaliniz2016efficient} \\ $\gamma_{path}$ & Path Loss Exponent (LoS/NLoS)& 2.09/3.75 \cite{TR36.828}\\ $\delta_{ue}$ & UE Noise Figure & 9 dB \\ ${t}_{m}$ &Direction Update Interval &1 sec \\ $t_{r}$ &Resource Allocation Interval &0.2 sec \\ $\kappa$ &Interference Distance & 200 m\\ $p$ &Data Packet Size & 4 MByte \\ $G$ & Number of Candidate Directions&21 \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=1\linewidth]{avg_pckt_th} \caption{Average packet throughput for the Restricted and Free-RSS movement models} \label{fig:avgpcktth} \end{figure} \begin{figure} \centering \includegraphics[scale=0.6]{packet_th.pdf} \caption{Empirical CDF of the packet throughput for a speed of 2m/s} \label{fig:packet_th} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.54]{subplot_cdf.png} \caption{CDF of (a) DBS-to-user distance, (b) received signal, (c) interference signal, and (d) number of active DBSs} \label{fig:subplot_cdf} \end{figure*} \subsection{The Performance of RSS-based User Association} \label{subsec:performance_rslt} In this section, we study the performance of the RSS-based user association with the free movement model. Figure \ref{fig:avgpcktth} plots the average packet throughput of the system when the DBSs are moving with various speeds, where the speed ``0'' represents the \textit{HOV} scenario. From this figure, we can draw the following observations: \begin{itemize} \item Generally speaking, the average packet throughput of the \textit{Free-RSS} movement model is lower than that of the \textit{Restricted} movement model, especially when the DBSs are moving at a low speed. However, the \textit{Free-RSS} movement model still outperforms the HOV scenario. We further study the behavior of various metrics in the network to see why the \textit{Free-RSS} movement model obtains a lower packet throughput than the \textit{Restricted} movement model. \item Similar to the results observed in our previous works, a higher acceleration generates better results than a lower acceleration. \end{itemize} In order to show why the \textit{Free-RSS} movement model yields a worse performance than the \textit{Restricted} movement model, we focus on the scenario where the DBSs are moving at a speed of 2m/s, with a maximum acceleration of 4$m/s^2$. First, the empirical CDF of the packet throughput is plotted in Figure \ref{fig:packet_th}. It can be observed from this figure that the \textit{Free-RSS} model outperforms the \textit{HOV} model in terms of packet throughput; however, the \textit{Restricted} model achieves a higher performance than the \textit{Free-RSS} model. We further collected the ground distance between every active user and its associated DBS during the simulation time. The CDF of the ground distance is depicted in Figure \ref{fig:subplot_cdf}a.
According to this figure, when the drones follow the \textit{Free-RSS} movement model, the DBS-to-user distance becomes larger than in both the HOV and the \textit{Restricted} movement models. A larger DBS-to-user distance deteriorates the received signal strength, as illustrated in Figure \ref{fig:subplot_cdf}b: users in the \textit{Free-RSS} movement model receive a lower signal strength than in the \textit{HOV} and \textit{Restricted} scenarios. We also investigate the interference signal at the users; as shown in Figure \ref{fig:subplot_cdf}c, the \textit{Free-RSS} movement model has the lowest interference signal in the network. This means that fewer DBSs are transmitting data to users at any specific time, creating less interference. Figure \ref{fig:subplot_cdf}d confirms that the number of active DBSs during the simulation time in the free movement model is indeed lower than in the other models. Additionally, in the \textit{Free-RSS} movement model, the number of active users associated with a DBS might change over time. To see how the DBS loads vary during the simulation time, the number of active users associated with each DBS is collected for both the \textit{Free} and \textit{Restricted} movement models, and compared with the hovering drones. \begin{figure} \centering \includegraphics[scale=0.5]{assoc_users.pdf} \caption{CDF of the number of active users associated with the DBSs} \label{fig:assoc_users} \end{figure} As can be seen from Figure \ref{fig:assoc_users}, there is a possibility of having a large number of users associated with one drone in the \textit{Free-RSS} model, which causes unbalanced loads among the DBSs. In contrast, in the \textit{Restricted} and HOV models, the maximum number of users associated with a DBS is fixed to the number of users in a small cell. Note that unbalanced loads among DBSs reduce the system throughput due to the under-utilization of the frequency spectrum. \subsection{The Performance of Throughput-based User Association} In this section, we study the performance of the more intelligent (Throughput-based) user association scheme. Figure \ref{fig:packetthall} shows that employing the Throughput-based user association scheme improves the packet throughput significantly. According to this figure, the DBSs in the \textit{Free-Throughput} model with an acceleration of 4$m/s^2$ and a speed of 2m/s achieve a remarkable packet throughput gain of 47\% compared to the \textit{HOV} scenario, while the achievable gains for the \textit{Free-RSS} and \textit{Restricted} movement models are 8\% and 22\%, respectively. In particular, this shows that with a smart user association scheme, freeing up the DBSs in the network generates better results than limiting them to serve local users within the small cell boundaries. This large improvement is the result of the balanced load among the DBSs, as shown in Figure \ref{fig:usersall}. This figure illustrates the distribution of the number of active users associated with the DBSs during the simulation time, when the drones are moving at a speed of 2m/s. It shows that deploying the Throughput-based scheme balances the loads, as also achieved by the \textit{Restricted} and \textit{HOV} movement models.
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{avg_packet_th_acc4} \caption{Average packet throughput of different movement models for an acceleration of 4$m/s^2$} \label{fig:packetthall} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{users_all} \caption{CDF of the number of active users associated with the DBSs} \label{fig:usersall} \end{figure} \subsection{The DBS Collision Issue} Note that when drones move freely at the same height, they may collide with each other. To study the probability of collision, we analyzed the DBS-to-DBS distance during the simulation time. Figure \ref{fig:dbstodbsdistance} illustrates the CDF of this DBS-to-DBS distance for the free movement models. As can be seen from this figure, the intelligent movement of the drones maintains a comfortable distance among the DBSs. The intuition is that in the proposed DMA, each DBS tends to move closer to its serving users and farther away from interfering DBSs. Therefore, the probability of two drones flying in close proximity is extremely low. As shown in Figure \ref{fig:dbstodbsdistance}, the probability that the DBS-to-DBS distance is less than 10m is well below $0.00015$. \section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we have shown that by freeing the DBSs from their cells and letting them cruise over the network, a significantly higher system throughput can be achieved compared with the case where each DBS is restricted to its own cell. However, this large performance gain comes at the expense of an intelligent and complex user association scheme, which needs global network knowledge. Designing less complex user association schemes that still benefit from the free movement model is left for future work. Moreover, allowing DBSs to move freely opens up the opportunity to deploy fewer DBSs; the performance of different numbers of DBSs in the network will also be studied in future work. \section*{Acknowledgment} Azade's research is supported by an Australian Government Research Training Program Scholarship and a Data61$|$CSIRO PhD top-up scholarship. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{dbs_to_dbs_distance} \caption{Empirical CDF of the DBS-to-DBS distance} \label{fig:dbstodbsdistance} \end{figure} \bibliographystyle{IEEEtran}
\section{Introduction} The precision of lattice QCD computations of leptonic and semileptonic decay amplitudes has now reached the sub-percent level\,\cite{Aoki:2019cca}. This implies that isospin-breaking effects, including electromagnetism, must be included for further progress to be made in the determination of the corresponding CKM matrix elements and other tests of the Standard Model. When studying radiative corrections to leptonic decays of pseudoscalar mesons at $O(\alpha_\mathrm{em})$, the presence of infrared divergences requires us to consider the rates for both the processes $P\to\ell\bar\nu_{\ell}$ and $P\to\ell\bar\nu_{\ell}\gamma$, which we denote by $\Gamma_0(P\to\ell\bar\nu)$ and $\Gamma_1(P\to\ell\bar\nu)$ respectively, where the subscript {\footnotesize 0,1} denotes the number of photons in the final state. Our initial proposal was to restrict the energy of the final-state photon to be sufficiently small ($E_\gamma<\Delta E_\gamma\simeq 20$\,MeV say) for the dependence on the structure of the meson to be negligible and yet to be within the experimental resolution\,\cite{Carrasco:2015xwa}. It is then convenient to organise the calculation in the form \begin{equation}\label{eq:masterleptonic} \Gamma_0+\Gamma_1(\Delta E_\gamma)=\lim_{V\to\infty}\big(\Gamma_0-\Gamma_0^\mathrm{pt}\big)+\lim_{V\to\infty}\big(\Gamma_0^\mathrm{pt}+\Gamma_1(\Delta E_\gamma)\big)\,,\end{equation} where ``pt" implies that the meson $P$ is treated as being \emph{point-like}. Each of the two terms on the right-hand side of Eq.\,(\ref{eq:masterleptonic}) is infrared finite, and the second term can be calculated in perturbation theory, as was done in Ref.\,\cite{Carrasco:2015xwa}. On the other hand, $\Gamma_0$ must be computed in a lattice simulation, as the amplitude at $O(\alpha_\mathrm{em})$ includes a virtual photon which must be summed over all momenta. The introduction of the soft energy cut-off $\Delta E_\gamma$ can be avoided by computing amplitudes with a real photon in the final state. Such calculations are now in progress, as reported at this conference\,\cite{Kane:2019jtj,deDivitiis:2019uzm}. The non-perturbative evaluation of $\Gamma_1$ has the important practical implication that the method can be applied to the decays of heavy mesons. For example, since $m_{B^\ast}-m_B\simeq 45$\,MeV, the hyperfine splitting for heavy mesons provides another small scale, which limits the scope and precision of the perturbative calculations for soft photons. \section{Semileptonic decays} \begin{figure}[t] \begin{center} \includegraphics[width=0.25\hsize]{Semilept0.eps}\hspace{0.7in} \includegraphics[width=0.25\hsize]{Semilept1.eps} \end{center} \caption{Radiative corrections to semileptonic $K_{\ell 3}$ decays at $O(\alpha_\mathrm{em})$ require the evaluation of the rates for both the processes $K\to\pi\ell\bar\nu_{\ell}$ and $K\to\pi\ell\bar\nu_{\ell}\gamma$; the corresponding amplitudes are sketched schematically here. \label{fig:SL1}} \end{figure} For the remainder of this talk, we consider the extension of the ideas of Ref.\,\cite{Carrasco:2015xwa} to semileptonic decays, focussing on $K_{\ell 3}$ decays as illustrated in Fig.\,\ref{fig:SL1}, but noting that the discussion is more general. A particularly appropriate measurable quantity to consider is $\frac{d^2\Gamma}{dq^2 ds_{\pi\ell}}$, where $q^2=(p_K-p_\pi)^2$ and $s_{\pi\ell}=(p_\pi+p_\ell)^2$.
Following the same procedure as for leptonic decays we write: \begin{equation} \frac{d^2\Gamma}{dq^2 ds_{\pi\ell}}=\lim_{V\to\infty} \left( \frac{d^2\Gamma_0}{dq^2 ds_{\pi\ell}} -\frac{d^2\Gamma_0^{\mathrm{pt}}}{dq^2 ds_{\pi\ell}}\right) +\lim_{V\to\infty}\left( \frac{d^2\Gamma_0^{\mathrm{pt}}}{dq^2 ds_{\pi\ell}}+\frac{d^2\Gamma_1(\Delta E_\gamma)}{dq^2 ds_{\pi\ell}}\right)\,,\label{eq:SL1} \end{equation} where again ``pt" denotes \emph{point-like} and the infrared divergences cancel separately in each of the two terms on the right-hand side. In Eq.\,(\ref{eq:SL1}) we have introduced the soft cut-off $\Delta E_\gamma$ on the energy of the photon, but this can be avoided by computing the amplitudes with a real final-state photon non-perturbatively. We now discuss a number of issues which arise when considering semileptonic decays and which are absent for leptonic decays. \begin{figure}[t] \begin{center} \includegraphics[width=0.35\hsize]{Semilept_aux1.eps} \end{center} \caption{Diagram contributing to the $K\to\pi\ell\bar{\nu}_\ell$ correlation function, illustrating the presence of unphysical terms which grow exponentially in time (see text). \label{fig:SL2}} \end{figure} \subsection{The presence of unphysical terms which grow exponentially in time.} Consider for illustration the diagram in Fig.\,\ref{fig:SL2}. The integration over the times $t_{1,2}$ yields terms in the momentum sum which are proportional to $e^{-(E_{\pi\ell}^\mathrm{int}-E_{\pi\ell}^\mathrm{ext})(t_{\pi\ell}-t_H)}$, where $ E_{\pi\ell}^\mathrm{int}$ and $E_{\pi\ell}^\mathrm{ext}$ are the internal and external energies of the pion-lepton pair, and $t_{\pi\ell}$ and $t_H$ are the times of the insertion of the pion-lepton sink and of the weak Hamiltonian $H$. Depending on the choice of the momenta of the final-state pion and lepton, it is possible that the exchange of a photon with an allowed finite-volume momentum can result in the internal energy being smaller than the external one, $E_{\pi\ell}^\mathrm{int}<E_{\pi\ell}^\mathrm{ext}$, leading to unphysical terms which grow exponentially with $t_{\pi\ell}-t_H$. This is a generic feature when calculating long-distance contributions in Euclidean space, and such terms must be identified and subtracted. The number of these terms depends on $s_{\pi\ell}$ and on the chosen boundary conditions, which in general will include \emph{twisting}. Note that no such exponentially growing terms are present for leptonic decays. For $K_{\ell 3}$ decays, in some corners of phase space, there may also be multi-hadron intermediate states with energies smaller than the external one, and hence containing exponentials which grow with the time separation, but these are expected to be small. For example the $K\to\pi\pi\ell\nu\to\pi\ell\nu(\gamma)$ sequence only contributes at high order ($p^6$) in ChPT and is present due to the Wess-Zumino-Witten term in the action. More importantly however, we can restrict the values of $s_{\pi\ell}$ to a range below the multi-hadron threshold. Note that for $D$ and $B$ decays the large number of such terms which need to be subtracted in most of phase space makes it \emph{very difficult} to perform a non-perturbative lattice calculation.
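To make the origin of these growing terms explicit, consider schematically a single vertex insertion at time $t_1$, with the external state of energy $E_{\pi\ell}^\mathrm{ext}$ propagating between $t_H$ and $t_1$ and an intermediate state of energy $E_{\pi\ell}^\mathrm{int}$ propagating between $t_1$ and $t_{\pi\ell}$ (in the full correlation function there are two insertion times $t_{1,2}$ and a sum over finite-volume photon momenta, but the mechanism is the same). The integration over $t_1$ gives
\begin{equation}
\int_{t_H}^{t_{\pi\ell}}dt_1\;e^{-E_{\pi\ell}^\mathrm{int}(t_{\pi\ell}-t_1)}\,e^{-E_{\pi\ell}^\mathrm{ext}(t_1-t_H)}
=\frac{e^{-E_{\pi\ell}^\mathrm{ext}(t_{\pi\ell}-t_H)}-e^{-E_{\pi\ell}^\mathrm{int}(t_{\pi\ell}-t_H)}}{E_{\pi\ell}^\mathrm{int}-E_{\pi\ell}^\mathrm{ext}}\,,
\end{equation}
so that, once the external time dependence $e^{-E_{\pi\ell}^\mathrm{ext}(t_{\pi\ell}-t_H)}$ is divided out to extract the matrix element, the second term behaves as $e^{-(E_{\pi\ell}^\mathrm{int}-E_{\pi\ell}^\mathrm{ext})(t_{\pi\ell}-t_H)}$ and grows exponentially whenever $E_{\pi\ell}^\mathrm{int}<E_{\pi\ell}^\mathrm{ext}$.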
\subsection{Finite-volume corrections} For leptonic decays of the pseudoscalar meson $P$, in QED$_\mathrm{L}$ the finite-volume effects take the form: \begin{equation}\Gamma_0^{\mathrm{pt}}(L) = C_0(r_\ell) + \tilde C_0(r_\ell)\log\left(m_P L\right)+ \frac{C_1(r_\ell)}{m_P L}+ \dots \, ,\end{equation} where $r_\ell=m_\ell/m_P$\,\cite{Lubicz:2016xro}. An important point to note is that the exhibited $L$-dependent terms are \emph{universal}, i.e. independent of the structure of the meson, and we have calculated these coefficients (using the QED$_{\mathrm{L}}$ regulator of the zero mode\,\cite{Hayakawa:2008an}). The leading structure-dependent FV effects in $\Gamma_0-\Gamma_0^{\mathrm{pt}}$ are of $O(1/L^2)$. The following scaling law is useful in determining which terms need to be evaluated to obtain the universal coefficients: if the leading behaviour of the infinite-volume integrand and finite-volume summand is proportional to $1/(k^2)^{\hspace{-0.5pt}\frac n2}$ as $k\to 0$, then the corresponding difference between the infinite-volume integral and the finite-volume sum is of $O(1/L^{4-n})$\,\cite{Lubicz:2016xro}. In the calculation of the mass spectrum $n=3$ and the leading finite-volume correction is of $O(1/L)$ and is universal, as is the subleading term of $O(1/L^2)$. In decay amplitudes $n=4$, corresponding to the presence of infrared divergences. \begin{figure}[t] \begin{center} \includegraphics[width=0.35\hsize]{Semilept_aux2.eps} \end{center} \caption{Diagram contributing to the $K\to\pi\ell\bar{\nu}_\ell$ correlation function used in the discussion of finite-volume effects (see text). \label{fig:SL3}} \end{figure} For illustration, consider the diagram in Fig.\,\ref{fig:SL3}. At small photon momentum $k$, the pion and lepton internal propagators scale as $1/k$ and the photon propagator as $1/k^2$, so that the loop integrand/summand scales as $1/k^4$, corresponding to an infrared divergence. There are also subleading terms which scale as $1/k^3$ and which lead to $1/L$ finite-volume effects. These arise by expanding the propagators and vertices, including the vertex containing the weak Hamiltonian, to $O(k)$. (Since the $1/L^2$ finite-volume corrections depend on the structure of the pion, we do not consider these further.) Electromagnetic Ward identities are particularly useful in the study of the universality of the $O(1/L)$ finite-volume corrections. (Alternatively one can construct a gauge-invariant effective theory.) To illustrate this, consider the pion propagator in Fig.\,\ref{fig:propandvertex}(a). We define the Euclidean pion propagator $\Delta_\pi(p_\pi)$ by: \begin{eqnarray} C_{\pi\pi}(p_\pi)&=&\int d^{\,4}\!z~e^{-ip_\pi\cdot z}~\langle\,0\,|T\big\{\phi_\pi(z) \phi^\dagger_\pi(0)\big\}\,|\,0\,\rangle\nonumber\\ &\equiv&\big|\langle\,0\,|\phi_\pi(0)\,|\,\pi(p_\pi)\,\rangle\big|^2~\Delta_\pi(p_\pi)\\ &\equiv&\big|\langle\,0\,|\phi_\pi(0)\,|\,\pi(p_\pi)\,\rangle\big|^2~\frac{Z_\pi(p_\pi^2)}{p_\pi^2+m_\pi^2}\,.\nonumber \end{eqnarray} $Z_\pi$ parametrises the structure dependence of the pion propagator.
We now expand the propagator for small values of $k$ and of the off-shellness $\epsilon_\pi^2=p_\pi^2+m_\pi^2$ to obtain: \begin{equation} \Delta_\pi(p_\pi+k)=\frac{1-2z_{\pi_1}p_\pi\cdot k-\epsilon_\pi^2 z_{\pi_1}+O(k^2,\epsilon_\pi^4,\epsilon_\pi^2 k)}{\epsilon_\pi^2+2p_\pi\cdot k+k^2}\,, \end{equation} where the \underline{structure dependent} parameter $z_{\pi_1}$ is given by: \begin{equation} z_{\pi_1}=\left.\frac{dZ^{-1}_\pi(p_\pi^2)}{dp_\pi^2}\right|_{p_\pi^2=-m_\pi^2}\,, \end{equation} (the subscript {\footnotesize 1} on $z_{\pi_1}$ labels the coefficient of the Taylor series expansion of $Z_\pi^{-1}$\,\cite{Lubicz:2016xro}). \begin{figure}[t] \begin{center} \includegraphics[width=0.6\hsize]{PionProp.eps} \end{center} \caption{(a) The pion propagator, (b) $\pi\gamma\pi$ vertex \label{fig:propandvertex}} \end{figure} Similarly we define the amputated $\pi\gamma\pi$ vertex $\Gamma_\pi^\mu$, by amputating the propagators and matrix elements of the interpolating operators in the correlation function (see Fig.\,\ref{fig:propandvertex}(b))\\[-0.1in] \begin{equation} C_\pi^\mu(p_\pi,k)=i\int d^{\hspace{1pt}4}\hspace{-1pt}z\,d^{\hspace{1pt}4}\hspace{-1pt}x\,e^{-ip_\pi\cdot z}\,e^{-ik\cdot x} \langle 0|\,T\big\{\phi_\pi(z)\,j^\mu(x)\,\phi_\pi^\dagger(0)\big\}\,|0\rangle \,.\end{equation} We now expand $\Gamma_\pi$ for small $k$ (and $\epsilon_\pi$). The key result is obtained from the Ward identity: \begin{equation} k_\mu\Gamma^\mu_\pi(p_\pi,k)=Q_\pi\left\{\Delta_\pi^{-1}(p_\pi+k)-\Delta^{-1}_\pi(p_\pi) \right\}\,,\end{equation} which relates the first-order expansion coefficients and yields \begin{equation} Z_\pi(p_\pi+k)\,\Gamma_\pi^\mu(p_\pi,k)=Q_\pi\,(2p_\pi+k)^\mu+O(k^2,\epsilon_\pi^2)\,.\end{equation} Here $Q_\pi$ is the electric charge of the pion. Thus, since we are neglecting the structure-dependent $O(1/L^2)$ corrections, the pion propagator and the $\pi\gamma\pi$ vertex combine to give the same result as in the point-like theory. We have seen that, as a result of the Ward identity, we do not need the derivatives of the pion form factors to obtain the $O(1/L)$ corrections. However, we also need to expand the weak vertex which, in QCD without QED, is a linear combination of two form factors $f^{\pm}(q^2)$. Off-shell, the $K\pi\ell\bar\nu$ weak vertex is a linear combination of two functions $F^{\pm}(p_\pi^2,p_K^2,2p_K\cdot p_\pi)$ (which on-shell reduce to the form factors $f^{\pm}(q^2)$). The Ward identity relates the $K\pi\ell\bar\nu$ and $K\pi\ell\bar\nu\gamma$ vertices and does lead to a partial, but not complete, cancellation of the $O(1/L)$ terms. The remaining $O(1/L)$ corrections are found to depend on the derivatives of the form factors $df^{\pm}(q^2)/dq^2$, as well as on the form factors $f^{\pm}(q^2)$ themselves; this will be demonstrated in a publication in preparation. Such derivative terms are a generic consequence of the Low theorem and are absent only in particularly simple cases, such as leptonic decays, as explained below. These corrections are ``universal'' since the coefficients are physical, i.e. the form factors and their derivatives can be measured experimentally or computed in lattice simulations. On the other hand, there are no corrections of the form $df^{\pm}\!/dm_\pi^2$ or $df^{\pm}\!/dm_K^2$, which would not be physical. It is instructive to contrast the situation for semileptonic decays with the corresponding one for leptonic decays, e.g. for $K^+\to\ell^+\nu_\ell$ decays~\cite{Lubicz:2016xro}.
In that case the leading isospin-breaking corrections are proportional to the decay constant $f_K$ computed in QCD simulations, and again there are no $O(1/L)$ terms proportional to $df_K\!/dm_K^2$. In that case, however, there is no scope for terms analogous to $df^{\pm}(q^2)/dq^2$. For leptonic decays we had calculated the $O(1/L)$ finite-volume corrections analytically using the Poisson summation formula\,\cite{Lubicz:2016xro}. For semileptonic decays, we have calculated the integrands/summands necessary to evaluate the coefficients of the $O(1/L)$ corrections, but have not yet evaluated the corrections themselves. In the absence of the analytic coefficients, the subtraction of the $O(1/L)$ effects can be performed instead by fitting data obtained at different volumes, with, however, some loss of precision. For leptonic decays, where the $O(1/L)$ corrections are known and can be subtracted explicitly, we have checked that fitting these finite-volume effects numerically leads instead to an approximate doubling of the uncertainty in the theoretical prediction extrapolated to physical masses in the infinite-volume limit. This may be disappointing but, recalling that isospin-breaking corrections are of $O(1\%)$, it is not a major problem. \section{The perturbative calculation} We return now to the relation in Eq.\,(\ref{eq:SL1}), where we envisage that $d^2\Gamma_0^{\mathrm{pt}}/dq^2 ds_{\pi\ell}$ and $d^2\Gamma_1/dq^2 ds_{\pi\ell}$ are to be calculated in perturbation theory. This has not yet been fully done. A related calculation has recently been performed by de Boer, Kitahara and Ni\v{s}and\v{z}i\'c\,\cite{deBoer:2018ipi} in the context of $B\to D^{(\ast)}$ semileptonic decays. This work was motivated by the $R(D)$ and $R(D^\ast)$ anomalies in semileptonic $B$-decays, which seem to indicate a violation of lepton flavour universality between decays in which the final-state charged lepton is a $\tau$ on the one hand and a $\mu$ or electron on the other. The authors of Ref.\,\cite{deBoer:2018ipi} were investigating whether, within the Standard Model, this anomaly may be explained by radiative corrections not present in the \emph{photos} package; this appears not to be the case. The calculation, however, is incomplete, as we now explain. The calculations in Ref.\,\cite{deBoer:2018ipi} were not performed in the point-like approximation. Instead $d^2\Gamma_1/dq^2 ds_{\pi\ell}$ was obtained by using the eikonal approximation, in which the denominators of the propagators of the charged lepton and meson are approximated by $\pm 2p\cdot k$, where $p$ is the momentum of the lepton or meson and $k$ is that of the photon. All powers of $k$ in the numerators (including at the weak vertex) are dropped. In the calculation of $d^2\Gamma_0^{\mathrm{pt}}/dq^2 ds_{\pi\ell}$, the dependence of the weak vertex on the photon's momentum $k$ is dropped and the form factors are evaluated at the external value of $q^2$ (i.e. at $q=p_B-p_D$), but otherwise all factors of $k$ are kept\,\footnote{We thank Teppei Kitahara for helpful discussions on this point.}. The formulae in Ref.\,\cite{deBoer:2018ipi} can be readily adapted to semileptonic kaon decays by changing the masses of the mesons and leptons. By inserting the results in Eq.\,(\ref{eq:SL1}), the infrared divergences cancel separately in both terms on the right-hand side. On the other hand, the fact that terms which behave as $1/k^3$ as $k\to 0$ are not fully evaluated implies that not all the $O(1/L)$ corrections are obtained.
In particular, as explained above, we would need the terms proportional to the derivatives of the form factors. \section{Summary and Conclusions} We are developing the framework for the computation of radiative corrections to semileptonic $K_{\ell 3}$ decays. This builds on the theoretical structure\,\cite{Carrasco:2015xwa,Lubicz:2016xro}, and its successful implementation\,\cite{Giusti:2017dwk,DiCarlo:2019thl}, developed for computations of radiative corrections to leptonic decays. At this conference, we have also presented the results of a successful computation of the $P\to\ell\bar\nu\gamma$ amplitude, making it possible to study leptonic decays of heavy mesons\,\cite{deDivitiis:2019uzm}. Among the important points to note are:\\[0.03in] (i) An appropriate observable to study for semileptonic decays is $d^2\Gamma/dq^2 ds_{\pi\ell}$.\\[0.02in] (ii) The presence of exponentially growing terms in $t_{\pi\ell}-t_H$ which need to be subtracted. \\[0.02in] (iii) The universality of the $O(1/L)$ corrections, which do however depend on the form factors $f^{\pm}(q^2)$ and on their derivatives with respect to $q^2$. This is a generic feature, absent only for particularly simple processes such as leptonic decays. (In the present study we have used the QED$_{\mathrm{L}}$ regulator for the photon's zero mode; similar techniques can be used to investigate the universality (or otherwise) of the $O(1/L)$ corrections using other regulators.) \vspace{0.02in}Among the remaining tasks is the analytic evaluation of the coefficients of the $O(1/L)$ corrections. Alternatively, these corrections can be fitted numerically, in which case the result of Ref.\,\cite{deBoer:2018ipi} may be the most convenient one for the term which is added and subtracted in Eq.\,(\ref{eq:SL1}). Finally, the method needs to be implemented and tested numerically.\\[0.01in] \vspace{-0.05in}\textbf{Acknowledgements:} V.L., G.M. and S.S. thank MIUR (Italy) for partial support under the contract PRIN 2015. C.T.S. was supported by an Emeritus Fellowship from the Leverhulme Trust. N.T. thanks the Univ. of Rome Tor Vergata for the support granted to the project PLNUGAMMA.
\section{Introduction} We consider the unconstrained minimization problem \begin{align}\label{eq:min-f} \underset{\boldsymbol x\in\reals^d}{\text{minimize}}\ f(\boldsymbol x), \end{align} using the stochastic gradient descent (SGD) algorithm. Initialized at $\boldsymbol x_0 \in\reals^d$, the SGD algorithm is given by the iterations, \begin{equation} \label{eq:sgd} \boldsymbol x_{t+1} = \boldsymbol x_{t} - \gamma_{t+1} \left( \boldsymbol \nabla f(\boldsymbol x_{t}) + \boldsymbol \xi_{t+1} (\boldsymbol x_t)\right),\ \ t= 0,1,2,... \end{equation} where $\{\gamma_t\}_{t\in \mathbb N^+}$ denotes the step-size sequence, and $\{\boldsymbol \xi_t\}_{t\in \mathbb N^+}$ is a martingale difference sequence adapted to a filtration $\{\mathcal{F}_t\}_{t\in \mathbb N}$, characterizing the noise in the gradient (the sequence $\{\boldsymbol x_t\}_{t\in \mathbb N}$ is also adapted to the same filtration, if we assume $\boldsymbol x_0$ is $\mathcal{F}_0$-measurable). Our focus is on the case where the noise is state dependent, and its variance is infinite, i.e., $\Exp{\big[\|\boldsymbol \xi_t\|_2^2\big]}=\infty$. Many problems in modern statistical learning can be written in the form \eqref{eq:min-f}, where $f(\boldsymbol x)$ typically corresponds to the population risk, that is, $f(\boldsymbol x) \coloneqq \Exp_{z\sim \nu}[\ell(\boldsymbol x,z)]$ for a given loss function $\ell$ and an unknown data distribution $ \nu$. In practice, one observes independent and identically distributed (i.i.d.) samples $z_i\sim \nu$ for $i\in[n]$, and estimates the population gradient $\boldsymbol \nabla f(\boldsymbol x)$ with a noisy gradient at each iteration, which is based on an empirical average over a subset of the samples $\{z_i\}_{i\in[n]}$. Due to its simplicity, superior generalization performance, and well-understood theoretical guarantees, SGD has been the method of choice for minimization problems arising in statistical machine learning. Starting from the pioneering works of~\cite{robbins1951stochastic, chung1954stochastic, sacks1958asymptotic, fabian1968asymptotic, ruppert1988efficient, shapiro1989asymptotic,polyak1992acceleration}, theoretical properties of the SGD algorithm and its variants have been receiving growing attention under different scenarios. Recent works, for example~\cite{tripuraneni2018averaging, su2018statistical, duchi2016local, toulis2017asymptotic,fang2018online, anastasiou2019normal, yu2020analysis}, establish convergence rates for SGD in various settings, and build on the analysis of~\cite{polyak1992acceleration} to prove a central limit theorem (CLT) for Polyak-Ruppert averaging, which leads to novel methodologies to compute confidence intervals using SGD. However, a recurring assumption in this line of work is the finite noise variance, which may be violated frequently in modern frameworks. Heavy-tailed behavior in statistical methodology may naturally arise from the underlying model, or through the iterative optimization algorithm used during model training. In robust statistics, one often encounters heavy-tailed noise behavior in data, which in conjunction with standard loss functions leads to infinite noise variance in SGD. Very recently, heavy-tailed behavior has been shown to emerge from the multiplicative noise in SGD, when the step-size is large and/or the batch-size is small~\citep{hodgkinson2020multiplicative,gurbuzbalaban2020heavy}.
On the other hand, there is strong empirical evidence in modern machine learning that the gradient noise often exhibits heavy-tailed behavior, which indicates an infinite variance. For example, this is observed in fully connected and convolutional neural networks~\citep{simsekli19a,gurbuzbalaban2020fractional} as well as recurrent neural networks~\citep{zhang2019adam}. Thus, understanding the behavior of SGD under infinite noise variance becomes extremely important for at least two reasons. A \emph{computational complexity reason:} modern machine learning and robust statistics frameworks lead to heavy-tailed behavior in SGD; thus, understanding the performance of this algorithm in terms of precise convergence rates, as well as the required conditions on the step-size sequence as a function of the `heaviness' of the tail, becomes crucial in this setup. A \emph{statistical reason:} many inference methods that rely on Polyak-Ruppert averaging utilize a CLT that holds under finite noise variance (see e.g. online bootstrap and variance estimation approaches~\citep{fang2018online, su2018statistical,chen2020statistical}). Using the same methodology in the aforementioned modern framework (under heavy-tailed noise) will ultimately result in incorrect confidence intervals, jeopardizing the statistical procedure. Thus, establishing the limit distribution in this setting is of great importance. In this work, we study the behavior of the SGD algorithm with diminishing step-sizes for a class of strongly convex problems when the noise variance is infinite. We establish the convergence rates of the SGD iterates towards the global minimum, and identify a sufficient condition on the Hessian of $f$, which interpolates between positive semi-definiteness and diagonal dominance with non-negative diagonal entries. We further study the Polyak-Ruppert averaging of the SGD iterates, and show that the limit distribution is a multivariate $\alpha$-stable distribution. We illustrate our theory on linear regression and generalized linear models, demonstrating how to verify the conditions of our theorems. Perhaps surprisingly, our results show that even under heavy-tailed noise with infinite variance, SGD with diminishing step-sizes can converge to the global optimum without requiring any modification to either the loss function or the algorithm itself, as opposed to the conventional techniques used in robust statistics~\citep{huber2004robust}. Finally, we argue that our work has potential implications for constructing confidence intervals in the infinite noise variance setting. \section{Preliminaries and Technical Background} \par \textbf{Notational Conventions.} By $\mathbb N$, $\mathbb N^+$ and $\mathbb R$ we denote the set of non-negative integers, positive integers, and real numbers respectively. For $m\in\mathbb N^+$, we define $[m]=\{1, \ldots, m\}$. We use italic letters (e.g. $x, \xi$) to denote scalars and scalar-valued functions, bold face italic letters (e.g. $\bm x, \bm \xi$) to denote vectors and vector-valued functions, and bold face upper case letters (e.g. $\bf A$) to denote matrices. We use $| \bm x |$ and $\| \bm x \|_p$ to denote the 2-norm and $p$-norm of a vector $\bm x$; $\|\bf A\|$ and $\| \bf A \|_p$ the operator 2-norm and operator $p$-norm of a matrix $\bf A$. The transpose of a matrix $\bf A$ and a vector $\bm x$ (viewed as a matrix with 1 column) are denoted by $\bf A\trsp$ and $\bm x \trsp$.
If $\{\bf A_i\}_{i\in\mathbb N}$ is a sequence of matrices and $k>\ell$, the empty product $\prod_{i=k}^\ell \bf A_i$ is understood to be the identity matrix $\bf I$. The asymptotic notations are defined in the usual way: for two sequences of real numbers $\{a_t\}_{t\in\mathbb N}$, $\{b_t\}_{t\in\mathbb N}$, we write $a_t = \mathcal O(b_t)$ if $\limsup_{t\to\infty} |a_t|/|b_t| < \infty$, $a_t = o(b_t)$ if $\limsup_{t\to\infty} |a_t|/|b_t| = 0$, $a_t = \Theta(b_t)$ if both $a_t = \mathcal O(b_t)$ and $b_t = \mathcal O(a_t)$ hold, and $a_t \asymp b_t$ if $\lim_{t\to\infty} |a_t|/|b_t|$ exists and is in $(0,\infty)$. If $a_t = \mathcal{O}(b_t t^\varepsilon)$ for any $\varepsilon>0$, we say $a_t = \tilde{\mathcal{O}}(b_t)$. Sufficiently large or sufficiently small positive constants whose values do not matter are written as $C, C_0, C_1, \ldots$, sometimes without prior introduction. If $\bm X_1, \bm X_2, \ldots$ is a sequence of random vectors taking value in $\mathbb R^n$ and $\mu$ is a probability measure on $\mathbb R^n$, we write $\bm X_t \xrightarrow[t\to \infty]{\mathcal D} \mu $ if $\{\bm X_t\}_{t\in\mathbb N^+}$ converges in distribution (also called `converges weakly') to $\mu$. \vspace{4pt} \noindent\textbf{Stochastic Approximation.} In the SGD recursion \eqref{eq:sgd}, we can replace $\boldsymbol \nabla f$ with an arbitrary continuous function $\bm R:\mathbb R^n \to \mathbb R^n$, and consider the same iterations that stochastically approximate the zero $\bm x^*$ of $\bm R$, \begin{equation}\label{eqn:SA} \boldsymbol x_{t+1} = \boldsymbol x_{t} - \gamma_{t+1} \left( \bm R(\boldsymbol x_{t}) + \boldsymbol \xi_{t+1} (\boldsymbol x_t)\right). \end{equation} This is called the \emph{stochastic approximation} process \citep{robbins1951stochastic}, which is a predecessor of stochastic gradient descent and describes a larger family of iterative algorithms (\cite{kushner2003stochastic}, Chapters 2 and 3). Theoretical investigation into the recursion \eqref{eqn:SA} has been active ever since its invention, especially under the finite noise variance assumption: \cite{robbins1951stochastic} prove that the recursion \eqref{eqn:SA} can lead to the $L^2$ convergence $\lim_{t\to\infty}\Exp[|\bm x_t - \bm x^*|^2]=0$; \cite{chung1954stochastic} further calculates an exact convergence rate (see \eqref{eqn:chung-fv-rate}); \cite{blum1954approximation} presents an elegant proof that the convergence of $\boldsymbol x_t$ to $\bm x^*$ can hold almost surely. The asymptotic distribution of \eqref{eqn:SA} was also first established by \cite{chung1954stochastic}, whose Theorem 6 states that $\gamma_t^{-1/2}(\boldsymbol x_t - \boldsymbol x^*)$ converges weakly to a normal distribution; \cite{polyak1992acceleration} and \cite{ruppert1988efficient} independently introduce the concept of `averaging the iterates', \begin{equation} \label{eqn:sa_intro} \overline{\boldsymbol x}_t = \frac{\boldsymbol x_0 + \ldots + \boldsymbol x_{t-1}}{t}, \end{equation} showing the striking result that $\sqrt t (\overline{\boldsymbol x}_t - \boldsymbol x^*)$ converges weakly to a fixed normal distribution \emph{regardless of the choice of the step-size $\{\gamma_t\}_{t\in \mathbb N^+}$}. Recently, optimization algorithms that can handle heavy-tailed $\boldsymbol \xi$ have been proposed \citep{davis2019low,nazin2019algorithms,gorbunov2020stochastic}; however, they still rely on a \emph{uniformly} bounded variance assumption, and hence do not cover our setting.
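As a purely illustrative aside, the recursion \eqref{eqn:SA} together with the average \eqref{eqn:sa_intro} can be simulated in a few lines of Python. The sketch below is not part of our analysis: the noise is a symmetrized Pareto draw with tail index $\alpha\in(1,2)$ (zero mean, infinite variance), the step-size is $\gamma_t = t^{-\eta}$, and the objective is the toy choice $f(\bm x)=|\bm x|^2/2$, none of which is tied to the assumptions of our theorems.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def heavy_tailed_noise(alpha, size):
    # Symmetrized Pareto draws: zero mean, infinite variance for alpha < 2
    signs = rng.choice([-1.0, 1.0], size=size)
    return signs * (rng.pareto(alpha, size=size) + 1.0)

def sgd_with_averaging(grad, x0, alpha=1.5, eta=0.7, T=10**5):
    # Runs x_{t+1} = x_t - gamma_{t+1} (grad(x_t) + xi_{t+1}) and keeps a
    # running Polyak-Ruppert average of the trajectory
    x = np.array(x0, dtype=float)
    avg = np.zeros_like(x)
    for t in range(1, T + 1):
        gamma = t ** (-eta)                      # diminishing step-size
        xi = heavy_tailed_noise(alpha, x.shape)  # heavy-tailed gradient noise
        x = x - gamma * (grad(x) + xi)
        avg += (x - avg) / t                     # online average of the iterates
    return x, avg

# f(x) = ||x||^2 / 2 is strongly convex with gradient x and minimizer x* = 0
x_last, x_bar = sgd_with_averaging(lambda x: x, x0=np.ones(5))
\end{verbatim}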
\par Compared with the copious collection of theoretical studies on stochastic approximation with finite variance mentioned above, papers on \emph{infinite} variance stochastic approximation are extremely scarce; to the best of our knowledge, only four papers have considered this setting, which we summarize here. \cite{krasulina1969stochastic} is the first to consider such problems, proving almost sure and $L^p$ convergence for the one-dimensional stochastic approximation process without a finite variance. The weak convergence of the iterates (without averaging) $t^{1/\alpha}(\boldsymbol x_t - \boldsymbol x^*)$ is also considered by \cite{krasulina1969stochastic}, but only for the fastest-decaying step-size $\gamma_t = 1/t$. \cite{goodsell1976almost} discuss how $\boldsymbol x_t \to \boldsymbol x^*$ in probability can imply $\boldsymbol x_t \to \boldsymbol x^*$ almost surely when no finite variance is assumed, and \cite{li1994almost} provides a necessary and sufficient condition for almost sure convergence of $\boldsymbol x_t \to \boldsymbol x^*$, stating that a faster-decaying step-size $\gamma_t = o(t^{-1/p})$ is required when only moments of lower order, $\Exp[|\bm \xi_t|^p]$ with $p<2$, are available. \cite{anantharam2012stochastic} show that although a step-size that decays more slowly than $t^{-1/p}$ cannot yield almost sure convergence, $L^p$ convergence can still hold under what they call the `stability assumption', but their analysis technique provides no convergence rate. Recently, \cite{csimcsekli2019heavy} and \cite{zhang2019adam} considered SGD with heavy-tailed $\boldsymbol \xi$ having \emph{uniformly bounded} $p$-th order moments. Besides not being able to handle state-dependent noise due to this uniform moment condition, \cite{csimcsekli2019heavy} imposed further conditions on $\bm R = \nabla f$ such as global H\"{o}lder continuity for a non-convex $f$, whereas \cite{zhang2019adam} modified SGD with `gradient clipping' in order to be able to compensate for the effects of the heavy-tailed noise. \par Finally, we shall mention that a class of stochastic recursions similar to \eqref{eqn:SA} has been considered in the dynamical systems theory \citep{mirek2011heavy,buraczewski2012asymptotics,buraczewski2016stochastic}, for which generalized central limit theorems with $\alpha$-stable limits have been proven. However, such techniques typically require $\bm R$ to be (asymptotically) linear and the step-sizes to be constant, as they heavily rely on the theory of time-homogeneous Markov processes. Hence, their approach does not readily generalize to the setting of our interest, i.e., non-linear $\bm R$ and diminishing step-sizes, where the latter is crucial for ensuring convergence towards the global optimum. \vspace{4pt} \noindent\textbf{Stable Distributions.} In probability theory, a random variable $X$ is \emph{stable} if its distribution is non-degenerate and satisfies the following property: Let $X_{1}$ and $X_{2}$ be independent copies of $X$. Then, for any constants $a,b>0$, the random variable $aX_{1}+bX_{2}$ has the same distribution as $cX+d$ for some constants $c>0$ and $d$ (see e.g. \citep{ST1994}). The stable distribution is also referred to as the $\alpha$-stable distribution, first proposed by \cite{paul1937theorie}, with $\alpha \in (0,2]$ denoting the stability parameter. The case $\alpha=2$ corresponds to the normal distribution, while the variance of an $\alpha$-stable distribution is infinite for any $\alpha<2$.
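For numerical illustrations of such laws, the classical Chambers--Mallows--Stuck representation provides exact draws from a symmetric $\alpha$-stable distribution; a minimal sketch (unit scale, zero shift, $\alpha\neq 1$) is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def symmetric_alpha_stable(alpha, size):
    # Chambers-Mallows-Stuck sampler for the symmetric case, alpha != 1:
    # U ~ Uniform(-pi/2, pi/2), W ~ Exp(1)
    u = rng.uniform(-np.pi / 2.0, np.pi / 2.0, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

# alpha = 2 recovers a Gaussian (up to scale); smaller alpha gives
# heavier tails and, for alpha < 2, an infinite variance
samples = symmetric_alpha_stable(1.5, 10**6)
\end{verbatim}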
The multivariate $\alpha$-stable distribution, which dates back to \cite{Feldheim}, is a multivariate generalization of the univariate $\alpha$-stable distribution and is likewise uniquely characterized by its characteristic function. In particular, an $\mathbb{R}^{d}$-valued random vector $\bm X$ has a multivariate $\alpha$-stable distribution, denoted by $\bm X\sim\mathcal{S}(\alpha,\Lambda,\bm\delta)$, if the joint characteristic function of $\bm X$ is given by \begin{equation}\label{char:alpha:stable:multi} \Exp\left[\exp\left(i\bm u\trsp\bm X\right)\right] =\exp\Big\{-\int_{\bm s\in S_{2}}(|\bm u\trsp\bm s|^{\alpha}+i\nu(\bm u\trsp\bm s,\alpha))\Lambda(\d\bm s)+i\bm u\trsp\bm\delta\Big\}, \end{equation} for any $\bm u\in\mathbb{R}^{d}$, and $0<\alpha\le 2$. Here, $\alpha$ is the tail-index, $\Lambda$ is a finite measure on $S_{2}$ known as the spectral measure, $\bm \delta\in\mathbb{R}^{d}$ is a shift vector, and $\nu(y,\alpha):=-\sg(y)\tan(\pi\alpha/2)|y|^{\alpha}$ for $\alpha\neq 1$ and $\nu(y,\alpha):=(2/\pi)y\log|y|$ for $\alpha=1$ for any $y\in\mathbb{R}$, and $S_{2}$ denotes the unit sphere in $\mathbb{R}^{d}$; i.e. $S_{2}=\{\bm s\in\mathbb{R}^{d}:\Vert\bm s\Vert_{2}=1\}$. Stable distributions also appear as the limit in the Generalized Central Limit Theorem (GCLT) \citep{gnedenko1968limit}, which states that for a sequence of i.i.d.\ random variables whose distribution has a power-law tail with index $0< \alpha <2$, the normalized sum converges to an $\alpha$-stable distribution as the number of summands grows. \vspace{4pt} \noindent\textbf{Domains of Normal Attraction of Stable Distributions.} Let $\bm X_{1},\bm X_{2},\ldots,\bm X_{n}$ be an i.i.d. sequence of random vectors in $\mathbb{R}^{d}$ with a common distribution function $F(\bm x)$. If there exist a constant $a>0$ and a sequence $b_{n}\in\mathbb{R}^{d}$ such that \begin{equation}\label{eqn:a} \frac{\bm X_{1}+\cdots+\bm X_{n}}{an^{1/\alpha}}-b_{n} \xrightarrow[n\to \infty]{\mathcal D} \mu, \end{equation} then $F(\bm x)$ is said to belong to the \emph{domain of normal attraction} of the law $\mu$, and $\alpha$ is the characteristic exponent of the law $\mu$ \cite[page 181]{gnedenko1968limit}. If $\mu$ is an $\alpha$-stable distribution, then $F(\bm x)$ is said to belong to the domain of normal attraction of an $\alpha$-stable distribution. For example, the Pareto distribution belongs to the domain of normal attraction of an $\alpha$-stable law. In Appendix~\ref{sec:more-prelim}, we provide more details as well as a necessary and sufficient condition for being in the domain of normal attraction of an $\alpha$-stable law. \section{Convergence of SGD under Heavy-tailed Gradient Noise} \label{sec:convergence} In this section, we identify sufficient conditions for the convergence of SGD under heavy-tailed gradient noise, and derive explicit rate estimates. In the standard setting, when the noise variance is finite, some notion of a positive definite Hessian assumption is frequently utilized to achieve convergence (see for example ~\cite{polyak1992acceleration,tripuraneni2018averaging, su2018statistical, duchi2016local, toulis2017asymptotic,fang2018online,anastasiou2019normal}). When the noise variance is infinite, but the noise has a finite $p$-th moment for $p \in [1,2)$, one requires a stronger notion of positive definiteness on the Hessian, which leads to an interesting interpolation between the positive semi-definite cone (as $p\to 2$) and the cone of diagonally dominant matrices with non-negative diagonal entries ($p=1$).
\subsection{$p$-Positive Definiteness} First, we introduce a signed power of vectors which will be used when defining a family of matrices. \begin{wrapfigure}{r}{0.37\textwidth} \centering \includegraphics[width=0.37\textwidth]{cones.png} \vspace{-15pt} \caption{\small\!\! Geometry of $p$-PSD matrices. The $\mathbb D_+$ cone refers to the cone of diagonally dominant matrices with non-negative diagonal entries. Their inclusion relationship is given in Propositions~\ref{rmk:1pd=udd} and \ref{rmk:ppd->2pd}.} \label{fig:ppos} \vspace{-15pt} \end{wrapfigure} For $\bm v = (v^1, \ldots, v^n) \trsp \in \mathbb R^n$ and $q \ge 0$, we let \begin{equation}\label{eqn:signed-power} \bm v^{\langle q \rangle} = \left(\sg\left(v^1\right) \left|v^1\right|^q, \ldots, \sg\left(v^n\right)\left|v^n\right|^q \right) \trsp. \end{equation} Denoting the $n$-dimensional $\ell_p$ unit sphere with $S_p = \{\bm v \in \reals^n : \|\bm v\|_p=1 \}$, and the set of $n\times n$ symmetric matrices with $\mathbb S$, we now define the following subset of $\mathbb S$. \begin{definition}[$p$-positive definiteness]\thlabel{def:p+} Let $p\ge 1$ and $\bf Q$ be a symmetric matrix. We say that $\bf Q$ is $p$-positive definite if for all $\bm v \in S_p$, $\bm v \trsp \bf Q \bm v^{\langle p - 1 \rangle} > 0$. Similarly, we call $\bf Q$ $p$-positive semi-definite if for all $\bm v \in S_p$, $\bm v \trsp \bf Q \bm v^{\langle p - 1 \rangle} \ge 0$. \end{definition} It is not hard to see that the set of $p$-positive semi-definite matrices ($p$-PSD) defines a closed pointed cone, which we denote by $\mathbb S^p_+$, whose interior is the set of $p$-positive definite matrices ($p$-PD), denoted by $\mathbb S^p_{++}$. We are mainly interested in the case $1\le p<2$. Note that $\mathbb S^2_{+}$ coincides with the standard PSD cone, and we show in Section~\ref{sec:p+} that $\mathbb S^1_{+}$ is exactly the cone of diagonally dominant matrices with non-negative diagonal entries, denoted by $\mathbb D_+$. For any $p \in [1,2]$, these cones satisfy the following \eq{ \mathbb D_+ = \mathbb S^1_+ \subseteq \mathbb S^p_+ \subseteq \mathbb S^2_+. } Figure~\ref{fig:ppos} is a schematic illustration of the relationships between these cones. For a uniform version of Definition~\ref{def:p+}, we recall that every operator norm $\|\cdot\|_p$ induces the same topology on the set of $n\times n$ matrices, which is just the usual topology on $\mathbb R^{n\times n}$. Further, the set of symmetric matrices $\mathbb S$, as the set of zeros of the continuous function $\bf X \mapsto \bf X - \bf X \trsp$, is a closed set. Hence for a set $\mathcal M \subseteq \mathbb S$, denoting its topological closure with $\overline{\mathcal{M}}$, we also have $\overline{\mathcal{M}} \subseteq\mathbb S$. We are interested in the case where $\mathcal{M}$ is bounded. \begin{definition}[uniform $p$-PD]\thlabel{def:unfmp+} Let $p\ge 1$ and $\mathcal M \subset \mathbb S$ be a non-empty set of symmetric matrices. We say that $\mathcal M$ is uniformly $p$-PD if for all $\bf Q \in \overline{\mathcal M}$, we have $\bf Q \in \mathbb S^p_{++}$. \end{definition} \par Notice that $\mathcal M$ is uniformly 2-PD if and only if the eigenvalues of the symmetric matrices in the set $\mathcal{M}$ are all lower bounded by a positive real number. Notice also that a finite set of symmetric matrices is uniformly $p$-PD if and only if each element of the set is $p$-PD. 
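Definition~\ref{def:p+} can be probed numerically by sampling the $\ell_p$ unit sphere $S_p$. The sketch below (a Monte Carlo check of our own devising, which can certify a violation but not prove membership) implements the signed power \eqref{eqn:signed-power} and tests $p$-positive definiteness:
\begin{verbatim}
import numpy as np

def signed_power(v, q):
    """Entrywise signed power v^<q> from the definition above."""
    return np.sign(v) * np.abs(v) ** q

def looks_p_pd(Q, p, n_samples=100_000, seed=0):
    """Monte Carlo test of v' Q v^<p-1> > 0 over random v in S_p."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    v = rng.standard_normal((n_samples, n))
    v /= np.linalg.norm(v, ord=p, axis=1, keepdims=True)  # project onto S_p
    vals = np.einsum("ij,jk,ik->i", v, Q, signed_power(v, p - 1.0))
    return vals.min() > 0.0

# A diagonally dominant matrix with positive diagonal is p-PD for all p >= 1.
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
print(looks_p_pd(Q, p=1.5))
\end{verbatim}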
The $p$-PSD cone emerges naturally when analyzing the SGD algorithm in the heavy-tailed setting, interpolating between the standard PSD cone and the cone of diagonally dominant matrices with non-negative diagonal entries. To the best of our knowledge, we are the first to study such families of matrices and their application in stochastic optimization. For further details about these cones, we refer the interested reader to Appendix~\ref{sec:p+}. We make the following uniform smoothness and curvature assumptions on the Hessian of the objective function. \begin{assumption}\label{as:ppos} The set of matrices $\{ \boldsymbol \nabla^2 f(\boldsymbol x): \bm x \in \mathbb R^n \}$ is bounded and uniformly $p$-PD. \end{assumption} \subsection{Rate of Convergence in $L^p$} We fix a probability space $(\Omega, \mathcal F, \Pr)$ with filtration $\{\mathcal F_t\}_{t\in \mathbb N}$, and make the following assumption on the gradient noise sequence. \begin{assumption}\label{as:noise} Let $\boldsymbol x_0$ be $\mathcal{F}_0$-measurable. The gradient noise sequence $\{\boldsymbol \xi_t\}_{t\in \mathbb N^+}$ is given as \begin{equation}\label{eqn:noise-decomp} \boldsymbol \xi_{t+1}(\boldsymbol x_t) = \bm m_{t+1}(\boldsymbol x_t)+ \bm \zeta_{t+1}, \end{equation} where $\{ \bm \zeta_t \}_{t\in \mathbb N^+}$ is an i.i.d. sequence with $\Exp[\bm \zeta_t] = 0$, and $\Exp[|\bm \zeta_t|^p] < \infty$ for some $p$, and $\{ \bm m_t \}_{t\in \mathbb N^+}$ is a martingale difference sequence, and both sequences are adapted to the filtration $\{\mathcal F_t\}_{t\in \mathbb N}$. Further, the state-dependent component of the noise satisfies, for some $K>0$, \begin{equation}\label{eqn:state-dep-var} \Exp\left[ \left|\bm m_{t+1}(\boldsymbol x_t)\right|^2 \mid \mathcal F_t\right] \le K\left(1+|\boldsymbol x_t|^2\right). \end{equation} \end{assumption} We note that the above assumptions also imply that both the gradient noise sequence $\{\boldsymbol \xi_t\}_{t\in \mathbb N^+}$ and the SGD iterates $\{\boldsymbol x_t\}_{t\in \mathbb N}$ are adapted to the same filtration $\{\mathcal F_t\}_{t\in \mathbb N}$. We call $\bm m_t$ the \emph{state-dependent component} of the gradient noise, which naturally has a state-dependent conditional second moment. The variance of this component of the noise can be arbitrarily large depending on the state; yet, for a given state $\boldsymbol x_t$, it is guaranteed to be finite. The \emph{heavy-tailed} noise behavior is due to $\bm \zeta_t$, which may have an infinite variance for $p<2$ (i.e., its second moment may be undefined). We point out that such a decomposition \eqref{eqn:noise-decomp} arises in many instances of stochastic approximation subject to heavy-tailed noise with long-range dependencies and has been considered in the literature, see e.g. \citet{polyak1992acceleration} and \citet{anantharam2012stochastic}. We shall show in Section~\ref{sec:examples} that such a noise structure arises in practical applications such as linear regression and generalized linear models subject to heavy-tailed data. Our first result provides a convergence rate in $L^p$, for the SGD algorithm to the unique minimizer $\boldsymbol x^*$ of the objective function $f$ with uniformly $p$-PD Hessian, when the noise sequence $\{\bm \xi_t\}_{t \in \mathbb N^+}$ has potentially an infinite variance. \begin{theorem}\thlabel{thm:rate-lp} Suppose Assumptions~\ref{as:ppos} and \ref{as:noise} hold for some $1 < p\le 2$. 
For step-size satisfying $\gamma_t \asymp t^{-\rho}$ with $\rho \in (0, 1)$, the error of the SGD iterates $\{ \boldsymbol x_t\}_{t\in \mathbb N}$ from the minimizer $\boldsymbol x^*$ satisfies \begin{equation}\label{eqn:lpweak} \Exp\left[|\boldsymbol x_t - \boldsymbol x^*|^p\right] = \mathcal{O}\left(t^{ -\rho (p-1) } \right). \end{equation} Consequently, we have $\sup_{t\in \mathbb N^+}\Exp[|\bm \xi_t|^p] < \infty$. \end{theorem} The proof of \thref{thm:rate-lp} is provided in Appendix~\ref{sec:proofs}. We observe that the convergence rate of SGD depends on the highest finite moment $p$ of the noise sequence, and faster rates are achieved for larger values of $p$. The fastest rate implied by our result is near $\mathcal{O}\left(t^{ -p+1} \right)$, which is achieved for $\rho\approx1$; yet, SGD converges even for very slowly decaying step-size sequences with $\rho$ closer to $0$. If the noise has further integrability properties with a finite $p$-th moment for all $p \in [q,\alpha)$ for some $q<\alpha$, and if the uniform $p$-PD assumption holds, then faster rates are achievable. In particular, the following result is a consequence of \thref{thm:rate-lp}, and its proof is provided in Appendix~\ref{sec:proofs}. \begin{corollary}\thlabel{thm:rate-lpalpha} For constants $q,\alpha$ satisfying $1 < q < \alpha \le 2$, suppose that Assumptions~\ref{as:ppos} and \ref{as:noise} hold for every $p \in [q, \alpha)$. For step-size satisfying $\gamma_t \asymp t^{-\rho}$ with $\rho \in (0, 1)$, the error of the SGD iterates $\{ \boldsymbol x_t\}_{t\in \mathbb N}$ from the minimizer $\boldsymbol x^*$ satisfies \begin{equation}\label{eqn:lpstrong} \Exp\left[|\bm x_t - \boldsymbol x^*|^q\right] = \tilde{\mathcal{O}} \left(t^{-\rho q \frac{\alpha - 1}{\alpha}} \right). \end{equation} \end{corollary} \begin{remark} The additional integrability assumption yields faster rates for any feasible step-size sequence since $q(\alpha-1)/\alpha \ge q-1$ for any $1 < q \le \alpha$. \end{remark} \par Let us briefly compare our results stated above to those in the setting where the noise sequence has a finite variance. A classical convergence result that goes back to \citet[Theorem~5]{chung1954stochastic}\footnote{This result, like many similar studies from the 1950s, concerns only the one-dimensional case, but it generalizes easily to higher dimensions.} states that \begin{equation}\label{eqn:chung-fv-rate} \Exp\left[|\boldsymbol x_t - \boldsymbol x^*|^r\right] = \Theta\left(t^{-\rho r/2}\right), \end{equation} where $r\ge2$ is an integer such that the $r$-th moment exists for the stochastic approximation process, and this is achieved for strongly convex objective functions in one dimension (whose set of second derivatives $\{ f''(\boldsymbol x): \bm x \in \mathbb R \}$ satisfies the uniform $2$-PD property) with a step-size choice $\gamma_t \asymp t^{-\rho}$ for some $\rho \in (1/2,1)$. We point out that our rate \eqref{eqn:lpstrong} recovers the rate implied by \eqref{eqn:chung-fv-rate} when $r=2$, and extends it further to the case $1< r<2$. \section{Stable Limits for the Polyak-Ruppert Averaging} \label{sec:averaging} \par In this section, we establish the limit distribution of the Polyak-Ruppert averaging under infinite noise variance, extending the asymptotic normality result given by \cite{polyak1992acceleration} to $\alpha$-stable distributions. Let us fix an $\alpha \in (1, 2]$ and assume the following throughout this subsection. \begin{assumption}\label{as:dona} Let $\boldsymbol x_0$ be $\mathcal{F}_0$-measurable. 
The gradient noise sequence $\{\boldsymbol \xi_t\}_{t\in \mathbb N^+}$ is given as \begin{equation}\label{eqn:noise-decomp-2} \boldsymbol \xi_{t+1}(\boldsymbol x_t) = \bm m_{t+1}(\boldsymbol x_t)+ \bm \zeta_{t+1}, \end{equation} where $\{ \bm \zeta_t \}_{t\in \mathbb N^+}$ is an i.i.d. sequence with $\Exp[\bm \zeta_t] = 0$, and it is in the domain of normal attraction of an $n$-dimensional symmetric $\alpha$-stable distribution $\mu$, i.e., \begin{equation}\label{eqn:noise-attracted} \frac{\bm \zeta_1 + \cdots + \bm \zeta_t}{t^{1/\alpha}} \xrightarrow[t\to \infty]{\mathcal D} \mu. \end{equation} The state-dependent component $\{ \bm m_t \}_{t\in \mathbb N^+}$ is a martingale difference sequence with a second moment satisfying~\eqref{eqn:state-dep-var}, and both sequences are adapted to the filtration $\{\mathcal F_t\}_{t\in \mathbb N}$. \end{assumption} The above assumption also implies that $\Exp[|\bm \zeta_t|^p] < \infty$ for every $p\in[1,\alpha)$, i.e., the moment condition on the i.i.d. heavy-tailed component of the noise in Assumption~\ref{as:noise} holds for every $p\in[1,\alpha)$. \par Denoting the Polyak-Ruppert average by $\overline{\bm x}_t \coloneqq \frac{1}{t} (\boldsymbol x_0+\cdots+\boldsymbol x_{t-1})$, we are interested in the asymptotic behavior of \begin{equation} t^{1-1/\alpha} (\overline{\bm x}_t - \boldsymbol x^*) = \frac{(\bm x_0 + \cdots + \bm x_{t-1}) - t \bm x^*}{t^{1/\alpha}}, \end{equation} for $\alpha \in (1,2]$. In the special case $\alpha = 2$, it is known that this quantity converges to a multivariate normal distribution (which is a 2-stable distribution), a result proven in the seminal work by \cite{polyak1992acceleration}. Following their approach, we begin with a result that considers a quadratic objective, where the function $\boldsymbol \nabla f(\boldsymbol x)$ is linear in $\boldsymbol x$; then, building on this result, we establish the limit distribution of Polyak-Ruppert averaging in the more general non-linear case. \begin{theorem}[linear case]\thlabel{thm:stab-linear} Suppose the function $\boldsymbol \nabla f(\boldsymbol x)$ is affine, i.e. $\boldsymbol \nabla f (\bm x) = \bf A \bm x - \bm b$ for a real matrix $\bf A \in \reals^{n\times n}$ and a real vector $\bm b \in \reals^n$, and there exist scalars $p$ and $\rho \in (0, 1)$ satisfying \begin{align} \max \left( \frac{\alpha + \alpha \rho}{1 + \alpha \rho}, \alpha\rho \right) \le p\le \alpha, \end{align} such that $\bf A$ is $p$-PD. If the noise sequence satisfies Assumption~\ref{as:dona}, then for the step-size satisfying $\gamma_t \asymp t^{-\rho}$, the normalized average $t^{1-1/\alpha} (\overline{\bm x}_t - \boldsymbol x^*)$ converges weakly to an $n$-dimensional $\alpha$-stable distribution. \end{theorem} We observe from the above theorem that an $\alpha$-stable limit is achieved for Polyak-Ruppert averaging for any step-size sequence with index $\rho \in (0,1)$. Thus, in the linear case, the size of the interval of feasible indices is the same in both heavy- and light-tailed noise settings (see e.g. \cite{polyak1992acceleration} and \cite{ruppert1988efficient}). Notably, the $\alpha$-stable limit of the averaged iterates does not depend on the index $\rho$. Non-asymptotic rates would be required to see the effect of the step-size more clearly. The next result generalizes Theorem~\ref{thm:stab-linear} to the setting where $\boldsymbol \nabla f(\boldsymbol x)$ is non-linear. 
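As a quick numerical companion to Theorem~\ref{thm:stab-linear} (an illustration of ours with arbitrarily chosen $\bf A$, step-size index, and horizon, not part of any proof), one can run SGD on a quadratic objective driven by symmetric Pareto-tailed noise, which lies in the domain of normal attraction of an $\alpha$-stable law, and inspect the normalized average across independent runs; its empirical distribution is markedly heavier-tailed than Gaussian:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, rho, T = 1.5, 0.7, 20_000            # tail index, step index, horizon
A = np.array([[2.0, 0.3], [0.3, 1.5]])      # diagonally dominant, hence p-PD
b = np.zeros(2)
x_star = np.linalg.solve(A, b)

def one_run():
    x, xbar = np.zeros(2), np.zeros(2)
    for t in range(1, T + 1):
        # Symmetric Pareto-tailed i.i.d. noise (cf. Assumption 3).
        zeta = rng.choice([-1.0, 1.0], 2) * (rng.pareto(alpha, 2) + 1.0)
        x = x - t ** -rho * (A @ x - b + zeta)
        xbar += (x - xbar) / t              # running Polyak-Ruppert average
    return T ** (1.0 - 1.0 / alpha) * (xbar - x_star)

draws = np.array([one_run() for _ in range(200)])
print(np.percentile(np.abs(draws[:, 0]), [50, 95, 99]))  # heavy upper tail
\end{verbatim}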
\begin{theorem}[non-linear case]\thlabel{thm:stab-nonlin} Let $1 < 1/\rho < q < \alpha$ and suppose Assumption~\ref{as:ppos} holds for every $p\in[q,\alpha)$. Assume further that the gradient $\boldsymbol \nabla f(\bm x)$ can be approximated using the Hessian matrix $\boldsymbol \nabla^2 f(\boldsymbol x^*)$ around the minimizer $\bm x^*$ as \begin{align}\label{eq:local-linearity} \left|\boldsymbol \nabla f(\bm x) - \boldsymbol \nabla^2 f(\boldsymbol x^*) (\bm x - \bm x^*) \right| \le K \left|\bm x - \bm x^*\right|^q. \end{align} If the noise sequence satisfies Assumption~\ref{as:dona}, then for the step-size satisfying $\gamma_t \asymp t^{-\rho}$, the normalized average $t^{1-1/\alpha} (\overline{\bm x}_t - \boldsymbol x^*)$ converges weakly to an $n$-dimensional $\alpha$-stable distribution. \end{theorem} The additional assumption~\eqref{eq:local-linearity} is standard (see e.g. \citet[Assumption~3.2]{polyak1992acceleration}); it simply imposes a local linearity condition on the gradient of $f$ with an order-$q$ polynomial error term. We notice that the size of the interval of feasible indices $\rho \in (1/\alpha, 1)$ is smaller this time compared to the light-tailed case, where \citet[Theorem~2]{polyak1992acceleration} allows $\rho \in (1/2, 1)$. The above theorem establishes that, when the noise has diverging variance, the Polyak-Ruppert averaging admits an $\alpha$-stable limit rather than a standard CLT. This result has potential implications for statistical inference in the presence of heavy-tailed data. Inference procedures that take into account the computational part of the training procedure (instead of drawing conclusions for the minimizer of the empirical risk) typically rely on variations of Polyak-Ruppert averaging and the CLT they admit~\citep{fang2018online, su2018statistical,chen2020statistical}. The above theorem simply states that this CLT does not hold under heavy-tailed gradient noise. Therefore, many of these procedures require further adaptation if the gradient noise has undefined variance. Finally, it is well-known that Polyak-Ruppert averaging achieves the Cram\'er-Rao lower bound~\citep{polyak1992acceleration,gadat2017optimal}, which is a lower bound on the variance of an unbiased estimator. However, it is not clear what this type of optimality means when the variance is not defined. These are important directions that require thorough investigation, and they will be studied elsewhere. \section{Examples in the Presence of Heavy-tailed Noise} \label{sec:examples} In this section, we demonstrate how the stochastic approximation framework discussed in our paper covers several interesting examples, most notably linear regression and generalized linear models (GLMs), in which heavy-tailed behavior naturally arises and the assumptions we proposed for Theorems~\ref{thm:rate-lp}, \ref{thm:stab-linear}, and \ref{thm:stab-nonlin} are all met. \subsection{Ordinary Least Squares} \par Let us first consider the following linear model, \begin{equation}\label{eqn:m-lin-model} y = \bm z\trsp \bm \beta_0 + \epsilon, \end{equation} where $\bm \beta_0 \in\reals^n$ is the vector of true coefficients, $y \in \mathbb R$ is the response, the random vector $\bm z \in \mathbb R ^n$ denotes the covariates with a positive-definite second moment $0\prec\Exp[\bm z \bm z\trsp] < \infty$, and $\epsilon$ is a noise with zero conditional mean $\Exp[ \epsilon | \bm z ] = 0$. In the classical setting, the noise $\epsilon$ is assumed to be Gaussian, whose variance is well defined. 
In this case, the population version of the maximum likelihood estimation (MLE) problem corresponds to minimizing $f(\boldsymbol x) = \Exp[(y - \bm z\trsp \boldsymbol x)^2]/2$ (where the expectation is taken over the $(y,\bm z)$ pair), or equivalently solving the following normal equations \begin{equation}\label{eqn:lin-model-obj} \boldsymbol \nabla f(\bm x) \coloneqq \Exp\big[ \bm z \bm z \trsp\big] \bm x - \Exp[\bm z y] = 0. \end{equation} It can be easily verified that the true coefficient vector $\bm \beta_0$ is the unique zero of the above equation, i.e. we have $\boldsymbol x^* = \bm\beta_0$. \par Now, suppose we are given access to a stream of i.i.d.\ drawn instances of the pair $(y, \bm z)$, denoted by $\{y_t, \bm z_t \}_{t\in \mathbb N^+}$. In large-scale settings, one generally runs the following stochastic approximation process, which is simply online SGD on the population MLE objective $f(\boldsymbol x)$: \begin{equation}\label{eqn:lin-model-SA-ver2} \bm x_{t} = \bm x_{t-1} - \gamma_t \left(\bm z_t \bm z_t\trsp \bm x_{t-1} - \bm z_t y_t\right). \end{equation} Manifestly, \eqref{eqn:lin-model-SA-ver2} is a special case of \eqref{eqn:SA}, with the gradient noise admitting the decomposition $\bm \xi_t = \bm \zeta_t+\bm m_t$, for an i.i.d.\ component $\bm \zeta_t$ and a state-dependent component $\bm m_t$ (see \eqref{eqn:noise-decomp-2}), \begin{equation} \begin{cases}\bm \zeta_t = \Exp[\bm z y] - \bm z_t y_t, \\\bm m_t = \left(\bm z_t \bm z_t\trsp - \Exp\left[ \bm z \bm z \trsp \right]\right)\bm x_{t-1}. \end{cases} \end{equation} In the presence of heavy-tailed noise, i.e., when $\epsilon$ has possibly infinite variance, the population MLE objective $f(\boldsymbol x)$ may not be finite, and one should typically resort to methods from M-estimation and choose an appropriate loss function within the robust statistics framework~\citep{huber2004robust,van2000asymptotic}. However, the SGD iterations \eqref{eqn:lin-model-SA-ver2} may still be employed to estimate the true coefficients $\bm\beta_0$ (even under potential model misspecification), as we demonstrate below. First, notice that the noise sequence can be decomposed into two parts, and the i.i.d. component $\{\bm \zeta_t\}_{t\in \mathbb N}$ exhibits the heavy-tailed behavior. Assume that this component has the highest defined moment order $1\le p<2$, i.e., $\Exp[|\bm \zeta_t|^p]<\infty$. Further, the state-dependent component $\bm m_t$ defines a martingale difference sequence, and the condition \eqref{eqn:state-dep-var} is met since the covariates $\bm z$ have finite fourth moment, i.e., \begin{equation} \Exp\left[|\bm m_t|^2 \mid \bm x_{t-1}\right] \le C |\bm x_{t-1}|^2. \end{equation} Hence, Assumption~\ref{as:noise} is satisfied. Next, assuming that the second-moment matrix of the covariates $\boldsymbol \nabla^2 f(\boldsymbol x) = \Exp[\bm z \bm z\trsp]$ is $p$-PD, one can guarantee that Assumption~\ref{as:ppos} is satisfied. Therefore, our convergence results can be invoked. We emphasize that this assumption is always satisfied if $\Exp[\bm z \bm z\trsp]$ is diagonally dominant, but the condition is milder for $p>1$. \subsection{Generalized Linear Models} In this section, we consider the problem of estimating the coefficients in generalized linear models (GLMs) in the presence of heavy-tailed noise. GLMs play a crucial role in numerous statistics problems, and provide a versatile framework for many regression and classification tasks, with many applications~\citep{mccullagh1989generalized,nelder1972generalized}. 
For a response $y \in \mathbb R$ and random covariates $\bm z \in \mathbb R ^n$, the population version of an $\ell_2$-regularized MLE problem in the canonical GLM framework reads \begin{equation}\label{objective:GLM} \underset{\boldsymbol x}{\text{minimize}} \ f(\boldsymbol x)\coloneqq \Exp\left[\psi\left(\boldsymbol x\trsp \bm z\right)- y\boldsymbol x\trsp \bm z\right] + \frac{\lambda}{2} |\boldsymbol x|^2\ \quad \ \text{ for }\ \quad \ \lambda>0. \end{equation} Here, $\psi : \reals \to \reals$ is referred to as the cumulant generating function (CGF) and assumed to be convex. Notable examples include $\psi(x)=x^2/2$ yielding linear regression, $\psi(x)=\log(1+e^{x})$ yielding logistic regression, and $\psi(x) = e^{x}$ yielding Poisson regression. The gradient of the above objective \eqref{objective:GLM} is given by \begin{equation}\label{eq:glm-grad} \boldsymbol \nabla f(\bm x) = \Exp\left[\bm z \psi'\big(\bm z\trsp\bm x\big)\right] - \Exp[\bm z y] + \lambda \boldsymbol x. \end{equation} We define the unique solution of the population GLM problem as the unique zero of \eqref{eq:glm-grad}, which we denote by $\boldsymbol x^*$. Note that we do not assume a model on the data, allowing for model misspecification similar to~\citet{erdogdu2016scaled,erdogdu2019scalable}. As in the previous section, we assume that the covariates have finite fourth moment and the response is contaminated with heavy-tailed noise with infinite variance. In this setting, the objective function is always well defined, even if the response has infinite variance. We are given access to a stream of i.i.d. drawn instances of the pair $(y, \bm z)$, denoted by $\{y_t, \bm z_t \}_{t\in \mathbb N^+}$, and we solve the above non-linear problem using the following stochastic process, \begin{equation}\label{eqn:glin-model-SA} \bm x_{t} = \bm x_{t-1} - \gamma_t \left(\bm z_t \psi'\big(\bm z_t\trsp \bm x_{t-1}\big) - \bm z_t y_t + \lambda \boldsymbol x_{t-1}\right), \end{equation} with gradient noise admitting the decomposition $\bm \xi_t = \bm \zeta_t + \bm m_t$ where \begin{equation} \begin{cases}\bm \zeta_t = \Exp[\bm z y] - \bm z_t y_t, \\\bm m_t = \bm z_t \psi'\left(\bm z_t\trsp \bm x_{t-1}\right) - \Exp\left[\bm z_t \psi'\big(\bm z_t\trsp \bm x_{t-1}\big)\right]. \end{cases} \end{equation} In what follows, we verify our assumptions for a CGF satisfying $|\psi'(x)|\le C(1+|x|)$ and $\psi''(x) \ge 0$ for all $x\in\reals$. These assumptions can be easily verified for any convex CGF that grows at most linearly (e.g. $\psi(x) = \log(1+e^x)$). The $\bm\zeta_t$ are i.i.d.\ and contain the entire heavy-tailed part of the gradient noise. Assume that this component has the highest defined moment order $1\le p<2$, i.e., $\Exp[|\bm \zeta_t|^p]<\infty$. Further observe that the state-dependent component defines a martingale difference sequence and satisfies the condition \eqref{eqn:state-dep-var} since the covariates $\bm z$ have finite fourth moment and $|\psi'|$ grows at most linearly. Therefore, Assumption~\ref{as:noise} is satisfied. We note that the Hessian of the objective $f$ is given as \begin{equation}\label{eq:glm-hess} \boldsymbol \nabla^2 f(\bm x) = \Exp\left[\bm z\bm z\trsp \psi''\big(\bm z\trsp\bm x\big)\right] + \lambda \bf I. \end{equation} Since $\psi''(x)\ge 0$, $\boldsymbol \nabla^2 f(\bm x)$ is clearly PD for all $\lambda >0$. For sufficiently large $\lambda$, this matrix can also be made diagonally dominant, which implies that it is $p$-PD for any $p\ge 1$, further implying Assumption~\ref{as:ppos}. 
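For concreteness, a minimal simulation of the GLM recursion \eqref{eqn:glin-model-SA} for the logistic CGF $\psi(x)=\log(1+e^x)$ is sketched below (our illustration; the contamination mechanism and all constants are arbitrary choices, used only to produce a heavy-tailed response):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, lam, alpha, rho = 3, 0.1, 1.5, 0.7
beta0 = np.array([1.0, -0.5, 0.25])

def psi_prime(u):
    return 1.0 / (1.0 + np.exp(-u))   # psi(x) = log(1 + e^x), |psi'| <= 1

x = np.zeros(n)
for t in range(1, 200_001):
    z = rng.standard_normal(n)        # covariates with finite fourth moment
    # Bernoulli response contaminated by symmetric Pareto-tailed noise.
    eps = rng.choice([-1.0, 1.0]) * (rng.pareto(alpha) + 1.0)
    y = float(rng.random() < psi_prime(z @ beta0)) + eps
    x -= t ** -rho * (z * psi_prime(z @ x) - z * y + lam * x)
print(x)   # approaches the zero of the regularized population gradient
\end{verbatim}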
Therefore, for an appropriate step-size sequence, our convergence results on SGD can be applied in this framework. \section{Conclusion} In this paper, we considered SGD subject to state-dependent and heavy-tailed noise with a potentially infinite variance, when the objective belongs to a class of strongly convex functions. We provided a convergence rate for the distance to the optimizer in $L^p$ under appropriate assumptions. Furthermore, we provided a generalized central limit theorem which shows that the averaged iterates converge to a multivariate $\alpha$-stable distribution. We also discussed the implications of our results for applications such as linear regression and generalized linear models subject to heavy-tailed input data. Finally, while we leave it for a future study, we emphasize the importance of adapting existing statistical inference techniques that rely on the averaged SGD iterates to the presence of heavy-tailed gradient noise, which arises naturally in modern statistical learning applications. \section*{Acknowledgements} MAE is partially funded by CIFAR AI Chairs program, and CIFAR AI Catalyst grant. MG's research is supported in part by the grants NSF DMS-1723085 and NSF CCF-1814888. LZ is grateful to the support from a Simons Foundation Collaboration Grant. \newpage \bibliographystyle{abbrvnat}
\section{Errors for increasing reliabilities} \label{app:percentile-errors} \setcounter{figure}{13} \setcounter{equation}{26} Fig.~\ref{fig:all-percentile-sojourn-errors} shows the error for the best approximation A-F from the 99\textsuperscript{th} up to the 99.999\textsuperscript{th} percentile of the sojourn time. The sojourn time error is unitless, and it illustrates the increasing error as $\rho$ approaches $1$, as well as the increasing oscillations of the error at higher reliabilities, even with moderate numbers of CPUs such as $R=4$ -- as explained in Section~\ref{sec:behaviour-high-reliabilities}. \begin{figure}[ht!] \centering \subfloat[]{\includegraphics[width=0.57\columnwidth]{img/99-percentile-sojourn-error.pdf}} \\ \subfloat[]{\includegraphics[width=0.57\columnwidth]{img/999-percentile-sojourn-error.pdf}} \\ \subfloat[]{\includegraphics[width=0.57\columnwidth]{img/99999-percentile-sojourn-error.pdf}} \\ \caption{The best method error for the 99\textsuperscript{th}, 99.9\textsuperscript{th}, and 99.999\textsuperscript{th} percentiles of the sojourn time. Positive/negative values mean over-/under-estimation, respectively. } \label{fig:all-percentile-sojourn-errors} \end{figure} \section{Getting \texttt{sojourn\_percentile($\eta',R'$)}} \label{app:sojourn-percentile} We provide an open-source implementation\footnote{\url{https://github.com/geraintpalmer/mmr-jsq-ps/}} of the proposed approximation methods~A-F. Every method is implemented in Python and yields the sojourn time CDF for a given number of CPUs $R$. Additionally, it is possible to specify the truncation limits for the maximum number of customers considered at each CPU, $L_1$, and the maximum number of customers at the system, $L_2$. In order to obtain the result of the \texttt{sojourn\_percentile($\eta',R'$)} function used inside Algorithm~\ref{alg:scaling}, we first compute the load $\rho$ given the number of CPUs $R'$, and the arrival and service rates $\Lambda$ and $\mu$. Second, we check Fig.~\ref{fig:bestperforming} to determine the best method for the given $(\rho,R')$ tuple, e.g., method-A. Third, we create an instance of method-A invoking \texttt{jsq.MethodA($\Lambda,\mu,R,L_1,L_2, \{t_0,t_1,\ldots\}$)}, with $\{t_0,t_1,\ldots\}$ being the discrete time points at which we compute the CDF. Then, we obtain the CDF of method-A by accessing the property \texttt{sojourn\_time\_cdf} of the method instance. This property holds a vector $\{P_0,P_1,\ldots\}$ that represents the CDF computed by method-A. In particular, each element represents $P_i=\mathbb{P}(T\leq t_i)$. Finally, we obtain the $\eta'$ percentile of the sojourn time as \begin{equation} t^{\eta'}=\min\{t_i: \mathbb{P}(T\leq t_i)>\eta'\} \end{equation} \section{Considered $\mu$ for autonomous driving} \label{app:deriving-mu} The scaling experiments presented in Section~\ref{subsec:example} consider an autonomous driving service, namely, an infrastructure-assisted environment perception service~\cite{5g-americas}. Vehicles send an \mbox{H.265/HEVC} video stream that is processed in a remote server (modelled as an M/M/R-JSQ-PS system) to detect road events and inform the vehicle. The vehicles' video stream arrival rate $\Lambda$ is expressed in frames per second (fps), and the stream is processed at a rate of $\mu$ fps in the server hosting the autonomous driving service. For the experiments in Section~\ref{subsec:example} we take into consideration the time that it takes to decode the video stream, and the time it takes for a Convolutional Neural Network (CNN) to detect events in a video frame. 
According to~\cite{decoding,cnn-detection}, an Intel Xeon family CPU manages to decode an HEVC video frame in 8~ms, and takes 0.37~ms to detect a road event in a video frame. Hence, a single CPU within the considered M/M/R-JSQ-PS system offers a rate of $\mu=\tfrac{10^3}{8.37}$~fps for the considered infrastructure-assisted environment perception service. This value of $\mu$ is the one used in the experiments presented in Section~\ref{subsec:example}. \section{Introduction} Recent advances in the networking community aim at a better control over infrastructure behaviour. Although the Internet was designed to provide a best-effort delivery~\citep{rfc3724}, 5G~\citep{21.915}, WiFi~6~\citep{IEEE802.11ax,IEEE802.11be} and WiFi~7~\citep{IEEE802.11be} have enhanced mobile connectivity to increase network reliability. With the new wireless technologies it is possible to support services that require low latencies and high reliabilities like vehicle to everything (V2X)~\citep{5g-americas}, drone control~\citep{sardo}, remote surgery~\citep{remote-surgery}, or Industry~4.0~\citep{ASCHENBRENNER2015159}. In particular, it is now possible to remotely control an industrial robotic arm over a wireless connection~\citep{deep} while guaranteeing a reliable communication. The 3\textsuperscript{rd} Generation Partnership Project (3GPP) claims \citep{38.913} that the aforementioned services require an Ultra-Reliable Low Latency Communication (URLLC) over the Internet. That is, any URLLC service should foresee Internet latencies in the order of 10~ms and reliabilities above 99\%. 5G and WiFi already guarantee a low-latency and reliable wireless communication through diverse mechanisms \citep{28.811,21.915,23.725,23.502,29.517}, but it is out of their scope whether the processing time of Internet traffic satisfies URLLC requirements. Internet packets exchanged by URLLC services are typically processed in a remote server accessible through a 5G or WiFi connection. In a remotely controlled industrial robot \citep{deep} the steps are as follows: ($i$) the robot reports its position within packets sent over 5G/WiFi to a server; ($ii$) the server calculates the next robot position; and ($iii$) the robot receives an instruction with its new position over the 5G/WiFi connection. Calculating the next robot position at step ($ii$) induces a processing latency that depends on factors such as the server load, the arrival distribution of sensor data, how fast the server is, how many CPUs the server has, or how complex the operations are. In the case of a remotely controlled robotic arm, the server CPUs perform inverse/forward kinematics~\citep{kinematics} and PID control~\citep{pid} operations to derive the next position of the robotic arm. Both operations are performed at a CPU for each URLLC packet, and their delay is impacted by the number of packets being processed at the same CPU. Hence, if a CPU is attending multiple URLLC packets, it is less likely that the processing time remains below the latency requirement of 10~ms. Consequently, the latency of a remotely controlled robotic arm may exceed the 10~ms requirement even if the 5G or WiFi~6/7 connection provides URLLC -- steps ($i$) and ($iii$) in the prior paragraph. Assuming that a 5G or WiFi~6/7 connection suffices to provide URLLC for a service is not enough. It is also necessary to understand how the processing time is distributed when URLLC traffic is attended by a server. 
Only when both the URLLC traffic processing and the wireless communication satisfy the latency requirements can we tell that the network infrastructure provides an URLLC service, e.g., that it ensures latencies below 10~ms with 99\% probability. Therefore, it is of paramount importance to model the URLLC processing latency. \bigskip In this paper we study how the traffic processing latency is distributed to determine whether a service meets URLLC. Specifically, we develop open-source simulation software, and also propose analytical approximations to characterise the processing latency of servers that dispatch the traffic processing to the least loaded CPU within a pool of $R$ CPUs. As assumed by the state of the art~\citep{RCohen15,jemaa2016qos,oljira2017model}, and in line with Linux-based systems, we assume that each CPU utilises a processor-sharing policy to attend the traffic processing. The contributions of our work are summarised as follows: \begin{itemize} \item We build a discrete event simulation for G/G/R-JSQ-PS systems; \item We propose six analytical approximations for the sojourn time cumulative distribution function (CDF) of M/M/R-JSQ-PS systems; \item We derive a run-time complexity analysis to obtain the sojourn time CDF using both the simulation and analytical approximations; \item We study which approximation is more accurate depending on the system load and number of CPUs; \item We study the accuracy of the best approximation for the latency percentiles required by URLLC services, i.e., from the 99\textsuperscript{th} percentile to the 99.999\textsuperscript{th} percentile. \end{itemize} In terms of Wasserstein distance, the proposed analytical approximations deviate less than 2 out of 182 time units from the sojourn time CDF in M/M/R-JSQ-PS systems. For the 99.99\textsuperscript{th} percentile, the best approximation yields an error of less than 1.78~time units. The paper is structured as follows. In Section~\ref{sec:mmr-jsq-ps} we introduce the system that we study in this paper. Then, in~Section~\ref{sec:related} we review the related work on the sojourn time CDF in queueing systems. Later, in~Section~\ref{sec:simulator} we discuss the development of the G/G/R-JSQ-PS simulation, and in~Section~\ref{sec:analytical} we detail the analytical approximations that we propose for the sojourn time CDF of M/M/R-JSQ-PS systems. Afterwards, in~Section~\ref{sec:complexity} and Section~\ref{sec:comparison} we study the run-time complexity and accuracy of the proposed approximations, respectively. Then, in Section~\ref{sec:behaviour-high-reliabilities} we study the accuracy of our approximations at the percentiles required by URLLC services. Finally, in~Section~\ref{sec:conclusions} we conclude our work and point out future research directions. \section{An M/M/R-JSQ-PS queueing system} \label{sec:mmr-jsq-ps} This work is concerned with the sojourn time distribution $\mathbb{P}(T \leq t)$ of customers in an M/M/R-JSQ-PS system, that is, a system with $R$ parallel processor-sharing queues, with overall Poisson arrival rate $\Lambda$, and intended service times distributed exponentially with rate $\mu$. Customers join the processor-sharing queue that has the fewest customers. Processor-sharing (PS) is a queueing discipline where all customers are served simultaneously, but the service load is shared between the customers. 
That is, if a customer expects to receive a service time $s$, then that service is delivered at rate $1/n$ of the server capacity when there are $n$ customers present. Therefore, if there are $n$ customers present throughout the customer's service, it will last $sn$ time units. A key feature is that $n$ can vary during that customer's service. Figure~\ref{fig:mmrjsqps} illustrates this system. Note that URLLC packets can be considered as customers in the context of queueing theory; hence, throughout the paper we refer to customers, as this is the standard term in queueing theory. \begin{figure}[t] \centering \resizebox{\columnwidth}{!}{% \input{mmc.tex} } \caption{M/M/R-JSQ-PS system processing URLLC packets.} \label{fig:mmrjsqps} \end{figure} \section{Related work} \label{sec:related} In the networking community, queueing theory is a well-established tool to model network infrastructure~\citep{kleinrock,mor}. The packet-based nature of the Internet, as well as the buffering and processor-sharing nature of servers, makes it a useful theoretical tool to derive insights on the behaviour of the network. Recent URLLC services and their urgent need for communication guarantees can benefit from the theoretical results of queueing theory in order to adequately evaluate the network performance. The fundamental results of queueing theory~\citep{kleinrock} give closed-form formulas for the sojourn time (waiting plus service time) of M/M/1 systems, i.e. systems with one server with exponential service times that dispatches customers arriving according to a Poisson process, who queue up before they are served. Namely, it is possible to find both the average and the CDF of the sojourn time of M/M/1 systems, with the latter also following an exponential distribution~\citep{kleinrock}. However, Internet traffic is typically dispatched in parallel by multiple servers or CPUs within a server. Hence, it is better to resort to M/M/R systems with up to $R$ servers (or CPUs) that attend customers in parallel. For such systems, the queueing theory fundamentals also give closed-form expressions for the average sojourn time~\citep{kleinrock}, and indications on how to derive its CDF~\citep{mor}. But still, both M/M/1 and M/M/R systems may not be suitable to model networking components, either because the assumption of Poissonian arrivals is not suitable or because considering exponential service times is not realistic. To that end, the literature has devoted effort to deriving the sojourn time CDF expressions of systems not satisfying such assumptions. For example, \citep{md1,mg1} provide expressions for the sojourn time CDFs of M/D/1 and M/G/1 systems, respectively. However, both works provide the sojourn time CDF expression in the form of the Laplace-Stieltjes transform, i.e. a non-closed expression of the sojourn time CDF. Other works such as~\citep{masuyama2003sojourn} shift the interest to systems that follow Markovian arrival processes (MAP) rather than Poissonian arrivals, and provide closed formulas for the sojourn time CDF in MAP/M/1 systems. In general, making the assumption of Poissonian arrivals is fair as long as there is a considerable amount of independent flows, as the Palm–Khintchine theorem states~\citep{palm}. Hence, it is reasonable to model data centres as M/G/R systems, as suggested by~\citep{mor}. 
Namely, \citep{mor}~motivates the study of M/M/R systems as server farms for traffic processing, and the book leaves as an exercise how to derive the sojourn time CDF of an M/M/R system following the strategy used for M/M/1 systems. Nevertheless, M/M/R systems do not mimic the behaviour of Linux-based systems, where each CPU shares the computing time using a processor-sharing discipline, rather than the one-at-a-time processing of M/M/R, where packets wait in the queue until a server finishes processing a job. As such, \citep{sqa}~propose to model web server farms using M/M/R-JSQ-PS systems, with jobs joining the CPU with the shortest queue (JSQ), and each CPU serving all its jobs simultaneously via a processor-sharing discipline (here joining the `shortest queue' implies joining the CPU with the smallest current load). The research resorts to single queue analysis (SQA) to provide insights on how the traffic intensity changes depending on the queue occupation at each CPU, as well as the average number of jobs at each CPU. \bigskip The queueing theory literature has widely studied the sojourn time in different systems, and has managed to find not only the average sojourn time but also its CDF. However, the latter has only been possible for some systems, which do not capture the multiple PS CPUs of Linux-based servers. To the best of our knowledge, the existing literature does not provide expressions to compute the sojourn time CDF in PS multi-processor systems that are close to those servers that will process URLLC traffic. Therefore, this paper contributes to the related work by proposing six approximations for the sojourn time CDF of M/M/R-JSQ-PS systems. The proposed approximations are useful to check whether the URLLC traffic processing will meet the 99\% or similar guarantees of URLLC with almost negligible latencies in the order of 1-10~milliseconds. To check the accuracy of the proposed approximations we resort to stochastic simulations of the M/M/R-JSQ-PS system. Discrete-event simulation is a standard technique for the task \citep{robinson2014simulation}, with a number of commercial (e.g. Simul8 \citep{simul8} and AnyLogic \citep{anylogic}) and open-source (e.g. Simmer \citep{simmer}, SimPy \citep{simpy}, and Ciw \citep{palmer2019ciw}) software options. However, to the authors' knowledge, prior to the work of this paper the listed options did not offer straightforward out-of-the-box ways to simulate processor-sharing servers, requiring bespoke code or modifications. Therefore, another major contribution of this paper is the extension of the Ciw software to be able to simulate various kinds of processor-sharing queues. This work is described in Section~\ref{sec:simulator}. \section{Simulation of G/G/R-JSQ-PS}\label{sec:simulator} In discrete event simulation, a virtual representation of a queueing system is created and `run' by sampling a number of basic random variables, such as arrival dates of customers and intended service times, which interact with one another and the system to emulate the behaviour of the queueing system under consideration. Given a long enough runtime and/or a large enough number of trials, observed statistics will converge to exact values due to the law of large numbers. However, due to their stochastic nature, convergence may be slow and, depending on the complexity of the system, can be computationally expensive. Here the Ciw library \citep{palmer2019ciw} is used, an open-source Python library for conducting discrete event simulation. 
A key contribution of this work is the adaptation of the library to include processor-sharing capabilities, which were included in release v2.2.0: these capabilities include standard processor-sharing, limited processor-sharing as described in \citep{zhang2009law}, capacitated processor-sharing as described in \citep{li2011radio}, and their combinations. Ciw uses the event-scheduling approach to discrete event simulation \citep{palmer2019ciw}. Here time jumps from event to event in a discrete manner, while events themselves can cause any number of other events to be scheduled, either immediately or at some point in the future. If they are scheduled for the future, then they are called \textbf{B}-events; for example, the event of a customer beginning service will cause a future scheduled event of that customer finishing service. If the events are scheduled immediately, then they are called conditional or \textbf{C}-events; for example, the event of a customer joining a queue may immediately cause another event, that customer beginning service, if there is enough service capacity. In addition to scheduling events, events can cause future events to be re-scheduled for a later or earlier time. A \textbf{B}-event, and its scheduling and re-scheduling of future events, is called the \textbf{B}-phase; a \textbf{C}-event, and its scheduling and re-scheduling of future events, is called the \textbf{C}-phase; and advancing the clock to the next \textbf{B}-event is called the \textbf{A}-phase. Figure~\ref{fig:eventscheduling} illustrates this event scheduling process. \begin{figure} \centering \includestandalone[width=0.25\textwidth]{img/eventschedulingapproach} \caption{Flow diagram of the event scheduling approach used by Ciw, taken from \citep{palmer18}.} \label{fig:eventscheduling} \end{figure} Processor sharing is implemented by manipulating the re-scheduling of future events in the following way. Upon arrival, a customer is given an arrival date $t_\star$ and an intended service time $s$. They also observe the number of customers, including themselves, who are present at the processor-sharing server, $x_\star$. At this point they have already received $d = 0$ of their intended service time. Given that nothing else changes, this customer will finish service at date $t_{\text{end}}$ calculated in \eqref{eqn:reschedule}: \begin{equation}\label{eqn:reschedule} t_{\text{end}} = t_\star + x_\star (s - d) \end{equation} Therefore this is the date that will be scheduled for that customer to finish service. Now, say an event happens at some $t$ such that $t_\star < t < t_{\text{end}}$, and that event is either an arrival to the server, or another customer finishing service with the server. If the event is an arrival, set $x = x_\star + 1$; and if the event is a customer finishing service then set $x = x_\star - 1$. At this point our original customer will have received a further $\frac{1}{x_\star}(t - t_\star)$ of their intended service, so update $d = d + \frac{1}{x_\star}(t - t_\star)$. Now set $x_\star = x$, $t_\star = t$, re-calculate their service end date using \eqref{eqn:reschedule}, and then re-schedule their finish service event. This re-scheduling process is to be performed for every customer in service at any \textbf{B}- or \textbf{C}- event that causes $x_\star$ to change. 
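The rescheduling rule can be condensed into a few lines of code. The following standalone Python sketch (ours, for exposition; it is not Ciw's internal implementation) tracks one customer's accrued service $d$ and recomputes the finish date via \eqref{eqn:reschedule} whenever the number in service changes:
\begin{verbatim}
class PSCustomer:
    """One customer at a processor-sharing server."""

    def __init__(self, t_star, s, x_star):
        self.t_star = t_star   # date of the last (re)schedule
        self.s = s             # intended (stand-alone) service time
        self.d = 0.0           # intended service already received
        self.x_star = x_star   # customers in service at last (re)schedule
        self.t_end = self.finish_date()

    def finish_date(self):
        # Remaining intended service (s - d) accrues at rate 1/x_star,
        # so it takes x_star * (s - d) units of wall-clock time.
        return self.t_star + self.x_star * (self.s - self.d)

    def reschedule(self, t, x):
        """Call at date t when the number in service changes to x."""
        self.d += (t - self.t_star) / self.x_star  # service accrued so far
        self.t_star, self.x_star = t, x
        self.t_end = self.finish_date()
\end{verbatim}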
This was implemented and released in Ciw~v2.2.0, along with some processor-sharing variations: ($i$) limited processor-sharing queues \citep{zhang2009law}, a generalisation of a processor-sharing queue, in which only a given number of customers may share the service load at any one time; and ($ii$) capacitated processor-sharing queues \citep{li2011radio} with a switching parameter, where the service discipline flips from FIFO to processor-sharing if the number of customers exceeds this parameter. The join-shortest-queue processor-sharing system considered in this paper -- see~Fig.~\ref{fig:mmrjsqps} -- is implemented by combining this processor-sharing capability with custom routing (JSQ) using inheritance of Ciw's modules. An example is given in the documentation: \url{https://ciw.readthedocs.io/en/latest/Guides/behaviour/ps_routing.html}. Sojourn time CDFs can then be calculated easily as all customer records are saved; namely, the sojourn time of each customer is derived as the difference between their service end date and their arrival date. \section{M/M/R-JSQ-PS sojourn time CDF approximations} \label{sec:analytical} \begin{table}[t] \caption{Notation table} \label{tbl:notation} \centering \begin{tabular}{ c l } \toprule \textbf{Symbol} & \textbf{Definition} \\ \midrule $T$ & random variable for the customer sojourn time \\ $\Lambda$ & overall arrival rate \\ $\mu$ & intended service rate \\ $R$ & number of parallel processor-sharing servers \\ $\rho$ & traffic intensity $\rho = \frac{\Lambda}{R\mu}$ \\ $\lambda_n$ & arrival rate seen by a server with $n$ customers \\ $W$ & complementary sojourn time CDF: $\mathbb{P}(T > t)$ \\ $w_n$ & $W$ with $n$ customers, $w_n(t) = \mathbb{P}(T > t \;|\; n)$ \\ $A_n$ & probability of joining a server with $n$ customers \\ $\pi_n$ & portion of arrivals at a server with $n$ customers \\ $C(\mathbf{v}, b)$ & number of occurrences of $b$ in the vector $\mathbf{v}$ \\ $Z(\mathbf{v}, b)$ & set of indices in $\mathbf{v}$ where $b$ occurs \\ $Q$ & system transition matrix, with entries $q_{i,j}$ \\ $p_j$ & probability of being in state $j$ \\ $D$ & defective infinitesimal generator \\ $L_1$ & maximum number of customers at a server\\ $L_2$ & maximum number of customers at the system\\ $q_{\text{max}}$ & maximum runtime of the simulation \\ $q_{\text{warmup}}$ & warmup time used in the simulation \\ $t_{\text{max}}$ & largest value of $T$ calculated \\ $\Omega(G, H)$ & Wasserstein distance between CDFs $G$ and $H$ \\ \bottomrule \end{tabular} \end{table} In order to find the sojourn time distribution of an M/M/R-JSQ-PS queue, we follow an approach outlined in \citep{sqa}, called Single Queue Analysis (SQA). Here, rather than considering the whole M/M/R queue, we consider each server as its own M/M/1-PS queue, with state-dependent arrival rates dependent on the join-shortest-queue mechanism. Let $\Lambda$ denote the overall arrival rate to the M/M/R-JSQ-PS system; then for each PS server in~Fig.~\ref{fig:mmrjsqps} the effective state-dependent arrival rate is $\lambda_n$ when there are $n$ customers already being served by that server. Table~\ref{tbl:notation} summarizes the notation used throughout this paper. Now, considering a single server as its own queue, we adapt the methodology developed in \citep{masuyama2003sojourn} to the join-shortest-queue situation. In that paper, Theorem 1 gives the sojourn time CDF of a single MAP/M/1-PS queue. 
A small adaptation, replacing the generic MAP process with state-dependent Markovian arrivals $\lambda_n$, gives the sojourn time CDF as: \begin{equation}\label{eqn:sojourn_time_cdf} \mathbb{P}(T \leq t) = 1 - \mathbb{P}(T > t) = 1 - W(t) = 1 - \sum_{n=0}^{\infty} A_n w_n(t) \end{equation} where $A_n$ is the probability of an arriving customer joining the queue when there are $n$ customers already present, and $w_n(t)$ is the conditional probability that the sojourn time is greater than $t$ given that there are $n$ customers already present at arrival. We study two approximations each for finding the $\lambda_n$, $A_n$, and $w_n$ for each $n$. Then combining these in \eqref{eqn:sojourn_time_cdf} gives us six approximations of the sojourn time CDF for an M/M/R-JSQ-PS queue. \subsection{First approximation of $\lambda_n$}\label{sec:lambdan_mc} Note first that the arrival rate for each single queue being dependent on the number of customers already present in that queue is a valid assumption: the arrival rate to each individual queue when there are $n$ customers already present depends on the probability of $n$ being the smallest number of customers present in all $R$ of the queues. This, however, is not straightforward to calculate in isolation from the other queues; therefore, we resort to approximations. First we note that $\lambda_n = \pi_n \Lambda$, where $\pi_n$ is the proportion of arrivals a server will receive if it has $n$ customers already present. We find $\pi_n$ by constructing a truncated Markov chain of the M/M/R-JSQ-PS system. Define the state space of the non-truncated Markov chain by \begin{equation} \label{eq:states-markov} S = \{(a_1, a_2, \dots, a_R) : a_1, a_2, \dots, a_R \in \mathbb{N}_0\} \end{equation} where $a_z$ denotes the number of customers with server $z$. Order the states and let $\mathbf{s}_i$ be the $i$th state. Define the transition rate $q_{i, j}$ from $\mathbf{s}_i$ to $\mathbf{s}_j$, for all $i$, $j$, by \eqref{eqn:transitions}: \begin{equation}\label{eqn:transitions} q_{i, j} = \left\{ \begin{matrix*}[l] \mu & \text{if } C(\delta, 0) = R-1 \land C(\delta, -1) = 1; \\ \frac{\Lambda}{C(\mathbf{s}_i, \min(\mathbf{s}_i))} & \text{if } C(\delta, 0) = R-1 \land C(\delta, 1) = 1\\ & \land\ Z(\delta, 1) \subseteq Z(\mathbf{s}_i, \min(\mathbf{s}_i)); \\ 0 & \text{otherwise,} \end{matrix*} \right. \end{equation} where $\delta = \mathbf{s}_j - \mathbf{s}_i$; $C(\mathbf{v}, b) = |\{z \in \mathbf{v} : z = b\}|$ is a function that counts the number of occurrences of $b$ in a vector $\mathbf{v}$; and $Z(\mathbf{v}, b) = \{z : \mathbf{v}_z = b\}$ is the set of indices in $\mathbf{v}$ where $b$ occurs. Figure~\ref{fig:markovchain} is a representation of the Markov chain when $R=2$. When $R=1$ this reduces to an M/M/1 (or equivalently M/M/1-PS) system, while for $R>2$ the chain becomes difficult to represent graphically. \begin{figure} \begin{center} \includestandalone[width=\columnwidth]{markov_chain} \end{center} \caption{Transition state diagram of the M/M/R-JSQ-PS system when $R=2$.} \label{fig:markovchain} \end{figure} Steady-state probabilities can be found numerically by truncating the Markov chain, that is, choosing an appropriate $L_1$ such that $a_z < L_1$ for all servers $z$, and solving $\mathbf{p} Q = \mathbf{0}$ with $\mathbf{p} \mathbf{e} = 1$, where $Q$ is the transition matrix with off-diagonal entries $q_{i, j}$ (and diagonal entries set so that each row sums to zero) and $\mathbf{e}$ is the vector of ones. 
Once all $p_j$ are found, the proportion of arrivals a server will receive if it has $n$ customers already present, $\pi_n$, can be found using \eqref{eqn:props}: \begin{equation}\label{eqn:props} \pi_n = \left(\sum_{\substack{\mathbf{s}_{j, 0} = n \\ \min{\mathbf{s}_j} = n}} \frac{p_j}{C(\mathbf{s}_j, n)}\right) \left(\sum_{\mathbf{s}_{j, 0} = n} p_j\right)^{-1} \end{equation} where $\mathbf{s}_{j, 0}$ represents the number of customers at the first server when in state $j$. \subsection{Second approximation of $\lambda_n$}\label{sec:lambdan_approx} The authors of \citep{sqa} provide numerical approximations for $\lambda_{0}, \lambda_{1}, \lambda_{2}$ in~\citep[Section 5]{sqa}, given in \eqref{eqn:approxlambda0}, \eqref{eqn:approxlambda1} and \eqref{eqn:approxlambda2}, and all other $\lambda_n$ for $n \geq 3$ by \eqref{eqn:approxlambdan}. \begin{align} \lambda_0 &= \mu \left(k_a - k_b k_c^R - k_d k_e^R\right) \label{eqn:approxlambda0} \\ \lambda_1 &= \frac{\mu \left(\rho^{R} - 1 + \frac{\mu \left(\rho - \rho^{R + 1}\right)}{\lambda_{0} \left(1 - \rho\right)}\right)}{\frac{\lambda_{2}}{\mu} - \rho^{R} + 1} \label{eqn:approxlambda1} \\ \lambda_2 &= \mu k_f k_g^R \label{eqn:approxlambda2} \\ \lambda_n &= \mu\left(\frac{\Lambda}{n\mu}\right)^n \label{eqn:approxlambdan} \end{align} with $k_a$, $k_b$, $k_c$, $k_d$, $k_e$, $k_f$ and $k_g$ defined by: \begin{align} k_a &= \frac{\rho}{(1-\rho)}\\ k_b &= \frac{-0.0263\rho^2+0.0054\rho+0.1155}{\rho^2-1.939\rho+0.9534}\\ k_c &= -6.2973\rho^4+14.3382\rho^3-12.3532\rho^2\nonumber\\ & +6.2557\rho-1.005\\ k_d &= \frac{-226.1839\rho^2+342.3814\rho+10.2851}{\rho^3-146.2751\rho^2-481.1256\rho+599.9166}\\ k_e &= 0.4462\rho^3-1.8317\rho^2+2.4376\rho-0.0512\\ k_f &= -0.29 \rho^3 + 0.8822 \rho^2 - 0.5349 \rho + 1.0112\\ k_g &= -0.1864 \rho^2 + 1.195 \rho - 0.016 \end{align} \subsection{First approximation of $A_n$}\label{sec:An_mc} Using the same Markov chain defined in Section~\ref{sec:lambdan_mc}, $A_n$ can be found by manipulating the steady-state probabilities $p_j$, as given in \eqref{eqn:An}: \begin{equation}\label{eqn:An} A_n = \sum_{\min{\mathbf{s}_j} = n} p_j. \end{equation} \subsection{Second approximation of $A_n$}\label{sec:An_approx} From the SQA we can consider each PS server to be its own M/M/1-PS queue with state-dependent arrival rates. This gives a birth-death process, where $A_n$ is the probability of that system being in state $n$. Thus we have: \begin{align} A_n &= \prod_{i=0}^{n-1} \frac{\lambda_i}{\mu} A_0\label{eqn:An2}\\ A_0 &= \left( 1 + \sum_{i=1}^{\infty} \prod_{j=0}^{i-1} \frac{\lambda_j}{\mu} \right)^{-1}.\label{eqn:A0} \end{align} \subsection{First approximation of $w_n(t)$}\label{sec:wnt_matrix} Again, we resort to SQA and focus on one server of our M/M/R-JSQ-PS system in Fig.~\ref{fig:mmrjsqps}. As aforementioned, such a server behaves as an M/M/1-PS queue with state-dependent arrivals at rate $\lambda_n$, and has a complementary sojourn time CDF $w_n(t)$ when it is attending $n$ customers. We follow the strategy from~\citep[Section 3]{masuyama2003sojourn}, where the authors derive $w_n(t)$ for a MAP/M/1-PS queue. 
Specifically, we derive $\mathbf{w}(t) = (w_0(t), w_1(t), w_2(t),\ldots)$ as the solution of the differential equation $\tfrac{d}{dt}\mathbf{w}(t)=D \mathbf{w}(t)$, which is: \begin{equation} \mathbf{w}(t) = e^{D t} \mathbf{e} \label{eqn:exp-generator} \end{equation} with $D$ the defective infinitesimal generator for our state-dependent M/M/1-PS queue in the SQA: \begin{equation}\label{eqn:defective_IG} \resizebox{0.85\hsize}{!}{% $D = \begin{pmatrix} -(\lambda_0+\mu) & \lambda_0 & 0 & 0 & \ldots \\ \frac{1}{2}\mu & -(\lambda_1+\mu) & \lambda_1 & 0 & \ldots \\ 0 & \frac{2}{3}\mu & -(\lambda_2+\mu) & \lambda_2 & \ldots \\ 0 & 0 & \frac{3}{4}\mu & -(\lambda_3+\mu) & \ldots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$% } \end{equation} By constructing a truncated $D$ explicitly, numerical methods, such as the Pad\'e approximation method~\citep{pade}, can be used to find the matrix exponential. As a result, $w_n(t)$ is obtained as the $n$\textsuperscript{th} entry of the $\mathbf{w}(t)$ vector in~\eqref{eqn:exp-generator}. \subsection{Second approximation of $w_n(t)$}\label{sec:wnt_unroll} As constructing $D$ explicitly and numerically computing a matrix exponential can be computationally inefficient, in Lemma~\ref{lemma:soj-conditioned} we give a recurrence relation for finding $w_n(t)$. \begin{lemma} \label{lemma:soj-conditioned} If a server within an M/M/R-JSQ-PS system has $n$ customers, its complementary sojourn time CDF is \begin{equation} \mathbb{P}(T > t\;|\;n) = w_n(t) = \sum_{i=0}^{\infty}\frac{(\lambda_{0} + \mu)^i t^i}{i!} e^{-(\lambda_{0} + \mu)t} h_{n, i} \label{eq:soj-conditioned} \end{equation} with $h_{n, 0} = 1$ for all $n$, $h_{-1, i} = 0$ for all $i$, and $h_{n,i}$ satisfying \begin{multline} h_{n, i+1} = \frac{n}{n+1}\frac{\mu}{\lambda_{0} + \mu} h_{n-1, i} + h_{n,i}\left(1 - \frac{\lambda_n+\mu}{\lambda_{0}+\mu}\right)\\ + \frac{\lambda_n}{\lambda_{0} + \mu} h_{n+1, i}\label{eq:hs} \end{multline} \end{lemma} \begin{proof} We mimic the proof presented in~\citep[Corollary 2]{masuyama2003sojourn} and apply the uniformisation technique~\citep{uniformization} to the matrix exponential in \eqref{eqn:exp-generator}. As a result we obtain: \begin{equation} \mathbf{w}(t)=\sum_{i=0}^{\infty}\frac{(\lambda_0 + \mu)^i t^i}{i!} e^{-(\lambda_0 + \mu)t} \left[ I + \frac{1}{\lambda_{0}+\mu}D \right]^i \mathbf{e} \end{equation} with $I$ the identity matrix. To ease the computation of the $i$\textsuperscript{th} matrix power (i.e., $[\cdot]^i$), we define the vector $\mathbf{h}_{i}=\left[I+\tfrac{1}{\lambda_{0}+\mu}D\right]^i\mathbf{e}$, whose $n$\textsuperscript{th} entry is $h_{n,i}$. This leads to the recursion $\mathbf{h}_{i+1}=\left[I+\tfrac{1}{\lambda_{0}+\mu}D\right]\mathbf{h}_{i}$, with $\mathbf{h}_{0}=\mathbf{e}$. As a result, $\mathbf{w}(t)$ is given by \begin{equation} \mathbf{w}(t) = \sum_{i=0}^\infty \frac{(\lambda_{0} + \mu )^i t^i}{i!} e^{-(\lambda_{0}+\mu)t} \mathbf{h}_{i} \end{equation} and the $n$\textsuperscript{th} element of $\mathbf{w}(t)$ is given by~\eqref{eq:soj-conditioned}. \end{proof} This gives $w_n(t)$ in a form which, for a sufficiently large value $L_2$ in place of infinity, can be found recursively. This naive adaptation of \citep{masuyama2003sojourn} replaces their static MAP with the state-dependent arrival rate $\lambda_n$. \subsection{Summary \& Considerations} \label{subsubsec:considerations} In this work we implement and test six different methods of approximating the complementary sojourn time CDF of an M/M/R-JSQ-PS system, $W(t)$. Table~\ref{tbl:methods} summarises the methodology. 
\begin{table} \centering \caption{Summary of the six methods of calculating $W(t)$.} \begin{tabular}{cccc} \toprule Method & $\lambda_n$ & $A_n$ & $w_n(t)$ \\ \midrule \textbf{A} & \ref{sec:lambdan_mc} & \ref{sec:An_mc} & \ref{sec:wnt_matrix} \\ \textbf{B} & \ref{sec:lambdan_mc} & \ref{sec:An_approx} & \ref{sec:wnt_matrix} \\ \textbf{C} & \ref{sec:lambdan_approx} & \ref{sec:An_approx} & \ref{sec:wnt_matrix} \\ \textbf{D} & \ref{sec:lambdan_mc} & \ref{sec:An_mc} & \ref{sec:wnt_unroll} \\ \textbf{E} & \ref{sec:lambdan_mc} & \ref{sec:An_approx} & \ref{sec:wnt_unroll} \\ \textbf{F} & \ref{sec:lambdan_approx} & \ref{sec:An_approx} & \ref{sec:wnt_unroll} \\ \bottomrule \end{tabular} \label{tbl:methods} \end{table} Choices of model hyper-parameters -- those that concern only the methodology and not the system that is itself being modelled -- can affect both the accuracy and the run-time (or computational complexity) of the model, and choices are usually a compromise between the two. For the Ciw simulation there are three hyper-parameters to consider: the maximum simulation time, the warm-up time, and the number of trials. The larger the number of trials, the more we can smooth out the stochastic nature of the DES by taking averages of the key performance indicators of each trial; however, more trials take longer to run. The warm-up time is a proportion of the maximum simulation time during which results are not collected. This filtering of results ensures that key performance indicators are not collected before the simulation reaches steady state, and are therefore not dependent on the starting conditions of the simulation. The larger the warm-up time, the higher the chance that the collected results are in steady state (this is highly dependent on other model parameters), although this means fewer results to collect and so more uncertainty. A larger maximum simulation time does both: it ensures that there are enough results to decrease uncertainty and increases the chance that steady state is reached; however, it also increases run-times. Each of the six sub-methods described in Section~\ref{sec:analytical} has hyper-parameters that need to be chosen. Those that explicitly build an infinite Markov chain, that is methods~\ref{sec:lambdan_mc} and \ref{sec:An_mc}, need to truncate the Markov chain using a limit $L_1$, so that numerical methods can be used on a finite Markov chain. The limit $L_1$ corresponds to the maximum number of customers each PS server will receive. Thus these Markov chains will have $L_1^R$ states, and so their construction requires defining $L_1^{2R}$ transitions. The larger $L_1$, the more accurate the model, as there is a smaller probability of a server receiving more than $L_1$ customers; however, larger limits lead to longer run-times and larger memory consumption. Other sub-methods, namely methods~\ref{sec:An_approx} and~\ref{sec:wnt_unroll}, contain infinite sums. For these, a sufficiently large cut-off, $L_2$, is required to truncate these sums for numerical computation. This $L_2$ corresponds to the overall maximum number of customers that can be present, and so can be chosen to be much larger than $L_1$. Similarly, method~\ref{sec:wnt_matrix} requires the construction of a matrix, where each state corresponds to the overall number of customers, and so $L_2$ is also used to truncate this matrix.
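To make the above concrete, the following Python sketch implements the three ``second approximation'' ingredients -- the closed-form $\lambda_n$, the truncated birth-death $A_n$, and the recursion of Lemma~\ref{lemma:soj-conditioned} -- i.e., the building blocks of methods C and F in Table~\ref{tbl:methods}. The function names, the requirement that at least three rates are computed, and the boundary treatment at the truncation point are our own illustrative choices, not part of the original derivations:

\begin{verbatim}
import math

def lambda_rates(Lam, mu, R, N):
    # lambda_0 .. lambda_{N-1} from the closed-form fits; needs N >= 3
    rho = Lam / (mu * R)
    k_a = rho / (1 - rho)
    k_b = ((-0.0263*rho**2 + 0.0054*rho + 0.1155)
           / (rho**2 - 1.939*rho + 0.9534))
    k_c = (-6.2973*rho**4 + 14.3382*rho**3 - 12.3532*rho**2
           + 6.2557*rho - 1.005)
    k_d = ((-226.1839*rho**2 + 342.3814*rho + 10.2851)
           / (rho**3 - 146.2751*rho**2 - 481.1256*rho + 599.9166))
    k_e = 0.4462*rho**3 - 1.8317*rho**2 + 2.4376*rho - 0.0512
    k_f = -0.29*rho**3 + 0.8822*rho**2 - 0.5349*rho + 1.0112
    k_g = -0.1864*rho**2 + 1.195*rho - 0.016
    lam = [0.0] * N
    lam[0] = mu * (k_a - k_b*k_c**R - k_d*k_e**R)
    lam[2] = mu * k_f * k_g**R
    lam[1] = (mu * (rho**R - 1
                    + mu*(rho - rho**(R + 1)) / (lam[0]*(1 - rho)))
              / (lam[2]/mu - rho**R + 1))
    for n in range(3, N):
        lam[n] = mu * (Lam / (n*mu))**n
    return lam

def A_probs(lam, mu):
    # A_n via the birth-death products; infinite sum cut at len(lam)
    prods = [1.0]
    for l in lam[:-1]:
        prods.append(prods[-1] * l / mu)
    A0 = 1.0 / sum(prods)
    return [A0 * p for p in prods]

def w_tails(t, lam, mu, L2):
    # w_n(t) via the uniformised recursion, sum truncated at L2 terms
    N = len(lam)
    theta = lam[0] + mu                  # uniformisation rate
    h = [1.0] * (N + 1)                  # h_{n,0} = 1 for all n
    w = [0.0] * N
    poisson = math.exp(-theta * t)       # Poisson weight for i = 0
    for i in range(L2):
        for n in range(N):
            w[n] += poisson * h[n]
        nxt = [0.0] * (N + 1)
        for n in range(N):
            below = h[n - 1] if n > 0 else 0.0   # h_{-1,i} = 0
            nxt[n] = (n/(n + 1) * mu/theta * below
                      + h[n] * (1 - (lam[n] + mu)/theta)
                      + lam[n]/theta * h[n + 1])
        nxt[N] = h[N]    # crude treatment at the truncation boundary
        h = nxt
        poisson *= theta * t / (i + 1)   # next Poisson weight
    return w
\end{verbatim}

For instance, \verb|w_tails(1.0, lambda_rates(2.0, 1.0, 3, 130), 1.0, 130)| approximates $w_n(1)$ for a system with $\Lambda=2$, $\mu=1$ and $R=3$ CPUs.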
\subsection{Markov chain truncation} When we approximate $\lambda_n$ and $A_n$ using Section~\ref{sec:lambdan_mc} and Section~\ref{sec:An_mc}, respectively, we truncate the transition matrix $Q$ of the Markov chain in \eqref{eqn:transitions}. Namely, the ``last'' considered state $S_i=(L_1-1)\mathbf{e}$ has $L_1-1$ users in each of the $R$ servers. The truncation $L_1$ should be carefully selected such that \begin{equation} \sum_{\mathbf{s}_j:\ \max \mathbf{s}_j\geq L_1}p_j <\varepsilon \label{eqn:truncation-tolerance} \end{equation} that is, the probability of entering a state with a server with $L_1$ or more customers should remain below a tolerance $\varepsilon\in\mathbb{R}^+$. Figure~\ref{fig:mc-limit} illustrates how the probability of having $L_1$ or more users at a server decreases as we increase the truncation limit $L_1$, and how this is affected by both $\rho$ and $R$. This data was obtained using the simulation described in Section~\ref{sec:simulator}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{img/mc-limit} \caption{Probability of having $L_1$ or more customers at some server~\eqref{eqn:truncation-tolerance} with different loads $\rho=0.85,0.90,0.95$ and available servers $R=1,\ldots,9$.} \label{fig:mc-limit} \end{figure} \section{Complexity analysis} \label{sec:complexity} As stated in the paper title, the main motivation for modelling the M/M/R-JSQ-PS system is to tell whether a URLLC service attended by a multi-processor system meets latency and reliability constraints. Hence, it is of paramount importance to consider the run-time complexity of each approximation $\lambda_n, A_n, w_n(t)$, as a network operator may require fast operational decisions to satisfy the URLLC requirements. If the approximation run-time is not fast enough, the operator would not be able to update the operational decisions on time upon demand changes -- e.g., to increase the dedicated servers to attend the increasing demand for a URLLC service. Therefore, in the following we analyse the run-time complexity of each approximation for $\lambda_n, A_n$ and $w_n(t)$. \subsection{First approximation of $\lambda_n$} Using~\eqref{eqn:props} this approximation finds the portion of arrivals that a server foresees using the steady-state probabilities $p_i$ of the \mbox{M/M/R-JSQ-R} Markov chain with $L_1^R$ states and transition matrix $Q$ with $L_1^{2R}$ entries. For each entry $q_{i,j}$ of the transition matrix we evaluate $\min(\mathbf{s}_i)$, $Z(\delta,b)$ and $C(\delta,b)$, all operations of complexity $\mathcal{O}(R)$. Hence, computing all entries of the transition matrix $Q$ takes $\mathcal{O}(R L_1^{2R})$ operations. To find the steady-state vector $\mathbf{p}$ we solve $(\tilde{Q}|\mathbf{e})^T \mathbf{p}= (\mathbf{0}|1)^T$, where $\tilde{Q}$ is the transition matrix $Q$ with one column removed, and $\mathbf{e}$ encodes the normalisation constraint $\mathbf{e}^T\mathbf{p}=1$. This is a linear system with a matrix of size $L_1^R \times L_1 ^ R$. Finding this solution with the LAPACK~\citep{lapack} \verb|gesv| routine leads to a run-time complexity cubic in the matrix size. Therefore, obtaining the steady-state probability has complexity $\mathcal{O}\left(L_1^{3R}\right)$. Note that it is the computation of $\mathbf{p}$ that dominates the complexity of approximating $\lambda_n$, as creating the transition matrix $Q$ has $\mathcal{O}\left(RL_1^{2R}\right)$ complexity and computing $\pi_n$ has $\mathcal{O}\left(L_1^R\right)$ complexity -- see~\eqref{eqn:props}. Hence, the first approximation of $\lambda_n$ has run-time complexity $\mathcal{O}\left(L_1^{3R}\right)$.
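To make the dominating step concrete, a minimal Python/NumPy sketch of the steady-state solve is given below; the function name is ours, and \verb|numpy.linalg.solve| internally calls the LAPACK \verb|gesv| routine mentioned above:

\begin{verbatim}
import numpy as np

def steady_state(Q):
    # Steady-state vector p of a CTMC with truncated rate matrix Q:
    # solve p Q = 0 subject to sum(p) = 1 by replacing one balance
    # equation with the normalisation constraint.
    S = Q.shape[0]                 # here S = L1**R states
    A = Q.T.copy()
    A[-1, :] = 1.0                 # normalisation row: sum(p) = 1
    b = np.zeros(S)
    b[-1] = 1.0
    return np.linalg.solve(A, b)   # LAPACK gesv, O(S^3) = O(L1^(3R))
\end{verbatim}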
\subsection{Second approximation of $\lambda_n$} In~\eqref{eqn:approxlambdan} we see that there is a power relationship between $n$ and $\lambda_n$, namely, $\lambda_n=\mu(\tfrac{\Lambda}{n\mu})^n$. As computing a power has complexity $\mathcal{O}(\log n)$, the second approximation of $\lambda_n$ has complexity $\mathcal{O}(\log n)$. \subsection{First approximation of $A_n$} Once we compute the Markov chain steady-state probabilities $p_n$, this method only performs a summation over such probabilities~\eqref{eqn:An}. Thus, the complexity of computing $A_n$ is $\mathcal{O}\left(L_1^R\right)$, for we iterate over all the $L_1^R$ states and check whether each of them satisfies $\min \mathbf{s}_j=n$. \subsection{Second approximation of $A_n$} Given the values of $\lambda_n$, we first compute $A_0$, the probability of the server being in state $0$, via~\eqref{eqn:A0}. As mentioned in~Section~\ref{subsubsec:considerations}, we truncate the infinite summations up to $L_2$. Hence, it takes $\sum_{i=1}^{L_2} i$ operations to compute $A_0$, which is $\mathcal{O}\left(L_2^2\right)$. Once $A_0$ is computed, we perform $\mathcal{O}(L_2)$ operations to compute $A_n$ in~\eqref{eqn:An2}. Therefore, as a whole, the second approximation of $A_n$ has $\mathcal{O}(L_2^2)$ complexity. \subsection{First approximation of $w_n(t)$} This approximation computes the exponential of the defective infinitesimal generator matrix $D$ -- see~\eqref{eqn:exp-generator}. As mentioned in~Section~\ref{subsubsec:considerations}, we also truncate the $D$ matrix up to $L_2$ elements in its diagonal such that $D$ is an $L_2 \times L_2$ matrix. As $D$ is tridiagonal, with at most three non-zero terms in each row, its creation has complexity $\mathcal{O}(L_2)$. With Padé's method~\citep{pade} we compute the exponential of $D$ with $\mathcal{O}(L_2\log L_2)$ complexity. \subsection{Second approximation of $w_n(t)$} Using the recurrence of Lemma~\ref{lemma:soj-conditioned} we can derive the complexity of this second approximation of $w_n(t)$. As mentioned in~Section~\ref{subsubsec:considerations}, we truncate the infinite summation in~\eqref{eq:soj-conditioned} to $L_2$ iterations. At each summation iteration $i$, we perform $\mathcal{O}(\log i)$ operations (the power operators); hence, computing the second approximation of $w_n(t)$ has complexity $\mathcal{O}(L_2 \log L_2)$. Note that we compute $h_{n,i}$ incrementally thanks to the recursive approach; hence, such computation does not dominate the approximation complexity, as $h_{n,i+1}$ reuses already computed values of $h_{*,i}$. Similarly, we keep the factorial computations $i!$ appearing in the denominator of~\eqref{eq:soj-conditioned} in a hash table to ease the computational burden. \bigskip Depending on which method we use -- see Table~\ref{tbl:methods} -- we will get different run-time complexities. Namely, methods A, B, D, and E have an $\mathcal{O}\left(L_1^{3R}\right)$ complexity because they rely on the truncated Markov chain to derive $\lambda_n$, which is the most demanding approximation. Methods C and F, in contrast, have an overall complexity of $\mathcal{O}\left(L_2^2\right)$, because the approximation of $A_n$ from Section~\ref{sec:An_approx} dominates the computation of $W(t)$. Table~\ref{tbl:complexity} summarises the computational complexity of each method.
{\renewcommand{\arraystretch}{1.3} \begin{table} \centering \caption{Complexity of each method.} \begin{tabular}{cllll} \toprule Method & $\lambda_n$ & $A_n$ & $w_n(t)$ & Overall \\ \midrule \textbf{A} & $\mathcal{O}\left(L_1^{3R}\right)$ & $\mathcal{O}\left(L_1^{R}\right)$ & $\mathcal{O}\left(L_2\log L_2\right)$ & $\mathcal{O}\left(L_1^{3R}\right)$\\ \textbf{B} & $\mathcal{O}\left(L_1^{3R}\right)$ & $\mathcal{O}\left(L_2^2\right)$ & $\mathcal{O}\left(L_2\log L_2\right)$ & $\mathcal{O}\left(L_1^{3R}\right)$\\ \textbf{C} & $\mathcal{O}\left(\log n\right)$ & $\mathcal{O}\left(L_2^2\right)$ & $\mathcal{O}\left(L_2\log L_2\right)$ & $\mathcal{O}\left(L_2^2\right)$\\ \textbf{D} & $\mathcal{O}\left(L_1^{3R}\right)$ & $\mathcal{O}\left(L_1^{R}\right)$ & $\mathcal{O}\left(L_2 \log L_2\right)$ & $\mathcal{O}\left(L_1^{3R}\right)$\\ \textbf{E} & $\mathcal{O}\left(L_1^{3R}\right)$ & $\mathcal{O}\left(L_2^2\right)$ & $\mathcal{O}\left(L_2 \log L_2\right)$ & $\mathcal{O}\left(L_1^{3R}\right)$\\ \textbf{F} & $\mathcal{O}\left(\log n\right)$ & $\mathcal{O}\left(L_2^2\right)$ & $\mathcal{O}\left(L_2 \log L_2\right)$ & $\mathcal{O}\left(L_2^2\right)$\\ \bottomrule \end{tabular} \label{tbl:complexity} \end{table} } \subsection{Simulation} Events, and more importantly the number of events in a run of the simulation, are random. Therefore we cannot have a true complexity analysis, but we can say something about the order of the expected number of operations. In this section we consider the average time complexity of the M/M/R-JSQ-PS system. We will consider the number of operations per unit of simulation time when in steady state. Assuming there are $M$ customers in the system at steady state, there are two types of \textbf{B}-events that can take place in a given time unit: arrivals, and customers ending service. \begin{itemize} \item \textit{Arrivals}: there is an average of $\Lambda$ arrivals per time unit. At each arrival we need to check $R$ servers to see which is least busy. Then once a server is chosen, we need to go through each customer at that server and re-schedule their end service dates, see \eqref{eqn:reschedule}. As join-shortest-queue systems should evenly share customers between servers, we expect there to be $\frac{M}{R}$ customers at that server. So per time unit, the expected number of operations for arrival events is $\mathcal{O}\left(\Lambda\left(R + \frac{M}{R}\right)\right)$. \item \textit{End services}: at steady state, due to work conservation and Burke's theorem \citep{burke1956output}, there is an average of $\Lambda$ services ending per time unit. At each end service we need to go through each customer at that server and re-schedule their end service dates. So per time unit, the expected number of operations for end service events is $\mathcal{O}\left(\Lambda \frac{M}{R}\right)$. \end{itemize} It is difficult to find a closed expression for $M$, hence the need for simulation and approximations. However, a naive estimate for the average number of customers $M$ is the traffic intensity, $M \approx \rho = \frac{\Lambda}{\mu R}$. Let $q_{\text{max}}$ be the maximum simulation time. Altogether, in steady state the expected number of operations is $\mathcal{O}\left(q_{\text{max}} \left( \Lambda \left( R + \tfrac{M}{R}\right) + \Lambda \tfrac{M}{R} \right) \right)$ for a simulation run, which is equivalent to $ \mathcal{O}\left(q_{\text{max}} \left( \Lambda R + \tfrac{2 \Lambda^2}{\mu R^2}\right) \right)$.
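As a small illustration, this expected-operations estimate can be evaluated directly; the function below is a sketch under our own naming, using the naive estimate $M \approx \Lambda/(\mu R)$ from above:

\begin{verbatim}
def expected_ops(q_max, Lam, mu, R):
    # Expected number of simulation operations per run in steady state.
    M = Lam / (mu * R)             # naive average number of customers
    arrivals = Lam * (R + M / R)   # scan R servers + reschedule
    end_services = Lam * M / R     # reschedule at the emptying server
    return q_max * (arrivals + end_services)
\end{verbatim}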
Although $q_{\text{max}}$ is a user-chosen hyper-parameter, and increases the expected number of operations linearly, it is useful to consider whether its choice should be influenced by other system parameters. Consider that, when in steady state, increasing the simulation time increases the number of sojourn time samples we have for estimating the CDF. Say we need $X$ samples to estimate a good CDF; then $q_{\text{max}}$ should be chosen such that $q_{\text{max}} = \frac{X}{\Lambda}$. As $X$ is independent of any other parameter, it can be considered a constant. However, this assumes a steady state. We should actually choose $q_{\text{max}} = \frac{X}{\Lambda} + q_{\text{warmup}}$, where $q_{\text{warmup}}$ is the warm-up time, the time it takes to reach steady state. It is likely that $q_{\text{warmup}}$ would be affected by the system parameters. It is interesting to note that the six approximations' time complexities, and the expected time complexity of the simulation, are affected by different parameters. The approximations are affected by the hyper-parameters $L_1$ and $L_2$, along with $R$; the simulation, however, is affected by the system parameters themselves. This shows that for some specific cases and parameter sets, it might be worthwhile resorting to simulation after all. \section{Approximations' accuracy} \label{sec:comparison} \begin{figure} \begin{center} \includestandalone[width=0.75\columnwidth]{wasserstein} \end{center} \caption{Graphical interpretation of the Wasserstein distance between the actual and approximated CDFs.} \label{fig:wasserstein} \end{figure} \begin{figure*} \centering \subfloat[Method A]{\includegraphics[width=0.33\textwidth]{img/compare_accuracies_A.pdf}% \label{fig:accuracyA}} ~ \subfloat[Method B]{\includegraphics[width=0.33\textwidth]{img/compare_accuracies_B.pdf}% \label{fig:accuracyB}} ~ \subfloat[Method C]{\includegraphics[width=0.33\textwidth]{img/compare_accuracies_C.pdf}% \label{fig:accuracyC}}\\ \subfloat[Method D]{\includegraphics[width=0.33\textwidth]{img/compare_accuracies_D.pdf}% \label{fig:accuracyD}} ~ \subfloat[Method E]{\includegraphics[width=0.33\textwidth]{img/compare_accuracies_E.pdf}% \label{fig:accuracyE}} ~ \subfloat[Method F]{\includegraphics[width=0.33\textwidth]{img/compare_accuracies_F.pdf}% \label{fig:accuracyF}} \caption{Accuracy of each method with increasing traffic intensity $\rho$ and number of CPUs $R$.} \end{figure*} We perform a computational experiment to compare the six methods against one another under various circumstances. With a fixed choice of $\mu = 1$ we calculate the sojourn time CDFs using each method, for all $1 \leq R \leq 10$, and all $\rho \in (0, 1)$ in steps of $0.01$. CDFs are compared against the simulation CDF using the Wasserstein distance \citep{mostafaei2011probability}, or Earth-mover's distance. This is given in \eqref{eqn:wasserstein}, with a graphical interpretation given in Figure~\ref{fig:wasserstein}. This measure goes from $0$, representing equal CDFs, to $t_{\text{max}}$, the maximum sojourn time calculated, representing the largest possible difference between the CDFs. In practice it is calculated numerically by taking Riemann sums with $\Delta = 0.01$ time units. \begin{equation}\label{eqn:wasserstein} \Omega(G, H) = \int_{-\infty}^{+\infty} | G(t) - H(t) | dt \end{equation} For these experiments the hyper-parameter choices are fixed: $L_2 = 130$; $t_{\text{max}} = 182.32$; a maximum simulation time of $q_{\max}=160000$ time units and a warm-up time of $q_{\text{warmup}}=8000$ time units.
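A minimal sketch of the Riemann-sum computation of \eqref{eqn:wasserstein} (function name ours), with both CDFs tabulated on the same uniform grid of spacing $\Delta$:

\begin{verbatim}
import numpy as np

def wasserstein(G, H, delta=0.01):
    # Riemann-sum estimate of the integral of |G(t) - H(t)|, with G
    # and H the two CDFs evaluated on a common grid of step delta.
    return float(np.sum(np.abs(np.asarray(G) - np.asarray(H))) * delta)
\end{verbatim}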
The choice of the Markov chain limit $L_1$ depends on $R$: it is chosen to be both large enough that the probability of exceeding it is small, and small enough that the number of defined transitions is manageable; we choose $(L_1+1)^{2R} < 10^{11}$. For each $R$ our choice of $L_1$ is given in Table~\ref{tbl:mc_limit}. \begin{table} \centering \caption{Choice of Markov chain limit $L_1$ for each $R$.} \begin{tabular}{cc} \toprule $R$ & $L_1$ \\ \midrule 1 & 22 \\ 2 & 22 \\ 3 & 22 \\ 4 & 13 \\ 5 & 7 \\ 6 & 5 \\ 7 & 4 \\ 8 & 3 \\ 9 & 3 \\ 10 & 2 \\ \bottomrule \end{tabular} \label{tbl:mc_limit} \end{table} Figures~\ref{fig:accuracyA}--\ref{fig:accuracyF} show the obtained Wasserstein distance, for each method A to F respectively, for each value of $R$ and $\rho$. First it is important to note the scale of the $y$-axis on these plots: they range from $0$ to $2$, while the Wasserstein distance can potentially range from $0$ to $182.32$. Therefore, whenever the Wasserstein distance falls within the plot's range, we can note that these are not bad approximations overall. We can see that all methods are highly dependent on the traffic intensity $\rho$; however, the relationship between accuracy and $\rho$ is different for the methods that use the first approximation of $w_n(t)$ (methods A, B and C), and those that use the second approximation of $w_n(t)$ (methods D, E and F). For the first approximation, low and high values of the load $\rho$ result in higher approximation error. This may be due to unstable approximation algorithms used to compute the matrix exponential \citep{moler2003nineteen}. The second approximation performs much better for low values of $\rho$, while middling values perform much worse. In addition, we see that the second approximation is more dependent on $R$, with lower values of $R$ performing better. Similarly, this dependence on $R$ is more pronounced in methods C and F, suggesting that the second approximation of $\lambda_n$ performs worse with higher $R$ than the first, Markov chain based, approximation. Figure~\ref{fig:bestperforming} shows which method was most accurate for each $R$, $\rho$ pair. From this we see that method D performed best for low values of $\rho$, while method C performed best for middling to high values of $\rho$. Method E is the best performing method for very high values of $\rho$; however, from the plot in Figure~\ref{fig:accuracyE} we know that these are still not good approximations of the CDF. Interestingly, when $R=1$, that is when there is no join-shortest-queue behaviour happening, methods E, D and F are the best performing. \begin{figure*} \centering \includegraphics[width=\textwidth]{img/best_performing.pdf} \caption{Most accurate method (see Table~\ref{tbl:methods}) for each $R$, $\rho$ pair.} \label{fig:bestperforming} \end{figure*} \section{Behaviour in high reliabilities} \label{sec:behaviour-high-reliabilities} In the prior section we have seen that methods A-F yield an accurate approximation of the sojourn time CDF, namely that the Wasserstein distance remains reasonably small. Depending on the load conditions $\rho$ we can use the approximation with the highest accuracy (see Figure~\ref{fig:bestperforming}) to achieve accurate sojourn time CDF approximations. However, URLLC services ask for end-to-end latencies with high reliabilities such as 99\%, 99.9\%, 99.99\%, or 99.999\%. This means that the network latency plus the processing latency of a service (that is, the sojourn time) should be met, e.g., 99.99\% of the time.
If the end-to-end latency requirement is 100~ms and the maximum network latency remains below 28~ms, this means that the sojourn time should remain below 72~ms 99.99\% of the time. Therefore, the applicability of our methods A-F depends on their accuracy at the 99.99\textsuperscript{th} percentile. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{img/9999-percentile-sojourn-error-tiny} \end{center} \caption{Sojourn time 99.99\textsuperscript{th} percentile error using the best method. Positive/negative means over/under-estimation, respectively.} \label{fig:9999-percentile-sojourn-error} \end{figure} In Figure~\ref{fig:9999-percentile-sojourn-error} we illustrate the error, measured in time units, achieved by the best approximation at the 99.99\textsuperscript{th} percentile. In other words, if $T_{a,99.99}$ is the best method's 99.99\textsuperscript{th} percentile for the sojourn time, and $T_{99.99}$ is the simulated 99.99\textsuperscript{th} percentile, then Figure~\ref{fig:9999-percentile-sojourn-error} illustrates $T_{a,99.99}-T_{99.99}$. To derive the simulated 99.99\textsuperscript{th} percentile we use the simulation from Section~\ref{sec:simulator}. As with the Wasserstein distance (see Figures~\ref{fig:accuracyA}--\ref{fig:accuracyF}), Figure~\ref{fig:9999-percentile-sojourn-error} shows that the 99.99\textsuperscript{th} percentile error becomes more prominent as the load $\rho$ approaches $1$ in the M/M/R-JSQ-PS system. In particular, the best method under-estimates the 99.99\textsuperscript{th} percentile of the sojourn time, with the error falling towards values near $-175$ as $\rho$ approaches $1$. Note that the maximum sojourn time in the experiments can reach $t_{\max}=182.32$ time units, hence the error is particularly large towards the highest load $\rho\approx1$. Nevertheless, for less extreme loads the 99.99\textsuperscript{th} percentile error remains low. Namely, the error is less than $t=12$~time units with respect to the simulations when $R\geq3$~CPUs and $\rho\leq0.85$, and less than $t=1.78$~time units when $R\geq3$~CPUs and $\rho\leq0.50$. If the system has $R<3$~CPUs, then the best method exhibits erratic oscillations; indeed the 99.99\textsuperscript{th} percentile is under-estimated by $56.6$~time units with $R=1$ and $\rho=0.85$ -- see Table~\ref{tbl:percentile_errors}. We have also analysed the best method's error for the 99\textsuperscript{th}, 99.9\textsuperscript{th}, and 99.999\textsuperscript{th} percentiles. The results are shown in Appendix~\ref{app:percentile-errors} and they follow the same pattern as observed for the 99.99\textsuperscript{th} percentile in Figure~\ref{fig:9999-percentile-sojourn-error}. That is, the best A-F method results in under-estimations of the sojourn time that get worse as $\rho$ approaches $1$. Moreover, the results from Figure~\ref{fig:all-percentile-sojourn-errors} in Appendix~\ref{app:percentile-errors} show that the error oscillations start to become more prominent with higher reliabilities and mid values of CPUs. Overall, the best method gives accurate estimations for the 99.99\textsuperscript{th} percentile of the sojourn time as long as $\rho\leq0.85$; tends to under-estimate the 99.99\textsuperscript{th} percentile; and is more stable for $R\geq3$~CPUs.
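For completeness, extracting a high percentile from a tabulated CDF -- and hence the error plotted in Figure~\ref{fig:9999-percentile-sojourn-error} -- can be sketched as follows; the function name and the fall-back to the last grid point are our own choices:

\begin{verbatim}
import numpy as np

def percentile(ts, cdf, q=0.9999):
    # Smallest grid time t with CDF(t) >= q; falls back to the last
    # grid point if the tabulated CDF never reaches q.
    idx = int(np.searchsorted(cdf, q))
    return ts[min(idx, len(ts) - 1)]

# Error as plotted (positive = over-estimation):
# err = percentile(ts, cdf_best_method) - percentile(ts, cdf_simulated)
\end{verbatim}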
\subsection{Comparison with non-exponential service times} \label{subsec:exp-vs-others} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{img/pessimism_logscale.pdf} \caption{The 99th, 99.9th, 99.99th and 99.999th percentiles of the sojourn time distributions, on a log scale, when service times are modelled as exponentially distributed, uniformly distributed, and deterministic.} \label{fig:pessimism} \end{figure*} So far we have seen that the methods A-F perform sufficiently well to estimate high reliabilities of an \mbox{M/M/R-JSQ-PS} system, e.g., the 99.99\textsuperscript{th} percentile of the sojourn time. However, exponentially distributed service times are often an unrealistic assumption, with uniformly distributed or deterministic service times being more realistic for services with a bounded number of operations. Here we explore the use of our best M/M/R-JSQ-PS approximation to upper bound the 99.99\textsuperscript{th} percentiles of the sojourn time with uniform and deterministic service distributions. Such an upper bound is useful to tell whether $R$~CPUs are enough to process URLLC service traffic, e.g., to tell if $R$~CPUs process the URLLC service traffic in less than 28~ms with 99.99\% probability. To investigate, we compare sojourn time CDFs obtained using exponentially distributed, uniformly distributed, and deterministic service times, for various values of $R$ and $\rho$. All CDFs were obtained from the simulation, using an exponential distribution with average service time $\tfrac{1}{\mu}$; a uniform distribution $U\left(\tfrac{1}{2\mu},\tfrac{3}{2\mu}\right)$; and a deterministic service time of $\tfrac{1}{\mu}$ time units. In such a manner, all distributions share the same average service time, although their variances are not equal: the exponentially distributed times have the highest variance ($1/\mu^2$), followed by the uniformly distributed times ($1/(12\mu^2)$), and then the deterministic times with no variance. We see that the CDFs obtained when modelling service times as exponentially distributed always lie below those obtained using uniformly distributed and deterministic service times. This is demonstrated in Figure~\ref{fig:pessimism}, which shows that the tail percentiles are always larger, or more pessimistic, when modelling exponential services as opposed to uniform and deterministic services. Figure~\ref{fig:pessimism} also shows how, near $\rho=0.6$ with $R=5$ or $R=10$~CPUs, the best method (dark~green) gets closer to the percentiles of the simulated results (yellow). This behaviour is because after $\rho=0.6$ the best method changes from method~D to method~C (see~Figure~\ref{fig:bestperforming}). Similarly, at high loads $\rho\geq0.97$ the best method changes from C to E, hence the sudden change in the sojourn time percentiles. Namely, the sojourn time percentiles at such high loads differ significantly from the values obtained in the simulation (yellow). The erratic values of the best method for $\rho\geq0.97$ even go below the sojourn time percentiles obtained for uniform and deterministic service times (blue and red lines in Figure~\ref{fig:pessimism}, respectively).
Altogether, Figure~\ref{fig:pessimism} shows: that our best method (dark green) stays close to the sojourn time percentiles obtained in simulations (yellow) for loads $\rho<0.97$; that our best method lies above the sojourn times provided by deterministic (red) and uniformly distributed (blue) service times; and that our best method largely underestimates the sojourn time percentile for $\rho\geq0.97$, resulting in even smaller percentiles than uniformly distributed and deterministic service times. \begin{table} \begin{center} \caption{Sojourn time 99.99\textsuperscript{th} percentile errors using the best method with increasing number of CPUs $R$ and load $\rho$. Positive/negative mean over/under-estimation, respectively.} \label{tbl:percentile_errors} \begin{tabular}{lrrrrr} \toprule {} & $\pmb{R=1}$ & $\pmb{R=3}$ & $\pmb{R=5}$ & $\pmb{R=7}$ & $\pmb{R=10}$\\ \midrule $\pmb{\rho = 0.10}$ & -3.76 & 0.29 & 1.49 & 1.38 & 1.50\\ $\pmb{\rho = 0.25}$ & -0.68 & 2.38 & 2.58 & 2.94 & 3.04\\ $\pmb{\rho = 0.50}$ & -0.41 & 0.22 & 1.17 & 1.47 & 1.78\\ $\pmb{\rho = 0.75}$ & -6.56 & -7.15 & -4.51 & -3.04 & -1.87\\ $\pmb{\rho=0.85}$ & -56.60 & -8.83 & -11.87 & -9.84 & -7.29\\ $\pmb{\rho = 0.90}$ & -109.06 & -31.78 & -24.69 & -20.86 & -16.63\\ $\pmb{\rho = 0.95}$ & -104.87 & -89.57 & -68.65 & -54.94 & -47.24\\ $\pmb{\rho = 0.99}$ & -180.58 & -180.59 & -180.60 & -180.61 & -180.62\\ \bottomrule \end{tabular} \end{center} \end{table} \section{Conclusions} \label{sec:conclusions} This paper models the M/M/R-JSQ-PS sojourn time distribution for URLLC services whose traffic is processed using a multi-processor PS system. In the paper we: \begin{enumerate}[i)] \item present a generic open-source discrete event simulation software for \mbox{G/G/R-JSQ-PS} systems; \item derive and compare six analytical approximations for the sojourn time CDF of \mbox{M/M/R-JSQ-PS} systems, and analyse their run-time complexities; and \item investigate the applicability of \mbox{M/M/R-JSQ-PS} models to \mbox{M/G/R-JSQ-PS} systems under both uniform and deterministic service times. \end{enumerate} The proposed approximations have polynomial run-time complexities of at most $\mathcal{O}\left(L_1^{3R}\right)$, and are useful to determine whether $R$~CPUs are enough to meet URLLC requirements under mid loads, for they yield errors of less than 1.78~time units at high percentiles such as the 99.99\textsuperscript{th}. For mid to high loads the error remains below 12~time units. \section*{Acknowledgements} This work has been partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No 101015956, and the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-NextGenerationEU through the UNICO 5G I+D 6G-EDGEDT and 6G-DATADRIVEN.
\section{Introduction} Recent years have witnessed a remarkable convergence of two broad trends. The first of these concerns information, i.e.\ data -- rapid technological advances, coupled with an increased presence of computing in nearly every aspect of daily life, have for the first time made it possible to acquire and store massive amounts of highly diverse types of information. Concurrently, and in no small part propelled by the environment just described, research in artificial intelligence -- in machine learning \cite{Aran2012g,Aran2012h,Aran2015,Aran2015d}, data mining~\cite{BeykAranPhunVenk+2014}, and pattern recognition, in particular -- has reached a sufficient level of methodological sophistication and maturity to process and analyse the collected data, with the aim of extracting novel and useful knowledge~\cite{Aran2015c,BeykAranPhunVenk+2014}. Though it is undeniably wise to refrain from overly ambitious predictions regarding the type of knowledge which may be discovered in this manner, at the very least it is true that few domains of application of the aforesaid techniques hold as much promise and potential as that of medicine and health in general. Large amounts of highly heterogeneous data types are pervasive in medicine. Usually the concept of so-called ``big data'' in medicine is associated with the analysis of Electronic Health Records \cite{ChriElli2016,Aran2015g,Aran2016,VasiAran2016,VasiAran2016a}, large-scale sociodemographic surveys of death causes \cite{RGI2009}, social media mining for health-related data~\cite{BeykAranPhunVenk+2015}, etc. A much less discussed and yet arguably no less important realm in which the amount of information presents a challenge to the medical field is the medical literature corpus itself. Namely, considering the overarching and global importance of health (to say nothing of practical considerations such as the availability of funding), it is not surprising to observe that the amount of published medical research is immense and its growth is only continuing to accelerate. This presents a clear challenge to a researcher. Even restricted to a specified field of research, the amount of published data and findings makes it impossible for a human to survey the entirety of relevant publications exhaustively, which inherently leads to the question as to what kind of important information or insight may go unnoticed or insufficiently appreciated. The premise of the present work is that advanced machine learning techniques can be used to assist a human in the analysis of this data. Specifically, we introduce a novel methodology based on Bayesian non-parametric inference that achieves this, as well as free software which researchers can use in the analysis of their corpora of interest. \subsubsection{Previous work} A limitation of most models described in the existing literature lies in their assumption that the data corpus is static. Here the term `static' is used to describe the lack of any temporal information associated with the documents in a corpus -- the documents are said to be exchangeable~\cite{BleiLaff2006a}. However, research articles are added to the literature corpus in a temporal manner and their ordering has significance.
Consequently the topic structure of the corpus changes over time~\cite{Dyso2012,BeykPhunAranVenk2015,BeykAranPhunVenk2015a}: new ideas emerge, old ideas are refined, novel discoveries result in multiple ideas being related to one another, thereby forming more complex concepts, or a single idea multifurcating into different `sub-ideas', etc. The premise in the present work is that documents are not exchangeable at large temporal scales but can be considered to be at short time scales, thus allowing the corpus to be treated as \emph{temporally locally static}. \section{Proposed approach\label{s:proposed}} In this section we introduce our main technical contributions. We begin by reviewing the relevant theory underlying Bayesian mixture models, and then explain how the proposed framework employs these for the extraction of information from temporally varying document corpora. \subsection{Bayesian mixture models}\label{ss:mixModels} Mixture models are appropriate choices for the modelling of so-called heterogeneous data, whereby heterogeneity is taken to mean that observable data is generated by more than one process (source). The key challenges lie in the lack of observability of the correspondence between specific data points and their sources, and the lack of \emph{a priori} information on the number of sources~\cite{RichGree1997}. Bayesian non-parametric methods place priors on the infinite-dimensional space of probability distributions and provide an elegant solution to the aforementioned modelling problems. The Dirichlet process~(DP) in particular allows the model to accommodate a potentially infinite number of mixture components~\cite{Ferg1973}: \begin{align} p\left(x|\pi_{1:\infty},\phi_{1:\infty}\right)=\sum_{k=1}^{\infty}\pi_{k}f\left(x|\phi_{k}\right). \end{align} Here the mixing proportions $\pi_k$ and the atoms $\phi_k$ are governed by a draw from a DP, where $\text{DP}\left(\gamma,H\right)$ is defined as the distribution of a random probability measure $G$ over a measurable space $\left(\Theta,\mathcal{B}\right)$, such that for any finite measurable partition $\left(A_{1},A_{2},\ldots,A_{r}\right)$ of $\Theta$ the random vector $\left(G\left(A_{1}\right),\ldots,G\left(A_{r}\right)\right)$ follows a Dirichlet distribution with parameters $\left(\gamma H\left(A_{1}\right),\ldots,\gamma H\left(A_{r}\right)\right)$. Owing to the discrete nature and infinite dimensionality of its draws, the DP is a useful prior for Bayesian mixture models. By associating different mixture components with atoms $\phi_{k}$, and assuming $x_{i}|\phi_{k}\overset{iid}{\sim}f\left(x_{i}|\phi_{k}\right)$ where $f\left(.\right)$ is the kernel of the mixing components, a Dirichlet process mixture model (DPM) is obtained~\cite{Radf2000}. \subsubsection{Hierarchical DPMs} While the DPM is suitable for the clustering of exchangeable data in a single group, many real-world problems are more appropriately modelled as comprising multiple groups of exchangeable data. In such cases it is desirable to model the observations of different groups jointly, allowing them to share their generative clusters. This so-called ``sharing of statistical strength'' emerges naturally when a hierarchical structure is implemented. The DPM models each group of documents in a collection using an infinite number of topics. However, it is desired for multiple group-level DPMs to share their clusters. The hierarchical DP (HDP)~\cite{TehJordBealBlei2006} offers a solution whereby base measures of group-level DPs are drawn from a corpus-level DP.
In this way the atoms of the corpus-level DP are shared across the documents; posterior inference is readily achieved using Gibbs sampling~\cite{TehJordBealBlei2006}. \subsection{Modelling topic evolution over time\label{ss:contrib}} We now show how the described HDP based model can be applied to the analysis of temporal topic changes in a \emph{longitudinal} data corpus. Owing to the aforementioned assumption of a temporally locally static corpus, we begin by discretizing time and dividing the corpus into epochs. Each epoch spans a certain contiguous time period and has associated with it all documents with timestamps within this period. Each epoch is then modelled separately using an HDP, with models corresponding to different epochs sharing their hyperparameters and the corpus-level base measure. Hence if $n$ is the number of epochs, we obtain $n$ sets of topics $\boldsymbol{\phi}=\left\{ \boldsymbol{\phi}_{t_{1}},\ldots,\boldsymbol{\phi}_{t_{n}}\right\} $ where $\boldsymbol{\phi}_{t}=\left\{ \phi_{1,t},\ldots,\phi_{K_{t},t}\right\} $ is the set of topics that describe epoch $t$, and $K_{t}$ their number. \subsubsection{Topic relatedness\label{ss:similarity}} Our goal now is to track changes in the topical structure of a data corpus over time. The simplest changes of interest include the emergence of new topics, and the disappearance of others. More subtly, we are also interested in how a specific topic changes, that is, how it evolves over time in terms of the contributions of different words it comprises. Lastly, our aim is to be able to extract and model complex structural changes of the underlying topic content which result from the interaction of topics. Specifically, topics, which can be thought of as collections of memes, can merge to form new topics or indeed split into more nuanced memetic collections. This information can provide valuable insight into the refinement of ideas and findings in the scientific community, effected by new research and accumulating evidence. The key idea behind our tracking of simple topic evolution stems from the observation that while topics may change significantly over time, changes between successive epochs are limited. Therefore we infer the continuity of a topic in one epoch by relating it to all topics in the immediately subsequent epoch which are sufficiently similar to it under a suitable similarity measure -- we adopt the well known Bhattacharyya distance (BHD). This can be seen to lead naturally to a similarity graph representation whose nodes correspond to topics and whose edges link those topics in two epochs which are related. Formally, the weight of the directed edge that links $\phi_{j,t}$, the $j$-th topic in epoch $t$, and $\phi_{k,t+1}$ is $\rho_\text{BHD}\left(\phi_{j,t},\phi_{k,t+1}\right)$ where $\rho_\text{BHD}$ denotes the BHD. In constructing the similarity graph, a threshold is used to automatically eliminate weak edges, retaining only the connections between sufficiently similar topics in adjacent epochs. Then the disappearance of a particular topic, the emergence of new topics, and gradual topic evolution can be determined from the structure of the graph. In particular, if a node does not have any edges incident to it, the corresponding topic is taken as having emerged in the associated epoch. Similarly, if no edges originate from a node, the corresponding topic is taken to vanish in the associated epoch.
Lastly, when exactly one edge originates from a node in one epoch and it is the only edge incident to a node in the following epoch, the topic is understood as having evolved, in the sense that its memetic content may have changed. \begin{figure*}[t] \centering \subfigure[Topic speciation]{\includegraphics[width=0.8\columnwidth]{speciation.pdf}\label{f:speciation}}~~~~~~~~~~~~~~~~~~~~~~~ \subfigure[Topic splitting]{\includegraphics[width=0.8\columnwidth]{splitting.pdf}\label{f:splitting}} \caption{ This paper is the first work to describe the difference between two topic evolution phenomena: (a) topic speciation and (b) topic splitting. } \end{figure*} A major challenge to the existing methods in the literature concerns the detection of topic merging and splitting. Since the connectedness of topics across epochs is based on their similarity, what previous work describes as `splitting' or indeed `merging' does not adequately capture these phenomena. Rather, adopting the terminology from biological evolution, a more accurate description would be `speciation' and `convergence' respectively. The former is illustrated in Fig~\ref{f:speciation} whereas the latter is entirely analogous with the time arrow reversed. What the conceptual diagram shown illustrates is a slow differentiation of two topics which originate from the same `parent'. Actual topic splitting, which does not have a biological equivalent in evolution, and which is conceptually illustrated in Fig~\ref{f:splitting}, cannot be inferred by measuring topic similarity. Instead, in this work we propose to employ the Kullback-Leibler divergence (KLD) for this purpose. This divergence is asymmetric and can be intuitively interpreted as measuring how well one probability distribution `envelops' another. The KLD between two probability distributions $p(i)$ and $q(i)$ is defined as follows: \begin{align} \rho_\text{KLD} = \sum_i p(i) \log \frac{p(i)}{q(i)} \end{align} It can be seen that a high penalty is incurred when $p(i)$ is significant and $q(i)$ is low. Hence, we use the BHD to track gradual topic evolution, speciation, and convergence, while the KLD (computed both in the forward and backward directions) is used to detect topic splitting and merging. \subsubsection{Automatic temporal relatedness graph construction\label{ss:construction}} Another novelty of the work first described in this paper concerns the building of the temporal relatedness graph. We achieve this almost entirely automatically, requiring only one free parameter to be set by the user. Moreover, the meaning of the parameter is readily interpretable and understood by a non-expert, making our approach highly usable. Our methodology comprises two stages. Firstly we consider all inter-topic connections present in the initial fully connected graph and extract the empirical estimate of the corresponding cumulative distribution function (CDF). Then we prune the graph based on the operating point on the relevant CDF. In other words, if $F_\rho$ is the CDF corresponding to a specific initial, fully connected graph formed using a particular similarity measure (BHD or KLD), and $\zeta \in [0, 1]$ the CDF operating point, we prune the edge between topics $\phi_{j,t}$ and $\phi_{k,t+1}$ iff $\rho(\phi_{j,t},\phi_{k,t+1}) < F^{-1}_\rho (\zeta)$.
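To illustrate, the two relatedness measures and the CDF-based pruning rule admit a direct implementation; the following Python sketch uses our own function names, treats topics as word-probability vectors, and adds a small $\varepsilon$ guard against zero entries that is not part of the formal definitions:

\begin{verbatim}
import numpy as np

def bhattacharyya(p, q):
    # Bhattacharyya distance between two topic-word distributions.
    return float(-np.log(np.sum(np.sqrt(p * q))))

def kld(p, q, eps=1e-12):
    # Kullback-Leibler divergence; high when p is significant where
    # q is low. eps guards against zero word probabilities.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def keep_edges(weights, zeta):
    # Empirical-CDF operating point: an edge whose weight falls below
    # the zeta-quantile of all initial edge weights is pruned.
    threshold = np.quantile(weights, zeta)
    return [w >= threshold for w in weights]
\end{verbatim}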
\section{Evaluation and discussion} We now analyse the performance of the proposed framework empirically on a large real-world data set. \subsection{Evaluation data}\label{sss:rawData} We used the PubMed interface to access the US National Library of Medicine and retrieve from it scholarly articles. We searched for publications on the metabolic syndrome (MetS) using the keyphrase ``metabolic syndrome'' and collected papers written in English. The earliest publication found was that by Berardinelli~\textit{et al.}~\cite{BeraCorddeAlCouc1953}. We collected all matching publications up to the final one indexed by PubMed on 10th Jan 2016, yielding a corpus of 31,706 publications. \subsubsection{Pre-processing\label{sss:preprocessing}} The raw data collected from PubMed is in the form of free text. To prepare it for automatic analysis, a series of `pre-processing' steps is required. The goal is to remove words which are largely uninformative, reduce the dispersal of semantically equivalent terms, and thereafter select the terms which are included in the vocabulary over which topics are learnt. We firstly applied soft lemmatization using the WordNet$^\circledR$ lexicon~\cite{Mill1995} to normalize for word inflections. No stemming was performed to avoid semantic distortion often effected by the heuristic rules used by stemming algorithms. After lemmatization and the removal of so-called stop-words, we obtained approximately 3.8 million terms in the entire corpus when repetitions are counted, and 46,114 unique terms. Constructing the vocabulary for our method by selecting the most frequent terms which explain 90\% of the energy in a specific corpus resulted in a vocabulary containing 2,839 terms. \subsection{Results} We started our evaluation by examining whether the two topic relatedness measures (BHD and KLD) capture different aspects of relatedness. To obtain a quantitative measure, we looked at the number of inter-topic connections formed in the respective graphs, both when the BHD is used and when the KLD is applied instead. The results were normalized by the total number of connections formed between two epochs, to account for changes in the total number of topics across time. Our results are summarized in Fig~\ref{f:common}. A significant difference between the two graphs is readily evident -- across the entire timespan of the data corpus, the number of Bhattacharyya distance based connections also formed through the use of the KLD is less than 40\% and in most cases less than 30\%. An even greater difference is seen when the proportion of the KLD connections is examined -- it is always less than 25\% and most of the time less than 15\%. \begin{figure} \centering \subfigure[BHD-KLD normalized overlap]{\includegraphics[width=0.99\columnwidth]{BH_KLD_overlap.pdf}} \subfigure[KLD-BHD normalized overlap]{\includegraphics[width=0.99\columnwidth]{KLD_BH_overlap.pdf}} \caption{ The proportion of topic connections shared between the BHD and the KLD temporal relatedness graphs, normalized by (a) the number of BHD connections, and (b) the number of KLD connections, in an epoch. } \label{f:common} \end{figure} To get an even deeper insight into the contribution of the two relatedness measures, we examined the corresponding topic graphs before edge pruning. The plot in Fig~\ref{f:smooth} shows the variation in inter-topic edge strengths computed using the BHD and the KLD (in forward and backward directions) -- the former as the $x$ coordinate of a point corresponding to a pair of topics, and the latter as its $y$ coordinate.
The scatter of data in the plot corroborates our previous observation that the two similarity measures indeed do capture different aspects of topic behaviour. We performed extensive qualitative analysis, which is necessitated by the nature of the problem at hand and the so-called `semantic gap' that underlies it. In all cases we found that our algorithm revealed meaningful and useful information, as confirmed by an expert in the area of MetS research. Our final contribution comprises a web application which allows users to upload and analyse their data sets using the proposed framework. The application allows a range of powerful tasks to be performed quickly and in an intuitive manner. For example, the user can search for a given topic using keywords (and obtain a ranked list), trace the origin of a specific topic backwards in time or follow its development in the forward direction, examine word clouds associated with topics, display a range of statistical analyses, or navigate the temporal relatedness graph. \section{Summary and Conclusions} In this work we presented a case for the importance of using advanced machine learning techniques in the analysis and interpretation of medical literature. We described a novel framework based on non-parametric Bayesian techniques which is able to extract and track complex, semantically meaningful changes to the topic structure of a longitudinal document corpus. Moreover, this work is the first to describe and present a method for differentiating between two types of topic structure changes, namely topic splitting and what we termed topic speciation. Experiments on a large corpus of medical literature concerned with the metabolic syndrome were used to illustrate the performance of our method. Lastly, we developed a web application which allows users such as medical researchers to upload their data sets and apply our method for their analysis; the application and its code will be made freely available following publication. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{KLD_BH_smooth.pdf} \caption{ Relationship between inter-topic edge strengths computed using the BHD and the KLD before the pruning of the respective graphs. } \label{f:smooth} \end{figure} \balance \bibliographystyle{ieee}
\section{Introduction} Machine Learning (ML) systems are increasingly being used to support high stakes public policy decisions, in areas such as criminal justice, education, healthcare, and social services \cite{Potash2020, Ye2019, Rodolfa2020, Bauman2018, Caruana2015}. As users of these systems have grown beyond ML experts and the research community, the need to better interpret and understand them has grown as well, particularly in the context of high-stakes decisions that affect individuals' health or well-being \cite{Lakkaraju2016, Rudin2019, Lipton2018}. Likewise, new legal frameworks reflecting these needs are beginning to emerge, such as the \textit{right to explanation} in the European Union's General Data Protection Regulation \cite{Goodman2017}. Against this background, research into \textit{explainability/interpretability}\footnote{It is worth noting that in this paper, we do not distinguish between the two terms \textit{interpretability} and \textit{explainability}. We use both terms to refer to the ability to understand, interpret, and explain ML models and their predictions.} of ML models has experienced rapid expansion and innovation in recent years. Several methods have been developed, broadly falling into two categories: 1) directly interpretable models \cite{Rudin2019, Ustun2013, Lakkaraju2016, Caruana2015}, and 2) post-hoc methods for explaining (opaque) complex models and/or their predictions \cite{Ribeiro2016, Ribeiro2018, Lundberg2017, Lundberg2018a, Lundberg2018, Bach2015}. Recently, the research community has also highlighted the need for consistent language and definitions; clearly defined explainability goals and desiderata; and metrics and methods for evaluating the quality of explanations \cite{Lipton2018, Doshivelez2017, Weller2019, Bhatt2020, Sokol2020}. In particular, Doshi-Velez and Kim pointed out the lack of rigor in evaluating explanation methods and proposed a three-tiered evaluation framework for evaluating the ``quality'' of explanations \cite{Doshivelez2017}. Bhatt et al. presented a discussion of the considerations to be made when deploying explainable ML models \cite{Bhatt2020}. They present several use-cases of ML explanations from the perspective of ML engineers through a number of interviews. While these generalized frameworks are steps in the right direction, we argue here that significant problem- and domain-specific work remains to define clear operational objectives for explanation, measurable outcomes for evaluation, and related metrics that would inform the efficacy of an explainable ML method (whether directly interpretable or post-hoc) in improving those outcomes of interest. Despite these recent efforts to improve the level of rigor in defining the needs and evaluation criteria for ML explainability methods, two key gaps remain in most existing work on explainable ML methods: \begin{enumerate} \item These methods are often developed as ``general-purpose'' methods with a broad and loosely-defined goal of explainability such as transparency, not to address specific needs of real-world use-cases. \item These methods are not evaluated to adequately reflect how effective they are in real-world settings.
Barring a few exceptions \cite{Ustun2019a, Caruana2015, Lundberg2018}, much of the existing work is designed and developed for benchmark classification problems with synthetic data and validated with user studies limited to users in research settings such as Amazon Mechanical Turk \cite{Ribeiro2016, Lundberg2017, Bach2015, Hu2019, Amarasinghe2019, Plumb2018, Zeiler2014, Simonyan2013}. \end{enumerate} The result is a body of methodological work without clear use-cases and established real-world utility. A necessary first step for filling these gaps is clearly defining how explainable ML fits into a decision making process. As explainability is not a monolithic concept, and can play different roles in different applications \cite{Molnar2019, Lipton2018}, this process requires extensive domain- and application-specific efforts. In this paper, we focus on applications of ML to public policy and social good problems. We seek to define the role of explainable ML in these domains, and how it can be used to improve policy outcomes. To that end, this paper offers the following: \begin{enumerate} \item Identifying a use-case taxonomy of ML explanations in public policy applications \item Identifying the explanation goals, the end-users, and the explanation needs, for each use-case \item Identifying research gaps by comparing the existing body of work to the needs of the use-cases \item Proposing research directions to develop effective explainable ML systems that would lead to improved policy decisions and consequently improved societal outcomes \end{enumerate} The goal of this discussion is to bridge the gap between methodological work in explainable ML and real-world use-cases. We believe that the gap we are addressing is critical to bridge if we want AI and ML to have a practical impact on societal problems. As computer scientists who develop and apply ML algorithms to social problems in collaboration with government agencies and non-profits, we are uniquely positioned to understand both the existing body of work in explainable ML and the needs of domains such as health, education, criminal justice, and economic development. This paper is our attempt at connecting the AI research community with problems in public policy and social good where ML explainability methods can have an impact. This paper is not intended to be a thorough survey of existing work in explainable ML (since there are already excellent articles on that topic) but intends to highlight use cases (goals, users, and needs), map the existing body of ML work to those use cases, identify research gaps, and then propose research directions to bridge those identified gaps. The primary audience of this discussion is the ML research community that designs and develops explainable ML systems that may be implemented in public policy decision making systems. We believe that this discussion will serve as a framework for designing explainable ML methods with an understanding of the following: \begin{enumerate} \item The purpose the explanations serve and the related policy/societal outcome \item The end-user of the explanations and how the explanations impact their decisions \item How to measure the effectiveness of generated explanations in helping the end-users make better decisions that result in improved public outcomes \end{enumerate} In addition, we believe our approach of explicitly defining the role of ML explanations in the domain of public policy can act as a template for other domains as well.
\section{Use of Machine Learning in Public Policy} \label{sec:policy_characteristics} To illustrate the applicability of ML to policy problems, we focus on one common task of early warning systems. In an early warning system, the ML model is used to identify entities (people for example) for some intervention, based on a predicted risk of some (often adverse) outcome, such as an individual getting diagnosed with a disease in the next year, a student not graduating on time, or a child getting lead poisoning in the next year \cite{Bauman2018, Ye2019, Rodolfa2020}. While there are several other policy problem templates that ML is used for, such as inspection targeting, scheduling, routing, and policy evaluation, we use early warning systems to illustrate our ideas in this paper. \subsection{Characteristics of ML applications in public policy} Several characteristics of typical public policy problems set them apart from standard benchmark ML data sets often used to evaluate newly proposed algorithms: \textbf{Non-stationary environments.} In a policy context, ML models use data about historical events to predict the likelihood of either the occurrence of an event in the future or the existence of a present need, and the context around the problem changes over time. This non-stationary nature in the data introduces strong temporal dependencies that should be considered throughout the modeling pipeline and makes these models susceptible to errors such as data leakage. For instance, the use of standard k-fold cross-validation as a model selection strategy might create training sets with information from the future, which would not have been available at model training time. \textbf{Evaluation metrics reflect real-world resource constraints.} The mental health outreach in \cite{Bauman2018} was limited by staffing capacity to intervene on only 200 individuals at a time, and the rental inspections team in \cite{Ye2019} could only inspect around 300 buildings per month. Resource constraints such as these are inherent in policy contexts, and the metrics used to evaluate and select models should reflect the deployment context. As such, these applications fall into the \textit{top-k} setting, where the task involves selecting exactly $ k $ instances as the ``positive'' class \cite{Liu2016}. In such a setting, we are concerned with selecting models that work well for precision in the top $k$\% of predicted scores \cite{Boyd2012} rather than optimizing accuracy or AUC-ROC (as often done in ``standard'' classification problems), which would be sub-optimal. \textbf{Heterogeneous data sources with strong spatiotemporal patterns.} Developing a feature set that adequately represents individuals in policy applications typically entails combining several heterogeneous data sources, often introducing complex correlation structures to the feature space not usually encountered in ML problems used in research settings. For instance, in \cite{Bauman2018}, the ML model combines data sources such as criminal justice data (jail bookings), emergency medical services data (ambulance dispatches), and mental health data (electronic case files) to gain a meaningful picture of an individual's state. Additionally, temporal patterns in the data are often particularly instructive, requiring further expansion of the feature space to capture the variability of features across time (number of jail bookings in the last six months, 12 months, and five years).
Together, the combination of features across a range of domains, geographies, and time frames yields a large (and densely-populated) feature space compared to the typical structured-data ML problems we encounter in research settings. \subsection{Socio-technical systems} Typical ML-supported public policy decision making systems have at least four types of users who interact with the ML models: \begin{enumerate} \item \textbf{ML system developers} who build the ML components of the system. \item \textbf{High-level decision-makers/regulators} who determine whether to use/incorporate the ML models in the decision making process or are responsible for auditing the ML models to ensure intended policy outcomes. \item \textbf{Action-takers} (e.g., social workers, health workers, employment counselors) who act and intervene based on the recommendation of the model. Most policy applications of ML do not involve fully automated decision-making, but rather a combined system of ML model and action-taker that we consider as one decision making entity here. Action-takers often make two types of decisions: deciding whether to accept/override the model prediction for a given entity \textit{(whether to intervene)}, and deciding which intervention to select in each case \textit{(how to intervene)}. \item \textbf{Affected individuals} who are impacted by the decisions made by the combined human-ML system. \end{enumerate} \begin{table*}[] \caption{Use-cases of ML explainability in public policy applications} \centering \begin{tabular}{p{3.5cm} | p{3.5cm} | p{9cm}} \toprule \textbf{Use case} & \textbf{End users} & \textbf{How the explanation would be used} \\ \midrule Model debugging & ML System Developers & Uncover errors/bugs in the ML pipeline/model such as leakage or biases by understanding what patterns the model learned \\ \hline Trust \& adoption & Policymakers, Regulators, \& Action Takers & Help users understand how the model makes decisions, evaluate its reasonableness, and trust its recommendations \\ \hline Whether to intervene & Action Takers & Help action takers identify correct and unreliable predictions by explaining how the model arrived at individual risk scores \\ \hline Improving intervention assignments & Action Takers & Help action takers select appropriate interventions by understanding factors that contribute to risk \\ \hline Recourse & Affected Individuals & Help affected individuals take action to improve their outcomes in the future or appeal decisions based on inaccurate data\\ \bottomrule \end{tabular} \label{tab:usecases} \end{table*} \section{What Role can Explainable ML Play in Public Policy Applications?} \label{sec:usecases} Based on our experience working on over a hundred such projects, we identify five main use-cases for explainable ML in a public policy decision-making process (see Table \ref{tab:usecases}). For each use-case, we identify the end-user(s) of the explanations, the goal the explanations need to achieve, and the desired characteristics of the explanations to reach that goal. To better illustrate the use-cases, we will make use of a concrete application drawn from our work (preventing adverse interactions between police and the public) to serve as a running example.
Many applied ML contexts share a similar structure: supporting child welfare screening decisions \cite{Chouldechova2018}, allocating mental health interventions to reduce recidivism \cite{Rodolfa2020, Bauman2018}, intervening in hospital environments to reduce future complications or readmission \cite{Ramachandran2020}, and recommending training programs to reduce the risk of long-term unemployment \cite{Zejnilovic2020}. \textbf{Illustrative example:} Adverse incidents between the public and police officers, such as unjustified use of force or misconduct, can result in deadly harm to citizens, eroding trust in police, and reduced safety in affected communities. To proactively identify officers at risk for involvement in adverse incidents and prioritize preventative interventions (e.g., counseling, training, adjustments to duties), many police departments make use of Early Intervention Systems (EIS), including several ML-based systems \cite{Carton2016}. The prediction task of the EIS is to identify $k$ currently active officers who are most at risk of an adverse incident in a given period in the future (e.g., in the next 12 months), where the intervention capacity of the police department determines $k$. The EIS uses a combination of data sources such as officer dispatch events; citizen reports of crimes; citations, traffic stops, and arrests; and employee records to represent individual officers and generates labels using their history of adverse incidents \cite{Carton2016}. \subsection{Use Case 1: Model Debugging} The ML model-building workflow is often iterative: ML developers build models, analyze them, fix any errors, improve the models, and iterate until they are satisfied with the models and their performance. One critical piece of this workflow is doing continuous sanity checks on the model(s) to see if they \textit{make sense}. A key goal of explanations at this early stage is to help the system developer identify and correct errors in their models. Common errors such as data leakage (the model having access at training time to information that would not be available at deployment/prediction time) \cite{Kaufman2011}, and spurious correlations/biases (that exist in training data but do not reflect the deployment context of the model) are often found by inspecting model explanations and noticing that predictors which should not be informative show up as extremely predictive \cite{Ribeiro2016, Caruana2015}. \textit{E.g.} In the EIS, an adverse incident is often determined to be \textit{unjustified} long after the incident date. When training an ML model with the entire incident record, accidentally using the future \textit{determination state} of the incident can introduce data leakage. In this case, explanations could uncover that the feature \textit{case state} is deemed important by the model when it takes a value related to the determination state, and can point the ML practitioner and/or a domain expert to recognize that information has leaked from the future. \subsection{Use Case 2: Building Trust} Decision-makers have to sufficiently trust an ML model to adopt and use it in their processes. Trust, in general, is a common theme behind explainable ML \cite{Ribeiro2016, Lipton2018, Lundberg2018a}. In our experience, it takes two forms: 1) trust by high-level decision-makers that leads to its adoption in the process, and 2) trust by the action-taker in the model's predictions that leads to individual actions/interventions.
This use case focuses on the former, where the goal of explanations is to help users (policymakers) understand and trust the model's overall decision-making process.\footnote{It is important to note that explainability is not the only aspect that affects user trust. In a policy context, factors such as 1) the stability of predictions, 2) the training users have received, and 3) user involvement in the modeling process also impact user trust \cite{Ackermann2018}.} The role of the explanation in this case is to help users understand both which factors affect the model predictions and the characteristics of individuals who are scored as high or low risk. Since the user in this instance is not an ML expert but has expertise in the domain being tackled, communicating the explanation in a way that increases the chances of creating trust is critical. \textit{E.g.} In the EIS, the explanations should inform the ranking officer at the PD---who acts as the regulator---of the factors that lead to increasing/decreasing a police officer's risk score \cite{Carton2016}. In that instance, \textit{"A high number of investigations in the last 15 years"} is an interpretable indicator while \textit{"positive first principal component of arrest data"} is not. \subsection{Use Case 3: Deciding Whether to Intervene} No ML model makes perfect predictions, particularly when predicting rare events. For example, consider an ML model that predicts children and homes at risk of lead hazards for allocating limited inspection and remediation resources. If only 5\% of households have lead hazards, a model that identifies these hazards with a 30\% success rate would provide significant improvement over a strategy of performing random inspections, but would still be wrong 70\% of the time. In the ideal case, the action-taker in the loop would be able to determine when to agree with and act on the model's recommendation and when to override it, ending up with an improved list of $k$ entities. This is closely related to the notion of trust that we discussed in the above use-case, but at the level of individual predictions and with the end-user being the action-taker as opposed to a high-level regulator. Effective explanations, combined with users' domain expertise, can potentially help them determine when the model is wrong and improve the overall decisions made by the combined Human-ML system. Therefore, the goal of explanations in this use case is to help the action-taker make the decision of \textit{whether to intervene} by detecting unreliable model predictions so that the performance of the overall system---precision@$k$ in the example above---would improve. For instance, if the explanation indicates that the model is basing the prediction on seemingly unrelated factors, the user may override that prediction. As the end-users are domain experts, the same user-interpretability requirement from the above use-case holds for the explanations. \textit{E.g.} In the EIS, if an explanation exists for each officer in the top-$k$ that explains \textit{why} they are at risk of an adverse incident, the internal affairs division, which decides \textit{whether to intervene}, can use those explanations to determine the reliability of the model's recommendation in order to act on it or override it. \subsection{Use Case 4: Improving Intervention Selection} While ML models may help identify entities that need intervention, they often provide little to no guidance on how to select from one of many interventions.
For instance, consider a model that predicts students' risk of not graduating high school on time. A student might be at risk for a number of reasons, such as struggling with a specific course, bullying, transportation issues, health issues, or family obligations. Each of those reasons would require a different type of assistive intervention. In this use case, the goal of the explanation is to help the action-taker determine \textit{how to intervene}, often choosing among the many possible interventions available. While explanations are not truly causal, the factors deemed important by the ML model can provide valuable information in choosing interventions. As in the previous use case, the end-users here are domain experts and the explanations should be mapped to the problem domain. As domain expertise is extremely valuable in understanding causal links between the explanations and possible interventions, collaborations between the domain experts and ML practitioners are necessary to map explanations to interventions. \textit{E.g.} Consider an officer flagged by the EIS for whom the explanation indicates the model is prioritizing features related to the type of dispatches the officer was assigned to in the last few months. Upon further inspection of the data, it can be seen that the officer had been dispatched to high-stress situations on a regular basis. In this instance, a possible intervention is reassignment of duties, or placing the officer on low-stress dispatches after a series of high-stress ones. \subsection{Use Case 5: Recourse} When individuals are negatively impacted by ML-aided decisions, providing them with a concrete set of actionable changes that would lead to a different decision is critical. This ability of an individual to affect model outcomes through actionable changes is called recourse \cite{Ustun2019}. While recourse has been studied independently from explainable ML \cite{Ustun2019}, ML explanations have the potential to help individuals seek recourse in public policy applications. In this use-case, there are two explanation goals: 1) help the user understand the reasons behind the decision, enabling them to discover any inaccuracies in the model and/or data and dispute the decision, and 2) help the user identify the set of actionable changes that would lead to an improved decision in the future. As the user in this use-case is the affected individual, the explanations that indicate reasons behind the decisions should be mapped to a domain that is understandable by the individual. Furthermore, the explanations that indicate changes should contain actionable changes (e.g., reducing debt is actionable, whereas reducing age by 10 years is not). \textit{E.g.} In the EIS, the affected individual is the flagged officer. If the officer is provided with explanations indicating the reasons behind the elevated risk score and actionable changes that could reduce their risk score, they could either point out any inaccuracies or take measures themselves (in addition to the intervention by the PD) to reduce the risk score. \section{Current State of Explainable ML} \label{sec:current_state} In this section, we summarize the existing approaches in explainable ML. The intention is not to provide a comprehensive literature review but rather a broad summary of existing approaches and how they apply to public policy settings. We refer readers to \cite{Arya2019, Adadi2018, Molnar2019, Bhatt2020, Guidotti2018} for more comprehensive reviews of existing work.
\subsection{Existing work in explainable/interpretable ML} Existing approaches broadly fall into two categories: 1) directly interpretable ML models, and 2) post-hoc methods for explaining opaque ML models.\footnote{Note that this opacity may either be a reflection of 1) the model being too complex to be comprehensible, or 2) the model being proprietary \cite{Rudin2019}. In this paper, we focus on opacity created through model complexity.} ML explanations take two forms: 1) explaining individual predictions (local explanation), and 2) explaining overall behavior of the models (global explanation). Local explanations help users understand \textit{why} the model arrived at the given prediction for a given instance, while global explanations explain \textit{how} the model generally behaves \cite{Plumb2018}. Table \ref{tab:method_categories} summarizes the existing approaches and how they fit in our use-case taxonomy. \begin{table*}[t] \centering \caption{Existing approaches for explainable ML} \vspace{-1em} \begin{tabular}{p{4.3cm} | p{1.6cm} | p{1.8cm} | p{1.6cm} | p{1.8cm} | p{1.7cm} | p{2.5cm}} \toprule \multirow{3}{*}{Method} & \multicolumn{4}{c |}{Post-hoc methods} & \multirow{3}{=}{Interpretable Models} & \multirow{3}{*}{References}\\ \cline{2-5} & \multicolumn{2}{c |}{Local} & \multicolumn{2}{c |}{Global} & & \\ \cline{2-5} & Model agnostic & Model specific & Model agnostic & Model specific & & \\ \midrule Sparse models & & & & & \checkmark & \cite{Ustun2013}, \cite{Ustun2019a}, \cite{Hu2019} \\ \hline Decision Rules/Lists/Sets & & & & & \checkmark & \cite{Lakkaraju2016} \\ \hline Linear/additive models & & & & & \checkmark & \cite{Caruana2015}, \cite{Lou2012}\\ \hline Local surrogate models & \checkmark & \checkmark & & & \checkmark & \cite{Ribeiro2016}, \cite{Plumb2018}\\ \hline Permutation (Shapley values) & \checkmark & \checkmark & & \checkmark & & \cite{Lundberg2017, Lundberg2018a, Lundberg2018, Lundberg2020}\\ \hline Global rule extraction & & \checkmark & \checkmark & & & \cite{Ribeiro2018}, \cite{Tsukimoto2000} \\ \hline Gradient-propagation & & \checkmark & & & & \cite{Bach2015}, \cite{Simonyan2013}, \cite{Baehrens2010}, \cite{Zeiler2014} \\ \hline Influence functions & \checkmark & & & & & \cite{Koh2017} \\ \hline Counterfactual explanations & \checkmark & & & & & \cite{Ustun2019}, \cite{Poyiadzi2020} \\ \bottomrule \end{tabular} \label{tab:method_categories} \end{table*} \subsubsection{Directly interpretable ML models} Directly (or inherently) interpretable ML models are designed such that an end-user could understand the decision-making process \cite{Lakkaraju2016}. In a policy context, with an interpretable model, a user could: (a) understand how the model calculates a risk score (global explanation), and (b) for a given instance, understand what factors contributed to that risk score (local explanation). Several efforts have focused on developing directly interpretable models, such as those for healthcare and criminal justice \cite{Zeng2017, Caruana2015}. These include sparse linear models \cite{Ustun2013, Ustun2019}, sparse decision trees \cite{Hu2019}, generalized additive models \cite{Hastie1990, Lou2013, Lou2012}, and interpretable decision sets \cite{Lakkaraju2016}. Directly interpretable models often rely on carefully curated representations of data with meaningful input features \cite{Rudin2019}, often through discretization or binary encodings \cite{Ustun2019, Caruana2015, Lakkaraju2016}. 
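As a rough illustration of what such curated binary encodings look like in practice, consider the following sketch (ours; the feature names and data are hypothetical, and scikit-learn's L1-regularized logistic regression is used only as a loose stand-in for purpose-built sparse scoring systems), which fits a sparse linear model over hand-binarized conditions:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical binarized conditions a caseworker can read directly.
names = ["prior_bookings>=2", "age<25", "dispatches_6m>=10"]
X = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 1],
              [0, 0, 0],
              [1, 0, 0],
              [0, 0, 1]])
y = np.array([1, 0, 1, 0, 1, 0])

# An L1 penalty drives the weights of uninformative conditions toward
# zero, loosely mimicking sparse scorecards built from binary features.
model = LogisticRegression(penalty="l1", solver="liblinear",
                           C=1.0).fit(X, y)
print(dict(zip(names, model.coef_[0].round(2))))
\end{verbatim}
The effort in such approaches lies less in the model fitting and more in choosing the conditions themselves, which motivates the caveat that follows.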
While any ML model requires diligent feature engineering, distilling complex data spaces into a set of optimally discretized and meaningful features can entail extensive effort and optimization of its own. Doing so may prove particularly challenging with the complex and heterogeneous feature spaces typically found in policy settings. \subsubsection{Post-hoc methods for explaining black-box ML models} Post-hoc/post-modeling methods derive explanations from already trained black-box/opaque ML models. As post-hoc methods do not interfere with the model's training process, they enable the use of complex ML models to achieve explainability without risk of sacrificing performance. However, as black-box ML models are often too complex to be explained entirely, post-hoc methods typically derive an approximate explanation \cite{Rudin2019, Gilpin2018}, which makes ensuring the fidelity of the explanations to the model a key challenge in this work. Unlike directly interpretable models, local and global explanations for opaque complex ML models require different methods. For both types of explanations, both model-specific and model-agnostic methods exist in the literature. \textbf{Post-hoc local explanations} A local explanation in a typical public policy problem is used to understand which factors affected the predicted risk score for an individual entity. The most common format of local explanation is feature attribution---also known as feature importance or saliency---where each input feature is assigned an importance score that quantifies its contribution to the model prediction \cite{Baehrens2010, Bhatt2020}. Several approaches exist for deriving feature importance scores: training a directly interpretable surrogate model (e.g., a linear classifier) around a local neighborhood of the instance in question (LIME, MAPLE) \cite{Ribeiro2016, Plumb2018}; feature perturbation based methods for approximating each feature's importance using game-theoretic Shapley values (SHAP, TreeSHAP) \cite{Lundberg2017, Lundberg2018a}; and gradient-based techniques such as sensitivity analysis (SA) \cite{Simonyan2013}, deconvolution \cite{Zeiler2014}, and layer-wise relevance propagation (LRP) \cite{Bach2015}. Among these approaches, methods such as LIME, SHAP, influence functions, and SA are model-agnostic methods, whereas LRP, Deconvolution, and TreeSHAP are model-specific methods. MAPLE stands out among these methods as it can act both as a directly interpretable model and as a model-specific post-hoc local explainer \cite{Plumb2018}. Other approaches such as influence functions \cite{Koh2017}, nearest neighbors, prototypes, and criticisms \cite{Kim2016a, Plumb2018} make use of other instances, rather than features, to provide local explanations. A special form of example-based explanation is counterfactual explanations, which seek to answer the following question: \textit{"what's the smallest change in data that would result in a different model outcome?"} \cite{Barocas2020, Karimi2020, Mothilal2020, Molnar2019}. In a top-$k$ setting, the \textit{change in outcome} can be the inclusion vs. exclusion of the individual from the top-$k$ list. Counterfactual explanations can provide insight on \textit{how to act} to change the risk score, supplementing the feature attribution methods that explain \textit{why} the model arrived at the risk score.
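To make the counterfactual idea concrete, the following deliberately naive sketch (ours; the function name, exhaustive grid search, and cost function are illustrative assumptions, and the cited methods use far more efficient and principled search procedures) looks for the smallest change to an instance that moves its predicted score below a decision threshold:
\begin{verbatim}
import numpy as np
from itertools import product

def naive_counterfactual(predict_fn, x, candidate_values, threshold):
    """Exhaustive search over a small grid of feature settings for the
    candidate closest to x (fewest changed features, then smallest
    total change) whose score drops below the decision threshold."""
    best, best_cost = None, None
    for values in product(*candidate_values):
        c = np.array(values, dtype=float)
        if predict_fn(c.reshape(1, -1))[0] >= threshold:
            continue            # still flagged: not a counterfactual
        cost = (int(np.sum(c != x)), float(np.sum(np.abs(c - x))))
        if best is None or cost < best_cost:
            best, best_cost = c, cost
    return best                 # None if no counterfactual was found

# Actionability can be imposed by restricting candidate_values, e.g.,
# leaving immutable features fixed at their observed values.
\end{verbatim}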
\textbf{Post-hoc global explanations} A global explanation in a typical policy problem would be a summary of factors/patterns that are generally associated with high risk scores, often expressed as a set of rules \cite{Plumb2018, Ribeiro2018}. Global explanations should enable users to predict, accurately and sufficiently often, how the model would behave on a given instance. However, deriving global explanations of models that learn highly complex non-linear decision boundaries is very difficult \cite{Ribeiro2016}. As a result, the area of deriving post-hoc global explanations is not as fully developed as that of local explanation methods. Some approaches for global explanations from black-box ML models include: 1) aggregation of local explanations \cite{Amarasinghe2019, Lundberg2020}, 2) global surrogate models \cite{Frosst2017}, and 3) rule extraction from trained models \cite{Tsukimoto2000}. A noteworthy contribution to deriving globally faithful explanations is ANCHORS \cite{Ribeiro2018}. ANCHORS identifies feature behavior patterns that have high precision and coverage in terms of their contribution to the model predictions of a particular class. Methods proposed in \cite{Lundberg2020, Ribeiro2018} are model-agnostic and methods presented in \cite{Frosst2017, Tsukimoto2000, Amarasinghe2019} are model-specific. \subsection{Mapping existing explainability methods to public policy use-cases} For each use-case, Table \ref{tab:method_applicability} ranks the capabilities of existing methods on a three-point scale: \begin{description} \item $ \bigstar \largestar \largestar $: Potentially applicable methods exist for the use-case. However, their efficacy in the use-case is not demonstrated through any form of evaluation. \item $ \bigstar \bigstar \largestar $: Some evidence of methods being effective in the use-case exists, but the efficacy is not empirically validated through a well-designed user-study. \item $ \bigstar \bigstar \bigstar $: Existing methods are empirically validated on their efficacy of helping users achieve better outcomes for the use-case. \item $ \mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex] \draw (0,0) -- (1,1) (0,1) -- (1,0);} $ : Method group is not applicable to the use-case \end{description} The discussion below summarizes how existing work maps to each use-case and our assessment of the status of current work with respect to these applications. It should be noted that directly interpretable models are potentially applicable to all the use-cases. Therefore, we focus on the post-hoc methods in the summaries below.
\begin{table*} \centering \caption{Applicability of existing methods to public policy use-cases} \begin{tabular}{p{4.2cm}|p{2cm}|p{2cm}| p{2cm} | p{5.2cm} } \toprule Use-case & Post-hoc Local & Post-hoc Global & Interpretable Models & Potentially applicable approaches \\ \midrule Model debugging & $ \bigstar \bigstar \largestar $ & $ \bigstar \bigstar \largestar $ & $ \bigstar \bigstar \largestar $ & \cite{Ribeiro2016, Ribeiro2018, Lundberg2017, Lundberg2018a, Lundberg2020, Bach2015, Baehrens2010, Ustun2013, Ustun2019, Arya2019} \\ \hline Model trust and adoption & $ \bigstar \largestar \largestar $ & $ \bigstar \largestar \largestar $ & $ \bigstar \largestar \largestar $ & \cite{Ribeiro2018, Lundberg2018, Ribeiro2016, Lundberg2017, Lundberg2018a, Bach2015, Plumb2018} \\ \hline Decision making system performance & $ \bigstar \largestar \largestar $ & $ \mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex] \draw (0,0) -- (1,1) (0,1) -- (1,0);} $ & $ \bigstar \largestar \largestar $ & \cite{Ribeiro2016, Lundberg2017, Lundberg2018a, Bach2015, Plumb2018, Ustun2019a, Lou2012, Hu2019, Lakkaraju2016} \\ \hline Intervention selection & $ \bigstar \largestar \largestar $ & $ \mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex] \draw (0,0) -- (1,1) (0,1) -- (1,0);} $ & $ \bigstar \largestar \largestar $ & \cite{Ribeiro2016, Lundberg2017, Lundberg2018a, Bach2015, Plumb2018, Ustun2019a, Lou2012, Hu2019, Lakkaraju2016} \\ \hline Recourse & $ \bigstar \bigstar \largestar $ & $ \mathbin{\tikz [x=1.4ex,y=1.4ex,line width=.2ex] \draw (0,0) -- (1,1) (0,1) -- (1,0);} $ & $ \bigstar \bigstar \largestar $ & \cite{Ribeiro2016, Lundberg2017, Lundberg2018a, Bach2015, Plumb2018, Ustun2019a, Lou2012, Hu2019, Lakkaraju2016, Ustun2019, Poyiadzi2020} \\ \bottomrule \end{tabular} \label{tab:method_applicability} \end{table*} \textbf{Model debugging:} Methods for both local and global post-hoc explanations are potentially useful in this use-case. Global explanations could help identify errors in overall decision-making patterns (globally important features can help identify data leakage), and local explanations can help to uncover errors in individual predictions. Although some recent work lends evidence for the utility of explanations in discovering model errors \cite{Ribeiro2016, Caruana2015}, the efficacy of these methods is not empirically validated through well-defined user trials in real-world applications. Likewise, evaluations have yet to be performed in the context of policy problems. \textbf{Trust and model adoption:} As with model debugging, both global and local explanation methods are potentially applicable. However, as the end-user is the domain expert, explanations will need to be extended beyond feature attribution while preserving fidelity to what the model has learned. While existing methods discuss user trust as a broad goal, to the best of our knowledge, their ability to help regulators or decision-makers adequately trust ML models is not demonstrated through well-defined evaluations or user-trials. \textbf{Unreliable prediction detection:} Feature attribution based local explanations are potentially applicable to provide the necessary information to the user. However, feature attribution alone may not be sufficient. Users may need more contextual information such as \textit{How does the instance fit into the training data distribution?} \textit{How does the model behave for similar examples?
and what factors did it rely on for those predictions?} To that end, there have been some efforts to present visual summaries of explanations to the user \cite{Lundberg2017, Lundberg2018, Ribeiro2018}, which could potentially be useful in this use-case. Therefore, available local explanation methods do provide a good starting point. However, the effectiveness of those methods in generating contextual and user-interpretable explanations that help identify unreliable predictions is not evidenced through evaluations or well-defined user-trials. \textbf{Intervention selection:} As the intervention determinations are often individualized, local explanation methods are potentially applicable for generating the reasons behind the risk score. As with the above use-case, users may need more contextual information to supplement the local explanations, such as \textit{how the instance fits into the training data distribution}, and the \textit{intervention history} for \textit{similar}---\textit{w.r.t.} data and \textit{w.r.t.} explanation---individuals. To the best of our knowledge, there is no evidence in the existing body of work of the efficacy of using these local explanation methods for informing intervention selection. \textbf{Recourse:} Feature attribution based local explanations are potentially applicable for deriving \textit{reasons} behind the decision, and counterfactual explanations are potentially useful in explaining how to improve the outcomes. However, simple counterfactual explanations do not guarantee explanations with actionable changes. There have been some efforts to derive \textit{actionable} counterfactual explanations \cite{Poyiadzi2020, Ustun2019a}. While there is some evidence of counterfactual explanations' potential for helping individuals seek recourse, empirical validation is still required to establish their efficacy. \section{Gaps and Proposed Research Directions} The previous section mapped the applicability of existing methods to the use-case taxonomy. In this section, we use that mapping to identify gaps in existing explainable ML research when compared to the needs of real-world public policy and social good problems, and propose an ML research agenda to fill those gaps. We believe that tackling these research gaps is critical for the machine learning discipline if we want our systems to get deployed, used, and have a positive and lasting impact on society and public policy. \subsection{Gap 1: Existing methods have not been sufficiently and effectively evaluated in real-world policy contexts} The most pronounced gap in existing methods is the lack of effective evaluation to establish their efficacy in practical settings. Evaluating the ``quality'' of explanations has been a topic of interest in the explainable ML community \cite{Holzinger2020, Doshivelez2017, Ribeiro2016}. The most common approach to evaluating an explanation has been assessing its faithfulness to the model and data \cite{Ribeiro2016, Holzinger2020}. In this work, we approach evaluation of an explanation with respect to its ability to improve a public outcome of interest. While Doshi-Velez and Kim highlighted the need for evaluating explanations at different levels and presented a generalized framework in \cite{Doshivelez2017}, we argue that devising a framework for a given problem domain, such as public policy problems, requires efforts that account for the specific requirements and nuances of that domain.
A complete evaluation of ML explanations in a policy context needs three elements: 1) a real-world policy problem with a well-defined goal (and metrics), 2) real-world data, and 3) a well-defined user-study (with real users). Most of the work in this area to date has focused on benchmark ML problems and data sets (e.g., image classification on MNIST data), with somewhat general-purpose metrics (AUC, F1, etc.), and with users in a lab setting (often Amazon Mechanical Turk) \cite{Ribeiro2016, Lundberg2017, Lundberg2018a, Lou2013, Bach2015, Shrikumar2017, Hu2019}. Benchmark problems have played a crucial role in the development and refinement of explainable ML methods by virtue of their convenience and availability to a wide range of researchers. However, we argue that these problems, data, and users are far removed from the actual deployment context encountered in a public policy setting and thus fail to provide convincing evidence of method effectiveness in informing the choices of domain experts in complex problem settings. The relatively small number of explainable ML studies that have incorporated some aspects of practical evaluation have unfortunately consistently lacked at least one (and in many cases multiple) of the elements necessary to offer conclusive evidence of real-world efficacy. For instance, \cite{Caruana2015} and \cite{Zeng2017} describe evaluations making use of real problems with a clear goal and real-world data. However, both studies failed to use real users to evaluate the usefulness of explanations. In the case of \cite{Lundberg2018, Lundberg2020}, the authors applied methods to a real problem with clear goals, using real-world data and real end-users, but failed to conduct a well-defined user study for empirical evaluation. In \cite{Poursabzisangdeh2021}, the authors used real-world data and a large cohort of real users. However, how the application of real-estate valuation relates to a decision and an outcome of interest, and the utility of explanations in achieving that goal, were not clearly established. This gap is particularly acute in the context of public policy problems, given the characteristics that set them apart from other ML settings, such as standard image classification (Section~\ref{sec:policy_characteristics}). We propose several research directions to better evaluate the extent to which existing explainable ML methods can meet the needs of real-world applications in the public policy domain: \subsubsection{Identify real-world public policy test-cases} The first step is to identify test-cases for implementing these methods. A complete evaluation for public policy applications should involve the following components: \begin{enumerate} \item \textbf{A real-world policy problem and goal}: Focusing on goals faced by practitioners will ensure that any evaluation reflects the ability of explanations to improve socially-relevant outcomes. Picking problems that represent a range of policy settings with different types of goals (resource allocation, early warning, impact analysis) would enable a more comprehensive assessment. \item \textbf{Real-world data}: To capture the nuances and characteristics of applying ML to a policy area in practice, the use of real data from the problem domain is essential. This is of particular importance with evaluating explainable ML methods, as simplified or synthetic data sets might provide an overly-optimistic evaluation of their ability to extract meaningful information.
\item \textbf{Real users:} Although their time is often scarce, the domain users who will be acting on model outputs must be involved in the evaluation process to ensure it reflects the actual deployment scenario. Because the interaction between model predictions, explanations, and users' domain expertise will dictate the performance of the system, substituting inexperienced users (for instance, from Mechanical Turk) provides little insight into how well explanations will perform in practice. \end{enumerate} \subsubsection{Evaluate performance-explainability trade-off (if any) for directly interpretable models} As directly interpretable models rely on carefully curated input features, it is necessary to explore the trade-off between performance and scalability in practice. To that end, the models should be implemented on several real policy problems, evaluating: 1) the trade-off between feature preparation efforts and performance, and 2) their ability to generalize on future data under strong temporal dependencies. While the prospect of simple, directly interpretable models certainly holds considerable appeal, their performance must be rigorously tested against more opaque models such as tree ensembles across problem domains and applications to understand any potential trade-offs in practice and make informed implementation decisions. Certain critical applications will require models to be completely interpretable, while other applications can have built-in guardrails that protect against unintended harms, and it is important for our methods to support that spectrum of use cases. \subsubsection{Evaluate explanation methods on their ability to improve outcomes} For each use-case, we need to define outcomes as well as other criteria by which explanation effectiveness should be evaluated. For instance, when informing decisions about whether (and how) to intervene, the outcome of interest is the \textit{precision} of the list generated by the decision-making system, while other criteria---consistency/stability of explanations (e.g., \textit{"Does SHAP/LIME yield the same feature attribution scores for repeated runs for the same instance and same model?"}) and how they apply to the specific problem context---will be informative of the utility of the explanations. Once these evaluation criteria are defined, they can inform the design and implementation of user studies to directly validate existing explanation methods in the context of each use case. Most importantly, these experiments should focus on evaluating the usefulness of explanations to improve the relevant \textit{policy outcomes} rather than more narrowly on end-user \textit{perceptions} of the explanations. As such, any user studies should include the appropriate control and treatment group variants to rigorously assess how outcomes differ in the presence and absence of explanations. \subsection{Gap 2: Existing methods are not explicitly designed for specific use-cases} As discussed above, existing methods are developed with loosely defined or generic explainability goals (e.g., transparency) and without well-defined context-specific use-cases. As a result, methods are developed without understanding the specific requirements of a given domain, use-case, or user-base, resulting in a lack of adoption and sub-optimal outcomes.
While several existing methods may be applicable for each use-case, their effectiveness in real-world applications is not yet well-established, meaning this potential applicability may fail to result in practical impact. As more methods are rigorously evaluated in practical, applied settings as suggested above, gaps in their ability to meet the needs of these use-cases may become evident. For instance, model-agnostic methods such as LIME \cite{Ribeiro2016} and SHAP \cite{Lundberg2017} are capable of extracting input feature importance scores for individual predictions from otherwise opaque models. However, it is unclear whether they can address needs such as generating explanations that are well-contextualized and truly interpretable by less technical users without sacrificing fidelity (e.g., to help a domain expert identify unreliable model predictions or an affected individual seek recourse). \section{Conclusion} Despite the existence of a wide array of explainable ML methods, their efficacy in improving real-world decision-making systems is yet to be sufficiently explored. In this paper, we presented an initial step towards filling that void in the context of public policy applications by defining the scope of explainable ML in the domain. First, we identified a taxonomy of use-cases for ML model explanations in the ML-aided public policy decision making pipeline: 1) model debugging, 2) regulator trust \& model adoption, 3) unreliable prediction detection, 4) intervention selection, and 5) recourse. For each use-case, we defined the goals of an ML explanation and the intended end-user. Then, we summarized the existing approaches in explainable ML and identified the degree to which this work addresses the needs of the identified use-cases. We observed that, while the existing approaches are potentially applicable to the use-cases, their utility has not been thoroughly validated for any of the use-cases through well-designed empirical user-studies. Two main gaps were evident in the design and evaluation of existing work: 1) methods are not sufficiently evaluated in real-world contexts, and 2) they are not designed and developed with target use-cases and well-defined explainability goals in mind. In response to these gaps, we proposed several research directions to systematically evaluate the existing methods on problems with real policy goals, real-world data, and domain experts. This paper is our attempt at connecting the ML research community that develops explainable ML methods with the problems and needs of the public policy and social good world. As computer scientists who develop and apply ML algorithms to social/policy problems in collaboration with government agencies and non-profits, we are ideally and uniquely positioned to understand both the existing body of work in explainable ML and the explainability needs of domains such as public health, education, criminal justice, and economic development. Two main factors motivated us to compile this discussion: 1) despite the existence of a large body of methodological work in explainable ML, we failed to identify methods that we could directly apply to the problems we were tackling in the real world, and 2) the frequent conversations initiated by our colleagues in the ML research community on how their methods could be better suited and developed for real-world ML problems.
Explainable ML methods are critical for ML systems that are designed for policy and societal problems, but they will only have impact if we design and develop them for explicitly defined use-cases and evaluate them in a way that demonstrates their effectiveness on those use-cases. Therefore, the goal of this paper was not to develop new algorithms, nor was it to conduct a thorough survey of explainable ML work (since there are already a number of excellent articles on that topic). Rather, our goal was to take the necessary first steps to bridge the gap between methodological work and real-world needs. We hope that this discussion will help the ML research community collaborate with the Policy and HCI communities to ensure that existing and newly proposed explainable ML methods are well-suited to meet the needs of the end-users who will be applying them to the benefit of society. \begin{acks} We would like to thank the Block Center for Technology and Society at Carnegie Mellon University for funding this work. \end{acks} \bibliographystyle{ACM-Reference-Format}
\subsection{Algorithm for Acyclic Networks}\label{sec:algAcyclic} \subsubsection{Code Generation and Data Encoding} Initially, all local and global encoding kernels are set to 0. At time $t$, the $(t+1)$-th coefficient $k_{e',e,t}$ of the local encoding kernel $k_{e',e}(z)$ is chosen uniformly at random from $\mathbb{F}_q$ for each adjacent pair $(e',e)$, independently from other kernel coefficients. Each node $v$ stores the local encoding kernels and forms the outgoing data symbol as a random linear combination of incoming data symbols in its memory according to Eq.~\eqref{eq:yet}. Node $v$ also stores the global encoding kernel matrix $F_v(z)$ and computes the global encoding kernel $f_{e}(z)$, in the form of a vector of coding coefficients, according to Eq.~\eqref{eq:fe}. During this code construction process, $f_e(z)$ is attached to the data transmitted on $e$. Once code generation terminates and the CNC $F_r(z)$ is known at each sink $r$, $f_e(z)$ no longer needs to be forwarded, and only data symbols are sent on each outgoing edge. Recall that we ignore the reduction in rate due to the transmission of coding coefficients, since this overhead can be amortized over long periods of data transmissions. In acyclic networks, a complete topological order exists among the nodes, starting from the source. Edges can be ranked such that coding can be performed sequentially, where a downstream node encodes after all its upstream nodes have generated their coding coefficients. Observe that we have not assumed non-zero transmission delays. \subsubsection{Testing for Decodability and Data Decoding} \label{subsubsec:decoding} At every time instant $t$, each sink $r$ decides whether its global encoding kernel matrix $F_r(z)$ is full rank. If so, it sends an ACK signal to its parent node. An intermediate node $v$ which has received ACKs from all its children at time $t_0$ will send an ACK to its parent, and set all subsequent local encoding kernel coefficients $k_{e',e,t}$ to $0$ for all $t>t_0$, $e' \in In(v)$, and $e\in Out(v)$. In other words, the constraint lengths of the local convolutional codes increase until they are sufficient for downstream sinks to decode successfully. Such automatic adaptation eliminates the need for estimating the field size or the constraint length a priori. It also allows nodes within the network to operate with different constraint lengths as needed. If $F_r(z)$ is not full rank, $r$ stores received messages and waits for more data to arrive. At time $t$, the algorithm is considered successful if all sinks can decode. For a sink $r$, decodability is equivalent to $F_r(z)$ having full rank $m$, i.e., to some $m\times m$ submatrix of $F_r(z)$ having a determinant that is a non-zero polynomial in $z$. Recall from Section~\ref{sec:basicDefs} that $F_r(z)$ can be written as $F_r(z)=F_{r,0}+F_{r,1}z+\cdots+F_{r,t}z^{t}$, where $F_{r,t}$ is the global encoding kernel matrix at time $t$. Computing the determinant of $F_r(z)$ at every time instant $t$ is complex, so we test instead the following two conditions, introduced in \cite{CG2009} and \cite{MS1968}, to determine decodability at a sink $r$. The first condition is necessary and easy to compute, while the second is both necessary and sufficient, but slightly more complex. \begin{enumerate} \item $rank(\widehat{F}_{r,t})=m$, where $\widehat{F}_{r,t}=(F_{r,0}, F_{r,1}, \ldots, F_{r,t})$.
\item $rank(M_{r,t})-rank(M_{r,t-1})=m$, where \begin{align} M_{r,i} = \left( {\begin{array}{*{20}c} F_{r,0} & F_{r,1} & \cdots & F_{r,i} \\ 0 & \ddots & \ddots & \vdots \\ 0 & \cdots & F_{r,0} & F_{r,1} \\ 0 & \cdots & 0 & F_{r,0} \\ \end{array}} \right). \end{align} \end{enumerate} Once $F_r(z)$ is full rank, $r$ can perform decoding operations. Let $T_r$ be the \emph{first decoding time}, or the earliest time at which the decodability conditions are satisfied. Denote by ${x}_0^{T_r}$ and ${y}_0^{T_r}$ the row vectors $(x_0, \ldots, x_{T_r})$ and $(y_0, \ldots, y_{T_r})$. Each source message $x_t$ is a size $m$ row vector of source symbols $x_{i,t}\in \mathbb{F}_q$ generated at $s$ at time $t$; each data message $y_t$ is a size $|In(r)|$ row vector of data symbols $y_{e,t}\in \mathbb{F}_q$ received on the incoming edges of $r$ at time $t$, $e\in In(r)$. Hence, ${y}_0^{T_r} = {x}_0^{T_r} M_{r,T_r}$. To decode, we want to find a size $|In(r)|({T_r}+1)\times m$ matrix $D$ such that $M_{r,T_r}D= {I_{m} \choose \textbf{0}} $. We can then recover source message $x_0$ by evaluating $y_0^{T_r} D= x_0^{T_r}M_{r,T_r} D= x_0$. Once $D$ is determined, we can decode sequentially the source message $x_t$ at time $t+{T_r}$, $t>0$. Note that if $|In(r)|>m$, we can simplify the decoding process by using only $m$ independent received symbols from the $|In(r)|$ incoming edges. Observe that an intermediate node $v$ only stops lengthening its local encoding kernels $k_{e',e}(z)$ when \emph{all} of its downstream sinks achieve decodability. Thus, for a sink $r$ with first decoding time ${T_r}$, the length of $F_r(z)$ can increase even after $T_r$. Recall from Section~\ref{sec:basicDefs} that $L_r$ is the degree of $F_r(z)$. We will show in Section~\ref{sec:stoppingTime} that ARCNC converges in a finite amount of time for a multicast connection. In other words, when the decodability conditions are satisfied at \emph{all} sinks, the values of $L_r$ and $T_r$ at an individual sink $r$ satisfy the condition $L_r\geq {T_r}$, where $L_r$ is finite. Decoding of symbols after time ${T_r}$ can be conducted sequentially. Details of the decoding operations can be found in \cite{guoLetter2012}. \subsubsection{Feedback} As we have described in the decoding subsection, acknowledgments are propagated from sinks through intermediate nodes to the source to indicate whether code length should continue to be increased at coding nodes. ACKs are assumed to be instantaneous and require no dedicated network links, thus incurring no additional delay or throughput costs. Such assumptions may be reasonable in many systems since feedback is only required during the code construction process. Once code length adaptation finishes, ACKs are no longer needed. We show in Section~\ref{sec:stoppingTime} that ARCNC terminates in a finite amount of time. Therefore, the cost of feedback can be amortized over periods of data transmissions. \subsection{Algorithm Statement for Cyclic Networks}\label{sec:algCyclic} In an acyclic network, the local and global encoding kernel descriptions of a linear network code are equivalent, in the sense that for a given set of local encoding kernels, a set of global encoding kernels can be calculated recursively in any upstream-to-downstream order. In other words, a code generated from local encoding kernels has a unique solution when decoding is performed on the corresponding global encoding kernels. By comparison, in a cyclic network, partial orderings of edges or nodes are not always consistent.
Given a set of local encoding kernels, there may exist a unique set, no set, or multiple sets of global encoding kernels (\S 3.1, \cite{NWT2005}). If the code is non-unique, the decoding process at a sink may fail. A sufficient condition for a CNC to be successful is that the constant coefficient matrix consisting of all local encoding kernels be nilpotent \cite{KM2003}; this condition is satisfied if we code over an acyclic topology at $t=0$ \cite{CG2009}. In the extreme case, all local encoding kernels can be set to 0 at $t=0$. This setup translates to a unit transmission delay on each link, which, as previous work on RLNC has shown, guarantees the uniqueness of a code construction \cite{HMK2006}. To minimize decoding delay, it is intuitive to make as few local encoding kernels zero as possible. In other words, a reasonable heuristic is to assign 0 to a minimum number of $k_{e',e,0}$, $(e',e)\in\mathcal{E}$, and to assign values chosen uniformly at random from $\mathbb{F}_q$ to the rest. The goal is to guarantee that each cycle contains at least one unit of delay. Although seemingly similar, this process is actually not the same as the problem of finding the minimal feedback edge set. A feedback edge set is a set containing at least one edge of every cycle in the graph. When a feedback edge set is removed, the graph becomes an acyclic directed graph. In our setup, however, since $k_{e',e,0}$ is specific to an adjacent edge pair, $k_{e',e,0}$ does not need to be 0 for all $e'$ where $(e',e) \in \mathcal{E}$. \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./graphics/cyclicExample.eps}\\ \caption{A sample cyclic network with edges numerically indexed. The set of indices is not unique, and depends on the order at which nodes are visited, starting from $s$. On the left, $r_1$ is visited before $r_2$; on the right, $r_2$ is visited before $r_1$. In each case, $(e',e)$ is highlighted with a curved arrow if $e'\succeq e$.}\label{fig:cyclicExample} \end{figure} For example, a very simple but not necessarily delay-optimal heuristic is to index all edges, and to assign 0 to $k_{e',e,0}$ if $e'\succeq e$, i.e., when the index of $e'$ is no smaller than that of $e$; $k_{e',e,0}$ is chosen uniformly at random from $\mathbb{F}_q$ if $e' \prec e$. Fig.~\ref{fig:cyclicExample} illustrates this indexing scheme. A node is considered to be visited if one of its incoming edges has been indexed; a node is put into a queue once it is visited. For each node removed from the queue, numerical indices are assigned to all of its outgoing edges. Nodes are traversed starting from the source $s$. The outgoing edges of $s$ are therefore numbered from 1 to $|Out(s)|$. Note that the index set thus obtained is not necessarily unique. In this particular example, we can have two sets of edge indices, as shown in Fig.~\ref{fig:cyclicExample}. Here $r_1$ is visited before $r_2$ on the left, and vice versa on the right. In each case, an adjacent pair $(e',e)$ is highlighted with a curved arrow if $e'\succeq e$. At $t=0$, we set $k_{e',e,0}$ to 0 for such highlighted adjacent pairs, and choose $k_{e',e,0}$ uniformly at random from $\mathbb{F}_q$ for other adjacent pairs. Observe that in an acyclic network, this indexing scheme provides a total ordering for the nodes as well as for the edges: a node is visited only after all of its parents and ancestors are visited; an edge is indexed only after all edges on any of its paths from the source are indexed.
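For concreteness, the following sketch (ours; the representation of the graph as a dictionary of outgoing (tail, head) edge tuples is an illustrative assumption) implements this breadth-first indexing:
\begin{verbatim}
from collections import deque

def index_edges(out_edges, source):
    """Assign numeric indices to edges in the order their tail nodes
    are dequeued in a breadth-first traversal from the source.
    out_edges: dict mapping each node to its (tail, head) edge tuples."""
    index, next_id = {}, 1
    visited = {source}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for edge in out_edges.get(v, []):
            index[edge] = next_id      # edge receives the next index
            next_id += 1
            head = edge[1]
            if head not in visited:    # a node is visited once one of
                visited.add(head)      # its incoming edges is indexed
                queue.append(head)
    return index

# At t = 0, k_{e',e,0} is then drawn uniformly from F_q only if
# index[e'] < index[e], and set to 0 otherwise, so that every cycle
# contains at least one zero coefficient.
\end{verbatim}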
In a cyclic network, however, an order of nodes and edges is only partial, with inconsistencies around each cycle. Such contradictions in the partial ordering of edges make the generation of unique network codes along each cycle impossible. By assigning 0 to local encoding kernels $k_{e',e,0}$ for which $e'\succeq e$, such inconsistencies can be avoided at time 0, since the order of $e'$ and $e$ becomes irrelevant in determining the network code. After the initial step, $k_{e',e,t}$ is not necessarily 0 for $e'\succeq e$, $t>0$; nonetheless, the convolution operations at intermediate nodes ensure that the 0 coefficient inserted at $t=0$ makes the global encoding kernels unique at the sinks. This idea can be derived from the expression for $f_{e,t}$ given in Eq.~\eqref{eq:fet}. In each cycle, there is at least one $k_{e',e,0}$ that is equal to zero. The corresponding $f_{e',t}$ therefore does not contribute to the construction of other $f_{e,t}$'s in the cycle. In other words, the partial ordering of edges in the cycle can be considered consistent at $t=1$ and later times. Although this heuristic for cyclic networks is not optimal, it is universal. One disadvantage of this approach is that full knowledge of the topology is required at $t=0$, making the algorithm centralized instead of entirely distributed. Nonetheless, if inserting an additional transmission delay on each link is not an issue, we can always bypass this code assignment stage by zeroing all local encoding kernels at $t=0$. After initialization, the algorithm proceeds in exactly the same way as in the acyclic case. \subsection{Success probability} Discussions in \cite{HMK2006, LYC2003, KM2003} state that in a network with delays, ANC gives rise to random processes which can be written algebraically in terms of a delay variable $z$. In other words, a convolutional code can naturally evolve from the message propagation and the linear encoding process. ANC in the delay-free case is therefore equivalent to CNC with constraint length 1. Similarly, using a CNC with constraint length $l>1$ on a delay-free network is equivalent to performing ANC on the same network, but with $l-1$ self-loops attached to each encoding node. The self-loops carry delays of $1, 2, \ldots, l-1$ time units, corresponding to $z, z^2, \ldots, z^{l-1}$, respectively. For example, in Figure~\ref{fig:ANC_CNC}, we show a node with two incoming edges. Let the data symbol transmitted on edge $\dot{e}$ at time $t$ be $y_{\dot{e},t}$. A CNC with length $l=2$ is used on the left, assuming that transmissions are delay-free. According to Eq.~\eqref{eq:yet_k} and \eqref{eq:yet}, the data symbol transmitted on $e_1$ at time $t$ is \begin{align} y_{e_1,t} & = \sum_{\dot{e}\in\{e',e''\}} \left(y_{\dot{e},t}k_{\dot{e},e,0}+y_{\dot{e},t-1}k_{\dot{e},e,1}\right)\,. \label{ye1tCNC} \end{align} On the right, the equivalent ANC is shown for the same node, where a single self-loop with a transmission delay of 1 has been added. Using matrix notation, the output data symbols from $v$ are $(y_{e_2,t} \quad y_{e''',t}) = (y_{e',t} \quad y_{e'',t} \quad y_{e''',t-1})K_v $, i.e., \begin{align} y_{e''',t} & = y_{e',t}k_{e',e,1} +y_{e'',t}k_{e'',e,1}\notag \\ y_{e_2,t}& = y_{e',t}k_{e',e,0}+y_{e'',t}k_{e'',e,0}+y_{e''',t-1} \notag \\ & = \sum_{\dot{e}\in\{e',e''\}} \left(y_{\dot{e},t}k_{\dot{e},e,0}+y_{\dot{e},t-1}k_{\dot{e},e,1}\right) \label{ye2tANC} \end{align} \begin{figure}[t!] \centering \includegraphics[width=3.4in]{./graphics/ANC_CNC.eps}\\ \caption{A convolution code resulting from self-loops on a network with transmission delays.
(a) CNC in a delay-free network. Data transmitted on incoming edges are $y_{e'}(z)$ and $y_{e''}(z)$, respectively. The local encoding kernels are given by $K_{\text{CNC}}(z)$. (b) Equivalent ANC in a network with delays. The given self-loop carries a single delay $z$. Data symbols transmitted on incoming edges are $y_{e',t}$ and $y_{e'',t}$ at time $t$. The ANC coding coefficients are given by the matrix $K_{\text{ANC}}$.}\label{fig:ANC_CNC} \end{figure} Clearly $y_{e_1,t}$ is equal to $y_{e_2,t}$. The ARCNC algorithm we have proposed therefore falls into the framework given by Ho et al. \cite{HMK2006}, in the sense that the convolution process either arises naturally from cycles with delays, or can be considered as computed over self-loops appended to acyclic networks. Applying the analysis from \cite{HMK2006}, we have the following theorem. \begin{theorem}\label{thm:prob} For multicast over a general network with $d$ sinks, the ARCNC algorithm over $\mathbb{F}_q$ can achieve a success probability of at least $(1-d/q^{t+1})^{\eta}$ at time $t$ if $q^{t+1}>d$, where $\eta$ is the number of links with random coefficients. \end{theorem} \begin{proof} At node $v$, the local encoding kernel $k_{e',e}(z)$ at time $t$ is a polynomial with maximal degree $t$, i.e., $k_{e',e}(z)=k_{e',e,0}+k_{e',e,1}z+\cdots+k_{e',e,t}z^{t}$, where $k_{e',e,i}$ is chosen uniformly at random from $\mathbb{F}_q$. If we group the encoding coefficients, the ensuing vector, $k_{e',e}=\{k_{e',e,0},k_{e',e,1},\cdots,k_{e',e,t}\}$, is of length $t+1$, and corresponds to a random element of the extension field $\mathbb{F}_{q^{t+1}}$. Using the result in \cite{HMK2006}, we conclude that the success probability of ARCNC at time $t$ is at least $(1-d/q^{t+1})^{\eta}$, as long as $q^{t+1}>d$. \end{proof} Similarly, the analysis by Balli et al. \cite{BYZ2009}, which states that the success probability is at least $(1-d/(q-1))^{|J|+1}$, where $|J|$ is the number of encoding nodes, could be applied to give a tighter lower bound on the success probability of ARCNC when $q^{t+1}>d$. \subsection{First decoding time}\label{sec:stoppingTime} As discussed in Section~\ref{subsubsec:decoding}, we define the \emph{first decoding time} $T_r$ for sink $r$, $1\leq r\leq d$, as the time it takes $r$ to achieve decodability for the first time. We had called this variable the \emph{stopping time} in our previous work \cite{guo2011localized}. Also recall that when all sinks are able to decode, at each individual sink $r$, $T_r$ can be smaller than $L_r$, which is the degree of the global encoding kernel matrix $F_r(z)$. Denote by $T_N$ the time it takes for all sinks in the network to successfully decode, i.e., $T_N=\max\{T_1,\ldots,T_d\}$; then $T_N$ is also equal to $\max\{L_1,\ldots,L_d\}$. The following corollary holds: \begin{corollary} For any given $0<\varepsilon<1$, there exists a $T_0>0$ such that for any $t \geq T_0$, ARCNC solves the multicast problem with probability at least $1-\varepsilon$, i.e., $P(T_N > t) < \varepsilon$.\label{crlry2} \end{corollary} \proof Let $T_0 = \left\lceil \log_q d-\log_q(1-\sqrt[\eta]{1-\varepsilon}) \right\rceil -1$; then $T_0 + 1 \geq \lceil \log_q d \rceil$ since $0 < \varepsilon <1$, and $(1-d/q^{T_0+1})^\eta>1-\varepsilon$.
Applying Theorem~\ref{thm:prob} gives $P(T_N > t) \leq P(T_N >T_0) < 1-(1-d/q^{t+1})^{\eta}< \varepsilon$ for any $t\geq T_0$, \endproof Corollary~\ref{crlry2} shows that $T_N$ is a valid random variable, and ARCNC converges in a finite amount of time for a multicast connection. Another relevant measure of the performance of ARCNC is the \emph{average first decoding time}, $T_{\text{avg}}=\frac{1}{d}\sum_{r=1}^{d}T_r$. Observe that $E[T_{\text{avg}}] \leq E[T_N]$, where \begin{align*} E[T_N] & = \sum_{t=1}^{\lceil \log_q d\rceil -1}P(T_N\geq t) + \sum_{t= \lceil \log_q d\rceil}^{\infty}{P(T_N\geq t)}\\ & \le \lceil \log_q d\rceil -1 + \sum_{t=\lceil \log_q d\rceil}^{\infty}[1-(1-\frac{d}{q^t})^{\eta}] \\ & = \lceil \log_q d\rceil -1 + \sum_{k=1}^{\eta}(-1)^{k-1}{\eta \choose k} \frac{d^k}{q^{\lceil \log_q d\rceil k}-1}\,. \end{align*} When $q$ is large, the summation term becomes $1-(1-d/q)^\eta$ by the binomial expansion. Hence as $q$ increases, the second term above diminishes to $0$, while the first term $\lceil \log_q d\rceil -1$ is 0. $E[T_{\text{avg}}]$ is therefore upper-bounded by a term converging to 0; it is also lower bounded by 0 because at least one round of random coding is required. Therefore, $E[T_{\text{avg}}]$ converges to $0$ as $q$ increases. In other words, if the field size is large enough, ARCNC reduces in effect to RLNC. Intuitively, the average first decoding time of ARCNC depends on the network topology. In RLNC, all nodes are required in code in finite fields of the same size; thus the effective field size of RLNC is determined by the worst case sink. This scenario corresponds to having all nodes stop at $T_N$ in ARCNC. ARCNC enables each node to decide locally what is a good constraint length to use, depending on side information from downstream nodes. Since $E[T_{\text{avg}}]\leq E[T_N]$, some nodes may be able to decode before $T_N$. The corresponding effective field size is therefore expected to be smaller than in RLNC. Two possible consequences of a smaller effective field size are reduced decoding delay, and reduced memory requirements. In Section~\ref{sec:examples}, we confirm through simulations that such gains can be attained by ARCNC. \subsection{Memory}\label{sec:memory} To measure the amount of memory required for ARCNC, first recall from Section~\ref{sec:basicDefs} that at each node $v$, the global encoding kernel matrix $F_v(z)$, the local encoding kernel matrix $K_v(z)$, and past data $y_{e'}(z)$ on incoming arcs $e'\in In(v)$ need to be stored in memory. $K_v(z)$ and $y_{e'}(z)$ should always be saved because together they generate new data symbols to transmit (see Eq.~\eqref{eq:yet_k} and \eqref{eq:yet}). $F_v(z)$ should be saved during the code construction process at intermediate nodes, and always at sink nodes, since they are needed for decoding. Let us consider the three contributors to memory use one by one. Firstly, recall from Section~\ref{sec:basicDefs} that $F_v(z)$ can be viewed as a polynomial in $z$. When all sinks are able to decode, at an individual node $v$, $F_v(z)$ has degree $L_v$, and coefficients from $\mathbb{F}_q^{m\times In(v)}$. The total amount of memory needed for $F_v(z)$ is therefore proportional to $\lceil\log_2 q\rceil m In(v) (L_v+1)$. Secondly, from Eq.~\eqref{eq:fet}, we see that the length of a local encoding kernel polynomial $k_{e',e}(z)$ should be equal to or smaller than that of $f_e(z)$. Thus, the length of $K_v(z)$ should also be equal to or smaller than that of $F_v(z)$. 
The coefficients of $K_v(z)$ are elements of $\mathbb{F}_q^{In(v)\times Out(v)}$. Hence, the amount of memory needed for $K_v(z)$ is proportional to $\lceil\log_2 q\rceil Out(v) In(v) (L_v+1)$. Lastly, a direct comparison between Eq.~\eqref{eq:yet} and \eqref{eq:fet} shows that memory needed for $y_{e'}(z)$, $e'\in In(v)$ is the same for memory needed for $F_v(z)$. In practical uses of network coding, data can be transmitted in packets, where symbols are concatenated and operated upon in parallel. Packets can be very long in length. Nonetheless, the exact size of data packets is irrelevant for comparing memory use between different network codes, since its effect cancels out during comparisons. Observe that, $m$ is the number of symbols in the source message, determined by the min-cut of the multicast connection, independent of the network code used. Similarly, $In(v)$ and $Out(v)$ are attributes inherent to the network topology. To compare the memory use of different network codes, we can omit these terms, and define the average memory use of ARCNC by the following common factor: \begin{align} W_{\text{avg}} \triangleq \frac{\lceil\log_2 q\rceil}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}(L_v+1)\,.\label{eq:Wavg} \end{align} \noindent In RLNC, $L_v=0$, and the expression simplifies to $\lceil\log_2 q\rceil$, which is the amount of memory needed for a single finite field element. In Section~\ref{example:umbrella}, we examine the memory overheads of ARCNC in a family of networks called umbrella networks by considering $E[W_\text{avg}]$. In Section~\ref{sec:simulations}, we plot $E[W_{\text{avg}}]$ for several structured and random networks. \subsection{Complexity} to study the computation complexity of ARCNC, first observe that, once the adaptation process terminates, the computation needed for the ensuing code is no more than a regular CNC. In fact, the expected computation complexity is proportional to the average code length of ARCNC. We therefore omit the details of the complexity analysis of regular CNC here and refer interested readers to \cite{EF2004}. For the adaptation process, the encoding operations are described by Eq.~\eqref{eq:fet}. If the algorithm stops at time $T_N$, then the number of operations in the encoding steps is $O(D_{in}|\mathcal{E}|T_N^2m)$, where $D_{in}$ represents the maximum input degree over all nodes, i.e., $D_{in}=\max_{v\in\mathcal{V}}|In(v)|$. To determine decodability at a sink $r$, we check if the rank of the global encoding matrix $F_r(z)$ is $m$. A straight-forward approach is to check whether the determinant of $F_r(z)$ is a non-zero polynomial. Alternatively, Gaussian elimination could also be applied. At time $t$, because $F_r(z)$ is an $m \times |In(r)|$ matrix and each entry is a polynomial with degree $t$, the complexity of checking whether $F_r(z)$ is full rank is $O(D_{in}^22^mmt^2)$. Instead of computing the determinant or using Gaussian elimination directly, we propose to check the conditions given in Section~\ref{sec:algAcyclic}. For each sink $r$, at time $t$, determining $rank\left({\begin{array}{*{20}c} F_0 & F_1 & \cdots F_{t} \end{array}}\right)$ requires a computation complexity of $O(D_{in}^2mt^2)$. If the first test passes, we then need to calculate $rank(M_{t})$ and $rank(M_{t-1})$. Observe that $rank(M_{t-1})$ was computed during the last iteration. $M_t$ is a $(t+1)m \times (t+1)|In(r)|$ matrix over field $\mathbb{F}_q$. The complexity of calculating $rank(M_t)$ by Gaussian elimination is $O(D_{in}^2mt^3)$. 
\subsection{First decoding time}\label{sec:stoppingTime} As discussed in Section~\ref{subsubsec:decoding}, we define the \emph{first decoding time} $T_r$ for sink $r$, $1\leq r\leq d$, as the time it takes $r$ to achieve decodability for the first time. We had called this variable the \emph{stopping time} in \cite{guo2011localized}. Also recall that when all sinks are able to decode, at each sink $r$, $T_r$ can be smaller than $L_r$, the degree of the global encoding kernel matrix $F_r(z)$. Denote by $T_N$ the time it takes for all sinks in the network to successfully decode, i.e., $T_N=\max\{T_1,\ldots,T_d\}$; then $T_N$ is also equal to $\max\{L_1,\ldots,L_d\}$. The following corollary holds: \begin{corollary} For any given $0<\varepsilon<1$, there exists a $T_0>0$ such that for any $t \geq T_0$, ARCNC solves the multicast problem with probability at least $1-\varepsilon$, i.e., $P(T_N > t) < \varepsilon$.\label{crlry2} \end{corollary} \proof Let $T_0 = \left\lceil \log_q d-\log_q(1-\sqrt[\eta]{1-\varepsilon}) \right\rceil -1 $; then $T_0 + 1 \geq \lceil \log_q d \rceil$ since $0 < \varepsilon <1$, and $(1-d/q^{T_0+1})^\eta>1-\varepsilon$. Applying Theorem~\ref{thm:prob} gives $P(T_N > t) \leq P(T_N >T_0) \leq 1-(1-d/q^{T_0+1})^{\eta}< \varepsilon$ for any $t\geq T_0$. \endproof Since $\Pr\{ \cup_{i=t}^{\infty}[T_N \le i]\}=1-\Pr\{\cap _{i=t}^{\infty}[T_N>i]\} >1-\varepsilon$ for every $0<\varepsilon<1$, Corollary~\ref{crlry2} shows that ARCNC converges and stops in a finite amount of time with probability 1 for a multicast connection.
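As a numerical illustration, with hypothetical parameter values of our own choosing: for $q=2$, $d=4$, $\eta=20$, and $\varepsilon=0.01$, the proof above gives $T_0 = \left\lceil \log_2 4 - \log_2(1-\sqrt[20]{0.99})\right\rceil - 1 = \lceil 12.96 \rceil - 1 = 12$; indeed, $(1-4/2^{13})^{20}\approx 0.990 > 1-\varepsilon$, so after 12 coding rounds all 4 sinks can decode with probability at least $0.99$.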
Another relevant measure of the performance of ARCNC is the \emph{average first decoding time}, $T_{\text{avg}}=\frac{1}{d}\sum_{r=1}^{d}T_r$. Observe that $E[T_{\text{avg}}] \leq E[T_N]$, where \begin{align*} E[T_N] & = \sum_{t=1}^{\lceil \log_q d\rceil -1}P(T_N\geq t) + \sum_{t= \lceil \log_q d\rceil}^{\infty}{P(T_N\geq t)}\\ & \le \lceil \log_q d\rceil -1 + \sum_{t=\lceil \log_q d\rceil}^{\infty}[1-(1-\frac{d}{q^t})^{\eta}] \\ & = \lceil \log_q d\rceil -1 + \sum_{k=1}^{\eta}(-1)^{k-1}{\eta \choose k} \frac{d^k}{q^{\lceil \log_q d\rceil k}-1}\,. \end{align*} When $q$ is large, the summation term approximates $1-(1-d/q)^\eta$ by the binomial expansion. Hence as $q$ increases, the second term above decreases to $0$, while the first term $\lceil \log_q d\rceil -1$ equals $0$ once $q>d$. $E[T_{\text{avg}}]$ is therefore upper-bounded by a term converging to 0; it is also lower bounded by 0 because at least one round of random coding is required. Therefore, $E[T_{\text{avg}}]$ converges to $0$ as $q$ increases. In other words, if the field size is large enough, ARCNC reduces in effect to RLNC. Intuitively, the average first decoding time of ARCNC depends on the network topology. In RLNC, all nodes are required to code in finite fields of the same size; thus the effective field size is determined by the worst-case sink. This scenario corresponds to having all nodes stop at $T_N$ in ARCNC. ARCNC enables each node to decide locally what constraint length to use, depending on side information from downstream nodes. Since $E[T_{\text{avg}}]\leq E[T_N]$, some nodes may be able to decode before $T_N$. The corresponding effective field size is therefore expected to be smaller than in RLNC. Two possible consequences of a smaller effective field size are reduced decoding delay and reduced memory requirements. In Section~\ref{sec:examples}, we confirm through simulations that such gains can be attained by ARCNC. \subsection{Complexity} To study the computation complexity of ARCNC, first observe that, once the adaptation process terminates, the computation needed for the ensuing code is no more than that of a regular CNC. In fact, the expected computation complexity is proportional to the average code length of ARCNC. We therefore omit the details of the complexity analysis of regular CNC here and refer interested readers to \cite{EF2004}. For the adaptation process, the encoding operations are described by Eq.~\eqref{eq:fet}. If the algorithm stops at time $T_N$, the number of operations in the encoding steps is $O(D_{in}|\mathcal{E}|T_N^2m)$, where $D_{in}=\max_{v\in\mathcal{V}}|In(v)|$. To determine decodability at a sink $r$, we check if the rank of $F_r(z)$ is $m$. A straightforward approach is to check whether its determinant is a non-zero polynomial. Alternatively, Gaussian elimination could be applied. At time $t$, because $F_r(z)$ is an $m \times |In(r)|$ matrix and each entry is a polynomial with degree $t$, the complexity of checking whether $F_r(z)$ is full rank is $O(D_{in}^22^mmt^2)$. Instead of computing the determinant or using Gaussian elimination directly, we propose to check the conditions given in Section~\ref{sec:algAcyclic}. For each sink $r$, at time $t$, determining $rank\left({\begin{array}{*{20}c} F_0 & F_1 & \cdots & F_{t} \end{array}}\right)$ requires $O(D_{in}^2mt^2)$ operations. If the first test passes, we calculate $rank(M_{t})$ and $rank(M_{t-1})$ next. Observe that $rank(M_{t-1})$ was computed during the last iteration. $M_t$ is a $(t+1)m \times (t+1)|In(r)|$ matrix over field $\mathbb{F}_q$. The complexity of calculating $rank(M_t)$ by Gaussian elimination is $O(D_{in}^2mt^3)$. The process of checking decodability is performed during the adaptation process only, hence the computation complexity here can be amortized over time after the coding coefficients are determined. In addition, as decoding occurs symbol-by-symbol, the adaptation process itself does not impose any additional delays.
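The first rank test is simple to implement. The sketch below is a minimal illustration for $q=2$: it computes the rank of a binary matrix by Gaussian elimination over $\mathbb{F}_2$ and applies the test $rank(F_0 \ F_1 \ \cdots \ F_t)=m$; the $M_t$ bookkeeping of Section~\ref{sec:algAcyclic} is omitted, and the data layout (lists of 0/1 rows) is our own convention.
\begin{verbatim}
def rank_gf2(rows):
    # rank of a 0/1 matrix over GF(2), by Gaussian elimination
    rows = [row[:] for row in rows]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows))
                      if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def passes_first_test(F_coeffs, m):
    # F_coeffs = [F_0, ..., F_t]; each F_i is an m x |In(r)| 0/1 matrix
    concatenated = [sum((F[i] for F in F_coeffs), []) for i in range(m)]
    return rank_gf2(concatenated) == m
\end{verbatim}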
\subsection{Memory}\label{sec:memory} To measure the amount of memory required by ARCNC, first recall from Section~\ref{sec:basicDefs} that at each node $v$, the global encoding kernel matrix $F_v(z)$, the local encoding kernel matrix $K_v(z)$, and past data $y_{e'}(z)$ on incoming arcs $e'\in In(v)$ need to be stored in memory. $K_v(z)$ and $y_{e'}(z)$ should always be saved because together they generate new data symbols to transmit (see Eqs.~\eqref{eq:yet_k} and \eqref{eq:yet}). $F_v(z)$ should be saved during the code construction process at intermediate nodes, and always at sinks, since they are needed for decoding. Let us consider individually the three contributors to memory use. Firstly, recall from Section~\ref{sec:basicDefs} that $F_v(z)$ can be viewed as a polynomial in $z$. When all sinks are able to decode, at node $v$, $F_v(z)$ has degree $L_v$, with coefficients from $\mathbb{F}_q^{m\times In(v)}$. The total amount of memory needed for $F_v(z)$ is therefore proportional to $\lceil\log_2 q\rceil m In(v) (L_v+1)$. Secondly, from Eq.~\eqref{eq:fet}, we see that the length of a local encoding kernel polynomial $k_{e',e}(z)$ should be equal to or smaller than that of $f_e(z)$. Thus, the length of $K_v(z)$ should also be equal to or smaller than that of $F_v(z)$. The coefficients of $K_v(z)$ are elements of $\mathbb{F}_q^{In(v)\times Out(v)}$. Hence, the amount of memory needed for $K_v(z)$ is proportional to $\lceil\log_2 q\rceil Out(v) In(v) (L_v+1)$. Lastly, a direct comparison between Eqs.~\eqref{eq:fet} and \eqref{eq:yet} shows that the memory needed for $y_{e'}(z)$, $e'\in In(v)$, is the same as that needed for $F_v(z)$. In practical uses of network coding, data can be transmitted in packets, where symbols are concatenated and operated upon in parallel. Packets can be very long. Nonetheless, the exact packet size is irrelevant for comparing memory use between different network codes, since all comparisons are naturally normalized to packet lengths. Observe that $m$ is the number of symbols in the source message, determined by the min-cut of the multicast connection, independent of the network code used. Similarly, $In(v)$ and $Out(v)$ are attributes inherent to the network topology. To compare the memory use of different network codes, we can omit these terms, and define the average memory use of ARCNC by the following common factor: \begin{align} W_{\text{avg}} \triangleq \frac{\lceil\log_2 q\rceil}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}(L_v+1)\,.\label{eq:Wavg} \end{align} \noindent In RLNC, $L_v=0$, and the expression simplifies to $\lceil\log_2 q\rceil$, which is the amount of memory needed for a single finite field element.
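As a quick numerical illustration with hypothetical values: in a network with $|\mathcal{V}|=4$ nodes, field size $q=4$, and final code lengths $L_v = 0, 1, 1, 2$, Eq.~\eqref{eq:Wavg} gives $W_{\text{avg}} = \frac{\lceil\log_2 4\rceil}{4}\left[(0+1)+(1+1)+(1+1)+(2+1)\right] = \frac{2}{4}\cdot 8 = 4$ bits per kernel entry, whereas an RLNC over $\mathbb{F}_{q_R}$ costs $\lceil\log_2 q_R\rceil$ bits.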
One point to keep in mind when measuring memory use is that even after a sink achieves decodability, its code length can still increase, as long as at least one of its ancestors has not stopped increasing code length. We say a non-source node $v$ is \emph{related} to a sink $r$ if $v$ is an ancestor of $r$, or if $v$ shares an ancestor, other than the source, with $r$. Hence, $L_r$ is dependent on all nodes related to $r$. \subsection{Success probability} Discussions in \cite{HMK2006, LYC2003, KM2003} state that in a network with delays, ANC gives rise to random processes which can be written algebraically in terms of a delay variable $z$. Thus, a convolutional code can naturally evolve from message propagation and linear encoding. ANC in the delay-free case is therefore equivalent to CNC with constraint length 1. Similarly, using a CNC with constraint length $l>1$ on a delay-free network is equivalent to performing ANC on the same network, but with $l-1$ self-loops attached to each encoding node, where the self-loops carry $1, \ldots, l-1$ units of delay, represented by $z, \ldots, z^{l-1}$, respectively. \begin{figure}[t!] \centering \includegraphics[width=3.5in]{./graphics/ANC_CNC.eps} \caption{A convolutional code resulting from self-loops on a network with transmission delays. (a) CNC in a delay-free network. Data transmitted on incoming edges are $y_{e'}(z)$ and $y_{e''}(z)$ respectively. The local encoding kernels are given by $K_{\text{CNC}}(z)$. (b) Equivalent ANC in a network with delays. The given self-loop carries a single delay $z$. Incoming data symbols are $y_{e',t}$ and $y_{e'',t}$ at time $t$. The ANC coding coefficients are given by the matrix $K_{\text{ANC}}$.}\label{fig:ANC_CNC} \end{figure} For example, in Fig.~\ref{fig:ANC_CNC}, we show a node with two incoming edges. Let the data symbol transmitted on edge $\dot{e}$ at time $t$ be $y_{\dot{e},t}$. A CNC with length $l=2$ is used in (a), assuming that transmissions are delay-free. The local encoding kernel matrix $K_{\text{CNC}}(z)$ contains two polynomials, $k_{e',e_1}(z) = k_{e',e_1,0}+k_{e',e_1,1}z$ and $k_{e'',e_1}(z) = k_{e'',e_1,0}+k_{e'',e_1,1}z$. According to Eqs.~\eqref{eq:yet_k} and \eqref{eq:yet}, the data symbol transmitted on $e_1$ at time $t$ is \begin{align} y_{e_1,t} & = \sum_{\dot{e}\in\{e',e''\}} y_{\dot{e},t}k_{\dot{e},e_1,0}+y_{\dot{e},t-1}k_{\dot{e},e_1,1}\,. \label{ye1tCNC} \end{align} In (b), the equivalent ANC is shown. A single loop with a transmission delay of $z$ has been added, and the local encoding kernel matrix $K_{\text{ANC}}=(k_{\dot{e},e})_{\dot{e}\in In(v),e\in Out(v)}$ is constructed from the coding coefficients in (a). The first column of $K_{\text{ANC}}$ represents encoding coefficients from incoming edges $e', e'', e'''$ to the outgoing edge $e_2$, and the second column represents encoding coefficients from incoming edges $e', e'', e'''$ to the outgoing edge $e'''$. Using matrix notation, the output data symbols from $v$ are $(y_{e_2,t} \quad y_{e''',t}) = (y_{e',t} \quad y_{e'',t} \quad y_{e''',t-1})K_{\text{ANC}} $, i.e., \begin{align} y_{e''',t} & = y_{e',t}k_{e',e'''} +y_{e'',t}k_{e'',e'''}+y_{e''',t-1}\cdot 0\notag \\ & = y_{e',t}k_{e',e_1,1} +y_{e'',t}k_{e'',e_1,1}\notag \\ y_{e_2,t}& = y_{e',t}k_{e',e_2}+y_{e'',t}k_{e'',e_2}+y_{e''',t-1}k_{e''',e_2} \notag \\ & = y_{e',t}k_{e',e_1,0}+y_{e'',t}k_{e'',e_1,0}+y_{e''',t-1} \notag \\ & = \sum_{\dot{e}\in\{e',e''\}} y_{\dot{e},t}k_{\dot{e},e_1,0}+y_{\dot{e},t-1}k_{\dot{e},e_1,1} \label{ye2tANC} \end{align} Clearly $y_{e_1,t}$ is equal to $y_{e_2,t}$; the sketch at the end of this subsection verifies this equality numerically. ARCNC therefore falls into the framework given by Ho et~al. \cite{HMK2006}, in the sense that the convolution process either arises naturally from cycles with delays, or can be considered as computed over self-loops appended to acyclic networks. Applying the analysis from \cite{HMK2006}, we have the following theorem. \begin{theorem}\label{thm:prob} For multicast over a general network with $d$ sinks, the ARCNC algorithm over $\mathbb{F}_q$ can achieve a success probability of at least $(1-d/q^{t+1})^{\eta}$ at time $t$, if $q^{t+1}>d$, where $\eta$ is the number of links with random coefficients. \end{theorem} \begin{proof} At node $v$, $k_{e',e}(z)$ at time $t$ is a polynomial with maximal degree $t$, i.e., $k_{e',e}(z)=k_{e',e,0}+k_{e',e,1}z+\cdots+k_{e',e,t}z^{t}$, where each $k_{e',e,i}$ is randomly chosen over $\mathbb{F}_q$. If we group the coefficients, the vector $k_{e',e}=\{k_{e',e,0},k_{e',e,1},\cdots,k_{e',e,t}\}$ is of length $t+1$, and corresponds to a random element over the extension field $\mathbb{F}_{q^{t+1}}$. Using the result in \cite{HMK2006}, we conclude that the success probability of ARCNC at time $t$ is at least $(1-d/q^{t+1})^{\eta}$, as long as $q^{t+1}>d$. \end{proof} We could similarly consider the analysis done by Balli et al. \cite{BYZ2009}, which states that the success probability is at least $(1-d/(q-1))^{|J|+1}$, where $|J|$ is the number of encoding nodes, to show that a tighter lower bound can be given on the success probability of ARCNC when $q^{t+1}>d$.
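The equivalence between Eqs.~\eqref{ye1tCNC} and \eqref{ye2tANC} can be checked with a short simulation. The following minimal sketch (edge names and helper functions are our own; arithmetic is over $\mathbb{F}_2$) draws random length-2 kernels and input streams, runs both encoders, and asserts that they emit identical symbols.
\begin{verbatim}
import random

q, T = 2, 8
k = {(s, i): random.randrange(q)          # k_{e',e1,i}, k_{e'',e1,i}
     for s in ("ep", "epp") for i in (0, 1)}
y_in = {s: [random.randrange(q) for _ in range(T)]
        for s in ("ep", "epp")}           # y_{e',t} and y_{e'',t}

def at(s, t):                             # y_{s,t}, zero for t < 0
    return y_in[s][t] if t >= 0 else 0

# CNC with constraint length 2, Eq. (ye1tCNC)
y_cnc = [sum(at(s, t) * k[(s, 0)] + at(s, t - 1) * k[(s, 1)]
             for s in ("ep", "epp")) % q for t in range(T)]

# equivalent ANC with one unit-delay self-loop, Eq. (ye2tANC)
y_anc, y_loop_prev = [], 0                # y_loop_prev is y_{e''',t-1}
for t in range(T):
    y_loop = sum(at(s, t) * k[(s, 1)] for s in ("ep", "epp")) % q
    y_anc.append((sum(at(s, t) * k[(s, 0)] for s in ("ep", "epp"))
                  + y_loop_prev) % q)
    y_loop_prev = y_loop

assert y_cnc == y_anc                     # identical output symbols
\end{verbatim}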
\subsection{Combination Network} \label{example:combination} An ${n\choose m}$ combination network contains a single source $s$ that multicasts $m$ independent messages over $\mathbb{F}_q$ through $n$ intermediate nodes to $d$ sinks \cite{NWT2005}; each sink is connected to a distinct set of $m$ intermediate nodes, and $d={n\choose m}$. Fig.~\ref{fig:combination} illustrates the topology of a combination network. Assuming unit capacity links, the min-cut to each sink is $m$. It can be shown that, in combination networks, routing is insufficient and network coding is needed to achieve the multicast capacity $m$. Here coding is performed only at $s$, since each intermediate node has only $s$ as a parent node; an intermediate node simply relays to its children data from $s$. For a general ${n \choose m}$ combination network, we showed in \cite{guo2011localized} that the expected average first decoding time can be significantly improved by ARCNC when compared to the deterministic BNC algorithm. We restate the results here, with details of the derivations included. \begin{figure}[t!] \centering \includegraphics[width=5.5cm]{./graphics/nrbn.eps}\\ \caption{A combination network}\label{fig:combination} \end{figure} At time $t-1$, for a sink $r$ that has not satisfied the decodability conditions, $F_r(z)$ is a size $m\times m$ matrix of polynomials of degree $t-1$. $F_r(z)$ has full rank with probability \begin{align} Q&=(q^{tm}-1)(q^{tm}-q^t)\cdots(q^{tm}-q^{t(m-1)})/q^{tm^2} \notag\\ &=(1-\frac{1}{q^{tm}})(1-\frac{1}{q^{t(m-1)}})\cdots(1-\frac{1}{q^t})\notag\\ &=\prod_{l=1}^m\left(1-\frac{1}{q^{tl}}\right)\,. \label{eq:11suc} \end{align} Hence, the probability that sink $r$ decodes after time $t-1$ is \begin{align} P(T_r\geq t) & = 1- Q = 1- \prod_{l=1}^m\left(1-\frac{1}{q^{tl}}\right) \,, \quad t\geq 0. \label{eq:pt} \end{align} The expected first decoding time for sink node $r$ is therefore upper- and lower-bounded as follows. \begin{align} E[T_r] & = \sum_{t=1}^\infty tP(T_r = t) = \sum_{t=1}^\infty P(T_r\geq t) \notag\\ & = \sum_{t=1}^\infty \left(1-\prod_{l=1}^m\left(1-\frac{1}{q^{tl}}\right)\right)\\ & < \sum_{t=1}^\infty \left(1-\left(1-\frac{1}{q^{t}}\right)^m\right)\label{eq:13mean}\\ & = \sum_{t=1}^\infty \left(1-\sum_{k=0}^m(-1)^k{m\choose k}\left(\frac{1}{q^t}\right)^k\right)\label{eq:14mean}\\ & = \sum_{k=1}^m(-1)^{k-1}{m\choose k}\left(\sum_{t=1}^{\infty}\frac{1}{q^{tk}}\right)\\ & = \sum_{k=1}^m (-1)^{k-1}{m\choose k}\frac{1}{q^k-1} \triangleq ET_{UB}(m,q) \,. \label{eq:ETUB} \end{align} \begin{align} E[T_r] & = \sum_{t=1}^\infty\left(1-\prod_{l=1}^m\left(1-\frac{1}{q^{tl}}\right)\right) \\ & > \sum_{t=1}^\infty \left(1-\left(1-\frac{1}{q^{tm}}\right)^m\right)\\ & = \sum_{k=1}^m (-1)^{k-1}{m\choose k}\frac{1}{q^{km}-1} \triangleq ET_{LB}(m,q)\,. \label{eq:ETLB} \end{align} Recall from Section~\ref{sec:stoppingTime} that the expected average first decoding time is $E[T_{\text{avg}}] = E\left[ \frac{1}{d}\sum_{r=1}^{d} T_r \right]$. In a combination network, by symmetry, $ E[T_{\text{avg}}]$ is equal to $E[T_r]$. Consequently, $E[T_{\text{avg}}]$ is upper-bounded by $ET_{UB}$, defined by Eq.~\eqref{eq:ETUB}. $ET_{UB}$ is a function of $m$ and $q$ only, independent of $n$. For example, if $m=2$, $q=2$, $ET_{UB} = \frac53$; a numerical sketch evaluating these bounds appears at the end of this subsection. If $m$ is fixed, but $n$ increases, $ E[T_{\text{avg}}]$ does not change. In addition, if $q$ is large, $ET_{UB}$ approaches 0, consistent with the general analysis in \cite{guo2011localized}. Next, we want to bound the variance of $T_{\text{avg}}$, i.e., \begin{align} \hspace{-5pt} var[T_{\text{avg}}] & = E[T_{\text{avg}}^2] - E^2[T_{\text{avg}}] \notag \\[4pt] & = E\left[\left(\frac{1}{d}\sum_{r=1}^d T_r\right)^2\right] - E^2[T_r] \notag\\[4pt] & = \frac{E[T_r^2]}{d} + \left(\sum_{r=1}^{d}\sum_{r'\neq r}\frac{E(T_rT_{r'})}{d^2}\right) - E^2[T_r]\,. \label{eq:varTavg} \end{align} We upper-bound the terms above one by one.
First, \begin{align} E[T_r^2] & = \sum_{t=1}^\infty t^2P(T_r=t) \\ & = \sum_{t=1}^\infty (t^2-(t-1)^2)P(T_r \ge t) \\ & = \sum_{t=1}^\infty (2t-1)P(T_r\geq t) \label{eq:lowdim}\\ & < \sum_{t=1}^\infty (2t+1) \left(1-\left(1-\frac{1}{q^{t}}\right)^m\right) \label{eq:upbound11}\\ & = ET_{UB} + 2 \sum_{k=1}^m (-1)^{k-1}{m\choose k}\sum_{t=1}^\infty \frac{t}{q^{tk}}\label{eq:apple}\\ & = ET_{UB} + 2 \sum_{k=1}^m (-1)^{k-1}{m\choose k}\frac{q^k}{(q^k-1)^2} \label{eq:orange}\\ & \triangleq (ET^2)_{UB} \end{align} Eq.~\eqref{eq:lowdim} follows by summation by parts. Eq.~\eqref{eq:upbound11} is obtained by replacing the terms in Eq.~\eqref{eq:lowdim} with the upper bound implied by Eq.~\eqref{eq:11suc}. We then expand Eq.~\eqref{eq:upbound11} binomially, substituting the upper bound in Eq.~\eqref{eq:ETUB} for the first part and $\sum_{t=1}^\infty t/q^{tk} = q^k/(q^k-1)^2$ for the second. Next, let $\rho_\lambda=E[T_rT_{r'}]$ if sinks $r$ and $r'$ share $\lambda$ parents, $0 \leq \lambda < m$. Thus, $\rho_0 = E^2[T_r]$. When $\lambda \neq 0$, given that sink $r$ succeeds in decoding at time $t_1$, the probability that sink $r'$ has full rank before $t_2$ is lower-bounded as follows, \begin{align} P(T_{r'} < t_2|T_r=t_1) > \prod_{l=1}^{m-\lambda}\left(1-\frac{1}{q^{t_2l}}\right) >\left(1-\frac{1}{q^{t_2}}\right)^{m-\lambda}\,. \end{align} Consequently, if $\lambda\neq0$, \begin{align} \rho_\lambda & = E[T_rT_{r'}] \\ & = \sum_{t_1=1}^\infty \sum_{t_2=1}^\infty t_1 t_2 P(T_r = t_1)P(T_{r'}=t_2|T_r=t_1)\\ & = \sum_{t_1=1}^\infty t_1 P(T_r = t_1) \sum_{t_2=1}^\infty P(T_{r'}\geq t_2|T_r=t_1) \\ & < \sum_{t_1=1}^\infty t_1 P(T_r = t_1) \sum_{t_2=1}^\infty \left(1-\left(1-\frac{1}{q^{t_2}}\right)^{m-\lambda}\right)\\ & < \sum_{t_1=1}^\infty t_1 P(T_r = t_1) \sum_{k=1}^{m-\lambda}(-1)^{k-1}{ m-\lambda \choose k } \frac{1}{q^k-1}\\ & < ET_{UB}\left(\sum_{k=1}^{m-\lambda}(-1)^{k-1}{m-\lambda\choose k}\frac{1}{q^k-1}\right)\\ & \triangleq \rho_{\lambda,UB} \end{align} \noindent Let $\rho_{UB} = \max\{\rho_{1,UB},\ldots,\rho_{m-1,UB}\}$. For a sink $r$, let the number of sinks that share at least one parent with $r$ be $\Delta$; then $\Delta = d-1-{n-m\choose m}$. Thus, the middle term in Eq.~\eqref{eq:varTavg} is bounded by $\frac{\Delta}{d}\rho_{UB}+\frac{d-1-\Delta}{d}E^2[T_r]$ and \begin{align} var[T_{\text{avg}}] & < \frac{(ET^2)_{UB}}{d} + \frac{\Delta}{d}\rho_{UB} - \left(\frac{\Delta+1}{d}\right)ET^2_{LB} \label{eq:varTvalue}\,. \end{align} Depending on the relative values of $n$ and $m$, we have the following three cases. \begin{itemize} \item $n>2m$, then ${n-m \choose m} = \frac{(n-m)!}{m!(n-2m)!}$, and \begin{align} \frac{\Delta}{d} & = 1-\frac1d -\frac{{n-m \choose m}}{d} \\ & = 1-\frac1d -\frac{(n-m)!(n-m)!}{n!(n-2m)!} \\ & = 1-\frac1d-\frac{(n-m)(n-m-1)\ldots(n-2m+1)}{n(n-1)\ldots(n-m+1)}\\ & = 1-\frac1d-\left(\frac{n-m}{n}\right)\ldots \left(\frac{n-2m+1}{n-m+1}\right)\\ & < 1-\frac1d-\left(\frac{n-2m+1}{n-m+1}\right)^m \label{eq:Delta_d}\,. \end{align} Observe from Eqs.~\eqref{eq:ETUB} and \eqref{eq:ETLB} that all of the upper-bound and lower-bound constants are functions of $m$ and $q$ only. If $m$ and $q$ are fixed and $n$ increases, in Eq.~\eqref{eq:Delta_d}, both $\frac{\Delta}{d}$ and $\frac{\Delta+1}{d}$ approach 0. Therefore, $var[T_{\text{avg}}]$ diminishes to 0. Combining this result with the upper-bound $ET_{UB}$, we can conclude that, when $m$ is fixed, even if more intermediate nodes are added, a large proportion of the sink nodes can still decode within a small number of coding rounds.
\item $n=2m$, then ${n-m \choose m} = 1$, $\frac{\Delta}{d} = 1-\frac{2}{d}$, and \begin{align} \hspace{-10pt} var[T_{\text{avg}}] & <\frac{(ET^2)_{UB}}{d} + \left(1-\frac{2}{d}\right)\rho_{UB}- \left(1-\frac{1}{d}\right)ET_{LB}^2 \notag\\ & <\frac{(ET^2)_{UB}}{d} + \rho_{UB}- \left(1-\frac{1}{d}\right)ET_{LB}^2 \end{align} Here $m$ and $n$ are comparable in scale, and the bounds depend on the exact values of $ET^2_{UB}$, $\rho_{UB}$ and $ET_{UB}$. We will illustrate through simulation in Section~\ref{subsec:simuCombination} that in this case, $T_{\text{avg}}$ also converges to 0. \item $n<2m$, then ${n-m \choose m} = 0$, $\frac{\Delta}{d} = 1-\frac{1}{d}$, and \begin{align} var[T_{\text{avg}}] & < \frac{(ET^2)_{UB}}{d} + \rho_{UB}- ET_{LB}^2, \end{align} \noindent similar to the second case above. \end{itemize} Comparing with the deterministic BNC by Xiao et~al. \cite{xiao2008binary}, we can see that, for a large combination network, with fixed $q$ and $m$, ARCNC achieves a much lower first decoding time. In BNC, the block length is required to be at least $p\geq n-m$; the decoding delay increases at least linearly with $n$, whereas in ARCNC, the expected average first decoding time is independent of the value of $n$. On the other hand, with RLNC \cite{HMK2006}, the multicast capacity can be achieved with probability at least $(1-d/q)^n$. The exponent $n$ is the number of links with random coefficients; since each intermediate node has the source as its single parent, coding is performed at the source only, and coded data are transmitted on the $n$ outgoing arcs from the source. When $q$ and $m$ are fixed, the success probability of RLNC decreases exponentially in $n$. Thus, an exponential number of trials is needed to find a successful RLNC. Equivalently, RLNC can use an increasingly large field size $q$ to maintain the same decoding probability. So far we have used ${n\choose m}$ combination networks explicitly to illustrate the operations and the decoding delay gains of ARCNC. It is important to note, however, that this is a very restricted family of networks, in which only the source is required to code, and each sink shares at least $1$ parent with ${n \choose m}-{n-m\choose m}-1$ other sinks. In terms of memory, if sink $r$ cannot decode, all nodes related to $r$ are required to increase their memory capacity. Recall from Subsection~\ref{sec:memory} that a non-source node $v$ is said to be related to a sink $r$ if $v$ is an ancestor of $r$, or if $v$ shares a non-source ancestor with $r$. As $n$ becomes larger, the number of nodes related to $r$ increases, especially if $m$ increases too. Thus, in combination networks, we do not see considerable gains in terms of memory overheads when compared with BNC, unless $m$ is small. In more general networks, however, when sinks do not share ancestors with as many other sinks, ARCNC can achieve gains in terms of memory overheads as well, in addition to decoding delay. As an example, we define a \emph{sparsified combination network} in Section~\ref{example:sparsecombnet}.
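The closed-form bounds above are easy to evaluate. The following minimal sketch (ours, not from \cite{guo2011localized}) computes $ET_{UB}(m,q)$ and $ET_{LB}(m,q)$ and reproduces the value $\frac53$ quoted above for $m=q=2$.
\begin{verbatim}
from math import comb

def et_ub(m, q):
    # Eq. (ETUB): upper bound on E[T_r]
    return sum((-1) ** (k - 1) * comb(m, k) / (q ** k - 1)
               for k in range(1, m + 1))

def et_lb(m, q):
    # Eq. (ETLB): lower bound on E[T_r]
    return sum((-1) ** (k - 1) * comb(m, k) / (q ** (k * m) - 1)
               for k in range(1, m + 1))

print(et_ub(2, 2))   # 1.666..., i.e. 5/3
print(et_lb(2, 2))   # 0.6; E[T_r] lies between the two
\end{verbatim}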
\subsection{d-distance Network} \label{example:dcosntraint} \begin{figure}[t!] \centering \includegraphics[width=7cm]{./graphics/dconstraint.eps}\\ \caption{A d-distance network}\label{fig:dconstraint} \end{figure} A d-distance network contains a single source $s$ that multicasts $d$ independent messages over $\mathbb{F}_q$ through $n$ intermediate nodes to $n-d+1$ sinks; each sink is connected to the $d$ intermediate nodes within distance $d$ of it. Figure~\ref{fig:dconstraint} illustrates the topology of a d-distance network. Assuming unit capacity links, the min-cut to each sink is $d$. It can also be shown that in d-distance networks, routing is insufficient and network coding is needed to achieve the multicast capacity $d$. Similar to combination networks, here network coding is performed only at the source. Similarly, at time $t-1$, for a sink $r$ that has not satisfied the decodability conditions, $F_r(z)$ is a size $d \times d$ matrix of polynomials of degree $t-1$. $F_r(z)$ has full rank with probability \begin{align*} P_{suc} & = \frac{(q^{td} - 1)(q^{td} - q^t) \cdots (q^{td} - q^{t(d-1)})}{q^{td^2}} \\ & = \left(1 - \frac{1}{q^{td}}\right)\left(1 - \frac{1}{q^{t(d-1)}}\right) \cdots \left(1 - \frac{1}{q^t}\right) \\ & = \prod_{i=1}^d \left(1 - \frac{1}{q^{ti}}\right)\,. \end{align*} Hence, the probability that sink $r$ decodes after time $t-1$ is \begin{align*} P(T_r \ge t) = 1 - P_{suc} = 1 - \prod_{i=1}^d \left(1 - \frac{1}{q^{ti}}\right)\,. \end{align*} The expected first decoding time for sink node $r$ is therefore upper-bounded as follows: \begin{align*} E[T_r] & = \sum_{t=1}^\infty tP(T_r = t) = \sum_{t=1}^\infty P(T_r \ge t) \\ & = \sum_{t=1}^\infty \left(1 - \prod_{i=1}^d \left(1 - \frac{1}{q^{ti}}\right)\right) \\ & < \sum_{t=1}^\infty \left(1 - \left(1 - \frac{1}{q^t}\right)^d\right) \\ & = \sum_{k=1}^d (-1)^{k-1} {d \choose k} \left(\sum_{t=1}^\infty \frac{1}{q^{tk}}\right) \\ & = \sum_{k=1}^d (-1)^{k-1} {d \choose k} \frac{1}{q^k - 1} = ET_{UB}(d,q)\,. \end{align*} Recall that the expected average first decoding time is $E[T_{\text{avg}}] = E\left[ \frac{1}{n-d+1}\sum_{r=1}^{n-d+1} T_r \right]$. Similarly, in a d-distance network, $ E[T_{\text{avg}}]$ is equal to $E[T_r]$ by symmetry. We now consider the variance of $T_{\text{avg}}$. Each intermediate node is a parent of at most $d$ sinks, so the number of sinks whose first decoding times are correlated with that of a given sink is bounded in terms of $d$. Following the same steps as in Eq.~\eqref{eq:varTavg}, \begin{align*} \mathrm{var}[T_{\text{avg}}] & = E[T_{\text{avg}}^2] - E^2[T_{\text{avg}}] \\ & = E\left[\left(\frac{1}{n-d+1}\sum_{r=1}^{n-d+1} T_r\right)^2\right] - E^2[T_r] \\ & = \frac{E[T_r^2]}{n-d+1} + \frac{1}{(n-d+1)^2}\sum_{r=1}^{n-d+1}\sum_{r' \ne r} E(T_r T_{r'}) - E^2[T_r] \\ & < \frac{E[T_r^2]}{n-d+1} + \frac{(n-d+1)\,d\,\rho_{\rm ub}}{(n-d+1)^2} + \frac{(n-d+1)(n-2d+1)E^2[T_r]}{(n-d+1)^2} - E^2[T_r] \\ & = \frac{E[T_r^2]}{n-d+1} + \frac{d\,\rho_{\rm ub}}{n-d+1} - \frac{d}{n-d+1}E^2[T_r] \\ & \to 0 \quad \text{as } n \to \infty \text{ with } d \text{ fixed}\,. \end{align*} Next, we consider the memory requirement, measured by the expected average memory use $E[W_{\text{avg}}] = \frac{\lceil\log_2 q\rceil}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}(E[L_v]+1)$. For ARCNC, \begin{align*} E[W_{\text{avg,ARCNC}} ] = \frac{\lceil \log_2 q \rceil}{|\mathcal{V}|}\sum_{v \in \mathcal{V}} ( E[L_v ] + 1)\,, \end{align*} which is on the order of $\lceil \log_2 q \rceil (E[T_{\text{avg}}]+1)$, a value that depends only on the field size $q$ and the distance constraint $d$. On the other hand, suppose RLNC is applied with field size $q_R$ for network code generation. There are $n$ links with random coefficients, so the success probability for decodability at all sinks is at least $(1 - \frac{1}{q_R})^n $.
To guarantee an overall success probability larger than $1 - \varepsilon $, we want $(1 - \frac{1}{q_R})^n > 1 - \varepsilon $, hence \begin{align*} E[W_{\text{avg,RLNC}} ] = \left\lceil \log _2 q_R \right\rceil > -\log_2 (1 - \sqrt[n]{1 - \varepsilon })\,. \end{align*} When $\varepsilon$ is small, $q_R$ needs to be quite large. Therefore, ARCNC offers large memory savings over RLNC in d-distance networks. \subsection{Shuttle Network}\label{example:shuttle} \begin{figure}[t!] \centering \includegraphics[width=5.7cm]{./graphics/zero_shuttlenet.eps}\\ \caption{The shuttle network. Each link has unit capacity. $s$ is the source; $r_1$ and $r_2$ are sinks, each with a min-cut of 2. Edges are directed and labeled as $e_i$, $1\leq i\leq 10$. Edge indices are assigned according to Section~\ref{sec:algCyclic}. An adjacent pair $(e',e)$ is labeled with a curved pointer if $e'\succeq e$.}\label{fig:shuttlenet} \end{figure} In this section, we illustrate the use of ARCNC in cyclic networks by applying it to a shuttle network, shown in Fig.~\ref{fig:shuttlenet}. We do not provide a formal definition for this network, since its topology is given explicitly by the figure. Source $s$ multicasts to sinks $r_1$ and $r_2$. Edges $e_i$, $1\leq i\leq 10$, are directed. The edge indices have been assigned according to Section~\ref{sec:algCyclic}. An adjacent pair $(e',e)$ is labeled with a curved pointer if $e'\succeq e$. There are three cycles in the network; the left cycle is formed by $e_3$, $e_5$, and $e_7$; the middle cycle is formed by $e_5$, $e_8$, $e_6$, and $e_9$; the right cycle is formed by $e_4$, $e_6$, and $e_{10}$. In this example, we use a field size $q=2$. At node $v$, the local encoding kernel matrix is $K_v(z)=(k_{e',e}(z))_{e'\in In(v), e\in Out(v)}=K_{v,0}+K_{v,1}z+K_{v,2}z^2+\ldots$; each local encoding kernel is a polynomial, $k_{e',e}(z) =k_{e',e,0}+k_{e',e,1}z+k_{e',e,2}z^2+\ldots$. At $s$, assume $f_{e_1}(z) = {1 \choose 0}$, and $f_{e_2}(z) = {0 \choose 1}$, i.e., the data symbols sent out from $s$ at time $t$ are $y_{e_1,t} = x_{1,t}$ and $y_{e_2,t} = x_{2,t}$, respectively. The source could also linearly combine source symbols before transmitting on outgoing edges. At $t=0$, we assign 0 to local encoding kernel coefficients $k_{e',e,0}$ if $e'\succeq e$, and choose $k_{e',e,0}$ uniformly at random from $\mathbb{F}_2$ otherwise. One possible assignment is given in Fig.~\ref{fig:shuttle0}. Here we circle $k_{e',e,0}$ if $e'\succeq e$. Since $q=2$, we set all other local encoding kernel coefficients in this example to 1. The data messages transmitted on each edge at $t=0$ are then derived and labeled on the edge. Observe that $r_1$ receives $x_{1,0}$ and $r_2$ receives $x_{2,0}$; neither is able to decode both source symbols. Hence no acknowledgment is sent in the network. \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/zero_shuttle0.eps}\\ \caption{An example of local encoding kernel matrices at $t=0$. For a node $v$, $K_v(z)=(k_{e',e}(z))_{e'\in In(v), e\in Out(v)}=K_{v,0}+K_{v,1}z+K_{v,2}z^2+\ldots$. For any adjacent pair $(e',e)$ where $e'\succeq e$, $k_{e',e,0}=0$. Each edge $e$ is labeled with the data symbol $y_{e,t}$ it carries, e.g., $y_{e_1,0}=x_{1,0}$.}\label{fig:shuttle0} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/zero_shuttle1.eps}\\ \caption{An example of local encoding kernel matrices at $t=1$.} \label{fig:shuttle1} \end{figure} At $t=1$, we proceed as in the acyclic case, randomly choosing coefficients $k_{e',e,1}$ from $\mathbb{F}_2$.
Since no acknowledgment has been sent by $r_1$ or $r_2$, all local encoding kernels increase in length by 1. One possible coding kernel coefficient assignment is given in Fig.~\ref{fig:shuttle1}. Both $v_1$ and $v_3$ have only one incoming edge and thus route instead of code, i.e., $K_{v_1}(z)=K_{v_3}(z)=(1 \quad 1)$. The other local encoding kernel matrices in this example are as follows \begin{align*} K_{r_1,1}(z) & = {k_{e_1,e_3,0} \choose k_{e_7,e_3,0}} + {k_{e_1,e_3,1} \choose k_{e_7,e_3,1}} z = {1\choose 0} + {1\choose 0}z\,,\\[2pt] K_{r_2,1}(z) & = {k_{e_2,e_4,0} \choose k_{e_{10},e_4,0}} + {k_{e_2,e_4,1} \choose k_{e_{10},e_4,1}} z = {1 \choose 0} + {0 \choose 1}z\,, \\[2pt] K_{v_4,1}(z) & = {k_{e_3,e_5,0} \choose k_{e_9,e_5,0}} + {k_{e_3,e_5,1} \choose k_{e_9,e_5,1}} z = {1 \choose 0}+{1 \choose 1}z\,,\\[2pt] K_{v_2,1}(z) & = {k_{e_4,e_6,0} \choose k_{e_8,e_6,0}} + {k_{e_4,e_6,1} \choose k_{e_8,e_6,1}} z = {0 \choose 1}+{1 \choose 0}z. \end{align*} Data symbols generated according to Eq.~\eqref{eq:yet} for this particular code are also labeled on the edges. For example, on edge $e_5 =(v_4,v_1)$, the data symbol transmitted at $t=1$ is \begin{align} y_{e_5,1} & = y_{e_3,0}k_{e_3,e_5,1}+ y_{e_3,1}k_{e_3,e_5,0} \notag \\ & \quad \quad + y_{e_9,0}k_{e_9,e_5,1}+ y_{e_9,1}k_{e_9,e_5,0} \label{eq:ye5} \\ & = x_{1,0}\cdot 1 + (x_{1,0}+x_{1,1})\cdot 1 + x_{2,0}\cdot 1 + y_{e_9,1}\cdot 0 \notag \\ & = x_{1,1} + x_{2,0} \notag \end{align} Observe that there are no logical contradictions in any of the three cycles. For example, in the middle cycle, on $e_5$, regardless of the value of $y_{e_9,1}$, the incoming data symbol at $t=1$, $y_{e_5,1}$, can be evaluated as in Eq.~\eqref{eq:ye5}. In other words, in evaluating the global encoding kernel coding coefficients according to Eqs.~\eqref{eq:fe} and \eqref{eq:fet}, even though $f_{e_9,1}$ is unknown, $f_{e_5,1}$ can still be computed since $k_{e_9,e_5,0}=0$. Also from Fig.~\ref{fig:shuttle1}, observe that both sinks can decode two source symbols at $t=1$: $r_1$ can decode $x_{1,1}$ and $x_{2,0}$, while $r_2$ can decode $x_{2,1}$ and $x_{1,0}$. Equivalently, we can compute the global encoding kernel matrices and check the decodability conditions given in Section~\ref{sec:algAcyclic}. We omit the details here, but interested readers can verify using Eq.~\eqref{eq:fe} that the global encoding matrices are $F_{r_1}(z)={1\,\,\, 1 \choose 0\,\,\, z}$, $F_{r_2}(z)={0\quad z \,\, \choose 1\,\,\, 1+z}$, and the decodability conditions are indeed satisfied. Acknowledgments are sent back by both sinks to their parents, code lengths stop increasing, and ARCNC terminates. The first decoding time for both sinks is therefore $T_{r_1}=T_{r_2}=1$. As we have discussed in Section~\ref{sec:algCyclic}, the deterministic edge indexing scheme proposed is a universal but heuristic way of assigning local encoding kernel coefficients at $t=0$. In this shuttle network example, observe from Fig.~\ref{fig:shuttle0} that in the middle cycle composed of edges $e_8$, $e_6$, $e_9$ and $e_5$, this scheme introduces two zero coefficients, i.e., $k_{e_8,e_6,0}=0$ and $k_{e_9,e_5,0}=0$. A better code would allow one of these two coefficients to be non-zero. For example, if $k_{e_8,e_6,0}=1$, then $K_{v_2}(z)={1 \choose 1}$ at $t=0$. It can be shown in this case that the data symbol transmitted on $e_{10}$ to $r_2$ is $y_{e_{10},0}=x_{1,0}+x_{2,0}$, enabling $r_2$ to decode both source symbols at time $t=0$.
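One way to double-check the example is to verify the non-zero-determinant test for the two global encoding kernel matrices above. In the following minimal sketch, polynomials over $\mathbb{F}_2$ are encoded as integers whose bit $i$ is the coefficient of $z^i$ (a convention of ours):
\begin{verbatim}
def mul_gf2(a, b):
    # carry-less multiplication of two F_2[z] polynomials
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    return res

def det2x2(m):
    # determinant of a 2x2 matrix over F_2[z]; minus equals plus
    (a, b), (c, d) = m
    return mul_gf2(a, d) ^ mul_gf2(b, c)

F_r1 = [[1, 1], [0, 2]]   # ((1, 1), (0, z)), with z encoded as 0b10
F_r2 = [[0, 2], [1, 3]]   # ((0, z), (1, 1 + z))
assert det2x2(F_r1) == 2 and det2x2(F_r2) == 2
# both determinants equal z, a non-zero polynomial, so both
# sinks pass the decodability test at t = 1
\end{verbatim}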
\subsection{Regular Sparsified Combination Network} \label{example:sparsecombnet} \begin{figure}[t!] \centering \includegraphics[width=6cm]{./graphics/extend_sparse.eps}\\ \caption{A regular sparsified combination network (inside the dotted frame) with an extension. }\label{fig:extendedsparse} \end{figure} We define a regular sparsified combination network as a modified combination network, with only \emph{consecutive} intermediate nodes connected to unique sink nodes. The framed component in Fig.~\ref{fig:extendedsparse} illustrates its structure. Source $s$ multicasts $m$ independent messages through $n$ intermediate nodes; each sink is connected to $m$ consecutive intermediate nodes, with $n-m+1$ sinks in total. This topology can be viewed as an abstraction of a content distribution network, where the source distributes data to intermediate servers, and clients are required to connect to the $m$ servers closest in distance to collect enough degrees of freedom to obtain the original data content. This network can be arbitrarily large in scale. In a regular sparsified combination network, the number of other sinks related to a sink $r$ is fixed at $2(m-1)$, and is even smaller if $r$'s parents are on the edge of the intermediate layer. Thus, the average first decoding time of sinks in a regular sparsified combination network behaves similarly to the fixed-$m$ case discussed in the previous subsection, approaching 0 as $n$ goes to infinity. On the other hand, since each intermediate node is now connected to a fixed number $m$ of sinks as well, when a sink $r$ fails to decode and requests an increment in code length, a maximum of $m+2(m-1)=3m-2$ related nodes are required to increase their memory capacity. To compute $W_\text{avg}$ using Eq.~\eqref{eq:Wavg}, observe that for a sink $r$, assuming there is the maximum number of $2(m-1)$ other sinks related to $r$, the cumulative probability distribution of $L_r$ is as follows \begin{align*} & \Pr\{L_r<t\} \\ & = \Pr \{T_{r-m+1} <t,\ldots, T_{r} <t, \ldots, T_{r+m-1} <t\}\\ & = \Pr \{T_{r-m+1} <t\} \Pr \{T_{r-m+2} <t|T_{r-m+1} <t\} \\ & \quad \ldots \Pr \{T_{r+m-1} <t|T_{r-m+1} <t,\ldots, T_{r+m-2} <t\} \\ & = Q \left(1-\frac{1}{q^t}\right)^{2m-2} \end{align*} \noindent where $Q$ is defined in Eq.~\eqref{eq:11suc}. Thus, using the derivation from Eq.~\eqref{eq:pt} to \eqref{eq:ETUB}, we have \begin{align*} E[L_r] & = \sum_{t=1}^\infty P(L_r\geq t) \notag\\ & = \sum_{t=1}^\infty \left(1-\prod_{i=1}^m\left(1-\frac{1}{q^{ti}}\right)\left(1-\frac{1}{q^{t}}\right)^{2m-2}\right)\\[4pt] & < ET_{UB}(3m-2, q) \end{align*} \noindent Similarly, for an intermediate node $v$, we can bound $E[L_v]$ by $ ET_{UB}(2m, q)$. Clearly $E[W_{\text{avg},\text{ARCNC}}]$ computed using Eq.~\eqref{eq:Wavg} is a function of $m$ and $q$ only, independent of $n$. In other words, in a regular sparsified combination network, since each sink has a fixed number of parents, and is related to a fixed number of other sinks through its parents, the average amount of memory use across the network is independent of $n$. On the other hand, assume a field size of $q_R$ is used for RLNC code generation. There is a single coding node in the network, with $n-m+1$ sinks. To guarantee an overall success probability larger than $1 - \varepsilon $, we need $(1 - \frac{n-m+1}{q_R-1 })^2 > 1 - \varepsilon $ from \cite{BYZ2009}.
Hence \begin{align*} E[W_{\text{avg,RLNC}} ] = \left\lceil {\log _2 q_R } \right\rceil > \log _2 \left(1 + \frac{n-m+1}{1-\sqrt{1 - \varepsilon }}\right), \end{align*} which can be very large if $n$ is large and $\varepsilon$ is small. Comparing the lower-bound on $E[W_{\text{avg},\text{RLNC}}]$ and the upper-bound on $E[W_{\text{avg},\text{ARCNC}}]$, we see that the gain of ARCNC over RLNC in terms of memory use grows without bound as $n$ increases, because $E[W_{\text{avg},\text{ARCNC}}]$ is bounded by a constant value. An intuitive generalization of this observation is to extend the regular sparsified combination network by attaching another arbitrary network off one of the sinks, as shown in Fig.~\ref{fig:extendedsparse}. Regardless of the depth of this extension from the sink $r$, as $n$ increases, memory overheads can be significantly reduced with ARCNC when compared with RLNC, since most of the sinks and intermediate nodes are unrelated to $r$, and are thus not affected by the decodability of sinks within the extension. \subsection{Umbrella Network} \label{example:umbrella} Figure~\ref{fig:umbrella} demonstrates the topology of an umbrella network. We do not provide a formal definition, since the figure is sufficiently self-explanatory. The top Layer 1 of an umbrella network contains $2\alpha$ nodes, where $\alpha$ is odd. In this particular example, $\alpha=9$. The center node $s_1$ in Layer 1 (shaded in the figure) is the parent of additional $3 \choose 2$ combination networks, which, with repetitions, form a ``handle'' to the umbrella. There is a total of $\beta$ layers, $\beta-1$ of which are on the handle. In this example, $\beta=3$. All nodes without children are counted as sink nodes. Each shaded node is the only source node for subsequent layers. Therefore, for the multicast to be successful, the shaded nodes need to achieve their min-cuts just as the sink nodes do. We thus count the shaded nodes as ``sinks'' within their corresponding layers. \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/umbrella_wshade.eps}\\ \caption{A three-layer umbrella network, $\beta =3$. There are a total of $2\alpha$ nodes in Layer 1, $\alpha=9$.}\label{fig:umbrella} \end{figure} First, observe that, in an umbrella network, network coding is necessary for the multicast problem to be solvable. To see why, consider the upper level of nodes in Layer 1. Any two nodes that share a common child need to carry independent data for the multicast to be successful under a routing scheme. If we represent the dependencies among these nodes by connecting those that share a common child, we obtain a circle similar to the one shown in Figure~\ref{fig:circle}, which corresponds to the umbrella network shown in Figure~\ref{fig:umbrella}. Since there is an odd number of nodes in the circle, there is one pair of neighboring nodes which do not carry independent information. In this particular example, the two left-most neighboring nodes do not carry independent information. The sink connected to these two nodes therefore cannot achieve its min-cut of 2. \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./graphics/circle.eps} \caption{Dependencies among nodes in the upper half of Layer 1 of the umbrella network shown in Figure~\ref{fig:umbrella}. Two nodes are connected if they share a common child. Two nodes of different colors receive independent information from the source.
Since $\alpha=9$ is odd, the two left-most neighboring nodes do not carry independent information, even though they share the same child node. }\label{fig:circle} \end{figure} A second observation from the umbrella network in Figure~\ref{fig:umbrella} is that only the source and the shaded nodes need to perform network coding operations; other nodes are either sinks or have a single parent only. With RLNC, such encoding nodes need to use a large field size to accommodate the sink nodes at the bottom of the handle. With ARCNC, on the other hand, smaller field sizes with short local encoding kernels can be used in the top layer, leading to smaller memory requirements for the sinks on top. The general idea here is similar to the work by Ho et al. in \cite{ho2010universal}: the effective field size should be chosen proportional to the depths of nodes. Recall from Section~\ref{sec:memory} that we measure the memory requirement of different network codes by the expected average memory use $E[W_{\text{avg}}] = \frac{\lceil\log_2 q\rceil}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}(E[L_v]+1)$; $q$ is the field size used by all nodes in the network; $\mathcal{V}$ is the set of all nodes; and $L_v$ is the degree of the global encoding kernel matrix at node $v\in \mathcal{V}$ when all sinks in the network achieve the decodability conditions for the first time. If RLNC is applied, $L_v=0$ for all nodes. Suppose $q_{\text{R}}$ is the field size for network code generation; then $E[W_{\text{avg},\text{RLNC}}]= \lceil \log_2 q_{R} \rceil$. To find a lower-bound for $E[W_{\text{avg},\text{RLNC}}]$, we consider the center sink $s_\beta$ in the bottom layer of the umbrella. The decoding probability at this sink lower-bounds the overall decoding probability of all sinks in the network. We know that the success probability for the center bottom sink is at least $(1-\frac{1}{q_R})^{2\beta}$, since the number of links with random coefficients associated with this sink is $2\beta$. To guarantee an overall success probability larger than $1-\epsilon$, we want $(1-\frac{1}{q_R})^{2\beta} > 1-\epsilon$, hence \begin{align} E[W_{\text{avg},\text{RLNC}}] = \lceil \log_2 q_{\text{R}} \rceil & > -\log_2(1-\sqrt[2\beta]{1-\epsilon})\,.\label{eq:WavgRLNC} \end{align} When $\epsilon$ is small, $q_R$ needs to be quite large. On the other hand, if ARCNC is applied, decodability can always be achieved, and most sinks in Layer 1 would be able to decode much sooner than the ones on the handle of the umbrella. Observe that $s_1$ in Figure~\ref{fig:umbrella} shares parents $p_l$ and $p_r$ with two other sinks $r_l$ and $r_r$ in Layer 1. The values of $L_v$ at $p_l$, $p_r$, $r_l$ and $r_r$ are therefore controlled by $L_{s_1}$. For all sinks in Layer 1 other than $r_l$, $s_1$ and $r_r$, we can apply Theorem~\ref{thm:prob} and bound $E[L_r]$ in a way similar to the derivation of $ET_{UB}$ in the combination network example. In applying Theorem~\ref{thm:prob}, we assume $r$ is the only sink in a multicast connection. Again, let $q$ be the field size for network code generation. Thus, for a sink $r$ in Layer 1, if $r$ is not $s_1$, $r_l$, or $r_r$, \begin{align} E[L_r] = E[T_r] & =\sum_{t=1}^\infty P(T_r\geq t) \\ & < \sum_{t=1}^\infty (1-(1-\frac{1}{q^{t}})^2) \label{eq:umbrella_Elv2}\\ & = \sum_{k=1}^2 (-1)^{k-1} {2 \choose k} \frac{1}{q^k-1} \\ & = \frac{2q+1}{q^2-1} \,. \label{eq:umbrella_Elv3} \end{align} This upper-bound also applies to all intermediate nodes in the upper half of Layer 1, except $p_l$ and $p_r$.
This is because a parent node stops increasing its code length only when all of its child nodes have achieved decodability. For $s$, $s_1$, $p_l$, $p_r$, $r_l$, $r_r$, as well as nodes in other layers of the umbrella, we can upper-bound $E[L_v]+1$ by $\lceil \log_q q_{\text{R}} \rceil$, which is in turn lower-bounded by a given $\epsilon$ according to Eq.~\eqref{eq:WavgRLNC}. In summary, the expected average memory use of ARCNC is upper-bounded as follows. Here $|\mathcal{V}|= 1+2\alpha + 6(\beta-1) = 2\alpha+6\beta-5$. We also ignore integer constraints, for simplicity and since we are interested in the asymptotic behavior of the system. \begin{align} & \,\, E[W_{\text{avg},\text{ARCNC}}] =\frac{\log_2 q}{|\mathcal{V}|}\sum_{v\in\mathcal{V}}(E[L_v]+1) \\[4pt] & < \frac{\log_2 q \left[(2\alpha-5)(\frac{2q+1}{q^2-1}+1) + 6\beta \log_q q_R \right]}{|\mathcal{V}|} \\[4pt] & = \log_2 q \left[ \frac{2\alpha-5}{|\mathcal{V}|}\frac{q^2+2q}{q^2-1}\right] + \log_2 q_R\frac{6\beta}{|\mathcal{V}|} \end{align} Comparing the lower-bound on $E[W_{\text{avg},\text{RLNC}}]$ and the upper-bound on $E[W_{\text{avg},\text{ARCNC}}]$, we see that the gain of ARCNC over RLNC in terms of memory use is lower-bounded as \begin{align} G & = \frac{E[W_{\text{avg},\text{RLNC}}]}{E[W_{\text{avg},\text{ARCNC}}]} \\ & > \frac{\log_2 q_R}{\log_2 q \left[ \frac{2\alpha-5}{|\mathcal{V}|}\frac{q^2+2q}{q^2-1}\right] + \log_2 q_R\frac{6\beta}{|\mathcal{V}|}} \\[4pt] & \simeq \left\{ \begin{array}{ll} 1\,, & 1\ll\alpha\ll\beta \\[12pt] \dfrac{q^2-1}{q^2+2q}\cdot\dfrac{\log_2 q_R}{\log_2 q}\,, & \alpha \gg\beta\gg 1 \\[6pt] \end{array} \right. \label{eq:umbrella}\,. \end{align} \noindent For a given $\epsilon$, $\lceil\log_2 q_R\rceil$ is finite; for $\epsilon$ to be very small, the field size $q_R$ should be large for RLNC. On the other hand, the field size $q$ can be a small fixed constant for ARCNC. Hence if the umbrella is very wide ($\alpha\gg\beta$), ARCNC offers large gains in terms of memory storage. An intuitive generalization of this statement is that if all nodes within the network are approximately equally distant from the source, then a static random linear code is approximately as good as variable-length convolutional codes; ARCNC still offers some gain in memory use, but not much. However, if only a few nodes are far from the source in terms of hop counts while all other nodes are close, then it is more advantageous in terms of memory to use an adaptive random convolutional network code. \section{Introduction}\label{sec:introduction} \input{introduction} \section{Adaptive Randomized Convolutional Network Codes}\label{sec:algorithm} \input{model} \input{alg_acyclic} \input{alg_cyclic} \section{Analysis}\label{sec:analysis} \input{analysis_prob} \input{analysis_T} \input{analysis_memory} \input{analysis_complexity} \section{Examples}\label{sec:examples} \input{example_combination} \input{example_sparsecombnet} \input{example_shuttle} \section{Simulations}\label{sec:simulations} \input{simu_combination} \input{simu_shuttle} \input{simu_acyclicRandom} \input{simu_cyclicRandom} \section{Conclusion}\label{sec:conclusion} \input{conclusion} \bibliographystyle{IEEEtranTCOM} \subsection{Basic Model and Definitions}\label{sec:basicDefs} We model a communication network as a finite directed multigraph, denoted by $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges. An edge represents a noiseless communication channel with unit capacity.
We consider the single-source multicast case, i.e., the source sends the same messages to all the sinks in the network. The source node is denoted by $s$, and the set of $d$ sink nodes is denoted by $R=\{r_1,\ldots,r_d\} \subset \mathcal{V}$. For every node $v \in \mathcal{V}$, the sets of incoming and outgoing channels to $v$ are $In(v)$ and $Out(v)$; let $In(s)$ be the empty set $\emptyset$. An ordered pair $(e',e)$ of edges is called an \emph{adjacent pair} when there exists a node $v$ with $e'\in In(v)$ and $e\in Out(v)$. Since edges are directed, we use the terms edge and arc interchangeably in this paper. The symbol alphabet is represented by a base field, $\mathbb{F}_q$. Assume $s$ generates a source \emph{message} per unit time, consisting of a fixed number of $m$ source \emph{symbols} represented by a size $m$ \emph{row} vector $x_t =(x_{1,t},x_{2,t},\cdots,x_{m,t})$, $x_{i,t}\in\mathbb{F}_q$. Time $t$ is indexed from 0, with the $(t+1)$-th message generated at time $t$. The source messages can be collectively represented by a power series $x(z)=\sum_{t\geq 0}{x_t z^{t}}$, where $x_t$ is the message generated at time $t$ and $z$ denotes a unit-time delay. $x(z)$ is therefore a row vector of polynomials from the ring $\mathbb{F}_q[z]$. Denote the data propagated over a channel $e$ by $y_e(z)=\sum_{t\geq 0}{y_{e,t} z^{t}}$, where $y_{e,t}\in \mathbb{F}_q$ is the data symbol sent on edge $e$ at time $t$. For edges connected to the source, let $y_e(z)$ be a linear function of the source messages, i.e., for all $e\in Out(s)$, $y_e(z) = x(z)f_e(z)$, where $f_e(z)=\sum_{t\geq0}f_{e,t}z^{t}$ is a size $m$ column vector of polynomials from $\mathbb{F}_q[z]$. For edges not connected directly to the source, let $y_e(z)$ be a linear function of data transmitted on incoming adjacent edges $e'$, i.e., for all $v \neq s$, $e\in Out(v)$, \begin{align} y_e(z) = \sum_{e' \in In(v)}{k_{e',e}(z)y_{e'}(z)}\,. \label{eq:yet_k} \end{align} Both $k_{e',e}(z)$ and $y_e(z)$ are in $\mathbb{F}_q[z]$. Define $k_{e',e}(z)= \sum_{t\geq0}k_{e', e, t}z^{t}$ as the \emph{local encoding kernel} over the adjacent pair $(e',e)$, where $k_{e',e,t}\in \mathbb{F}_q$. Thus, for all $e\in \mathcal{E}$, $y_e(z)$ is a linear function of the source messages, \begin{align} y_e(z)= x(z) f_e(z)\,, \label{eq:yet_f} \end{align} \noindent where $f_e(z)=\sum_{t\geq0}f_{e,t}z^{t}$ is the size $m$ \emph{column} vector defined as the \emph{global encoding kernel} over channel $e$, and for all $v \neq s$, $e\in Out(v)$, \begin{align} f_e(z) & = \sum_{e' \in In(v)}{k_{e',e}(z)f_{e'}(z)}\label{eq:fe}\,, \\ \text{i.e.\,,} \quad \quad f_{e,t} & = \sum_{e'\in In(v)}\left(\sum_{i=0}^t k_{e',e,i}f_{e',t- i}\right)\,. \label{eq:fet} \end{align} Note that $f_{e,t}\in \mathbb{F}_q^m$, and $f_{e'}(z), f_e(z)\in \mathbb{F}_q^m[z]$. Expanding Eq.~\eqref{eq:yet_k} term by term gives an explicit expression for each data symbol $y_{e,t}$ transmitted on edge $e$ at time $t$, in terms of source symbols and global encoding kernel coefficients: \begin{align} y_{e,t} & = \sum_{e'\in In(v)}\left(\sum_{i=0}^t k_{e',e,i}y_{e',t- i}\right) = \sum\limits_{i=0}^t{x_{t-i}f_{e,i}}\,. \label{eq:yet} \end{align} Each intermediate node $v\neq s$ is therefore required to store in its memory the received data symbols $y_{e',t-i}$ for values of $i$ at which $k_{e',e,i}$ is non-zero.
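As a concrete illustration of Eq.~\eqref{eq:yet}, the following minimal sketch performs one encoding step at a non-source node; it assumes a prime field, so that arithmetic modulo $q$ stands in for $\mathbb{F}_q$, and the dictionary-based data layout is our own convention.
\begin{verbatim}
def encode_symbol(t, incoming, kernels, q):
    # y_{e,t} = sum over e' in In(v) and i of k_{e',e,i} * y_{e',t-i}
    # incoming: dict e' -> [y_{e',0}, ..., y_{e',t}]
    # kernels:  dict e' -> [k_{e',e,0}, k_{e',e,1}, ...]
    total = 0
    for e_prime, history in incoming.items():
        for i, coeff in enumerate(kernels[e_prime]):
            if i <= t and coeff:
                total += coeff * history[t - i]
    return total % q
\end{verbatim}
Only the symbols $y_{e',t-i}$ with non-zero $k_{e',e,i}$ contribute, which is exactly the memory requirement noted above.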
The design of a CNC is the process of determining local encoding kernel coefficients $k_{e',e,t}$ for all adjacent pairs $(e',e)$, and $f_{e,t}$ for $e\in Out(s)$, such that the original source messages can be decoded correctly at the given set $R$ of sink nodes. With a random linear code, these coding kernel coefficients are chosen uniformly at random from the finite field $\mathbb{F}_q$. This paper studies an adaptive scheme where kernel coefficients are generated one at a time until decodability is achieved at all sinks. Collectively, we call the $|In(v)|\times|Out(v)|$ matrix $K_v(z)=(k_{e',e}(z))_{e'\in In(v), e\in Out(v)}=K_{v,0}+K_{v,1}z+K_{v,2}z^2+\ldots$ the \emph{local encoding kernel matrix} at node $v$, and the $m \times |In(v)|$ matrix $F_v(z)=(f_{e}(z))_{e\in In(v)}$ the \emph{global encoding kernel matrix} at node $v$. Observe from Eq.~\eqref{eq:yet_f} that, at sink $r$, $F_{r}(z)$ is required to deconvolve the received data messages $y_{e^\prime_i}(z)$, $e^\prime_i \in In(r)$. Therefore, each intermediate node $v$ computes $f_e(z)$ for outgoing edges from $F_v(z)$ according to Eq.~\eqref{eq:fe}, and sends $f_e(z)$ along edge $e$, together with the data $y_e(z)$. This can be achieved by arranging the coefficients of $f_e(z)$ in a vector form and attaching them to the data. In this paper, we ignore the effect of this overhead transmission of coding coefficients on throughput or delay: we show in Section~\ref{sec:stoppingTime} that the number of terms in $f_e(z)$ is finite, thus the overhead can be amortized over a long period of data transmissions. Moreover, $F_v(z)$ can be written as $F_v(z)=F_{v,0}+F_{v,1}z+\cdots+F_{v,t}z^{t}$, where $F_{v,t}\in \mathbb{F}_q^{m\times |In(v)|}$ is the global encoding kernel matrix at time $t$. $F_v(z)$ can thus be viewed as a polynomial, with $F_{v,t}$ as matrix coefficients. Let $L_v$ be the degree of $F_v(z)$. $L_v+1$ is a direct measure of the amount of memory required to store $F_v(z)$. We shall define in Section~\ref{sec:memory} the metric used to measure the memory overhead of ARCNC. \subsection{Acyclic and Cyclic Random Geometric Networks} \label{subsec:simuAcyclicRandom} To see the performance of ARCNC in random networks, we use random geometric graphs \cite{newman2003random} as the network model, with added acyclic or cyclic constraints. In random geometric graphs, nodes are put into a geometrically confined area $[0,1]^2$, with coordinates chosen uniformly at random. Nodes which are within a given distance are connected; call this distance the connection radius. In our simulations, we set the connection radius to 0.4. The resulting graph is inherently bidirectional. For acyclic random networks, we number all nodes, with the source as node 1, and sinks as the nodes with the largest numbers. A node is allowed to transmit only to nodes with numbers larger than its own. An intermediate node on a path from the source to a sink can be a sink itself. To ensure the max-flow to each receiver is non-zero, one can choose the connection radius to make the graph connected with high probability; as noted above, we fix this value to $0.4$, and throw away instances where at least one receiver is not connected to the source. Once an acyclic random geometric network is generated, we use the smallest min-cut over all sinks as the source symbol rate, which is the number of source symbols generated at each time instant.
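The construction just described can be sketched as follows (Python with numpy assumed; this is an illustration of the procedure, not the simulation code used to produce the figures):
\begin{verbatim}
# Sketch of the acyclic random geometric network construction.
import numpy as np

def acyclic_rgg(n, radius, rng):
    """Nodes uniform on [0,1]^2; nodes within `radius` are connected;
    edges are oriented from lower- to higher-numbered nodes, so the
    resulting directed graph is acyclic. Node 0 acts as the source."""
    pos = rng.random((n, 2))
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(pos[i] - pos[j]) <= radius]
    return pos, edges

rng = np.random.default_rng(1)
pos, edges = acyclic_rgg(n=25, radius=0.4, rng=rng)
# Instances where some sink is unreachable from the source are
# discarded, and the source rate is then set to the smallest
# min-cut over all sinks (min-cut computation omitted here).
\end{verbatim}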
Figs.~\ref{fig:random_acyclic_and_cyclic_total25_rx2to12_qq2_sinksLast} and \ref{fig:random_acyclic_and_cyclic_total10to45_rx3_qq2_sinksLast} plot the average first decoding time $T_\text{avg}$ and average memory use $W_\text{avg}$ in acyclic random geometric networks. Fig.~\ref{fig:random_acyclic_and_cyclic_total25_rx2to12_qq2_sinksLast} shows the case where there are 25 nodes within the network, with an increasing number of them designated as sinks, while Fig.~\ref{fig:random_acyclic_and_cyclic_total10to45_rx3_qq2_sinksLast} shows the case where the number of sinks is fixed to 3, but more nodes are added to the network. In both cases, $T_{\text{avg}}$ is less than 1, indicating that decodability is achieved within 2 time steps with high probability. In Fig.~\ref{fig:random_acyclic_and_cyclic_total25_rx2to12_qq2_sinksLast}, the dependence of $W_{\text{avg}}$ on the number of sinks is not very strong, since there are few sinks, and each node is connected to only a small portion of all nodes. In Fig.~\ref{fig:random_acyclic_and_cyclic_total10to45_rx3_qq2_sinksLast}, $W_{\text{avg}}$ grows as the number of nodes increases, since on average, each node is connected to more neighboring nodes, thus its memory use is more likely to be affected by other sinks. \begin{figure}[t!] \centering \includegraphics[width=8.6cm]{./graphics/random_acyclic_and_cyclic_total25_rx2to12_qq2_sinksLast.eps}\\ \caption{Average first decoding time and average memory use in acyclic and cyclic random geometric graphs with 25 nodes, as a function of the number of receivers. Field size is $q=2^2$.} \label{fig:random_acyclic_and_cyclic_total25_rx2to12_qq2_sinksLast} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/random_acyclic_and_cyclic_total10to45_rx3_qq2_sinksLast.eps}\\ \caption{Average first decoding time and average memory use in acyclic and cyclic random geometric graphs with 3 receivers, as a function of the total number of nodes in the network. Field size is $q=2^2$.} \label{fig:random_acyclic_and_cyclic_total10to45_rx3_qq2_sinksLast} \end{figure} \subsection{Combination Network}\label{subsec:simuCombination} Recall from Section~\ref{example:combination} that an upper bound $ET_{UB}$ and a lower bound $ET_{LB}$ for the average expected first decoding time $E[T_{\text{avg}}]$ can be computed for an ${n \choose m}$ combination network. Both are functions of $m$ and $q$, independent of $n$. In evaluating $\mathrm{var}[T_{\text{avg}}]$, three cases were considered: $n>2m$, $n=2m$, and $n<2m$. When $n>2m$, the number of sinks unrelated to a given sink $r$ is significant. If it takes $r$ multiple time steps to achieve decodability, not all other sinks and intermediate nodes have to continue increasing their encoding kernel lengths to accommodate $r$. Thus, ARCNC can offer gains in terms of decoding delay and memory use. We show simulation results below for the case where $m$ is fixed at 2, while $n$ increases. By comparison, if $n\leq2m$, there is at most one sink unrelated to a given sink $r$. We show simulation results below for the case of $n=2m$. \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/combination_fixedM_2_qq_withbound.eps}\\ \caption{Average first decoding time and average memory use. $m=2$, $n$ increases, field size $q$ also increases.
Also plotted are the computed upper and lower bounds on $T_{\text{avg}}$ for $q=2$.} \label{fig:combination_fixedM_2_qq} \end{figure} \subsubsection{$n>2m$, fixed $m$, $m=2$} Fig.~\ref{fig:combination_fixedM_2_qq} plots the average first decoding time $T_{\text{avg}}$, corresponding upper and lower bounds $ET_{UB}$, $ET_{LB}$, and average memory use $W_{\text{avg}}$, as defined in Section~\ref{sec:memory}. Here $m$ is fixed at 2, $n$ increases from 4 to 16, and the field size is $q=2$. As discussed in Section~\ref{example:combination}, $ET_{UB}$ and $ET_{LB}$ are independent of $n$. As $n$ increases, observe that $T_{\text{avg}}$ stays approximately constant at about 1.3, while $W_{\text{avg}}$ increases sublinearly. When $n=16$, $W_{\text{avg}}$ is approximately 6.3. On the other hand, recall from \cite{BYZ2009} that a lower bound on the success probability of RLNC is $(1-d/(q-1))^{|J|+1}$, where $|J|$ is the number of encoding nodes. In a combination network with $n=16$ and $m=2$, $|J|=1$ since only the source node codes. For a target decoding probability of $0.99$, we have $(1-{16\choose 2}/(q-1))^2 \geq 0.99$, thus $q>2.4\times 10^4$, and $\lceil \log_2q \rceil \geq 15$. Since each encoding kernel contains at least one term, $W_{\text{avg}}$ is lower bounded by $\lceil \log_2q\rceil$. Hence, using ARCNC here reduces memory use by more than half when compared with RLNC. Fig.~\ref{fig:combination_fixedM_2_qq} also plots $T_{\text{avg}}$ and $W_{\text{avg}}$ when the field size $q$ increases from $2^1$ to $2^8$. As the field size becomes larger, $T_{\text{avg}}$ approaches 0. When $q=2^8$, the value of $T_{\text{avg}}$ is close to $0.004$. As discussed in Section~\ref{example:combination}, when $q$ becomes sufficiently large, ARCNC terminates at $t=0$, and generates the same code as RLNC. Also observe from this figure that as $n$ increases from 4 to 16, $W_{\text{avg}}$ increases as well, but at different rates for different field sizes. Again, $W_{\text{avg}}$ is lower bounded by $\lceil \log_2q \rceil$. When $q=2^8$, $W_{\text{avg}}$ follows an approximately linear trend, with an increment of less than 1 between $n=4$ and $n=16$. One explanation for this observation is that for $m=2$, a field size of $q=2^8$ is already sufficient for making ARCNC approximately the same as RLNC. \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/combination_variedM_nOver2_qq_withboundq=2.eps}\\ \caption{Average first decoding time and average memory use. $n=2m$, $n$ increases, field size $q$ also increases. Also plotted are the computed upper and lower bounds on $T_{\text{avg}}$ for $q=2$.} \label{fig:combination_variedM_nOver2_qq} \end{figure} \subsubsection{$n=2m$} Fig.~\ref{fig:combination_variedM_nOver2_qq} plots $T_{\text{avg}}$, $W_{\text{avg}}$, and corresponding bounds on $T_{\text{avg}}$ when $n=2m$, $q=2$. Since $m$ increases with $n$, $ET_{UB}$ and $ET_{LB}$ change with the value of $n$ as well. Observe that $T_{\text{avg}}$ increases from approximately 1.27 to approximately 1.45 as $n$ increases from 4 to 12. In other words, even though more sinks are present, with each sink connected to more intermediate nodes, the majority of sinks are still able to achieve decodability within very few coding steps. However, since now $n=2m$, any given sink $r$ is related to all but one other sink; even a single sink requiring additional coding steps would force almost all sinks to use more memory to store longer encoding kernels.
Compared with Fig.~\ref{fig:combination_fixedM_2_qq}, $W_{\text{avg}}$ appears linear in $n$ in this case. Fig.~\ref{fig:combination_variedM_nOver2_qq} also plots $T_{\text{avg}}$ and $W_{\text{avg}}$ when $q$ increases. Similar to the $m=2$ case shown in Fig.~\ref{fig:combination_fixedM_2_qq}, $T_{\text{avg}}$ approaches 0 as $q$ becomes larger. $W_{\text{avg}}$ appears linear in $n$ for $q\leq 2^6$, and piecewise linear for $q=2^8$. This is because $W_{\text{avg}}$ is lower bounded by $\lceil \log_2q \rceil$. For smaller $n$, $q=2^8$ suffices to make all nodes decode at time 0, so that ARCNC generates essentially the same code as RLNC; when $n$ becomes sufficiently large, this lower bound is surpassed, since $q=2^8$ no longer suffices to make all nodes decode at time 0. \subsection{Cyclic Random Networks}\label{subsec:simuCyclicRandom} To see the performance of ARCNC in cyclic random networks, note that random geometric graphs are inherently bidirectional. We apply the following modifications to make the network cyclic. First, we number all nodes, with the source as node 1, and sinks as the nodes with the largest numbers. Second, we replace each bidirectional edge with 2 directed edges. Next, a directed edge from a lower numbered to a higher numbered node is removed from the graph with probability 0.2, and a directed edge from a higher numbered to a lower numbered node is removed from the graph with probability 0.8. Such edge removals ensure that not all neighboring node pairs form cycles, while cycles can still exist with positive probability. We do not consider other edge removal probabilities in our simulations. The effect of the random graph structure on the performance of ARCNC is a non-trivial problem and will not be analyzed in this paper. Figs.~\ref{fig:random_acyclic_and_cyclic_total25_rx2to12_qq2_sinksLast} and \ref{fig:random_acyclic_and_cyclic_total10to45_rx3_qq2_sinksLast} also plot the average first decoding time and average memory use in cyclic random geometric networks. Again, in both cases, the average first decoding time $T_{\text{avg}}$ is less than 1, indicating that decodability is achieved within 2 time steps with high probability. $W_{\text{avg}}$ stays approximately constant when more nodes become sinks. On the other hand, when the number of sinks is fixed to 3, while more nodes are added to the network, $W_{\text{avg}}$ first increases, then decreases in value. This is because, with the connection radius fixed at 0.4, each node is connected to more neighboring nodes as more nodes are added. Sharing parents with more nodes at first increases the memory use of a given node. However, as more nodes are added and more cycles form, edges are utilized more efficiently, thus bringing down both $T_{\text{avg}}$ and $W_{\text{avg}}$. Note that when compared with the acyclic case, cyclic networks with the same number of nodes or the same number of sinks require longer decoding times as well as more memory. This is expected, since with cycles, sinks are related to more nodes in general. \subsection{Shuttle Network}\label{subsec:simuShuttle} \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/shuttle_qq.eps}\\ \caption{Average first decoding time and average memory use for the shuttle network as a function of field size.} \label{fig:shuttle_qq} \end{figure} When there are cycles in the network, as discussed in Section~\ref{sec:algCyclic}, we numerically index edges, and assign local encoding kernels at $t=0$ according to the indices such that no logical contradictions exist in data transmitted around each cycle.
Fig.~\ref{fig:shuttle_qq} plots $T_{\text{avg}}$ and $W_{\text{avg}}$ for the shuttle network, with the index assignment given in Fig.~\ref{fig:shuttlenet}. As discussed in the example shown in Figs.~\ref{fig:shuttle0} and \ref{fig:shuttle1}, with this edge index assignment, both $r_1$ and $r_2$ require at least 2 time steps to achieve decodability. This conclusion is verified by the plot shown in Fig.~\ref{fig:shuttle_qq}. As the field size $q$ increases, $T_{\text{avg}}$ converges to 1, while $W_{\text{avg}}$ converges to $2\log_2q$. When $q=2^1$, $T_{\text{avg}}$ is 5.1. \subsection{Umbrella Network}\label{subsec:simuUmbrella} \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/umbrella_alpha5_beta3to10_qq2.eps}\\ \caption{Average first decoding time and average memory use in an umbrella network. $\alpha=5$, $\beta$ is between 3 and 10, field size is $q=2^2$.} \label{fig:umbrella_alpha5_beta3to10} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./graphics/umbrella_beta3_alpha5to30_qq2.eps}\\ \caption{Average first decoding time and average memory use in an umbrella network. $\beta=3$, $\alpha$ is between 5 and 29, field size is $q=2^2$.} \label{fig:umbrella_beta3_alpha5to30} \end{figure} Figures~\ref{fig:umbrella_alpha5_beta3to10} and \ref{fig:umbrella_beta3_alpha5to30} plot the average first decoding time and average memory use in umbrella networks when $q=2^2$. Figure~\ref{fig:umbrella_alpha5_beta3to10} shows the performance of ARCNC when the number of layers in the umbrella is increased from $\beta=3$ to 10, while the number of nodes in the top layer, Layer 1, is kept constant at $2\alpha= 10$. With each additional level, two more nodes are added to the set of sinks, and the maximum depth of sinks is incremented by 1. The resulting $T_{\text{avg}}$ and $W_{\text{avg}}$ are approximately linear in $\beta$, increasing in value as $\beta$ becomes larger. When $\beta=10$, $T_{\text{avg}}$ is approximately 2.0, and $W_{\text{avg}}$ is approximately 11.9. Since $\alpha=5$, the number of sinks in the network is $\alpha+2\beta=25$, and the number of coding nodes is $10$, including the source. Similar to the combination network case discussed in the previous subsection, to achieve a decoding probability of 0.99, we have $(1-25/(q-1))^{10+1}\geq0.99$, thus $q>2.7\times10^4$, and $W_{\text{avg},\text{RLNC}}=\lceil \log_2q \rceil \geq 15$, slightly larger than $W_{\text{avg},\text{ARCNC}}$. Figure~\ref{fig:umbrella_beta3_alpha5to30} shows the performance of ARCNC when there are only $\beta=3$ layers in the umbrella, while $\alpha$ increases from 5 to 29. Unlike in Figure~\ref{fig:umbrella_alpha5_beta3to10}, both $T_{\text{avg}}$ and $W_{\text{avg}}$ decrease as $\alpha$ becomes larger. This is because, first, the added sinks in Layer 1 do not affect code generation for nodes on the handle of the umbrella; second, unlike in the combination network, each sink in Layer 1 shares parents with only two other sinks in Layer 1. When $\alpha=29$, $W_{\text{avg}}$ is approximately 4. We can also compute in this case $W_{\text{avg},\text{RLNC}}=\lceil \log_2q \rceil \geq 14$, roughly 3.5 times $W_{\text{avg},\text{ARCNC}}$. When $\alpha$ becomes even larger, the gain in memory use from using ARCNC instead of RLNC can be even larger.
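The RLNC field-size requirements quoted in this section can be reproduced by solving the success-probability bound of \cite{BYZ2009} for the smallest admissible $q$; a small sketch (Python; the linear search is illustrative):
\begin{verbatim}
import math

def min_rlnc_field(d, J, target=0.99):
    # Smallest q with (1 - d/(q-1))**(J+1) >= target, where d is
    # the number of sinks and J the number of encoding nodes.
    q = d + 2                     # need q - 1 > d for a positive base
    while (1 - d / (q - 1)) ** (J + 1) < target:
        q += 1
    return q

# Combination network, n = 16, m = 2: d = C(16,2) sinks, |J| = 1.
q1 = min_rlnc_field(math.comb(16, 2), 1)
# Umbrella network, alpha = 5, beta = 10: d = 25 sinks, |J| = 10.
q2 = min_rlnc_field(25, 10)
print(q1, math.ceil(math.log2(q1)))   # about 2.4e4 -> 15 bits
print(q2, math.ceil(math.log2(q2)))   # about 2.7e4 -> 15 bits
\end{verbatim}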
\section{Introduction} Social networks take the form of a graph consisting of a set of nodes and edges. Typically, the nodes represent persons or organizations, and each edge is a measure of the relation between a pair of nodes. For example, in the citation network of statisticians \cite{ji2014coauthorship}, a (directed) link variable $Y_{i,j}$ indicates whether individual $i$ has cited $j$'s work ($Y_{i,j} = 1$) or not ($Y_{i,j} = 0$). Statistical analysis beyond the descriptive focuses on modeling dependencies in the link formation.\\ As data are collected at larger scales, social networks often exhibit a hierarchical structure: people come from different communities; within each community various types of link formation processes are at work, while between communities the connections are much sparser. Communities may be not only physical, such as geographical, but also abstract, such as defined by political attitude. Within a community, transitivity is often a major type of force that generates links; for instance, if $i$ cites $j$ and $j$ cites $k$, then it is more likely that $i$ also cites $k$. However, we should also expect different strengths of transitivity among statisticians working in different areas (clustering). Figure \ref{fig_demo} shows a simulated (undirected) network of three communities, each with the same number of nodes (20) but with different transitivities, while the probability of any between-cluster tie is the same (0.05). Using the R package \textit{latentnet} \citep{krivitsky2008latentnet}, we can see the structures clearly: the cluster with high transitivity (blue) has its nodes closer to each other, and the one with low transitivity (red) is more spread out. The uncertainties of clustering are indicated by the pies of colors.\\ \begin{figure*} \centering \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{before_clust} \end{minipage} \hfill \begin{minipage}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{after_clust} \end{minipage} \caption{A simulated network to illustrate its hierarchical structure: on the left is a mix of three communities, each with 20 nodes; between-cluster ties $Y_{i,j}$ ($i\in k \text{th cluster}$, $j\in l \text{th cluster}$, $k \ne l$) are i.i.d. Bernoulli(0.05). The clusters are generated from an exponential random graph model with a baseline density of $0.05$ plus a transitivity parameter (in terms of gwdsp and gwesp, which will be explained in Section \ref{ergm}) of $0.2$, $0.5$ and $1.0$, respectively. The cluster of low transitivity is not visually apparent. On the right is the clustered visualization of this network by a latent space model (see Section \ref{lsm}).} \label{fig_demo} \end{figure*} When the network is homogeneous, i.e., has one single community, the Exponential Random Graph Model (ERGM) is a popular tool for modeling, as it provides researchers an intuitive formulation of related structures to test various social theories \citep{wasserman1996logit}. The Markov chain Monte Carlo (MCMC) techniques developed in \cite{snijders2002markov,hunter2006curved,hummel2012improving} and computer programs such as \textit{statnet} \citep{handcock2008statnet} led to its widespread use. However, practitioners often found that the programs had convergence problems for many specifications, and the statistical properties of the MLE are not fully understood. Recently, \cite{schweinberger2015local} suggested that the distribution of sufficient statistics in the traditional ERGM can be asymptotically normal if some local dependence is imposed.
This resolves the notorious degeneracy problem, which is mainly caused by the global dependence introduced by the Markov property \citep{frank1986markov}. However, the Bayesian inference procedure proposed in their paper is extremely expensive in computation, as it involves two exponentially increasing terms, one nested in the other.\\ \xhdr{Present work} Motivated by the Hierarchical ERGM construction in \cite{schweinberger2015local}, we attempt to find a general and feasible procedure to infer the clustering and the within-cluster dependencies simultaneously. However, our purpose is not to achieve large sample properties as the number of clusters goes to $\infty$ in a frequentist way, because we view the network as defined on a fixed set of nodes, and we regard the parameter estimates not only as quantities for interpretation but also as a parsimonious local mechanism to represent global structure, e.g., for improving model goodness of fit \citep{hunter2012goodness}. \\ To illustrate our ideas more efficiently, from here on we focus on the \textit{undirected binary} graph, though the extension to the \textit{directed} and/or \textit{weighted} graph is straightforward. Notation is as follows: an \textit{undirected binary graph} $G=(V,E)$ consists of a set of \textit{vertices} $V=\{1, \dots, n\}$ and \textit{edges} $E=\{(i,j) | i,j \in V \}$. Typically, $G$ is represented by its adjacency matrix $Y = \{Y_{i,j}\}_{1\le i\ne j\le n}$ where \[ Y_{i,j}=\begin{cases} 1 & \text{if there is an edge between vertices } i \text{ and } j\\ 0 & \text{otherwise} \end{cases} \] and $Y_{i,i}=0$ for all $i$, as loops are not allowed. We may also observe pair-specific characteristics $X=\{x_{i,j}\}$. These could be functions of node-specific attributes, for example, $x_{i,j}=I(x_i = x_j)$ where $I$ is the indicator function. This so-called \textit{homophily} means individuals with similar attributes are more (less) likely to be linked. Furthermore, the \textit{vertices} belong to $K$ clusters/neighborhoods/blocks, so each of them has membership/color $m_i=k$ for $k \in \{1, \dots, K\}$. In the mathematics literature, $(Y, M=\{m_i\})$ is called a \textit{colored graph}.\\ In Section \ref{sec_ergm} we take a brief review of recent works on ERGM, with a twofold purpose. One is to argue that, with a hierarchical construction, it has the capability to serve as a (hypothetical) true model for networks in the real world. The other is to show the difficulty of estimation. In Section \ref{sec_TwoPhase}, we uncover its connection to LSM by changing the angle of view, and hence propose our two-stage strategy using a working model. In Section \ref{sec_examples}, two examples are given to illustrate the proposed strategy, which is essentially about how to choose a working model appropriately. We summarize our idea, discuss its limitations, and point out some directions for improvement in Section \ref{sec_disc}. \section{The Evolution of ERGM} \label{sec_ergm} \cite{frank1986markov} defined a probability distribution for a graph to be a Markov graph if the number of nodes is fixed at $n$ and possible edges between disjoint pairs of nodes are independent conditional on the rest of the graph. It is motivated by the Markov property of stochastic processes and of spatial statistics on a lattice \citep{besag1974spatial}.
With the Hammersley-Clifford theorem, and under the permutation invariance assumption, it is proved that a random undirected graph is a Markov graph if and only if the probability distribution can be written as \begin{equation} \label{markov_graph} P_{\theta}\{Y = y\} = \text{exp}\left( \sum\limits_{k=1}^{n-1}\theta_k S_k(y) + \tau T(y) - \psi(\theta, \tau)\right) \end{equation} where the statistics $S_k$ and $T$ are defined by \begin{align} S_1(y) &= \sum_{1 \leq i < j \leq n}y_{ij} & \text{number of edges} \nonumber \\ S_k(y) &= \sum_{1 \leq i \leq n}{y_{i+} \choose k} & \text{number of k-stars} (k \geq 2)\\ T(y) &= \sum_{1 \leq i < j < h \leq n}y_{ij}y_{ih}y_{jh} & \text{number of triangles} \nonumber \end{align} with $\theta_k$ and $\tau$ denoting the parameters, and $\psi(\theta, \tau)$ the normalizing constant. A practical model will truncate $k$ at a small number, say 2; i.e., the sufficient statistic is a vector counting how many edges, 2-stars, and triangles are in the graph. \cite{wasserman1996logit} further proposed to use a model of this form with arbitrary statistics $S(y)$ in the exponent, which yields the probability function: \begin{equation} \label{ergm distribution} P_{\theta}\{Y = y\} = \text{exp}\left({\theta}'S(y) - \psi(\theta)\right) \end{equation} where $S$ can be a vector of any dimension, so that it leaves space for researchers to specify structures of scientific interest. The interpretation of the parameters is typically based on the log odds ratio of forming a tie, conditional on the rest of the graph, since: \begin{equation} \label{conditional} \text{logit}\left( P_{\theta}\{ Y_{i,j}=1 | Y^c_{i,j} \} \right) = {\theta}^{'} c_{i,j} \end{equation} where $Y^c_{i,j}=\{Y_{u,v} | \text{ for all } u<v, (u,v)\neq(i,j)\}$ represents all ties other than $Y_{i,j}$, and $c_{i,j} = S\left(y^{(ij1)}\right) - S\left(y^{(ij0)}\right)$ is the \textit{change statistic}, with $y^{(ij0)}$ and $y^{(ij1)}$ denoting the adjacency matrices with the $(i,j)$th element equal to $0$ and $1$, respectively, while all others are the same as in $y$. One example using formula \ref{markov_graph} is that when the triangle parameter $\tau$ is positive, the log odds of forming a tie will increase by $\tau$ if this tie also completes a triangle (conditional on the status of all other ties in the graph). It is an indication of transitivity, which means that if we have a friend in common, we are more likely to be friends. Besides facilitating a good interpretation, the conditional formula \ref{conditional} also induces a pseudo-likelihood \citep{strauss1990pseudolikelihood} defined by $l(\theta) = \sum\limits_{i<j}\ln\left( P_{\theta}\{ Y_{i,j}=y_{i,j} | y^c_{i,j} \} \right)$, which is a sum of the (log) conditional probabilities and can be fitted by a logistic regression. \subsection{Estimation is difficult} However, maximum likelihood estimation faces a major barrier: the normalizing function $\psi(\theta) = \log\sum\limits_{y \in \mathcal{Y}} \exp({\theta}'S(y))$ is typically intractable. The summation is over the sample space $\mathcal{Y}$, where the number of possible graphs becomes astronomically large even when the number of nodes is only in the dozens. To tackle this intractable likelihood problem, \cite{snijders2002markov} proposed a Markov chain Monte Carlo (MCMC) approach for approximating the ML, following the approach of \cite{geyer1992constrained}.
Random samples from the distribution \ref{ergm distribution} can be obtained using the Gibbs sampler \citep{geman1984gibbs}: cycling through the set of all random variables $Y_{i,j}$ ($i \neq j$), or by mixing \citep{tierney1994mixing}, to generate each value according to the conditional distribution in \ref{conditional}. A comparison of the statistical properties of the Maximum Likelihood Estimator (MLE) and the Maximum Pseudo Likelihood Estimator (MPLE) showed that the MLE can perform much better than the MPLE in terms of bias, efficiency, and coverage percentage, especially under the mean value re-parametrization $\mu(\theta) = {\textbf{E}}_{\theta} \left[ S(Y)\right]$ \citep{van2007comparison}.\\ A problem with the above-mentioned sampling approximation to ML inference for ERGM, termed \textit{inferential degeneracy}, persists as an obstacle to real applications. While it appears to be an MCMC algorithm issue of not converging or always converging to a degenerate (complete or empty) graph, this problem is also rooted in the geometry of Markov graph models as an exponential family \citep{handcock2003assessing}. There are two lines of effort to fix it: one, along \cite{snijders2006new}, is introduced here and adopted in our proposed procedure; the other, initiated by \cite{schweinberger2015local}, which motivated this paper, will be detailed in the rest of this section. \cite{snijders2006new} extended the scope of modeling social networks using ERGM by representing transitivity not only by the number of transitive triads, but in other ways that are in accordance with the concept of partial conditional independence of \cite{pattison2002neighborhood}. This type of dependence formulates a condition that takes into account not only which nodes are being potentially linked, but also the other ties that exist in the graph: i.e., the dependence model is realization-dependent. Specifically, it states that two possible edges with four distinct nodes are conditionally dependent whenever their existence in the graph would create a four-cycle. Along this line, \cite{hunter2006curved} proposed the curved exponential family models, and \cite{hummel2012improving} proposed a lognormal approximation and a ``stepping'' algorithm. Together with the development of a suite of R packages called \textit{statnet}, applied work began to adopt ML inference widely. \subsection{Hierarchical ERGM} Finally, we come to the model that motivated our work. Inspired by the notion of finite neighborhoods in spatial statistics and M-dependence in time series, \cite{schweinberger2015local} proposed local dependence in random graph models, which can be constructed from observed or unobserved neighborhood structure. Their paper shows that while the conventional ERGM does not satisfy a natural domain consistency condition, local dependence satisfies it, such that a central limit theorem can be established. Their effort aims to fix the fundamental flaw of Markov random graph models that, for any given pair of nodes $\{i,j\}$, the number of neighbors is $2(n-2)$ and thus increases with the number of nodes $n$. This insight leads to a natural and reasonable assumption that each edge variable depends on a finite subset of other edge variables.
They define: \begin{definition}{(local dependence)} The dependence induced by a probability measure $\mathbb{P}$ on the sample space $\mathcal{Y}$ is called \textit{local} if there is a partition of the set of nodes $A$ into $K \geq 2$ non-empty finite subsets $A_1, \dots, A_K$, called neighbourhoods, such that the within- and between-neighbourhood subgraphs $Y_{k,l}$ with domains $A_k \times A_l$ and sample spaces $\mathcal{Y}_{k,l}$ satisfy \begin{equation} \mathbb{P}(Y \in \mathcal{Y}) = \prod_{k=1}^{K} \mathbb{P}_{k,k}(Y_{k,k} \in \mathcal{Y}_{k,k}) \prod_{l=1}^{k-1}\mathbb{P}_{k,l}(Y_{k,l} \in \mathcal{Y}_{k,l}, Y_{l,k} \in \mathcal{Y}_{l,k}) \end{equation} where within-neighborhood probability measures $\mathbb{P}_{k,k}$ induce dependence within subgraphs $Y_{k,k}$, whereas between-neighborhood probability measures $\mathbb{P}_{k,l}$ induce independence between subgraphs. \end{definition} Thus, local dependence breaks down the dependence of the random graph $Y$ into subgraphs, while leaving scientists the freedom to specify dependence of interest within subgraphs. Under some sparsity condition, local and sparse random graphs tend to be well behaved, in the sense that neighborhoods cannot dominate the whole graph and the distribution of statistics tends to be Gaussian, provided the number of neighborhoods $K$ is large.\\ To estimate, they proposed a fully Bayesian approach, with the following conditional likelihood: \begin{equation} P(Y=y | M=m) = \prod_{k=1}^{K}P(Y_{k,k}=y_{k,k} | M=m)\prod_{l=1}^{k-1}P(Y_{k,l}=y_{k,l} | M=m) \end{equation} where the between-neighborhood ties are assumed to be independent, $P(Y_{k,l}=y_{k,l} | M=m) = \prod_{i \in A_k, j \in A_l}P(Y_{i,j} = y_{i,j} | M=m)$, and the within-neighborhood probability has its own ERGM parameters, $P_{\theta}(Y_{k,k}=y_{k,k} | M=m) = \exp\{\theta_k'S_k(y_{k,k}) - \psi_k(\theta_k) \}$. The marginal distribution of the membership is assumed to be $M_i \overset{\text{iid}}{\sim} \textbf{Multinomial}(\pi)$ for all $i=1, \dots, n$. For illustration purposes, we omit the non-parametric priors on the neighborhood structure (membership), only stating the parametric one here: \begin{align*} \pi = (\pi_1, \dots, \pi_K) & \sim Dirichlet(\omega_1, \dots, \omega_K) \\ \theta_k & \overset{\text{iid}}{\sim} \textbf{MVN}(\mu_k, \Sigma_k^{-1}) & k=1, \dots, K\\ \theta_B & \sim \textbf{MVN}(\mu_B, \Sigma_B^{-1}) \end{align*} where $\theta_B = \{\theta_{kl} | k \ne l; k,l \in 1,\dots,K\}$ is a vector of parameters governing the between-neighborhood distribution. It can be simplified to just one scalar $p$ by assuming all between-neighborhood ties are i.i.d. Bernoulli($p$), as in Figure \ref{fig1} and Section \ref{sec_TwoPhase}. A generative sketch of this construction is given below. \section{Two-stage estimation} \label{sec_TwoPhase} In this section, we change the angle from which we view the Hierarchical ERGM (HERGM) to uncover its connection to another widely used class of network models, namely Latent Space Models (LSM). Recall that our major concern is that an unobserved (latent) clustering structure will ``confound'' the true within-cluster effect(s). While the HERGM is a bottom-up construction, first picking a specific ERGM and then considering (possibly) multiple communities, for estimation we can follow a top-down direction: first tackle the clustering problem while taking local structures into account.
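To make the local-dependence construction concrete, the following is a minimal generative sketch in Python (with numpy), purely illustrative and not the estimation procedure: memberships are drawn i.i.d.\ from a multinomial, between-neighborhood ties are i.i.d.\ Bernoulli($p$), and each within-neighborhood subgraph follows a toy ERGM with edge and triangle terms, sampled by Gibbs updates of the ERGM full conditionals (cf.\ the change-statistic formulation in Section \ref{sec_ergm}); all parameter values below are assumptions for illustration.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def gibbs_ergm(n, theta_edge, theta_tri, sweeps=200):
    """Toy within-neighborhood ERGM (edges + triangles), sampled by
    Gibbs updates of each dyad's full conditional."""
    y = np.zeros((n, n), dtype=int)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                d_tri = int(np.sum(y[i] * y[j]))  # triangles closed by (i,j)
                logit = theta_edge + theta_tri * d_tri
                p = 1.0 / (1.0 + np.exp(-logit))
                y[i, j] = y[j, i] = int(rng.random() < p)
    return y

def sample_hergm(n, K, pi, theta, p_between):
    """Memberships i.i.d. multinomial; independent Bernoulli ties
    between neighborhoods; toy ERGM within each neighborhood."""
    m = rng.choice(K, size=n, p=pi)
    y = np.triu(rng.random((n, n)) < p_between, 1)
    y = (y | y.T).astype(int)
    for k in range(K):
        idx = np.where(m == k)[0]
        y[np.ix_(idx, idx)] = gibbs_ergm(len(idx), *theta[k])
    np.fill_diagonal(y, 0)
    return y, m

y, m = sample_hergm(60, 3, [1/3, 1/3, 1/3],
                    [(-3.0, 0.2), (-3.0, 0.5), (-3.0, 1.0)], 0.05)
\end{verbatim}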
\subsection{HERGM as an extension of Stochastic Blockmodels} A careful inspection of the Bayesian formulation of the HERGM reveals its connection to another class of network models initially intended for community (block) detection, namely Stochastic Blockmodels (SBM) \citep{snijders1997block,nowicki2001block}. The purpose of blockmodeling is to partition the vertex set into subsets called \textit{blocks} in such a way that the block structure and the pattern of edges between the blocks capture the main structural features of the graph. \cite{lorrain1971block} proposed blockmodeling based on the concept of \textit{structural equivalence}, which states that two vertices are structurally equivalent (belong to the same block) if they relate to the other vertices in the same way. The adjacency matrix should show a block pattern if it is permuted in a certain way. Models of this type are formulated as follows: \begin{definition}{(Stochastic Blockmodels)} memberships $(M_i)_{i=1}^n$ are assumed i.i.d. random variables with $P(M_i = k) = \pi_k$ for $k=1,\dots, K$, and conditional on $M=\{M_i\}$, the edges $Y_{i,j}$ are independent Bernoulli($\theta_{M_i, M_j}$). \end{definition} If we keep the assumptions on membership and between-block edges but relax the independence to an ERGM for within-block edges, we essentially obtain the HERGM. While this extension is conceptually attractive, the computational cost is prohibitive, as it involves two exponentially increasing functions. In the SBM part, the sample space of the membership $M$ is of size $K^n$, where $K$ is the number of blocks and $n$ is the number of nodes. In the within-block ERGM part, the sample space of the edge variables $Y_{k,k}$ is of size $2^{n_k \choose 2}$, where $n_k$ is the number of nodes in the $k$th block (each pair of nodes can have a link present or absent in an undirected binary network). Both parts need MCMC or other sampling methods to carry out the Bayesian inference or to approximate the MLE (see Section \ref{sec_ergm}); directly combining them makes the problem \textit{intractable}. To provide a more feasible way to tackle the inference of HERGM, we import LSM to account for local structures in an indirect way. \subsection{Generalized to Latent Space Models} \label{lsm} Instead of explicitly modeling dependence, the Latent Space Models (LSM) postulate latent nodal variables $Z$ and conditional independence of $Y_{i,j}$ given the variables $Z_i$ and $Z_j$. SBM can be viewed as a simple case of LSM in which the membership $M_i$ is the only $Z_i$.\\ \cite{hoff2002latent} introduced the concept of an unobserved ``social space'' within which each node has a position, so that a tie is independent of all others given the unobserved positions of the pair of nodes it connects: \begin{equation} P(Y=y | Z, X, \beta) = \prod_{i \neq j}P(Y_{i,j}=y_{i,j} | x_{i,j}, z_i, z_j, \beta) \end{equation} where $X$ are observed covariates, and $\beta$ and $Z$ are the parameters and positions to be estimated. \cite{handcock2007cluster} considered a subclass, distance models, where the probability of a tie is modeled as a function of some measure of distance between the latent space positions of two nodes: \begin{equation} \text{logit}\{P(Y_{i,j} = 1 | x_{i,j}, z_i, z_j, \beta)\} = \beta_0^{T}x_{i,j} - \beta_1|z_i - z_j| \end{equation} with the restriction $\sqrt{\frac{1}{n}\sum_{i}{|z_i|}^2} = 1$ for identification purposes.
Then they imposed a finite mixture of multivariate Gaussian distributions on $z_i$ to represent clustering: \begin{equation} z_i \overset{\text{iid}}{\sim} \sum\limits_{k=1}^{K} \lambda_k \textbf{MVN}_{d}(\mu_k, \sigma_k^{2}\textbf{I}_d) \end{equation} where the non-negative $\lambda_k$ is the probability that an individual belongs to the $k$th group, with $\sum\limits_{k=1}^{K}\lambda_k = 1$. A fully Bayesian estimation of this Latent Position Cluster Model was proposed by specifying the priors: \begin{align*} \beta & \sim \textbf{MVN}_p(\epsilon, \psi) \\ \lambda & \sim \textbf{Dirichlet}(\nu) \\ \mu_k &\overset{\text{iid}}{\sim} \textbf{MVN}_{d}(0, \omega^2\textbf{I}_d) & k=1, \dots, K\\ \sigma_k^2 &\overset{\text{iid}}{\sim} \sigma_0^{2}\textbf{Inv}\chi_{\alpha}^2 & k=1, \dots, K \end{align*} where $\epsilon$, $\psi$, $\nu = (\nu_1, \dots, \nu_K)$, $\sigma_0^{2}$, $\alpha$ and $\omega^2$ are hyper-parameters. The posterior membership probabilities are: \begin{equation} P(M_i = k | \text{others}) = \frac{\lambda_k \phi_d(z_i;\mu_k,\sigma_k^2 I_d)}{\sum\limits_{l=1}^{K}\lambda_l \phi_d(z_i;\mu_l,\sigma_l^2 I_d)}, \end{equation} where $\phi_d(\cdot)$ is the \textit{d}-dimensional multivariate normal density. \subsection{Working Model Strategy} A natural idea now arises: can we use LSM as a working model to infer the membership in the HERGM, and then use this information to infer cluster-specific ERGMs? Our initial attempt follows the \cite{gong1981twostage} type of pseudo maximum likelihood estimation, as the following theorem suggests: \begin{theorem}{(\cite{gong1981twostage})} Let $Y_1, \dots, Y_n \overset{\text{iid}}{\sim} F_{\theta_0, \pi_0}$, and let $\hat{\pi}_n(Y_1, \dots, Y_n)$ be a consistent estimate of $\pi_0$. Under certain regularity conditions, for $\epsilon>0$, let $A_n(\epsilon)$ be the event that there exists a root $\hat{\theta}$ of the equation \begin{equation} \frac{\partial}{\partial \theta} \ell(\theta, \hat{\pi}) = 0 \end{equation} for which $|\hat{\theta} - \theta_0| < \epsilon$. Then, for any $\epsilon>0$, $P\{A_n(\epsilon)\} \rightarrow 1$. \end{theorem} When the pseudo maximum likelihood equation has a unique solution, the pseudo MLE is consistent. The analog in our application is that when the clustering estimator, e.g., the posterior membership predictor of LSM, has good large sample properties, the MLE of the cluster-specific ERGM parameters conditioned on it should too. However, the problem is that the two models, HERGM and LSM, may be uncongenial to each other, meaning that no model can be compatible with both of them \citep{meng2014trio}. They clearly make very different assumptions about the data: in ERGM the ties follow an exponential family, while in LSM they are conditionally independent given the latent positions; moreover, the ERGM MLE is a frequentist procedure, while the LSM inference is Bayesian. So is there a way to show that they are operationally, although not theoretically, equivalent? In other words, can LSM fully capture the network structures in the true underlying generating mechanism (assumed to be HERGM), to the extent that the membership estimator is consistent? \\ \cite{snijders1997block} proposed a property called \textit{the asymptotically correct distinction of vertex colors}, which means that the probability of correctly identifying the membership (color) for all nodes tends to $1$ as $n$ goes to $\infty$.
The implication of this property is that once we can find a function $F(Y)$ such that \begin{equation} \label{asym_color} P(M = F(y) | \theta) \rightarrow 1 \qquad \text{for all } \theta \text{ as } n \rightarrow \infty \end{equation} then any statistical test or estimator $T(Y,F(Y))$ has asymptotically the same properties as $T(Y, M)$: \begin{equation} \lim_{n \rightarrow \infty} P(T(Y,F(Y))=T(Y, M) | \theta) = 1 \end{equation} Note that $T(Y, M)$ is for the case when the membership $M$ is observed, whereas $T(Y,F(Y))$ is based on the network $Y$ only. In our situation, the probability $P$ is under the HERGM and the function $F$ is obtained through the LSM. \section{Applications} \label{sec_examples} In this section, we give two examples of how to apply our working model strategy. From the basic ideas of ERGM and LSM, we can see that although they impose very different assumptions, the network structures they target, e.g., homophily, degree heterogeneity, and transitivity, can be the same. Since both of them are a class of models rather than a single model, the key property of an appropriate working LSM is that it targets network structures as close as possible to those of the hypothesized true ERGM.\\ \subsection{A Transitivity Example} We first consider one important example where the only dependencies are within-cluster transitivities. Without loss of generality, we assume the between-cluster densities are all equal, since the likelihood governed by those nuisance parameters is completely factored out, and the estimates are trivial. The probability mass function is as follows: \begin{eqnarray} P(Y=y | M=m) &=& p^{y_B}(1-p)^{n_B-y_B} \nonumber\\ & &\prod_{k=1}^{K}\text{exp}\big( \theta_1^{(k)} \text{Edges}(y_{k,k}) + \theta_2^{(k)} \text{GWDSP}(y_{k,k}) \nonumber\\ & & + \theta_3^{(k)} \text{GWESP}(y_{k,k}) - \psi(\theta^{(k)})\big) \end{eqnarray} where the number of between-cluster edges $y_B = \sum\limits_{i \in A_k, j \in A_l}^{k \ne l} y_{i,j}$ follows a Binomial distribution with the total number of possible ties $n_B = \sum\limits_{k \ne l} n_k n_l$ and probability $p$. Each cluster has two statistics, Geometrically Weighted Dyadwise Shared Partner (GWDSP) and Geometrically Weighted Edgewise Shared Partner (GWESP) (see \cite{snijders2006new} for details), to represent the transitivity. Since there is no homophily or degree heterogeneity, we can also omit the covariates and node-specific random effects in LSM \citep{krivitsky2009representing} and simply have the probability conditional on the distance between latent positions only: \begin{equation} \text{logit}\{P(Y_{i,j} = 1 | z_i, z_j, \beta)\} = \beta_0 - \beta_1|z_i - z_j| \end{equation} As long as the defined distance satisfies the triangle inequality, it captures the transitivity in the sense that if $i$ and $j$ are both close to $k$, then $i$ and $j$ are also close to each other. Intuitively, if the true model has transitivity as its only dependency structure, then the working model should be able to recover the membership. \subsubsection{Stage 1: clustering} First we evaluate the performance of the working model, in terms of the mis-clustering rate, as a function of sample size and transitivity strength. From Figure \ref{fig_misrate}, we can see that the mis-clustering rates drop as the sample size increases, and the stronger the transitivity, the faster the rate hits zero. A sketch of the rate computation is given below.
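Since cluster labels are only identified up to permutation, the mis-clustering rate is computed after the best relabeling of the estimated memberships; a short sketch (Python; illustrative):
\begin{verbatim}
from itertools import permutations
import numpy as np

def misclustering_rate(m_true, m_hat, K):
    """Fraction of misclassified nodes, minimized over relabelings
    of the estimated clusters (labels are identified only up to
    permutation)."""
    m_true, m_hat = np.asarray(m_true), np.asarray(m_hat)
    best = len(m_true)
    for perm in permutations(range(K)):
        relabeled = np.array([perm[k] for k in m_hat])
        best = min(best, int(np.sum(relabeled != m_true)))
    return best / len(m_true)

print(misclustering_rate([0, 0, 1, 1, 2, 2],
                         [1, 1, 0, 0, 2, 2], K=3))  # -> 0.0
\end{verbatim}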
\begin{figure*} \includegraphics[width=\textwidth]{asym_recover} \caption{The mis-clustering rates drop as the number of nodes per cluster increases, with different speeds at different transitivity strengths.} \label{fig_misrate} \end{figure*} \subsubsection{Stage 2: fine tuning} One question a practitioner may ask is: if my working model is good enough, why should I bother to fit cluster-specific ERGMs? The answer is twofold. One reason is estimation / hypothesis testing; the other is overall goodness of fit. Figure \ref{fig_gof} shows that a second fine-tuning step may greatly improve the model goodness of fit, even when the working model did a perfect job on clustering. \begin{figure*} \centering \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{gof_lsm} \end{minipage} \hfill \begin{minipage}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{gof_clust_ergm} \end{minipage} \caption{The left plot shows the model goodness of fit of a Latent Space Model; the right one is for cluster-specific ERGMs using the membership estimates from the left model. Note that although the parameter estimates: (-0.87, 0.28, 0.20), (-5.00, 1.39, 0.58), (-3.84, 0.66, 1.38) are not close to the truth: (-2, 0.5, 0.5), (-2, 0.5, 0.5), (-2, 0.5, 0.5), the model fit is still very good.} \label{fig_gof} \end{figure*} \subsubsection{Sensitivity to mis-clustering} \subsection{A Degree Distribution Example} Another major type of network dependence we would like to use as an example is the degree distribution. Since there is a specific spectral clustering method designed for the so-called Degree Corrected Stochastic Blockmodel (DC-SBM) \citep{jin2015score}, we evaluate the mis-clustering rate of that method on a degree distribution ERGM. \section{Discussion and conclusion} \label{sec_disc} In this paper, we analyze the complementary strengths and limitations of ERGM and LSM, both in model specification and in the interpretation of parameters. We start from the computational non-scalability of the Bayesian inference approach for the Hierarchical ERGM and propose a two-stage procedure as a feasible way to do data analysis. We intuitively formulate this procedure, that is, to find clusters using Latent Space Models (LSM) first and then to fit cluster-specific ERGMs conditioned on the first-stage result. The key idea is to decouple the estimates of the membership $M$ and the ERGM parameters $\theta_k$, so we can provide a feasible way to improve goodness of fit, rather than to achieve good asymptotic properties.\\ When modeling social networks or other types of relational data, valid statistical inference is especially challenging if only one single network is observed. This network can be viewed as a snapshot of the accumulated effects of possibly more than one relation-forming process. So, our future direction along this line is to propose a new class of dynamic HERGM for longitudinal network data. \bibliographystyle{abbrv}
\section{Introduction} As the oldest readily identifiable stellar systems in the universe, globular clusters (GCs) are important tracers of the formation and early evolution of galaxies, the Milky Way (MW) included. Noting the apparent lack of a metallicity gradient among remote Galactic GCs, Searle \& Zinn (1978) proposed an accretion origin for the Galactic halo extending over a period of several Gyr. Evidence for this picture of hierarchical halo growth has come from the existence of a second-parameter problem among outer halo GCs (e.g., Catelan 2000; Dotter et al. 2010), which points to a significant age range within this population. The remote GC Pal~4 is one such example of a second-parameter cluster. At a Galactocentric distance of R$_{\rm GC}$ = 109 kpc (Stetson et al. 1999), it is one of only a few halo GCs at distances of $\sim$100 kpc or beyond. With a half-light radius of $r_h \approx 23$ pc, it is also one of the most extended Galactic GCs currently known, being significantly larger than ``typical" GCs in the Milky Way or external galaxies (which have $\langle{r_h}\rangle \approx 3$~pc; see, e.g., Jord\'an et~al. 2005). In fact, with a total luminosity of just $L_V \sim 2.1\times10^4~L_{V,{\odot}}$, it is similar in several respects to some of the more compact ``ultra-faint'' dwarf spheroidal (dSph) galaxies (Simon \& Geha 2007) that are being discovered in the outer halo with increasing regularity (e.g., Belokurov et~al. 2007). Since almost nothing is known about their proper motions and internal dynamics, the relationship of faint, extended GCs like Pal~4 to such low-luminosity galaxies is currently an open question. While there is a general consensus that Pal~4 is likely to be $\approx$ 1--2 Gyr younger than inner halo GCs of the same metallicity, such as M5, age estimates in the literature do not fully agree (e.g., Stetson et al. 1999 vs. Vandenberg 2000). In particular, Stetson et al. (1999) note that an age difference with respect to the inner halo GCs could be explained if ``either [Fe/H] or [$\alpha$/Fe] for the outer halo clusters is significantly lower than ... assumed''. Conversely, Cohen \& Mel\'endez (2005a) found that the outer halo GC NGC~7492 (R$_{\rm GC}$ = 25 kpc) shows chemical abundance patterns that are very similar to inner halo GCs like M3 or M13. This similarity in the chemical enrichment now appears to extend into the outermost halo for at least some GCs: it has recently been shown that the abundance ratios in the remote (R$_{\rm GC}$ = 92 kpc) cluster Pal~3 (Koch et al. 2009; hereafter Paper~I) bear a close resemblance to those of inner halo GCs. The chemical abundance patterns of remote halo GCs like Pal~3 and Pal~4 are important clues to the formation of the Milky Way, as they allow for direct comparisons to those of the dSph galaxies, which are widely believed to have been accreted into the halo (e.g., Klypin et~al. 1999; Bullock et~al. 2001; Font et~al. 2006; Li et~al. 2009). In this spirit, Mackey \& Gilmore (2004) conclude that all young halo clusters (i.e., $\sim$30 clusters) did not originate in the MW but were donated by at least seven mergers with ``cluster-bearing'' dSph-type galaxies. There are, however, no high-dispersion abundance data yet published for this remote cluster. Previous low-resolution spectroscopic and photometric studies have established Pal~4 as a mildly metal-poor system, with [Fe/H] estimates ranging from $-1.28$ to $-1.7$ dex (Armandroff et al. 1992; Stetson et al. 1999; Kraft \& Ivans 2003).
In this paper, one of a series, we aim to extend the chemical element information for GCs in the Galactic halo out to the largest possible distances, and to carry out a first analysis of Pal~4's chemical abundance patterns. As we have shown in Paper~I, which presented a similar analysis for Pal~3, it is possible to derive reliable abundance measurements for remote Galactic GCs by performing an integrated analysis of stacked, low signal-to-noise (S/N) --- but high-resolution --- spectra (see also McWilliam \& Bernstein 2008). Note, however, that this method presupposes that there is no significant abundance scatter present along the RGB and that all stars have the same mean abundances for all chemical elements. We therefore have no means of distinguishing Pal~4 as a genuine GC with no internal abundance spread from a dSph that may have a very broad abundance range (e.g., Shetrone et~al. 2001, 2003; Koch 2009), nor of discerning any intrinsic abundance variations (e.g., Lee et al. 2009). We will return to this question in Section~5.2. Nevertheless, such studies can provide an important first step towards an overall characterization of the chemical element distribution, and enrichment history, of the outer halo. \section{Data} The spectra for Pal~4 were obtained during the same three nights in February and March 1999 as the spectra used in our analysis of Pal~3 (Paper I). During these observing runs, a total of 24 stars in Pal~4 were observed using the HIRES echelle spectrograph (Vogt et al. 1994) on the Keck I telescope. Our spectroscopic targets were selected from a colour-magnitude diagram (CMD) constructed from $BV$ imaging obtained with the Low-Resolution Imaging Spectrometer (LRIS; Oke et al. 1995) on the night of 13/14 January 1999. A CMD reaching roughly one magnitude below the main-sequence turnoff was constructed using a series of short and long exposures taken in both bandpasses (i.e., five exposures between 60s and 120s in $V$, and four exposures between 60s and 240s in $B$). \begin{figure} \centering \includegraphics[width=1\hsize]{f1.eps} \caption{CMD of Pal~4 based on photometry from Saha et al. (2005). The HIRES targets are highlighted as red symbols, with open stars denoting AGB candidates. Also shown is a scaled-solar Dartmouth isochrone (Dotter et al. 2008) with an age of 10 Gyr and a [Fe/H] of $-$1.4 dex, corrected for E(B$-$V)=0.01 and a distance modulus of 20.22 mag (Stetson et al. 1999).} \end{figure} Spectroscopic targets were identified from this CMD by selecting probable red giant branch (RGB) stars with $V \lesssim 20.25$. These stars all have cross-identifications with the more recent work of Saha et al. (2005). Fig.~1 shows the location of the target stars in the CMD from this latter work. As in Paper~I, we used a spectrograph setting that covers the wavelength range 4450--6880 \AA\ with spectral gaps between adjacent orders, a slit width of 1.15$\arcsec$ and a CCD binning of 2$\times$2 in the spatial and spectral directions. This gives a spectral resolution of $R\approx34000$. Each programme star was observed for a total of 300--2400~s, depending on its apparent magnitude (see Table~1).
\begin{table*} \caption{Observation log and properties of the target stars} \centering \begin{tabular}{cccccccccc} \hline\hline & & Exposure time & $\alpha$ & $\delta$ & V & B$-$V & V$-$I & V$-$K & S/N \\ \raisebox{1.5ex}[-1.5ex]{ID$^a$} & \raisebox{1.5ex}[-1.5ex]{Date} & [s] & (J2000.0) & (J2000.0) & [mag] & [mag] & [mag] & [mag] & [pixel$^{-1}$] \\ \hline Pal4-1 (S196) & Feb 11 1999, Mar 10 1999 & 3$\times$300 & 11 29 17.13 & +28 57 59.9 & 17.81 & 1.46 & 1.52 & 3.43 & 8 \\ Pal4-2 (S169) & Feb 11 1999 & 2$\times$300 & 11 29 17.02 & +28 57 51.5 & 17.93 & 1.46 & 1.45 & 3.57 & 8 \\ Pal4-3 (S277) & Feb 11 1999 & 1$\times$300 & 11 29 13.24 & +28 58 13.6 & 17.82 & 1.66 & 1.53 & 3.69 & 8 \\ Pal4-5 (S434) & Feb 11 1999, Feb 12 1999 & 2$\times$300 & 11 29 16.67 & +28 58 42.1 & 17.95 & 1.44 & 1.47 & 2.81 & 7 \\ Pal4-6 (S158) & Feb 11 1999, Mar 10 1999 & 3$\times$420 & 11 29 15.50 & +28 57 47.0 & 18.22 & 1.30 & 1.33 & 3.06 & 7 \\ Pal4-7 (S381) & Feb 11 1999, Mar 10 1999 & 2$\times$600 & 11 29 14.83 & +28 58 32.2 & 18.55 & 1.19 & 1.24 & 2.79 & 7 \\ Pal4-8 (S364) & Feb 11 1999 & 1$\times$600 & 11 29 12.66 & +28 58 29.6 & 18.65 & 1.17 & 1.23 & 2.85 & 6 \\ Pal4-9 (S534) & Feb 11 1999 & 1$\times$750 & 11 29 13.32 & +28 59 07.6 & 19.00 & 1.08 & 1.18 & 3.62 & 6 \\ Pal4-10 (S325) & Feb 11 1999 & 2$\times$900 & 11 29 15.71 & +28 57 23.4 & 19.09 & 1.05 & 1.13 & \dots & 6 \\ Pal4-11 (S430)$^b$ & Feb 11 1999 & 1$\times$1200 & 11 29 13.82 & +28 58 40.9 & 19.35 & 0.89 & 1.04 & \dots & 5 \\ Pal4-12 (S328)$^b$ & Feb 11 1999, Mar 10 1999 & 1$\times$1200,1$\times$2400 & 11 29 17.63 & +28 58 25.1 & 19.35 & 0.90 & 1.03 & \dots & 8 \\ Pal4-15 (S307)$^b$ & Feb 11 1999 & 1$\times$1200 & 11 29 16.45 & +28 58 18.4 & 19.38 & 0.88 & 1.00 & \dots & 5 \\ Pal4-16 (S306)$^b$ & Feb 11 1999 & 1$\times$1200 & 11 29 17.77 & +28 58 19.5 & 19.43 & 0.88 & 1.02 & \dots & 5 \\ Pal4-17 (S472)$^b$ & Feb 12 1999 & 1$\times$1080 & 11 29 15.95 & +28 58 47.8 & 19.45 & 0.85 & 0.99 & \dots & 5 \\ Pal4-18 (S186) & Feb 11 1999 & 1$\times$1200 & 11 29 15.37 & +28 57 55.8 & 19.48 & 0.98 & 1.06 & \dots & 5 \\ Pal4-19 (S283) & Feb 12 1999 & 1$\times$1080 & 11 29 15.65 & +28 57 14.7 & 19.53 & 0.95 & 1.08 & \dots & 4 \\ Pal4-21 (S457) & Feb 11 1999 & 1$\times$1200 & 11 29 14.03 & +28 58 45.7 & 19.64 & 0.93 & 1.04 & \dots & 4 \\ Pal4-23 (S235) & Feb 12 1999 & 1$\times$1500 & 11 29 16.93 & +28 58 06.8 & 19.70 & 0.93 & 1.05 & \dots & 5 \\ Pal4-24 (S154) & Feb 11 1999 & 1$\times$1500 & 11 29 17.24 & +28 57 46.7 & 19.74 & 0.92 & 1.03 & \dots & 5 \\ Pal4-25 (S476) & Feb 12 1999 & 1$\times$1500 & 11 29 15.95 & +28 58 47.8 & 19.77 & 0.91 & 1.02 & \dots & 5 \\ Pal4-26 (S265) & Feb 12 1999 & 1$\times$1500 & 11 29 17.32 & +28 58 12.8 & 19.83 & 0.91 & 1.02 & \dots & 5 \\ Pal4-28 (S426) & Feb 12 1999 & 1$\times$1500 & 11 29 18.50 & +28 58 41.0 & 19.87 & 0.91 & 1.02 & \dots & 4 \\ Pal4-30 (S276) & Mar 10 1999 & 1$\times$1800 & 11 29 08.80 & +28 58 13.1 & 19.89 & 0.90 & 1.02 & \dots & 5 \\ Pal4-31 (S315) & Feb 12 1999 & 1$\times$1500 & 11 29 16.82 & +28 58 21.5 & 19.89 & 0.93 & 1.03 & \dots & 5 \\ \hline \end{tabular} \\$^a$IDs preceded by ``S'' are cross-identifications from Table~7 of Saha et al. (2005). \\$^b$Likely AGB stars. \end{table*} Table~1 also lists the photometric properties of the target stars, where the $BVI$ photometry is taken from Saha et al. (2005) and the $K$-band magnitudes are from 2MASS (Skrutskie et al. 2006). The spectroscopic data were reduced using the MAKEE\footnote{MAKEE was developed by T. A. 
Barlow specifically for the reduction of Keck HIRES data. It is freely available on the World Wide Web at the Keck Observatory home page, {\tt http://www2.keck.hawaii.edu/inst/hires/makeewww}} data reduction package. Because our spectra were obtained within a program to study the internal cluster dynamics (C\^ot\'e et al. 2002; Jordi et~al. 2009; Baumgardt et~al. 2009), the short exposure times --- which were chosen adaptively based on target magnitude --- led to low signal-to-noise (S/N) ratios. Hence, the spectra are adequate for the measurement of accurate radial velocities, but not for abundance measurements of individual stars. For instance, we typically reach S/N ratios of 4--8 per pixel in the order containing H$\alpha$. Radial velocities of the individual targets were measured from a cross-correlation against a synthetic red giant spectrum with stellar parameters representative of the Pal~4 target stars. The template covered the entire HIRES wavelength range, but excluded the spectral gaps. All our targets are consistent with the cluster's mean radial velocity of $\langle v_r \rangle = $ 72.9$\pm$0.3 km\,s$^{-1}$ (mean error) within the measurement errors (see also Olszewski et~al. 1986). A detailed account of the dynamics of Pal~4 will be given in a separate paper. As in Paper~I, we stack the individual spectra to increase their S/N ratio and to be able to perform an {\em integrated} abundance analysis (see also McWilliam \& Bernstein 2008). In practice, the spectra were Doppler-shifted and average-combined after weighting by their individual S/N ratios, to yield a higher-S/N spectrum which we can use to place constraints on the chemical element abundances. As the CMD (Fig.~1) shows, five of the stars appear to lie on the AGB (open symbols). We therefore constructed three different co-added spectra: i.e., using only the RGB stars, only the AGB stars, and the entire sample. The overall S/N ratios of the co-added spectra are (12, 25, 28) for the AGB, the RGB, and the entire sample, respectively. A sample region of those spectra is shown in Fig.~2. \begin{figure} \centering \includegraphics[width=1\hsize]{f2.eps} \caption{A portion of the co-added spectra in one order with relatively high S/N ratio. A few absorption lines are designated. Also indicated is the subsample of stars that was included in the co-additions.} \end{figure} It is obvious from this figure that the S/N ratio of the pure AGB spectrum is still too low for meaningful abundance measurements. Moreover, adding the AGB spectra to the higher-S/N spectra of the RGB stars may introduce additional noise into some features rather than improve the spectral quality. We therefore choose to focus our abundance analysis on the co-added RGB sample only. \section{Abundance analysis} As in our previous works (e.g., Paper~I), we used model atmospheres interpolated from the updated grid of the Kurucz\footnote{\tt http://cfaku5.cfa.harvard.edu/grids.html} one-dimensional 72-layer, plane-parallel, line-blanketed models without convective overshoot, assuming local thermodynamic equilibrium (LTE) for all species. For this GC study, we used models that incorporate Castelli \& Kurucz's (2003)\footnote{{\tt http://wwwuser.oat.ts.astro.it/castelli}} $\alpha$-enhanced opacity distribution functions, AODFNEW.
This choice seems justified, since the majority of the metal-poor Galactic halo GCs, as well as the outer halo object Pal~3 (Paper~I), are enhanced in the $\alpha$-elements by $\approx +0.4$ dex, so it seems plausible that Pal~4 will also follow this trend. Throughout this work, we used the 2002 version of the stellar abundance code MOOG (Sneden 1973) for all abundance calculations. We place our measurements on the Solar scale of Asplund et al. (2009). \subsection{Linelist} We derive chemical element abundances through standard equivalent width (EW) measurements that closely follow the procedures outlined in Paper~I. The main difference is that we are {\em only} dealing with an analysis of the co-added EWs in the present study, which thus requires an analogous treatment of the synthetic EWs. The linelist for the present study is the same as that used in Paper~I, and we refer the reader to that work for full details on the origin of the line data. In practice, we measured EWs in the co-added spectra (\S~2) by fitting a Gaussian profile to the absorption lines using the IRAF task {\em splot}; those values are recorded in Table~2. \onllongtab{2}{ \begin{longtable}{cccrr|cccrr} \caption{Linelist. ``HFS'' indicates that hyperfine splitting was taken into account for these transitions.} \\ \hline \hline & $\lambda$ & E.P. & & EW [m\AA] & & $\lambda$ & E.P. & & EW [m\AA] \\ \raisebox{1.5ex}[-1.5ex]{Element} & [\AA] & [eV] & \raisebox{1.5ex}[-1.5ex]{log\,$gf$} & (RGB) & \raisebox{1.5ex}[-1.5ex]{Element} & [\AA] & [eV] & \raisebox{1.5ex}[-1.5ex]{log\,$gf$} & (RGB) \\ \hline \endfirsthead \caption{Continued.} \\ \hline & $\lambda$ & E.P. & & EW [m\AA] & & $\lambda$ & E.P. & & EW [m\AA] \\ \raisebox{1.5ex}[-1.5ex]{Element} & [\AA] & [eV] & \raisebox{1.5ex}[-1.5ex]{log\,$gf$} & (RGB) & \raisebox{1.5ex}[-1.5ex]{Element} & [\AA] & [eV] & \raisebox{1.5ex}[-1.5ex]{log\,$gf$} & (RGB) \\ \hline \endhead \hline \endfoot \hline \endlastfoot Mg I & 5528.42 & 4.35 & $-$0.357 & 177 & Cr I & 5300.75 & 0.98 & $-$2.120 & 113 \\ Mg I & 5711.09 & 4.33 & $-$1.728 & 108 & Cr I & 5329.14 & 2.91 & $-$0.064 & 79 \\ Al I & 6696.03 & 3.14 & $-$1.347 & 36 & Cr I & 5345.81 & 1.00 & $-$0.980 & 165 \\ Si I & 5684.48 & 4.95 & $-$1.650 & 33 & Cr I & 5348.33 & 1.00 & $-$1.290 & 144 \\ Si I & 5708.41 & 4.95 & $-$1.470 & 88 & Cr I & 6330.09 & 0.94 & $-$2.914 & 50 \\ Si I & 5948.55 & 5.08 & $-$1.230 & 64 & Mn I$^{\rm HFS}$ & 5394.63 & 0.00 & $-$3.503 & 166 \\ Si I & 6142.48 & 5.62 & $-$0.920 & 22 & Mn I$^{\rm HFS}$ & 5432.51 & 0.00 & $-$3.800 & 136 \\ Si I & 6155.13 & 5.61 & $-$0.750 & 64 & Mn I$^{\rm HFS}$ & 6013.48 & 3.07 & $-$0.251 & 102 \\ Ca I & 5261.71 & 2.52 & $-$0.580 & 107 & Mn I$^{\rm HFS}$ & 6016.62 & 3.08 & $-$0.216 & 111 \\ Ca I & 5590.13 & 2.52 & $-$0.570 & 114 & Mn I$^{\rm HFS}$ & 6021.75 & 3.08 & 0.034 & 93 \\ Ca I & 5601.29 & 2.53 & $-$0.520 & 116 & Fe I & 4903.32 & 2.88 & $-$0.926 & 171 \\ Ca I & 5857.46 & 2.93 & 0.230 & 157 & Fe I & 4938.82 & 2.88 & $-$1.077 & 121 \\ Ca I & 6166.44 & 2.52 & $-$1.140 & 103 & Fe I & 4939.69 & 0.86 & $-$3.240 & 142 \\ Ca I & 6169.04 & 2.52 & $-$0.800 & 126 & Fe I & 5001.87 & 3.88 & 0.050 & 116 \\ Ca I & 6169.56 & 2.52 & $-$0.480 & 143 & Fe I & 5006.12 & 2.82 & $-$0.662 & 173 \\ Ca I & 6455.60 & 2.52 & $-$1.290 & 95 & Fe I & 5028.13 & 3.57 & $-$1.122 & 84 \\ Ca I & 6471.67 & 2.52 & $-$0.875 & 122 & Fe I & 5044.21 & 2.85 & $-$2.059 & 149 \\ Ca I & 6499.65 & 2.52 & $-$0.820 & 115 & Fe I & 5048.44 & 3.94 & $-$1.029 & 118 \\ Ca I & 6717.69 & 2.71 & $-$0.610 & 136 & Fe I & 5060.07 & 0.00 &
$-$5.460 & 147 \\ Sc II & 5031.02 & 1.36 & $-$0.260 & 95 & Fe I & 5068.77 & 2.94 & $-$1.041 & 159 \\ Sc II & 5239.81 & 1.46 & $-$0.770 & 85 & Fe I & 5131.48 & 2.22 & $-$2.515 & 102 \\ Sc II & 5669.04 & 1.50 & $-$1.120 & 78 & Fe I & 5145.10 & 2.20 & $-$2.876 & 106 \\ Sc II & 5684.19 & 1.51 & $-$1.050 & 77 & Fe I & 5159.05 & 4.28 & $-$0.820 & 79 \\ Sc II & 6245.62 & 1.51 & $-$0.980 & 88 & Fe I & 5162.28 & 4.18 & 0.020 & 157 \\ Sc II & 6604.60 & 1.36 & $-$1.480 & 36 & Fe I & 5166.28 & 0.00 & $-$4.123 & 170 \\ Ti I & 4997.10 & 0.00 & $-$1.722 & 132 & Fe I & 5192.35 & 3.00 & $-$0.421 & 163 \\ Ti I & 4999.51 & 0.83 & 0.140 & 180 & Fe I & 5195.48 & 4.22 & $-$0.002 & 109 \\ Ti I & 5001.01 & 2.00 & $-$0.052 & 78 & Fe I & 5196.08 & 4.26 & $-$0.451 & 57 \\ Ti I & 5009.65 & 0.02 & $-$1.900 & 105 & Fe I & 5215.19 & 3.27 & $-$0.871 & 133 \\ Ti I & 5039.96 & 0.02 & $-$1.170 & 134 & Fe I & 5216.28 & 1.61 & $-$2.150 & 175 \\ Ti I & 5064.65 & 0.05 & $-$0.985 & 152 & Fe I & 5217.39 & 3.21 & $-$1.070 & 130 \\ Ti I & 5147.48 & 0.00 & $-$1.876 & 149 & Fe I & 5225.52 & 0.11 & $-$4.789 & 175 \\ Ti I & 5152.19 & 0.02 & $-$1.912 & 134 & Fe I & 5242.49 & 3.62 & $-$0.967 & 110 \\ Ti I & 5173.75 & 0.00 & $-$1.120 & 178 & Fe I & 5247.05 & 0.09 & $-$4.946 & 175 \\ Ti I & 5219.70 & 0.02 & $-$1.980 & 137 & Fe I & 5250.22 & 0.12 & $-$4.938 & 153 \\ Ti I & 5866.46 & 1.07 & $-$0.840 & 133 & Fe I & 5266.56 & 3.00 & $-$0.490 & 149 \\ Ti I & 5922.12 & 1.05 & $-$1.470 & 91 & Fe I & 5281.80 & 3.04 & $-$0.833 & 178 \\ Ti I & 5965.83 & 1.88 & $-$0.410 & 120 & Fe I & 5302.31 & 3.28 & $-$0.720 & 160 \\ Ti I & 6064.63 & 1.05 & $-$1.970 & 65 & Fe I & 5307.37 & 1.61 & $-$2.987 & 141 \\ Ti I & 6126.22 & 1.07 & $-$1.420 & 90 & Fe I & 5339.94 & 3.27 & $-$0.720 & 136 \\ Ti I & 6258.10 & 1.44 & $-$0.355 & 105 & Fe I & 5369.97 & 4.37 & 0.536 & 141 \\ Ti I & 6556.08 & 1.46 & $-$0.943 & 57 & Fe I & 5379.57 & 3.68 & $-$1.514 & 88 \\ Ti I & 6743.13 & 0.90 & $-$1.630 & 90 & Fe I & 5389.49 & 4.42 & $-$0.410 & 108 \\ Ti II & 5005.16 & 1.57 & $-$2.550 & 46 & Fe I & 5393.18 & 3.24 & $-$0.715 & 147 \\ Ti II & 5013.68 & 1.58 & $-$1.935 & 119 & Fe I & 5424.08 & 4.32 & 0.520 & 133 \\ Ti II & 5185.91 & 1.89 & $-$1.350 & 115 & Fe I & 5569.63 & 3.42 & $-$0.500 & 138 \\ Ti II & 5226.55 & 1.57 & $-$1.300 & 173 & Fe I & 5618.64 & 4.21 & $-$1.275 & 68 \\ Ti II & 5336.78 & 1.58 & $-$1.700 & 150 & Fe I & 5753.12 & 4.26 & $-$0.688 & 83 \\ Ti II & 5396.23 & 1.58 & $-$2.925 & 36 & Fe I & 5763.00 & 4.21 & $-$0.450 & 92 \\ Ti II & 5418.77 & 1.58 & $-$1.999 & 69 & Fe I & 5862.36 & 4.55 & $-$0.058 & 108 \\ Ti II & 6606.95 & 2.06 & $-$2.790 & 50 & Fe I & 5909.98 & 3.21 & $-$2.587 & 100 \\ V I & 6039.72 & 1.06 & $-$0.651 & 78 & Fe I & 5916.25 & 2.45 & $-$2.834 & 124 \\ V I & 6081.44 & 1.05 & $-$0.578 & 52 & Fe I & 5934.65 & 3.93 & $-$1.170 & 77 \\ V I & 6135.36 & 1.05 & $-$0.746 & 68 & Fe I & 5956.71 & 0.86 & $-$4.605 & 146 \\ V I & 6243.10 & 0.30 & $-$0.978 & 128 & Fe I & 5976.78 & 3.94 & $-$1.310 & 51 \\ V I & 6251.83 & 0.29 & $-$1.342 & 98 & Fe I & 6024.06 & 4.55 & $-$0.120 & 106 \\ V I & 6274.66 & 0.27 & $-$1.670 & 66 & Fe I & 6027.06 & 4.08 & $-$1.089 & 66 \\ Cr I & 5247.57 & 0.96 & $-$1.640 & 146 & Fe I & 6056.01 & 4.73 & $-$0.460 & 84 \\ Cr I & 5296.70 & 0.98 & $-$1.400 & 146 & Fe I & 6065.48 & 2.61 & $-$1.530 & 161 \\ Fe I & 6078.49 & 4.79 & $-$0.424 & 81 & Fe II & 6432.68 & 2.89 & $-$3.708 & 23 \\ Fe I & 6137.00 & 2.20 & $-$2.950 & 146 & Fe II & 6516.08 & 2.89 & $-$3.380 & 43 \\ Fe I & 6173.34 & 2.22 & $-$2.880 & 155 & Co I$^{\rm HFS}$ & 5301.01 & 1.71 & $-$2.000 
& 72 \\ Fe I & 6180.21 & 2.73 & $-$2.586 & 100 & Co I$^{\rm HFS}$ & 5483.31 & 1.71 & $-$1.488 & 111 \\ Fe I & 6213.44 & 2.22 & $-$2.481 & 142 & Co I$^{\rm HFS}$ & 6814.89 & 1.96 & $-$1.900 & 86 \\ Fe I & 6219.29 & 2.20 & $-$2.448 & 137 & Ni I & 5035.36 & 3.63 & 0.290 & 96 \\ Fe I & 6229.23 & 2.83 & $-$2.805 & 68 & Ni I & 5080.53 & 3.65 & 0.134 & 75 \\ Fe I & 6232.64 & 3.65 & $-$1.223 & 115 & Ni I & 5084.09 & 3.68 & 0.034 & 111 \\ Fe I & 6240.65 & 2.22 & $-$3.173 & 91 & Ni I & 5146.48 & 3.71 & $-$0.060 & 70 \\ Fe I & 6246.32 & 3.60 & $-$0.733 & 98 & Ni I & 5578.71 & 1.68 & $-$2.641 & 130 \\ Fe I & 6252.56 & 2.40 & $-$1.687 & 141 & Ni I & 5587.85 & 1.94 & $-$2.142 & 96 \\ Fe I & 6254.25 & 2.28 & $-$2.443 & 145 & Ni I & 5592.26 & 1.95 & $-$2.588 & 80 \\ Fe I & 6265.14 & 2.18 & $-$2.550 & 135 & Ni I & 6128.97 & 1.68 & $-$3.390 & 73 \\ Fe I & 6270.23 & 2.86 & $-$2.000 & 96 & Ni I & 6176.82 & 4.09 & $-$0.430 & 52 \\ Fe I & 6271.28 & 3.33 & $-$2.703 & 53 & Ni I & 6177.25 & 1.83 & $-$3.600 & 48 \\ Fe I & 6322.69 & 2.59 & $-$2.426 & 139 & Ni I & 6327.59 & 1.68 & $-$3.090 & 97 \\ Fe I & 6335.34 & 2.20 & $-$2.177 & 141 & Ni I & 6378.26 & 4.15 & $-$0.820 & 40 \\ Fe I & 6336.83 & 3.69 & $-$0.856 & 136 & Ni I & 6482.81 & 1.94 & $-$2.630 & 84 \\ Fe I & 6344.15 & 2.43 & $-$2.923 & 119 & Ni I & 6586.32 & 1.95 & $-$2.812 & 91 \\ Fe I & 6355.03 & 2.84 & $-$2.350 & 84 & Ni I & 6767.78 & 1.83 & $-$2.170 & 121 \\ Fe I & 6358.69 & 0.86 & $-$4.468 & 147 & Cu I$^{\rm HFS}$ & 5105.51 & 1.39 & $-$1.505 & 90 \\ Fe I & 6400.00 & 3.60 & $-$0.520 & 128 & Cu I$^{\rm HFS}$ & 5782.06 & 1.64 & $-$1.720 & 79 \\ Fe I & 6400.31 & 0.91 & $-$3.897 & 168 & Y II & 4883.68 & 1.08 & 0.071 & 110 \\ Fe I & 6475.63 & 2.56 & $-$2.941 & 103 & Y II & 4900.11 & 1.03 & $-$0.090 & 115 \\ Fe I & 6481.88 & 2.28 & $-$2.960 & 130 & Y II & 5087.42 & 1.08 & $-$0.156 & 84 \\ Fe I & 6498.95 & 0.96 & $-$4.687 & 144 & Y II & 5200.41 & 0.99 & $-$0.570 & 98 \\ Fe I & 6518.37 & 2.83 & $-$2.450 & 105 & Y II & 5509.90 & 0.99 & $-$1.015 & 93 \\ Fe I & 6574.22 & 0.99 & $-$5.004 & 115 & Zr II & 5112.28 & 1.66 & $-$0.590 & 42 \\ Fe I & 6581.21 & 1.48 & $-$4.680 & 57 & Ba II & 4554.03 & 0.00 & 0.170 & 274 \\ Fe I & 6609.12 & 2.56 & $-$2.692 & 128 & Ba II & 5853.00 & 0.60 & $-$1.010 & 130 \\ Fe I & 6739.52 & 1.56 & $-$4.794 & 59 & Ba II & 6141.73 & 0.70 & $-$0.077 & 212 \\ Fe I & 6750.15 & 2.42 & $-$2.608 & 140 & Ba II & 6496.91 & 0.60 & $-$0.380 & 201 \\ Fe II & 4923.93 & 2.89 & $-$1.307 & 166 & La II & 5114.56 & 0.23 & $-$1.060 & 55 \\ Fe II & 4993.35 & 2.81 & $-$3.485 & 37 & La II & 6390.46 & 0.32 & $-$1.400 & 45 \\ Fe II & 5197.58 & 3.23 & $-$2.233 & 88 & Ce II & 5274.23 & 1.04 & 0.150 & 33 \\ Fe II & 5234.63 & 3.22 & $-$2.220 & 94 & Nd II & 5249.59 & 0.98 & 0.217 & 54 \\ Fe II & 5425.26 & 3.20 & $-$3.372 & 22 & Dy II & 5169.69 & 0.10 & $-$1.660 & 12 \\ Fe II & 6247.56 & 3.89 & $-$2.329 & 24 & & & & & \\ \end{longtable} } Aided by the stellar atmospheres (described in detail in the next section), we computed theoretical EWs for the transitions in our line list using MOOG's {\em ewfind} driver and combined them into a mean value, $\langle{EW}\rangle$, using the same weighting scheme as for the observations, \begin{equation} \langle{EW}\rangle = \, \frac{ \sum_{i=1}^N\,w_i\,EW_i\, }{ \,\sum_{i=1}^N\,w_i}, \end{equation} where the weights $w_i$ are proportional to the S/N ratios as in the case of co-adding the observed spectra. 
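For illustration, this weighting scheme amounts to only a few lines of code; the following minimal Python sketch implements the weighted mean defined above, with placeholder numbers rather than our actual measurements:
\begin{verbatim}
# Minimal sketch of the weighted-mean definition above: the
# S/N-weighted mean of per-star EWs of one line.
# The numbers below are illustrative placeholders, not our data.
def weighted_mean_ew(ews, weights):
    """Combine per-star (theoretical) EWs; weights w_i prop. to S/N."""
    return sum(w * ew for w, ew in zip(weights, ews)) / sum(weights)

ews = [110.0, 120.0, 105.0]  # per-star EWs of one line [mA]
snr = [8.0, 7.0, 5.0]        # per-star S/N ratios, used as weights
print(weighted_mean_ew(ews, snr))  # -> 112.25
\end{verbatim}
The same routine is applied to the theoretical EWs as to the observed spectra, so that models and observations are averaged consistently.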
The abundance ratio of each element was then varied until the predicted $\langle{EW}\rangle$ matched the observed EW for each line, to yield the cluster's integrated chemical element ratio. Note that this method presupposes that there is no significant abundance scatter present along the RGB and that all stars have the same mean abundances for all chemical elements. For the following analysis, we restricted the linelist to the more reliable features with EW$<$180 m\AA. In a few cases, such as Al, Zr, Ce, and Dy, the stated abundance ratios are based on marginal detections of a single line, usually with widths of about 30--40 m\AA. Unfortunately, neither of the important elements O and Eu could be detected: while the stronger [O~I] 6300 \AA~and Eu~II 6645 \AA~lines fall in the gap between the HIRES CCDs, the weaker 6363 \AA~(O) and 6437 \AA~(Eu) lines are strongly affected by telluric blends and spectral noise, which renders them unusable for the present work. Likewise, the Na-D lines are too strong to be reliably used in our analysis, while the only other transition covered by our spectra, the Na~I 5688 \AA~line, is too strongly affected by the low S/N ratio around that feature. We accounted for the effects of hyperfine structure for the stronger lines of the odd-Z elements Mn~I, Co~I, and Cu~I by extracting the predicted EW from MOOG's {\em blends} driver and using atomic data for the splitting from McWilliam et al. (1995). The effect on all other elements (such as Ba~II or La~II) was found to be typically less than 0.03 dex and thus much smaller than the usual systematic errors (\S~4), so that we ignored hyperfine splitting for those elements. \subsection{Stellar parameters} We derived effective temperatures (T$_{\rm eff}$) for each star from its photometry, in particular from the B$-$V and V$-$I colors, using the data from Saha et al. (2005). This information was supplemented with 2MASS K-band photometry (Skrutskie et al. 2006) to obtain V$-$K estimates for the eight brightest stars (see Table~1). We assumed a reddening of E(B$-$V)=0.01 (Stetson et al. 1999) with the extinction law of Winkler (1997). In practice, the T$_{\rm eff}$-color calibrations of Ram\'{\i}rez \& Mel\'endez (2005) were applied for V$-$I and V$-$K, and the Alonso et al. (1999) transformations for B$-$V. All three scales agree well, with offsets of 6 and 20~K for B$-$V vs. V$-$I and V$-$K, and an $rms$ scatter of 50 and 80~K, respectively. For all these calibrations, we adopted the cluster mean metallicity of $-$1.43 dex on the Kraft \& Ivans (2003) scale. The resulting temperatures have a formal random error of 136~K on average, owing to color and calibration uncertainties. In practice, we adopt an error-weighted mean of all three color indicators as the final T$_{\rm eff}$ for the atmospheres. Fig.~3 shows the distribution of effective temperatures for our targets. \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f3.eps} \caption{Distribution of stellar parameters for the RGB and AGB stars (open and shaded histograms, respectively).} \end{figure} Surface gravities, log\,$g$, were derived from the photometry, an adopted distance modulus of 20.22 mag (Stetson et al. 1999), and the above temperature and metallicity estimates. A mass of 0.85 M$_{\odot}$ was adopted for the red giants, as indicated by a comparison with the Dartmouth isochrones (Dotter et al. 2008; Fig.~1). Errors on the input parameters (predominantly that on T$_{\rm eff}$) lead to a typical uncertainty in log\,$g$ of $\pm$0.16 dex.
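For the photometric gravities, a relation of the standard form $\log g = \log g_{\odot} + \log(M/M_{\odot}) + 4\log(T_{\rm eff}/T_{{\rm eff},\odot}) + 0.4\,(M_{\rm bol} - M_{{\rm bol},\odot})$ underlies such estimates. The following Python sketch assumes this form; the solar constants and the bolometric correction used below are illustrative assumptions, not the actual values entering our analysis:
\begin{verbatim}
import math

# Photometric log g sketch. The solar constants and the bolometric
# correction BC below are assumptions for illustration only.
LOGG_SUN, TEFF_SUN, MBOL_SUN = 4.44, 5777.0, 4.74

def photometric_logg(v_mag, dist_mod, bc, teff, mass=0.85):
    mbol = v_mag - dist_mod + bc   # absolute bolometric magnitude
    return (LOGG_SUN + math.log10(mass)
            + 4.0 * math.log10(teff / TEFF_SUN)
            + 0.4 * (mbol - MBOL_SUN))

# A bright RGB target: V = 17.8, (m-M) = 20.22, assumed BC = -0.6:
print(round(photometric_logg(17.8, 20.22, -0.6, 4300.0), 2))  # ~0.75
\end{verbatim}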
As in Paper~I, we derived microturbulent velocities, $\xi$, from a linear fit to the temperatures of halo stars that have similar parameters to ours (Cayrel et~al. 2004). The scatter around the best-fit relation implies a typical error of $\sigma(\xi)$ $\approx$ 0.25 km\,s$^{-1}$. Since we have no prior knowledge of the individual stellar metallicities, we adopt the value of $-1.43$ dex (Kraft \& Ivans 2003) as representative of the cluster mean and as an input metallicity for the atmospheres. This value is then refined iteratively, using the Fe I abundance from the previous step as input for the following atmosphere calculations. In addition, we calculate an independent metallicity estimate for the individual stars from the Mg~I line index at 5167 and 5173 \AA, which is defined and calibrated on the scale of Carretta \& Gratton (1997) as in Walker et al. (2007) and Eq.~2 in Paper~I. For this, we assume a horizontal branch magnitude, V$_{\rm HB}$, of 20.8 mag (Stetson et al. 1999). Although we list the Mg~I-based metallicity among our final abundances in Table~3, we emphasize that this value is meant as an initial estimate of the cluster metallicity rather than a reliable measurement of its abundance scale. Table~3 lists the final abundance ratios derived from the co-added red giant sample. Here, neutral species are given relative to Fe I, while the ratios of ionized species are listed with respect to the ionized iron abundance, as [X~II / Fe II]. \begin{table} \caption{Abundance results from the co-added red giant spectrum} \centering \begin{tabular}{crcrc} \hline\hline Element & [X/Fe] & $\sigma$ & N & $\sigma_{\rm tot}$ \\ \hline Fe$_{\rm Mg I}^a$ & $-$1.41 & 0.28 & \dots & \dots \\ Fe I & $-$1.41 & 0.35 & 81 & 0.17 \\ Fe II & $-$1.54 & 0.21 & 8 & 0.25 \\ Mg I & 0.25 & 0.21 & 2 & 0.20 \\ Al I & 0.36 & \dots & 1 & 0.19 \\ Si I & 0.47 & 0.32 & 5 & 0.15 \\ Ca I & 0.40 & 0.16 & 11 & 0.21 \\ Sc II & 0.29 & 0.28 & 6 & 0.14 \\ Ti I & 0.24 & 0.32 & 18 & 0.30 \\ Ti II & 0.61 & 0.39 & 8 & 0.17 \\ V I & 0.16 & 0.17 & 6 & 0.29 \\ Cr I & $-$0.18 & 0.19 & 7 & 0.28 \\ Mn I & $-$0.18 & 0.20 & 5 & 0.23 \\ Co I & 0.38 & 0.07 & 3 & 0.18 \\ Ni I & 0.04 & 0.29 & 16 & 0.14 \\ Cu I & $-$0.66 & 0.16 & 2 & 0.18 \\ Y II & 0.30 & 0.29 & 5 & 0.17 \\ Zr II & 0.53 & \dots & 1 & 0.16 \\ Ba II & 0.36 & 0.16 & 4 & 0.19 \\ La II & 0.67 & 0.10 & 2 & 0.12 \\ Ce II & 0.34 & \dots & 1 & 0.16 \\ Nd II & 0.45 & \dots & 1 & 0.16 \\ Dy II & 0.32 & \dots & 1 & 0.16 \\ \hline \end{tabular} \\$^a$Metallicity estimate based on the Mg I calibration of Walker et al. (2007), on the metallicity scale of Carretta \& Gratton (1997). \end{table} \section{Abundance errors} As a measure of the random uncertainties on our abundance ratios, Table~3 also lists the 1$\sigma$-scatter of the line-by-line measurements, together with the number of transitions, $N$, used in the analysis. This contribution is generally small for those species with many suitable transitions (e.g., Fe~I, Ca, Ti~I, Ni), yet dominates for the other, poorly sampled elements. As in Paper~I, we adopt in what follows a minimum random abundance error of 0.10 dex and assign an uncertainty of 0.15 dex if only one line could be measured. In order to investigate the extent to which inaccurate radial velocity measurements can lead to a broadening of the observed lines during the co-addition of the individual, Doppler-shifted spectra, we carried out a series of 1000 Monte Carlo simulations.
In each simulation, we shifted every spectrum by a random velocity offset consistent with its velocity error before combining those perturbed spectra into a new co-added spectrum. The EWs for the entire line list were then re-measured from each of those spectra in an automated manner. As a result, the EWs changed by (10$\pm$5)\% on average, with the 1$\sigma$ change in the widths being less than 15\%. We then repeated our abundance determinations by varying the EWs by this amount in a Monte Carlo fashion and deriving new means and dispersions. This revealed that a 15\% uncertainty in the measured EWs incurs an error of 0.04 dex on the mean iron abundance. From this, we conclude that inaccurate Doppler shifts of the spectra are not a major source of uncertainty in an analysis of this sort. The main contributors to the random errors are instead the EW measurements at these still-low S/N levels and, to a lesser extent, the standard uncertainties in the atmosphere models and atomic parameters themselves. Although none of our stars is a likely non-member in terms of our CMD selection, nor indicated as such by deviant gravity-sensitive features such as the Mg~b triplet or the Na-D lines, nor by a discrepant radial velocity, we explored the effect on the resulting abundance ratios of inadvertently co-adding foreground dwarfs into the red giant sample. To this end, we computed synthetic spectra for each star, using the atmospheric parameters determined above and adopting the element ratios listed in Table~3. We then synthesized the spectrum of a metal-poor dwarf star (T$_{\rm eff}$ = 5700~K, log\,$g$ = 4.2, and $\xi$ = 1.1 km\,s$^{-1}$) and randomly replaced one or two of the RGB stars with this dwarf spectrum in the co-addition. The EWs of the resulting co-added synthetic spectrum were then re-measured as above. As a consequence, the presence of one (two) underlying dwarf stars in the co-added spectrum does not change the co-added EWs by more than 5\% (9\%), on average. Thus our abundance ratios are insensitive to any residual foreground contamination, with no expected effect larger than 0.02 dex. Systematic uncertainties in the stellar parameters were evaluated from a standard error analysis (e.g., Koch \& McWilliam 2010). To this end, each parameter was varied by its typical uncertainty (T$_{\rm eff}\pm$150 K; log\,$g\pm$0.2 dex; $\xi\pm$0.25 km\,s$^{-1}$; see the previous section), from which new atmospheres were interpolated for each star. This assumes that all stars are systematically affected in the same manner by the same absolute error. Furthermore, the column labeled ``ODF'' shows the changes induced by using the Solar-scaled opacity distributions ODFNEW, which corresponds to an error in the $\alpha$-enhancement of 0.4 dex. Using these modified atmospheres, theoretical EWs were computed for each star and then combined into a new $\langle{EW}\rangle$, to be compared with the observed EWs as before. We list in Table~4 the deviations of the resulting new abundances from the nominal values, [X/Fe], obtained from the unchanged atmospheres. Overall, the largest effect is naturally found with regard to T$_{\rm eff}$ errors, while changes in log\,$g$ mostly affect the ionized species (see also Paper~I).
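For illustration, the bookkeeping of the Monte Carlo test described above can be sketched as follows; the abundance routine is a hypothetical stand-in for the actual MOOG-based determination, and the toy example simply averages $\log_{10}$ EW, roughly the weak-line limit:
\begin{verbatim}
import math, random, statistics

# Sketch of the Monte Carlo test: perturb all EWs by a 15% (1-sigma)
# multiplicative error and track the scatter of the derived abundance.
# `abundance_from_ews` is a hypothetical stand-in for MOOG.
def mc_abundance_scatter(ews, abundance_from_ews,
                         n_trials=1000, frac_err=0.15, seed=42):
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        pert = [ew * (1.0 + rng.gauss(0.0, frac_err)) for ew in ews]
        results.append(abundance_from_ews(pert))
    return statistics.stdev(results)

# Toy stand-in: mean log10(EW), roughly the weak-line limit.
toy = lambda ews: sum(math.log10(ew) for ew in ews) / len(ews)
print(round(mc_abundance_scatter([80.0, 120.0, 60.0], toy), 3))
\end{verbatim}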
\begin{table} \caption{Error analysis: deviations from the abundances in Table~3} \centering \begin{tabular}{ccccr} \hline \hline & $\Delta$T$_{\rm eff}$ & $\Delta\,\log\,g$ & $\Delta\xi$ & \\ \raisebox{1.5ex}[-1.5ex]{Ion} & $\pm$150\,K & $\pm$0.2\,dex & $\pm$0.25\,km\,s$^{-1}$ & \raisebox{1.5ex}[-1.5ex]{ODF} \\ \hline Fe I & $\pm$0.13 & $\pm$0.01 & $\mp$0.12 & $-$0.02 \\ Fe II & $\mp$0.20 & $\pm$0.12 & $\mp$0.09 & $-$0.13 \\ Mg I & $\pm$0.10 & $\mp$0.02 & $\mp$0.10 & 0.02 \\ Al I & $\pm$0.13 & $\mp$0.02 & $\mp$0.01 & 0.04 \\ Si I & $\mp$0.03 & $\pm$0.03 & $\mp$0.03 & $-$0.02 \\ Ca I & $\pm$0.19 & $\mp$0.03 & $\mp$0.08 & 0.03 \\ Sc II & $\mp$0.02 & $\pm$0.07 & $\mp$0.05 & 0.04 \\ Ti I & $\pm$0.30 & $\mp$0.03 & $\mp$0.08 & 0.01 \\ Ti II & $\mp$0.03 & $\pm$0.07 & $\mp$0.08 & 0.04 \\ V I & $\pm$0.32 & $\mp$0.02 & $\mp$0.03 & 0.00 \\ Cr I & $\pm$0.26 & $\mp$0.03 & $\mp$0.10 & 0.02 \\ Mn I & $\pm$0.22 & $\pm$0.01 & $\pm$0.04 & 0.04 \\ Co I & $\pm$0.16 & $\pm$0.03 & $\pm$0.03 & 0.01 \\ Ni I & $\pm$0.11 & $\pm$0.01 & $\mp$0.06 & $-$0.01 \\ Cu I & $\pm$0.15 & $\pm$0.02 & $\mp$0.03 & 0.01 \\ Y II & $\pm$0.01 & $\pm$0.06 & $\mp$0.10 & 0.04 \\ Zr II & $\mp$0.02 & $\pm$0.07 & $\mp$0.02 & 0.05 \\ Ba II & $\pm$0.05 & $\pm$0.06 & $\mp$0.15 & 0.01 \\ La II & $\pm$0.04 & $\pm$0.07 & $\mp$0.02 & 0.03 \\ Ce II & $\pm$0.02 & $\pm$0.07 & $\mp$0.01 & 0.04 \\ Nd II & $\pm$0.02 & $\pm$0.06 & $\mp$0.03 & 0.04 \\ Dy II & $\pm$0.05 & $\pm$0.06 & $\pm$0.00 & 0.04 \\ \hline \end{tabular} \end{table} Finally, we interpolated the values in Table~4 to the actual parameter uncertainties estimated in Sect.~3.2 and adopted an error on the atmospheric $\alpha$-enhancement of $\pm$0.2 dex (in accordance with the results for [$\alpha$/Fe] in Table~3). These contributions were added in quadrature to the random error to yield the total abundance error, which we list as $\sigma_{\rm tot}$ in the last column of Table~3 and which we will show in the following figures unless noted otherwise. Since this procedure neglects the covariances between the stellar parameters, these errors can be regarded as {\it upper limits} on the actual abundance uncertainties. In the end, our measurements yield element ratios that are typically accurate to within 0.2 dex for the $\alpha$-elements, 0.15--0.30 dex for the iron-peak elements, and approximately 0.2 dex for the heavy elements. Although these error estimates may seem relatively large (and dominated by the systematic uncertainties), we have shown in Paper~I that the results from a co-added abundance analysis of this kind are largely consistent with those obtained from individual, high-S/N spectroscopic measurements. Thus, the present data are adequate for placing useful limits on the chemical abundances in Pal~4 and for characterizing the general trends (see also Shetrone et al. 2009). \section{Abundance results} Our abundance measurements based on the co-added RGB spectrum are plotted in Fig.~4. Note that the values for [Al, Zr, Ce, Dy/Fe] are only upper limits (\S~3.1), although we show their formal, total error bars in this figure (cf. Fig.~9). \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f4.eps} \caption{Abundance ratios from the co-added RGB star spectrum.
The dashed error bars indicate the total uncertainties (Tables~3 and 4), while the solid error bars represent the 1$\sigma$ random errors.} \end{figure} \subsection{Iron} Based on our sample of 19 RGB stars, we find a mean iron abundance of $${\rm [Fe I/H]} = -1.41\pm0.04~{\rm (statistical)} \pm0.17~{\rm (systematic)}.$$ This value is in excellent agreement with the Fe II based abundance scale of Kraft \& Ivans (2003), and slightly more metal-poor than the value of $-1.28\pm0.20$ dex reported by Armandroff et al. (1992) from the calcium triplet on the Zinn \& West (1984) scale, and by Stetson et al. (1999) from photometry. It is interesting to note that the mean [Fe/H]$_{\rm Mg~I}$ from the Mg\,b index (\S~3.2) also agrees very well with the Fe~I scale: for the red giant sample we find the same mean of $-1.41$ dex, with a 1$\sigma$ spread of 0.28 dex. Ionization equilibrium is not fulfilled in this integrated analysis to within the {\em random} uncertainties, although both stages agree if one accounts for their total errors; the mean deviation of the neutral and ionized species is [Fe\,{\sc i}/Fe\,{\sc ii}] = $0.13\pm0.08$ dex. A similar deviation was found in an identical analysis of co-added RGB star spectra in Paper~I, although in the opposite sense (i.e., in the present case it is Fe~I that yields the higher abundance). As in Paper~I, we conclude that Fe II lines in general seem ill suited to establishing a population's iron abundance from a low-S/N spectral co-addition (cf. Kraft \& Ivans 2003; McWilliam \& Bernstein 2008). Typical EWs of the eight Fe II lines used in the analysis fall in the range 20--90 m\AA. As Table~4 indicates, a systematic increase of 0.24 dex in the surface gravity would establish ionization equilibrium at [Fe/H] = $-$1.39 dex, which is entirely consistent with the value found above from the neutral species. Moreover, a change in the temperature scale of just $-$54 K (without altering log\,$g$) would restore the equilibrium at $-$1.44 dex (see also Koch \& McWilliam 2010). In what follows, we therefore proceed with our adopted log\,$g$ scale and take the imbalance between ionized and neutral species at face value. \subsection{Tests for abundance spreads} As argued earlier, an integrated abundance analysis works reliably only under the {\em ad-hoc} assumption of the same chemical abundances for all stars that enter the co-added spectrum. Here we discuss several tests of how realistic this assumption is for our analysis of Pal~4. As a first test, we consider the spread in colour about the fiducial isochrone shown in Fig.~1. By interpolating a finely spaced isochrone grid in metallicity, and using the identical values for age, distance modulus, and reddening as above, we find that the colour range of the RGB targets translates into a metallicity spread of 0.036 dex. Accounting for the photometric errors, which propagate to a mean metallicity error of 0.026 dex, we find an intrinsic spread of 0.025 dex in the photometric metallicities. Since this procedure did not include errors on the distance modulus or reddening, and since uncertainties in the adopted age and $\alpha$-enhancement of the isochrones will lead to even larger uncertainties, we conclude that there is no evidence of any global abundance spread on the RGB, based on the photometric metallicities alone. This notion is consistent with the homogeneity (in iron or overall metallicity) of most genuine Galactic GCs (e.g., Carretta et al. 2009).
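The intrinsic spread quoted above follows from subtracting the mean measurement error in quadrature from the observed spread; as a minimal numerical check, using only the numbers quoted above:
\begin{verbatim}
import math

# Intrinsic photometric-metallicity spread: subtract the mean
# measurement error in quadrature from the observed spread.
observed_spread, mean_error = 0.036, 0.026   # dex, as quoted above
intrinsic = math.sqrt(observed_spread**2 - mean_error**2)
print(round(intrinsic, 3))  # -> 0.025 dex
\end{verbatim}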
Secondly, we divided the RGB sample into two halves and co-added the spectra of each subsample\footnote{In practice, stars were chosen to alternate in magnitude, so that sample \#1 includes Pal4-1,3,6,8,10,19,23,25,28,31, and the remainder constitutes sample \#2.}. The above procedures to obtain iron abundance constraints from the co-added EWs were repeated, and we find slightly more metal-poor values for both subsamples: $-1.46\pm0.05$ and $-1.45\pm0.06$ dex, respectively, where the stated uncertainties account for random errors only. Therefore, there is no evidence of an abundance difference between the subsets within the measurement errors. Strictly speaking, one would need to repeat this exercise for all ${19 \choose 9} = 92\,378$ possible splits of the sample in order to detect the maximum abundance difference, which could be indicative of any real spread. Measuring the 81 Fe lines in this number of spectra is, however, computationally expensive and beyond our present scope. As a last test, we employed a line co-addition technique within the spectrum of each individual star, similar to that outlined in Norris et al. (2007; and references therein); see also Koch et al. (2008c). For each star, the 81 usable Fe lines were shifted to zero wavelength at each line center and then co-added into a composite ``master line''. The same was carried out for a synthetic spectrum that matches the stellar parameters of the stars. In this way, we find a 1$\sigma$ dispersion of 0.176 dex among the 19 [Fe/H] values. If we account for the random measurement errors from this procedure and assume the same systematic uncertainties as in our proper analysis (Sect.~4), we estimate an intrinsic abundance spread of no more than 0.05 dex. This is most likely an upper limit, since radial velocity uncertainties may have a larger impact on this method, and it is also not self-evident that the systematic errors are identical to those in Table~3. At this low internal dispersion, however, Pal~4 is incompatible with the broad abundance ranges found in the dSphs (e.g., Table~1 in Koch 2009), while it is consistent with the upper limit for GC homogeneity found by Carretta et al. (2009). We conclude that, within the limitations of our spectral co-addition techniques, Pal~4 most likely shows little to no abundance spread, rendering it a genuine (MW) GC and arguing against an origin in a dSph-like environment. \subsection{Alpha-elements} All $\alpha$-elements measured in this study are enhanced with respect to Fe. While the [Ca/Fe] and [Si/Fe] ratios show the canonical value of $\ga$ 0.4~dex typical of Galactic halo field and GC stars, the abundance ratios of Mg and Ti are slightly lower, at about 0.25~dex. Because the latter species have slightly larger errors, the error-weighted mean of all four elements is $${\rm {[}}\alpha/{\rm Fe]} = 0.38\pm0.11~{\rm dex}.$$ (A short numerical sketch of this weighted mean is given below.) The $\alpha$-element ratios are shown for Mg, Ca, and Ti in Fig.~5, where they are compared to Galactic halo and disk data from the literature (small black dots). The data shown here are taken from the same sources as in Paper~I. At this point, we draw the reader's attention to an important caveat in Fig.~5 and subsequent figures: the selection of halo stars used in these comparisons is, by necessity, a local sample. How appropriate it is to use local halo field stars in a comparison to remote halo GCs is unclear, particularly if there are radial gradients in the abundance ratios, as has sometimes been claimed (e.g., Nissen \& Schuster 1997; Fulbright 2002).
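For reference, the error-weighted mean quoted above can be reproduced from the entries of Table~3; the sketch below assumes inverse-variance weights, $w = 1/\sigma_{\rm tot}^2$, and takes Ti~I for Ti (under this convention, the formal error of the weighted mean, $1/\sqrt{\sum w}$, comes out marginally smaller than the quoted $\pm$0.11 dex):
\begin{verbatim}
import math

# Inverse-variance weighted mean of the alpha-element ratios of
# Table 3 (Mg, Si, Ca, Ti I), with their total errors sigma_tot.
vals = [(0.25, 0.20), (0.47, 0.15), (0.40, 0.21), (0.24, 0.30)]
w = [1.0 / s**2 for _, s in vals]
mean = sum(wi * x for wi, (x, _) in zip(w, vals)) / sum(w)
err = 1.0 / math.sqrt(sum(w))
print(round(mean, 2), round(err, 2))  # -> 0.38 0.1
\end{verbatim}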
We shall return in \S~5.6 to the issue of $\alpha$-element enhancements amongst different populations in the Galactic halo. \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f5.eps} \caption{[$\alpha$/Fe] abundance ratios for Pal~4 from this work (filled red star) in comparison with the GCs listed in Table~5 (solid dark blue circles). The solid and dashed lines illustrate the mean halo-star relation and its $\pm$1$\sigma$ spread, respectively, from linear fits to the halo star data (black dots). See text for details.} \end{figure} We note in passing that, although the difference [Ti I/Ti II] = $-$0.24 dex is large, ionization equilibrium for Ti is satisfied considering the large combined total error of both species. This discrepancy is only significant at the 0.6$\sigma$ level and is in the opposite sense of the deviation in Fe. In any case, a detailed interpretation of any imbalances in terms of cumulative non-LTE effects along the RGB in our integrated abundance analysis would be beyond the scope of the present work (e.g., Koch \& McWilliam 2010). At $-0.15$ dex, the [Mg/Ca] ratio is comparably low. While Mg is produced during the hydrostatic burning phases of the type II supernova (SN) progenitors, Ca nucleosynthesis proceeds during the SN explosion itself (e.g., Woosley \& Weaver 1995). Thus, it is not evident that one element should trace the other over a broad metallicity range. In fact, theoretical yields predict a delicate mass dependence of the [Mg/Ca] ratio. In Fig.~6, we show the distributions of this ratio for Galactic halo stars (gray shaded histogram) using the data of Gratton \& Sneden (1988; 1994), McWilliam et~al. (1995), Ryan et~al. (1996), Nissen \& Schuster (1997), McWilliam (1998), Hanson et al. (1998), Burris et~al. (2000), Fulbright (2000, 2002), Stephens \& Boesgaard (2002), Johnson (2002), Ivans et~al. (2003) and Cayrel et~al. (2004). Fig.~6 also shows the currently available measurements for Local Group dSph galaxies (black line in Fig.~6) by Shetrone et~al. (2001; 2003; 2009), Sadakane et~al. (2004), Monaco et~al. (2005), Letarte (2007), Koch et~al. (2008a,b), Frebel et~al. (2010), Aoki et~al. (2009), Cohen \& Huang (2009) and Feltzing et al. (2009); see also Koch (2009). Halo stars scatter around a [Mg/Ca] of zero, with a mean and 1$\sigma$ dispersion of 0.05 and 0.15 dex, respectively. Stars with very low abundance ratios are the exception (e.g., Lai et~al. 2009). In fact, the third moment of the halo distribution, a skewness of +0.55, indicates a tail towards higher [Mg/Ca]. The dSph galaxies, on the other hand, have a formal mean and dispersion of 0.12 and 0.23 dex. It is important to bear in mind, though, that the abundance ratios in the dSphs are inevitably unique characteristics of each galaxy and should be governed by their individual star formation histories and global properties (e.g., Lanfranchi \& Matteucci 2004). In particular, the so-called ultra-faint dSph galaxies, which have very low masses, show a propensity to reach higher [Mg/Ca] ratios as a result of stochastic sampling of the high-mass end of the IMF, which in turn causes an imbalance between the Mg and Ca production (e.g., Koch et al. 2008a; Feltzing et al. 2009; Norris et al. 2010). In addition, the dSph galaxies show a clear extension towards low [Mg/Ca] ratios, which is reflected in an overall skewness of $-$0.13 in the dSph distribution. Notably, all of the ``reference GCs'' considered here (Table~5) have positive [Mg/Ca] values.
Given the rather large formal uncertainty of $\pm$0.30 dex on the [Mg/Ca] ratio (adding the total errors on [Mg/Fe] and [Ca/Fe] in quadrature), our measurement does not serve as an especially strong discriminator between a halo field and a dSph origin for Pal~4. Nevertheless, its value is clearly different from those of the remainder of the inner and outer halo GCs, and may point to different enrichment processes in the environment where Pal~4 formed. \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f6a.eps} \includegraphics[width=1\hsize]{f6b.eps} \caption{Histograms (top panel) and cumulative distribution (bottom panel) of the [Mg/Ca] abundance ratio in Galactic halo stars (shaded histogram/black solid line) and dSph galaxies (open histogram/dashed line). Also indicated are the measurements for Pal~4 and the Galactic GCs listed in Table~5 (see \S~5.4). The error bar on the Pal~4 data point is the quadrature sum of the total Mg and Ca/Fe errors.} \end{figure} \subsection{Iron peak elements} Our measured [Sc/Fe], [Mn/Fe] and [Ni/Fe] ratios are shown in Fig.~7. \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f7.eps} \caption{Same as Fig.~5, but for [Sc/Fe], [Mn/Fe], [Ni/Fe] and [Co/Cr]. Black lines denote the regression lines and 1$\sigma$ scatter adopted from Cayrel et al. (2004), extrapolated to [Fe/H] = $-$1 dex.} \end{figure} Owing to the relatively large number of available Ni absorption lines, [Ni/Fe] is the best determined of these ratios and, at 0.04 dex, has a value that is fully compatible with the Solar value (as is found over a broad range of iron abundances). This is not unexpected, since the iron-peak elements closely trace the iron production in the long-lived SNe~Ia. Cr is underabundant with respect to Fe, but fully compatible with Galactic halo stars, while [Co/Fe] is slightly higher than in halo stars at the same metallicity. In Fig.~7, we choose to plot [Co/Cr], as this abundance ratio has proven to be relatively insensitive to systematic effects in the stellar parameters (e.g., McWilliam et al. 1995). The high Co abundance in Pal~4, coupled with a relatively low Cr abundance, leads to the marginally higher [Co/Cr] ratio indicated in this figure. Given its large uncertainty, and because we cannot rule out the possibility that this ratio has been affected by non-LTE effects, we refrain from drawing any conclusions about the contributions of massive-star yields to these elements' production in Pal~4 (cf. McWilliam et al. 1995; Koch et al. 2008a). Likewise, the [Mn/Fe] ratio in Pal~4, at $-$0.18 dex, is marginally higher than the value of $\approx -0.4$~dex found for halo stars in the same [Fe/H] interval (for which we supplemented the plot with data from Gratton 1989; Feltzing \& Gustafsson 1998; Prochaska et al. 2000; Nissen et al. 2000; Johnson 2002, and Cayrel et al. 2004; see also McWilliam et al. 2003). However, an intercomparison of Mn data usually suffers from zero-point uncertainties (e.g., McWilliam et al. 2003), in that abundances derived from the $\sim$4030 \AA\ triplet lines are systematically lower by 0.3--0.4 dex on average relative to the redder, high-excitation lines we employed in this study (e.g., Roederer et al. 2010). Thus Pal~4's elevated [Mn/Fe] does not appear unusual, and we do not pursue this ratio any further. Finally, the [Cu/Fe] ratio (shown in Fig.~8 on top of the measurements in Galactic disk and halo stars by Prochaska et al. 2000 and Mishenina et al.
2002) seems to agree well with the Galactic trend, suggestive of a common origin, although zero-point difficulties may also affect conclusions about the behavior of this element (e.g., McWilliam \& Smecker-Hane 2005), as was the case for Mn. \subsection{Neutron capture elements} We show in Fig.~8 the [Y/Fe] and [Ba/Fe] ratios as representatives of the heavy elements. \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f8.eps} \caption{Same as Fig.~7, but for [Cu/Fe], [Y/Fe], [Ba/Fe] and the $s$-process abundance ratio [Ba/Y].} \end{figure} All the elements with Z$>$38 are markedly enhanced relative to Fe. Unfortunately, our spectra lack information about the $r$-process element Eu, which prohibits any conclusions about the relative contributions of the $s$-process (occurring in the AGB stars) and the early $r$-process production (most likely occurring in massive SNe~II). On the other hand, the [Ba/Y] ratio of $\sim$0.06 is fully compatible with the values found in Galactic halo stars, while it is strongly enhanced in the majority of the dSph stars studied to date, owing to the importance of metal-poor AGB yields in the slow chemical evolution of these low-mass systems (e.g., Shetrone et al. 2003; Lanfranchi et al. 2008). Fig.~9 shows the heavy element abundances for Pal~4 together with the solar $r$-, $s$- and total scaled solar abundances from Burris et al. (2000). We have normalized the curves to the same Ba abundance. Unlike Pal~3, which was found to exhibit interesting evidence for a pure $r$-process origin, Pal~4's abundance data fall between the $r$-process curve and the solar $r$+$s$ mix. However, the majority of these elements are mere upper limits, so we refrain from a deeper discussion of the heavy element nucleosynthesis in this GC. \begin{figure}[htb] \centering \includegraphics[width=1.1\hsize]{f9.eps} \caption{Neutron capture elements in Pal~4, normalized to Ba. Black lines display the solar $r$- and $s$-process contributions from Burris et al. (2000). Triangles indicate upper limits.} \end{figure} \section{Comparison to Galactic Halo Tracers} \subsection{Halo Globular Clusters} Figure~10 and Table~5 compare our abundances for Pal~4 to those of a subsample of Galactic GCs, using data taken from the literature. Here we do not aim for a comprehensive comparison with the entire MW GC population (e.g., Pritzl et al. 2005; Geisler et al. 2007). Rather, we wish to simply compare Pal~4 to a few clusters that have been selected as broadly representative of the inner and outer halo cluster systems. Specifically, we use data for M3 and M13 from Cohen \& Mel\'endez (2005b), which are archetypical {\em inner} halo GCs at R$_{\rm GC}\sim$9 and 12 kpc (Z$_{\rm max}\sim$9 and 15 kpc) with metallicities similar to that of Pal~4. We also include NGC6752 in this comparison as one of the nearest inner halo clusters at a comparable metallicity (Yong et al. 2008). Finally, we include the {\em outer} halo clusters NGC7492 (Cohen \& Mel\'endez 2005a) and Pal~3 (Paper~I) as rare examples of remote clusters with published abundances, as well as NGC5694, a GC that has been claimed to show abundance patterns more typical of dSph stars than of GCs (Lee et al. 2006). \begin{figure}[htb] \centering \includegraphics[width=1\hsize]{f10.eps} \caption{Abundance differences for Pal~4, in the sense [X/Fe]$_{\rm Pal 4} - $ [X/Fe]$_{\rm GC}$. The six GC results are assembled from the literature and corrected for differences in the adopted Solar abundance scales. Details for the GCs used in this comparison are given in Table~5.
Error bars include the 1$\sigma$ spreads from both this work and the reference GC abundance ratios. For clarity, alternating labels are shown.} \end{figure} Table~5 shows the mean deviation $\langle\Delta$[X/Fe]$\rangle$ of Pal 4's abundance ratios from the literature values for the GCs chosen for reference. The fourth column lists the number of chemical elements, $N$, that the different studies have in common. Since Figs.~8 and 9 indicate that Pal~4 exhibits relatively high heavy element ratios compared to the reference sample, we also computed the statistics for elements with Z$<$39 (Y) only. \begin{table*} \caption{Pal~4 Abundances Relative to Comparison Clusters.} \centering \renewcommand{\footnoterule}{} \begin{tabular}{rccrccccc} \hline \hline Name & [Fe/H] & R$_{\rm GC}$ [kpc] & $N$ & $\langle\Delta$[X/Fe]$\rangle_{\rm all}$ & $\langle\Delta$[X/Fe]$\rangle_{\rm Z<39}$ & ${\langle}{\Delta}$[X/Fe]/${\sigma}{\rangle}_{\rm all}$ & $\langle\Delta$[X/Fe]/${\sigma}{\rangle}_{\rm Z<39}$ & Reference$^a$\\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline NGC 6752 & $-$1.61 & 5.2 & 17 & 0.18$\pm$0.06 & \phs0.10$\pm$0.08 & 1.0 & \phs0.5 & (1), (2) \\ M 13 & $-$1.56 & 8.7 & 17 & 0.15$\pm$0.06 & \phs0.04$\pm$0.06 & 0.8 & \phs0.2 & (1), (3) \\ M 3 & $-$1.39 & 12.3 & 18 & 0.09$\pm$0.06 & $-$0.01$\pm$0.08 & 0.5 & \phs0.0 & (1), (3) \\ NGC 7492 & $-$1.87 & 24.9 & 16 & 0.09$\pm$0.05 & \phs0.02$\pm$0.05 & 0.4 & \phs0.1 & (1), (4) \\ NGC 5694 & $-$2.06 & 29.1 & 10 & 0.36$\pm$0.12 & \phs0.14$\pm$0.06 & 1.8 & \phs0.6 & (1), (5) \\ Pal 3 & $-$1.58 & 95.9 & 19 & 0.05$\pm$0.05 & $-$0.05$\pm$0.06 & 0.3 & $-$0.1 & (1), (6) \\ \hline \hline \end{tabular} \begin{list}{}{} \item[$^a$] (1) web-version (2003) of Harris (1996); (2) Yong et al. (2005); (3) Cohen \& Mel\'endez (2005b); (4) Cohen \& Mel\'endez (2005a); (5) Lee et al. (2006); (6) Paper~I. \end{list} \end{table*} This comparison suggests that Pal~4 is, on average, enhanced with respect to each GC considered here if we account for all elements. On the other hand, the differences are statistically insignificant if we restrict the comparison to elements lighter than Y. Two of the comparison GCs in Table~5 show interesting discrepancies. The first, NGC6752, is the innermost object in the comparison sample and only slightly more metal-poor than Pal 4. Although its abundance patterns are similar to those of the comparison GCs and field stars at this metallicity, Yong et al. (2005, and references therein) found significant variations in the light {\em and} heavy elements, which supports the view that AGB stars alone cannot have driven the enrichment of the proto-cluster medium, although they likely played a significant role, as indicated by the observed [Ba/Eu] ratios. While the observed differences for Z $\ga$ 39 would seem to suggest that the respective processes differed between the inner (NGC6752-like) and outer (Pal~4-like) halo, it is clear that more measurements --- particularly Eu abundances --- are needed. The second noteworthy example, NGC5694, exhibits heavy element abundance ratios that are incompatible with those of Pal~4. On the other hand, while its [Ca/Fe] is also significantly lower, we find an identical [Mg/Fe] ratio in Pal~4 (which, in turn, is reflected in the different [Mg/Ca] ratios; see Fig.~6). The low values of the $\alpha$-element ratios with respect to the Galactic halo have prompted Lee et al. (2006) to conclude that this GC is likely of an extragalactic origin. We shall return to this issue below.
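The comparison statistic of Table~5 is simply a mean offset over the elements two studies have in common; a minimal sketch (with placeholder values, not the actual literature data) could read:
\begin{verbatim}
import math, statistics

# Mean deviation <Delta[X/Fe]> between Pal 4 and a comparison GC,
# over the common elements, with the standard error of the mean.
# The dictionaries hold illustrative placeholder values only.
def mean_deviation(pal4, ref):
    common = sorted(set(pal4) & set(ref))
    diffs = [pal4[el] - ref[el] for el in common]
    sem = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return sum(diffs) / len(diffs), sem

pal4 = {"Mg": 0.25, "Ca": 0.40, "Ni": 0.04, "Ba": 0.36}
ref  = {"Mg": 0.30, "Ca": 0.35, "Ni": 0.00, "Ba": 0.15}
print(mean_deviation(pal4, ref))
\end{verbatim}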
\subsection{Halo Field Stars} Is it safe to conclude that Pal~4 is typical of the Galactic halo population? In view of the relatively large errors that arise from the integrated nature of our analysis, we follow Norris et al. (2010) in first considering the {\em mean} halo abundance distribution. To this end, we computed the mean and dispersion for the Galactic halo and disk stars (shown as small black dots in Figs.~5, 7 and 8) as a function of [Fe/H] and fitted these relations with straight lines. Although this is an obvious oversimplification, the halo data are adequately represented by these linear relations. The resulting range in the Galactic abundance ratios is shown by the black lines in Fig.~5. We emphasize that no efforts have been made to homogenize the various data with respect to the different approaches used in the analyses (i.e., regarding log $gf$ values and atmospheres), although we did correct for differences in the adopted Solar abundance scales when necessary. Note that Cayrel et al. (2004) also provide regression lines for [X/Fe] versus [Fe/H] based on their 35 metal-poor halo stars, but those stars have [Fe/H]$<-2.1$ dex, and an extrapolation to the metallicity of Pal~4 yields slopes that are too high to describe the $\alpha$-element abundances shown here. Pal~4 falls squarely on the regression lines for all $\alpha$-elements, except for Ca, although it is still consistent within the errors even in this case. Indeed, Pal~4 is generally in good agreement with the GCs shown in this comparison, with the exceptions noted above. In this picture, the proto-GC cloud from which Pal~4 formed was considerably enriched by the short-lived SNe~II that produced the $\alpha$-elements on rapid time scales --- a generic characteristic of the halo field stars and the Galaxy's genuine GC system. For the even- and odd-Z iron-peak element ratios shown in Fig.~7, an extrapolation of the regression lines of Cayrel et al. (2004) provides a good representation of the overall halo trends up to the metallicity regime around Pal~4 and higher. As argued above, the [Ni/Fe] ratio is well determined in Pal~4 and is fully compatible with the Solar value that is observed in halo field and GC stars, bolstering the ubiquity of iron-peak nucleosynthesis in the SNe~Ia at [Fe/H] above $\sim-2$ dex. The [Sc/Fe] ratio in Pal~4 falls towards the upper limit of the halo distribution, which holds for all the reference GCs in our sample except NGC~6752. Likewise, the slight Mn enhancement is not atypical and agrees well with, for instance, M3. This may indicate that the metallicity dependence of the SNe~II yields was less pronounced in the Pal~4 proto-GC cloud (cf. McWilliam et al. 2003). Finally, we note that the [Co/Cr] ratio is significantly larger than those observed for the three GCs in this metallicity range. These GCs show roughly solar values, as expected, since both elements fall close together on the iron peak. McWilliam et al. (1995) first detected a strong rise of this ratio in metal-poor halo stars below $\sim-2.4$ dex. In fact, the observed [Co/Cr] of 0.55$\pm$0.33 dex is reminiscent of NGC~7492, albeit at a metallicity that is higher by roughly 0.5~dex. In the case of Cu and the n-capture elements Y and Ba, the scatter in the halo abundance ratios is more difficult to evaluate, due to a much sparser sampling of those elements and a notably increased (and real) abundance scatter among the metal-poor stars below $\sim -2$~dex.
In the case of Cu and the n-capture elements Y and Ba, the scatter in the halo abundance ratios is more difficult to evaluate, owing to a much sparser sampling of those elements and a notably increased (and real) abundance scatter among the metal-poor stars below $\sim -2$~dex. We therefore restrict the following brief discussion of Fig.~8 to the scatter plots without quantifying any linear trends. While the [Y/Fe] ratio lies above the bulk of the halo data, and is also higher than our comparison clusters by more than 0.3~dex, Ba seems only mildly enhanced with respect to these populations. Overall, the $s$-process ratio [Ba/Y] is in full agreement with the halo field stars within the scatter. However, Pritzl et al. (2005) have shown that, in comparison with (thick) disk GCs, the halo clusters tend to be offset more towards higher [Ba/Y] ratios, and so are the dSphs. The latter is usually interpreted in terms of the low star formation efficiencies of the dSph galaxies, which leaves room for a much stronger contribution from metal-poor AGB stars that are the main sites of the $s$-process (e.g., Busso et al. 2001; Lanfranchi et al. 2008). The three GCs with the very high [Ba/Y] ratios in Fig.~10 are M3, M13 and NGC7492, and are therefore representatives of the inner {\em and} outer halo. Following this line of reasoning, the slow star-formation rates and metallicity-dependent AGB yields that cause enhancements in this ratio appear to be unrelated to location within the halo. In Paper~I we found that Pal~3's heavy elements are largely governed by $r$-process nucleosynthesis. From the sparse data for Z$>$38 in NGC~5694, it cannot be excluded that this cluster also follows this trend, so that the above arguments regarding bimodal $s$-process ratios may not apply to these remote halo clusters. In any case, we emphasize that detailed $r$- and $s$-process abundance measurements for individual stars are vital for resolving these questions. \subsection{Comparison with other Substructures in the Outer halo} In this section, we compare our abundances for Pal~3 (Paper~I) and Pal~4 to published values for other ``substructures'' or ``overdensities'' in the outer halo of the Milky Way, regardless of their morphological classification. Our comparison therefore focuses on a sample of 13 halo GCs; seven dSph galaxies (Sagittarius, Fornax, Draco, Sextans, Carina, Ursa Minor and Leo II) with abundance data from Shetrone et al. (2001; 2003; 2009), Sadakane et al. (2004), Monaco et al. (2005), Letarte (2007), Koch et al. (2008b), Cohen \& Huang (2009), and Aoki et al. (2009); and the five so-called ``ultra-faint'' dSph galaxies (hereafter UF-dSphs; Hercules, Coma Berenices, Ursa Major~II, Bootes~I and Leo~IV) with published abundance information (Koch et al. 2008a; Frebel et al. 2010; Feltzing et al. 2009; Simon et al. 2010). All GCs shown here were selected to have Galactocentric distances $R_{GC} \gtrsim 8$~kpc; including Pal~3 and Pal~4 gives us a total of five GCs beyond $R_{GC} = 25$~kpc, and three of these GCs (Pal~3, Pal~4 and NGC2419) are at $R_{GC} \ge 90$~kpc. Note that only two other GCs in the catalog of Harris (1996) lie at or beyond this distance (Eridanus and AM~1). Thus, while the available abundance measurements are certainly still sparse (i.e., being based on just a single RGB star in NGC2419, four RGB stars in Pal~3, and co-added spectra for 19 RGB stars in Pal~4; Shetrone et~al. 2001; Paper~I), it is now possible to have a first glimpse into the abundance patterns of the most remote Galactic GCs, and their relationship, if any, to the dSph and UF-dSph galaxies residing in the outer halo.
Because the number of element abundance measurements is generally limited (and differs amongst the various studies), we restrict our comparison to [Fe/H] and [$\alpha$/Fe], where we take [$\alpha$/Fe] $\equiv$ ([Mg/Fe] + [Ca/Fe] + [Ti/Fe])/3. \begin{figure*}[htb] \centering \includegraphics[width=1.07\hsize]{f11.eps} \caption{Dependence of [Fe/H] and [$\alpha$/Fe] on structural parameters for various types of overdensities in the Galactic halo: globular clusters (open and filled blue squares), dSph galaxies (orange circles) and ``ultra-faint'' dSph galaxies (brown circles). The large red symbols show our results for Pal 3 (Paper~I) and Pal~4 (this paper). The luminous globular cluster NGC2419, which lies at a Galactocentric distance comparable to Pal~3 and Pal~4, is labeled in each panel. The structural parameters shown in this figure are absolute $V$-band magnitude {\it (panels a,d)}, central $V$-band surface brightness {\it (panels b,e)} and effective radius {\it (panels c,f)}.} \end{figure*} In Fig.~11, we show the behaviour of [Fe/H] and [$\alpha$/Fe] for stars belonging to halo GCs (blue squares), the more luminous dSph galaxies (orange circles) and UF-dSph galaxies (brown circles). Note that we plot GCs in the range $8 \le R_{GC} \le 25$~kpc as open blue squares, while GCs with $R_{GC} \ge 25$~kpc are shown as filled blue squares. Abundances are plotted against total $V$-band magnitude, $M_V$, central $V$-band surface brightness, $\mu_V(0)$, and effective (or half-light) radius, $R_e$ (Harris 1996; Irwin \& Hatzidimitriou 1995; Mateo 1998; McLaughlin \& van der Marel 2005; Martin et al. 2008). Pal~3 and 4 are highlighted as the large red square and star, respectively, while the third GC at $R_{GC} \gtrsim 90$~kpc, NGC~2419, is labelled in each panel. There are several interesting conclusions to be drawn from this figure. First, Pal~3 and 4 appear as near ``twins'' in this comparison, having similar Galactocentric distances, structural parameters (notably large radii), $V$-band luminosities, metallicities and $\alpha$-element enhancements. NGC2419, although much more luminous than either Pal~3 or Pal~4, appears similar in terms of its $\alpha$-enhancement. For these three GCs, which lie in the range $91 \lesssim R_{GC} \lesssim 112$~kpc, we find a mean of $$\langle[\alpha/{\rm Fe}]\rangle = +0.31\pm0.09~{\rm dex}.$$ Adding NGC5694 and NGC7006, we find $$\langle[\alpha/{\rm Fe}]\rangle = +0.24\pm0.13~{\rm dex}$$ for the five GCs with $R_{GC} \gtrsim 25$~kpc. Thus, on the whole, Pal~3 and Pal~4 seem to have levels of $\alpha$ enhancement that are similar to most other halo GCs and nearby halo field stars, but slightly higher than dSphs at comparable metallicities (e.g., Shetrone et~al. 2001, 2003; Venn et~al. 2004; Koch 2009). It is important to bear in mind, however, that stars in individual dSph galaxies show significant scatter, and it is certainly true that some dSph stars fall close to the region in the [Fe/H]--[$\alpha$/Fe] diagram occupied by these remote GCs: i.e., 10/157 $\approx$ 6\% of the dSph stars plotted in Fig.~11 fall within the 2$\sigma$ uncertainties for Pal~3 and Pal~4. In absolute terms, the {\em mean} [$\alpha$/Fe] for the most remote GCs is indistinguishable from that found in the UF-dSph galaxies shown in Fig.~11, which have $\langle[\alpha/{\rm Fe}]\rangle = +0.36\pm0.17$~dex and a full range of $+0.03$ to $+0.65$~dex (based on measurements for nine stars in Her, UMa~II, Com, and Leo~IV).
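As a worked example of this definition, the quoted means are straight averages of per-cluster [$\alpha$/Fe] values; a minimal sketch, with purely illustrative numbers in place of the measured ratios:

\begin{verbatim}
import numpy as np

# [alpha/Fe] = ([Mg/Fe] + [Ca/Fe] + [Ti/Fe]) / 3 for one cluster, then
# the sample mean and dispersion over clusters (placeholder values).
mg, ca, ti = 0.50, 0.31, 0.26              # one cluster's ratios
alpha_one = (mg + ca + ti) / 3.0

alpha = np.array([0.36, alpha_one, 0.26])  # three remote GCs (illustrative)
print(f"<[alpha/Fe]> = +{alpha.mean():.2f} +/- {alpha.std(ddof=1):.2f} dex")
\end{verbatim}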
Note that Pal~3 and 4 are atypical of Galactic GCs in terms of their structural parameters, being unusually extended ($R_e \gtrsim 15$~pc, or roughly five times as large as ``typical'' GCs; Jord\'an et~al. 2005) and having low surface brightness (with $\mu_V(0) \gtrsim 22-22.5$~mag~arcsec$^{-2}$). Thus, at least superficially, these remote GCs may have more in common with some UF-dSph galaxies than their apparent counterparts in the inner halo. There are, at present, two characteristics of the UF-dSph population that suggest they are indeed low-luminosity galaxies rather than faint, extended GCs (e.g., Larsen \& Brodie 2002; Mackey et~al. 2006; Peng et~al. 2006). The first such characteristic is their very large mass-to-light ratios, which point to the presence of significant dark matter halos (Simon \& Geha 2007; Strigari et~al. 2008). Secondly, the UF-dSphs seem to have abundances that fall along the extrapolation of the dwarf galaxy metallicity-luminosity relation, with significant intrinsic dispersions in metallicity (Kirby et~al. 2008). Using these criteria, what can we conclude about the origin of the most remote halo GCs? Unfortunately, it is difficult to draw firm conclusions on possible metallicity spreads in these systems since there are measurements for just a single RGB star in NGC2419 (Shetrone et~al. 2001), and our analysis of co-added spectra in Pal~4 presupposes that there is no abundance spread (see \S1; \S5.2). In the case of Pal~3, where high-quality MIKE spectra are available for four RGB stars, we can confidently rule out an abundance spread larger than $\sim$~0.1~dex (Paper~I). Regarding the dark matter content of these systems, Baumgardt et~al. (2009) have recently carried out a dynamical analysis of NGC2419, finding $M/L_V = 2.05\pm0.50$ in solar units. This value is typical of GCs (McLaughlin \& van der Marel 2005) and {\it much} smaller than the extreme values reported for UF-dSphs (e.g., Simon \& Geha 2007; Strigari et~al. 2008). Detailed dynamical modeling of Pal~3 and 4 will be the subject of a future paper in this series, but it is clear that the extreme $M/L_V$ values for UF-dSph galaxies can be ruled out with very high confidence (i.e., for a system like Pal~4, with $L_V \sim 2.1\times10^4~L_{V,{\odot}}$, known UF-dSphs have mass-to-light ratios of $\approx$ $10^3$ to 10$^4$; Strigari et~al. 2008; Geha et~al. 2009). In short, the available evidence suggests that Pal~4 (and Pal~3) formed in a manner resembling that of typical halo GCs, although it is clear that additional abundance measurements for stars in these and other remote GCs are urgently needed. Indeed, each contains many RGB stars that are well within the reach of high-resolution spectrographs on 8m-class telescopes. Such observations would allow a direct measurement of the intrinsic abundance spread within these systems --- an important clue to their origin and relationship to other halo substructures such as dSph and UF-dSph galaxies. \section{Summary} Motivated by the good agreement between the abundance ratios measured from high-S/N spectra of individual stars in Pal~3 and those found using co-added, low-S/N spectra (Paper~I), we have used the same technique to measure chemical abundance ratios in the remote halo GC Pal~4.
Although systematic uncertainties and the low S/N ratios complicate such studies, an accuracy of 0.2 dex is possible for most abundance ratios, sufficient to place such faint and remote systems into context with both the inner and outer halo GCs, as well as dSph and UF-dSph galaxies. In the future, this technique may enable the global abundance patterns to be characterized in additional remote systems, allowing a first reconnaissance of the chemical enrichment histories of remote Galactic satellites. Perhaps the most striking finding in Pal~4 is the subsolar [Mg/Ca] ratio, which is not observed in the sample of reference GCs that span a broad range of Galactocentric distances. Despite an overlap of our observed ratio with the halo field population, its low value may rather resemble the low-[Mg/Ca] tail of the distribution for dSph stars. In contrast, we see tentative evidence for a solar [Ba/Y] ratio, which militates against a slow chemical evolution and accompanying AGB enrichment as suggested by enhanced [Ba/Y] values in about two thirds of the dSph stars studied to date. Overall, most of the element ratios determined in this study overlap with the corresponding measurements for halo field stars, although a few ratios seem to fall above the halo star trends (see \S5). This favors a scenario in which the material from which both Pal~4 and the Galactic halo formed underwent rather similar enrichment processes. In their analysis of the CMD of Pal~4, Stetson et al. (1999) state that the cluster is younger than the inner halo GC M5 by about 1.5 Gyr (at [Fe/H]=$-$1.33 dex; Ivans et al. 2003; Koch \& McWilliam 2010) {\em if} they ``all have the same composition -- and [...] this means both [Fe/H] and [$\alpha$/Fe]''. Our work has shown that Pal~4 is enhanced by +0.38$\pm$0.11~dex in the $\alpha$-elements, which is consistent with the value of 0.3 dex assumed in the above CMD modeling. On the other hand, the CMD analysis suggested an [Fe/H] of $-$1.28 dex, which is slightly more metal-rich than what we found in the present spectroscopic study: $\langle$[Fe/H]$\rangle$ = $-$1.41 dex. As noted in VandenBerg (2000), ``an increase in [Fe/H] or [$\alpha$/Fe] would result in slightly younger [...] ages'' for Pal~4 (as determined via the magnitude offset between the horizontal branch and main-sequence turnoff). This would imply that Pal~4 is slightly older than found by Stetson et~al. (1999) and hence more similar in age to the older halo population. This, however, contradicts the younger age suggested by its peculiar (i.e., red) horizontal branch morphology, unless further parameters, such as red giant mass loss, are invoked (Catelan 2000). Based on the evidence at hand, Pal~4 seems to have an abundance pattern that is typical of other remote GCs in the outer halo. An open question, given the nature of our analysis, which relies on co-adding individual RGB star spectra, is whether Pal~4 is monometallic or, like dSph and UF-dSph galaxies, shows an internal spread in metallicity. We argued in Sect.~5.2, however, that, judging from our limited-quality spectra, it is unlikely that this object exhibits any significant intrinsic iron scatter. It is clear that high-quality abundance ratio measurements for individual stars in Pal~4 and other remote substructures are urgently needed to understand the relationship, if any, between remote GCs and other substructures in the outer halo. \begin{acknowledgements} We thank I.U. Roederer for discussions and an anonymous referee for a very helpful report.
AK acknowledges support from an STFC postdoctoral fellowship. This work was based on observations obtained at the W. M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California. We are grateful to the W. M. Keck Foundation for their vision and generosity. We recognize the great importance of Mauna Kea to both the native Hawaiian and astronomical communities, and we are grateful for the opportunity to observe from this special place. \end{acknowledgements}
\section{Introduction} Our understanding of galaxy formation and evolution is directly linked to understanding the physical properties of the interstellar medium (ISM) of galaxies \citep{Kennicutt1998, Leroy2008, Hopkins2012, Magdis2012, Scoville2016}. Dusty star-forming galaxies (DSFGs), with star-formation rates in excess of 100\,M$_{\odot}$\,yr$^{-1}$, are an important contributor to the star-formation rate density of the Universe \citep{Chary2001, Elbaz2011}. However, our knowledge of the interstellar medium within these galaxies is severely limited due to high dust extinction, with typical optical attenuations of $A_V \sim 6-10$\,mag \citep{Caseycoorayreview}. Instead of observations of rest-frame UV and optical lines, crucial diagnostics of the ISM in DSFGs can be obtained with spectroscopy at mid- and far-infrared wavelengths \citep{Spinoglio1992}. In particular, at far-infrared wavelengths, the general ISM is best studied through atomic fine-structure line transitions, such as the [C\,II]\,158 $\mu$m line transition. Such studies complement rotational transitions of molecular gas tracers, such as CO, at mm-wavelengths that are effective at tracing the proto-stellar and dense star-forming cores of DSFGs (e.g. \citealp{Carilli2013}). Certain atomic fine-structure emission lines can have luminosities at the level of a few tenths of a percent of the total infrared luminosity \citep{Stacey1989, Carilli2013, Riechers2014, Aravena2016, Spilker2016, Hemmati2017}. Far-infrared fine-structure lines are capable of probing the ISM over the whole range of physical conditions, from those that are found in the neutral to ionized gas in photodissociation regions (PDRs; \citealt{Tielens1985, Hollenbach1997, Hollenbach1999, Wolfire1993, Spaans1994, Kaufman1999}) to X-ray dominated regions (XDRs; \citealt{Lepp1988, Bakes1994, Maloney1996, Meijerink2005}), such as those associated with an AGN, or shocks \citep{Flower2010}. Different star-formation modes and the effects of feedback are mainly visible as differences in the ratios of fine-structure lines to one another and to the total IR luminosity \citep{Sturm2011, Kirkpatrick2012, Fernandez2016}. Through PDR modeling and under assumptions such as local thermodynamic equilibrium (LTE), line ratios can then be used as a probe of the gas density, temperature, and the strength of the radiation field that is ionizing the ISM gas. An example is the use of the [C\,II]/[O\,I] and [O\,III]/[O\,I] ratios to separate starbursts from AGNs (e.g. \citealt{Spinoglio2000,Fischer1999}). \begin{figure*}[th] \centering \includegraphics[trim=2cm 1cm 0cm 0cm, scale=0.9]{lum_histograms.pdf} \caption{\textit{Top:} Distribution of redshifts for sources included in each of the five redshift bins: (a) 115 sources with $0.005 < z < 0.05$, (b) 34 sources with $0.05 < z < 0.2$, (c) 12 sources with $0.2 < z < 0.5$, (d) 8 sources with $0.8 < z < 2$, and (e) 28 sources with $2 < z < 4$. The low number of sources in the two intermediate redshift bins of $0.2 < z < 0.5$ and $0.8 < z < 2$ is due to the lack of observations. \textit{Bottom:} Total infrared luminosities (rest-frame $8-1000\,\mu$m) for sources included in each of the five redshift bins above, with median luminosities of log$_{10}$(L$_{\rm IR}$/L$_{\odot}$) = 11.35, 12.33, 11.89, 12.53, and 12.84, respectively.
For lensed sources in the $2 < z < 4$ range, we have made a magnification correction using the best-determined lensing models published in the literature (see Section 2).} \label{fig:lumhist} \end{figure*} In comparison to the study presented here using {\it Herschel} SPIRE/FTS (\citealt{Pilbratt2010, Griffin2010}) data, we highlight a similar recent study by \citet{Wardlow2017} on the average rest-frame mid-IR spectral line properties using all of the archival high-redshift data from the {\it Herschel}/PACS instrument \citep{Poglitsch2010}. While the sample observed by SPIRE/FTS is somewhat similar, the study with SPIRE extends the wavelength coverage from the mostly rest-frame mid-IR lines detected with PACS to rest-frame far-IR lines. In a future publication, we aim to present a joint analysis of the overlap sample between SPIRE/FTS and PACS, but here we mainly concentrate on the analysis of FTS data and the average stacked spectra as measured from the SPIRE/FTS data. We also present a general analysis with interpretation based on PDR models and comparisons to results in the literature on ISM properties of both low- and high-$z$ DSFGs. The paper is organized as follows. In Sections 2 and 3, we describe the archival data set and the method by which the data were stacked, respectively. Section 4 presents the stacked spectra. In Section 5, the average emission from detected spectral lines is used to model the average conditions in PDRs of dusty, star-forming galaxies. In addition, the fluxes derived from the stacked spectra are compared to various measurements from the literature. We discuss our results and conclude with a summary. A flat-$\Lambda$CDM cosmology of $\Omega_{m_0}$ = 0.27, $\Omega_{\Lambda_0}$ = 0.73, and $H_0$ = 70 $\rm{km} ~ \rm{s^{-1}} ~ \rm{Mpc^{-1}}$ is assumed. With {\it Herschel}\ operations now completed, mid- and far-IR spectroscopy of DSFGs will not be feasible until the launch of the next far-IR mission, such as SPICA \citep{SPICA2010} or the Origins Space Telescope \citep{Meixner2016}, expected in the 2030s. The average spectra we present here will remain the standard in the field and will provide crucial input for the planning of the next mission. \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{stack_all.pdf} \caption{\textit{Top:} Average far-infrared stacked spectrum containing all data. Sources range in redshift from $0.005<z<4$. This stack serves as a qualitative representation of the average spectrum of all of the \textit{Herschel} spectra. For the purposes of analysis and interpretation, the dataset is split into redshift and luminosity bins for the remainder of this paper. Dashed blue vertical lines indicate the locations of main molecular emission lines. We detect the fine-structure lines [C\,II], [O\,I], and [O\,III] as well as the CO emission line ladder from $J = 13-12$ to $J = 5-4$. Also detected are the two lowest [C\,I] emissions at 492\,GHz (609 $\mu$m) and 809\,GHz (370 $\mu$m), [N\,II] at 1461\,GHz (205\,$\mu$m), and the water lines within the wavelength range covered in this stack, from 50\,$\mu$m to 652\,$\mu$m. \textit{Middle}: Signal-to-noise ratio. The horizontal dashed line indicates $\rm S/N = 3.5$, and the solid red line represents $\rm S/N = 0$.
\textit{Bottom}: The number of sources that contribute to the stack at each wavelength.} \label{fig:stack_all} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{z0-005.pdf} \caption{\textit{Top}: Stacked SPIRE/FTS spectrum of archival sources with $0.005 < z < 0.05$. Overlaid are the $1\sigma$ jackknifed noise level (red) and dashed vertical lines showing the locations of main molecular emission lines. We detect the CO emission line ladder from $J = 13-12$ to $J = 5-4$, as well as the two lowest [C\,I] emissions at 492\,GHz (609 $\mu$m) and 809\,GHz (370 $\mu$m), [N\,II] at 1461\,GHz (205\,$\mu$m) and the water lines within the rest frequencies (wavelengths) covered in this stack, from 460\,GHz to 1620\,GHz (185\,$\mu$m to 652\,$\mu$m). \textit{Middle}: Signal-to-noise ratio. The horizontal dashed line indicates $\rm S/N = 3.5$, and the solid red line indicates $\rm S/N = 0$. Lines with $\rm S/N > 3.5$ were considered detected. \textit{Bottom}: The number of sources that contribute to the stack at each frequency. } \label{fig:z0-005} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{z005-02.pdf} \caption{Same as Figure \ref{fig:z0-005}, but for the redshift range $0.05 < z < 0.2$. We detect the full CO emission line ladder within the frequency (wavelength) range covered by the stack, from 480\,GHz to 1760\,GHz (170\,$\mu$m to 625 $\mu$m). The stacked spectrum also shows a 3.5$\sigma$ detection of $\rm [C\,I](2-1)$ at 809 GHz ($370 \,\mu$m), [N\,II] at 1461\,GHz (205 $\mu$m), and water lines.} \label{fig:z005-02} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{z02-5.pdf} \caption{Same as Figure \ref{fig:z0-005}, but for the redshift range $0.2 < z < 0.5$. We only detect the [C\,II] line at 1901\,GHz (158\,$\mu$m) in this stack, with frequency (wavelength) coverage of 580\,GHz to 2100\,GHz (143\,$\mu$m to 517\,$\mu$m).} \label{fig:z02-5} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{z08-2.pdf} \caption{Same as Figure \ref{fig:z0-005}, but for the redshift range $0.8 < z < 2$. We detect [N\,II] at 1461\,GHz (205\,$\mu$m), [C\,II] at 1901\,GHz (158\,$\mu$m) and [O\,III] at 3391\,GHz (88\,$\mu$m) in the frequency (wavelength) range of 950\,GHz to 4100\,GHz (70\,$\mu$m to 316\,$\mu$m) covered by the stack.} \label{fig:z08-2} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{z2-4.pdf} \caption{Same as Figure \ref{fig:z0-005}, but for the redshift range $2 < z < 4$. We detect [C\,II] at 1901\,GHz (158\,$\mu$m) and [O\,III] at 3391\,GHz (88\,$\mu$m) in the frequency (wavelength) range of 1400\,GHz to 6200\,GHz (48\,$\mu$m to 214\,$\mu$m).} \label{fig:z2-4} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{lowz_lirbins.pdf} \caption{The lowest redshift bin ($0.005 < z < 0.05$) is stacked using a straight mean (without inverse-variance weighting) in five luminosity bins as outlined in each panel. From top to bottom, the median luminosities in each bin are $10^{11.12}$ L$_{\odot}$, $10^{11.32}$ L$_{\odot}$, $10^{11.49}$ L$_{\odot}$, $10^{11.69}$ L$_{\odot}$, and $10^{12.21}$ L$_{\odot}$. The mean redshifts in each bin are 0.015, 0.018, 0.021, 0.027, and 0.038. The numbers of sources contributing to the bins are 37, 28, 17, 24, and 9, respectively. The CO molecular line excitations, [C\,I] atomic emissions, and [N\,II] at 205\,$\mu$m are detected in all five luminosity bins.
} \label{fig:lowz_lirbins} \end{figure*} \section{Data} Despite the potential applications of mid- and far-IR spectral lines, the limited wavelength coverage and sensitivity of far-IR facilities have restricted the vast majority of observations to galaxies in the nearby universe. A significant leap came from the {\it Herschel} Space Observatory \citep{Pilbratt2010}, thanks to the spectroscopic capabilities of the Fourier Transform Spectrometer (FTS; \citealt{Naylor2010, Swinyard2014}) of the SPIRE instrument \citep{Griffin2010}. SPIRE covered the wavelength range of $194\,\rm{\mu m} - 671 \,\rm{\mu m}$, making it useful in the detection of ISM fine structure cooling lines, such as [C\,II] 158\,$\rm{\mu m}$, [O\,III] 88\,$\rm{\mu m}$, [N\,II] 205\,$\rm{\mu m}$, and [O\,I] 63\,$\rm{\mu m}$, in high-redshift galaxies, and of carbon monoxide (CO) and water (H$_2$O) lines from the ISM of nearby galaxies. The {\it Herschel} data archive contains SPIRE/FTS data for a total of 231 galaxies, with 197 known to be in the redshift interval $0.005 < z< 4.0$, completed through multiple guaranteed-time and open-time programs. While most of the galaxies at $0.5 < z < 4$ are intrinsically ultra-luminous IR galaxies (ULIRGs; \citealt{Sanders1996}), with luminosities greater than $10^{12}\,$L$_{\odot}$, archival observations at $z > 2$ are mainly limited to the brightest dusty starbursts with apparent L $ > 10^{13}\,$L$_{\odot}$, or hyper-luminous IR galaxies (HyLIRGs). Many of these cases, however, are gravitationally lensed DSFGs, and their intrinsic luminosities are generally consistent with those of ULIRGs. At the lowest redshifts, especially in the range $0.005< z< 0.05$, many of the targets have L $ < 10^{12}\,$L$_{\odot}$ and are luminous IR galaxies (LIRGs). While fine-structure lines are easily detected for such sources, most individual archival observations of brighter ULIRGs and HyLIRGs at $z > 1$ do not reveal clear detections of far-infrared fine-structure lines despite their high intrinsic luminosities \citep{Georgethesis}, except in a few very extreme cases such as the Cloverleaf quasar host galaxy \citep{Uzgil2016}. Thus, instead of individual spectra, we study the averaged stacked spectra of DSFGs, making use of the full SPIRE/FTS archive of {\it Herschel}. Given the wavelength range of SPIRE and the redshifts of the observed galaxies, to ease stacking, we subdivide the full sample of 197 galaxies into five redshift bins (Figure \ref{fig:lumhist}); namely, low-redshift galaxies at $0.005 < z < 0.05$ and $0.05 < z < 0.2$, intermediate redshifts at $0.2< z < 0.5$, and high-redshift galaxies at $0.8 < z < 2$ and $2 < z < 4$. Unfortunately, due to the lack of published redshifts, we exclude from our stacking analysis observations of 24 targets, roughly 10\% of the total archival sample of 231 sources; based on the sample selection and flux densities, these are expected to be mainly at $z > 1$. Redshifts are crucial for shifting the spectra to a common redshift, which we take to be the mean of the redshift distribution in each of our bins. For these 24 cases we also did not detect strong individual lines that would have allowed us to establish a redshift conclusively with the SPIRE/FTS data. Most of these sources are likely to be at $z > 1$, and we highlight this subsample in the Appendix to encourage follow-up observations.
We also note that the SPIRE/FTS archive does not contain any observations of galaxies in the redshift interval of 0.5 to 0.8, and even in the range of $0.8 < z < 2$, observations are limited to 8 galaxies, compared to attempted observations of at least 28 galaxies at $z > 2$, and possibly as many as 48 when including the subsample without redshifts. The data used in our analysis consist of 197 publicly-available {\it Herschel} \, SPIRE/FTS spectra, as part of various Guaranteed Time (GT) and Open-Time (OT) {\it Herschel} programs summarized in the Appendix (Table \ref{table:obsids}). Detailed properties of the sample are also presented in the Appendix (Table \ref{table:all_targets}) for both low and high redshifts, where the dividing line is at $z=0.8$, with 161 and 36 objects respectively. Table \ref{table:all_targets} also lists 34 sources at the end with existing FTS observations that were not used in the analysis. The majority of unused sources have unknown or uncertain spectroscopic redshifts. This includes MACS J2043-2144, for which a single reliable redshift is not currently available, as there is evidence for three galaxies with $z=2.040$, $z=3.25$, and $z=4.68$ within the SPIRE beam \citep{Zavala2015}. The sources SPT 0551-50 and SPT 0512-59 have known redshifts but do not have magnification factors. The low-redshift sample is restricted to DSFGs with $z > 0.005$ only. This limits the bias in our stacked low-$z$ spectrum from bright nearby galaxies such as M81 and NGC 1068. Our selection does include bright sources such as Arp 220 and Mrk 231 in the stack, but we study their impact by breaking the lowest redshift sample into luminosity bins, including a ULIRG bin with L$_{\rm IR} > 10^{12}\,$L$_{\odot}$. The {\it Herschel} sample of dusty, star-forming galaxies is composed of LIRGs with $10^{11}$ L$_{\odot} \, < $ L $ < 10^{12}\,$L$_{\odot}$ and ULIRGs with L $> 10^{12}\,$L$_{\odot}$. The sample is heterogeneous, consisting of AGN, starbursts, QSOs, LINERs, and Seyfert types 1 and 2. The low-redshift SPIRE/FTS spectra were taken as part of the HerCULES program (\citealp{Rosenberg2015}; PI van der Werf), the HERUS program (\citealp{Pearson2016}; PI Farrah), and the Great Observatory All-Sky LIRG Survey (GOALS; \citealp{Armus2009}, \citealp{Lu2017}, PI: N. Lu), along with supplementary targets from the $\rm{KPGT\textunderscore wilso01\textunderscore 1}$ (PI: C. Wilson) and $\rm{OT2\textunderscore drigopou\textunderscore 3}$ (PI: D. Rigopoulou) programs. At $0.2 < z < 0.5$, the SPIRE/FTS sample of 11 galaxies is limited to \citet{Magdis2014}, apart from one source, IRAS 00397-1312, from \citet{Helouwalker1988} and \citet{Farrah2007}. Note that the \citet{Magdis2014} sample contained two galaxies initially identified to be at $z < 0.5$, but later found to be background $z > 2$ galaxies lensed by $z < 0.5$ foreground galaxies. Those data are included in our high-redshift sample. The high-redshift sample at $z > 0.8$ primarily comes from open-time programs that followed up lensed galaxies from HerMES \citep{Oliver2012} and {\it H}-ATLAS \citep{Eales2010}, as discussed in \citet{Georgethesis}. Despite the boosting from lensing, only a few known cases of individual detections exist in the literature: NB.v1.43 at $z=1.68$ \citep{George2013, Timmons2016}, showing a clear signature of [C\,II] that led to a redshift determination with a far-IR line for the first time; SMMJ2135-0102 (the Cosmic Eyelash; \citealp{Ivison2010}); and ID.81 and ID.9 \citep{Negrello2014}.
With lens models for {\it Herschel}\,-selected lensed sources now in the literature (e.g., \citealp{Bussmann2013,Calanog2014}), the lensing magnification factors are known with sufficient accuracy that the intrinsic luminosities of many of these high-redshift objects can be established. The $z > 0.8$ sample is composed of 30 high-redshift, gravitationally-lensed galaxies (e.g., OT1\textunderscore rivison\textunderscore 1, OT2\textunderscore rivison\textunderscore 2) and six unlensed galaxies (OT1\textunderscore apope\textunderscore 2 and one each from OT1\textunderscore rivison\textunderscore 1 and OT2\textunderscore drigopou\textunderscore 3). The distribution of redshifts can be found in Figure \ref{fig:lumhist}, where we have subdivided the total distribution into five redshift bins: $0.005 < z < 0.05$, $0.05 < z < 0.2$, $0.2 < z < 0.5$, $0.8 < z < 2$, and $2 < z < 4$. The mean redshifts in the five redshift bins are $z = 0.02$, $z = 0.1$, $z = 0.3$, $z = 1.4$, and $z = 2.8$, respectively. For reference, in Figure \ref{fig:lumhist}, we also show the $8-1000\,\mu$m luminosity distribution in the five redshift bins. The distribution spans mostly from LIRGs at low redshift to ULIRGs at $0.05 < z < 0.2$ and above. In the highest redshift bins we find ULIRGs again, despite the increase in redshift, because most of these are lensed sources; with magnification included, the observed sources have apparent luminosities consistent with HyLIRGs. Unfortunately, there is a lack of data between redshifts of $z \sim 0.2$ and $z \sim 1$, with the \citet{Magdis2014} sample and the spectrum of IRAS 00397-1312 from HERUS (\citealp{Pearson2016}) being the only SPIRE/FTS observed spectra in this range. In general, the SPIRE/FTS observations we analyze here were taken in high resolution mode, with a spectral resolving power of $300-1000$, corresponding to a frequency resolution of 1.2\,GHz over the frequency span of $\rm 447\,GHz-1568\,GHz$. The data come from two bolometer arrays: the spectrometer short wavelength (SSW) array, covering $\rm 194\,\mu m-318\,\mu m$ ($\rm 944\,GHz-1568\,GHz$), and the spectrometer long wavelength (SLW) array, covering $\rm 294\,\mu m-671\,\mu m$ ($\rm 447\,GHz-1018\,GHz$). The two arrays have different responses on the sky, with the full-width half-maximum (FWHM) of the SSW beam at 18$^{\prime\prime}$ and the SLW beam varying from 30$^{\prime\prime}$ to 42$^{\prime\prime}$ with frequency \citep{Swinyard2014}. The SPIRE/FTS data typically involve $\sim90-100$ scans of the faint, high-redshift sources and about half as many scans for the lower-redshift sources. Total integration times for each source are presented in Table \ref{table:obsids}. Typical total integration times of order 5000 seconds achieve unresolved spectral line sensitivities down to $\sim 10^{-18}\,{\rm W\,m^{-2}}$ (3$\sigma$). \begin{figure*}[!th] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.75]{z0-005_post.pdf} \caption{Sinc-Gauss and sinc fits to the detected atomic and molecular lines in the low-redshift stack at $0.005<z<0.05$. The spectrum itself is shown in black. The green curve shows a sinc fit, red shows a sinc-Gauss fit, and the blue curve is the 1$\sigma$ jackknife noise level. The sinc fit is often too thin to capture the full width of the spectral lines. The lines are shifted to the rest-frame based on the public spectroscopic redshifts reported in the literature. Fluxes are measured from the best-fit models. The fluxes of the lines are reported in Table \ref{table:linefluxes1}.
} \label{fig:z0-005_post} \end{figure*} \begin{figure*}[!th] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.75]{z005-02_post.pdf} \caption{Sinc-Gauss (red) and sinc (green) fits to the detected atomic and molecular lines in the stack at $0.05<z<0.2$, with the spectrum itself in black. We detect the same lines as in the lowest-redshift stack (Figure \ref{fig:z0-005_post}), albeit with different detection significances. In particular, [C\,I] (1-0) is marginally detected in this redshift bin, as fewer than ten sources contribute to the stack at this frequency, leading to a higher jackknife noise level. Fluxes of lines detected in this stack are also reported in Table \ref{table:linefluxes1}.} \label{fig:z005-02_post} \end{figure*} \begin{figure}[!th] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.75]{intz_post.pdf} \caption{Sinc-Gauss (red) and sinc (green) fits to the [C\,II] line in the $0.2<z<0.5$ stack. The spectrum itself is shown in black with the 1$\sigma$ noise level in blue.} \label{fig:int_z_post} \end{figure} \begin{figure*}[!th] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, scale=0.7]{highz_post1.pdf} \caption{Fits to lines for the three luminosity bins of the high-redshift sources. The sinc-Gauss fit is shown in red, and the sinc-only fit is shown in green. The spectrum itself is shown in black, and the 1$\sigma$ jackknife noise level is in blue.} \label{fig:high_z_post} \end{figure*} \begin{figure}[!th] \centering \includegraphics[trim=1cm 0cm 0cm 0cm, scale=0.75]{co_rb_lumbins.pdf} \includegraphics[trim=1.5cm 0cm 0cm 0.6cm, scale=0.65]{water_lumbins.pdf} \caption{{\it Top:} The carbon monoxide spectral line energy distribution for $0.005 < z < 0.05$ in the five luminosity bins presented in Figure \ref{fig:lowz_lirbins}. The filled regions are taken from \citet{Rosenberg2015} (see also \citealt{RobertsBorsani2017}), and they correspond to the range of CO flux ratios in normal star-forming galaxies (green stripes), starbursts and Seyferts (solid cyan), and ULIRGs and QSOs (orange stripes). {\it Bottom:} Spectral line energy distribution for transitions in water as a function of excitation temperature, as in \citet{Yang2013}, at $0.005 < z < 0.05$ in the luminosity bins in which water lines were strongly detected. These detections are compared to the water spectral line energy distributions for individual sources fit using sinc-Gauss profiles.} \label{fig:co_lumbins} \end{figure} \begin{figure*}[t] \centering \includegraphics[trim=2cm 0cm 0cm 0cm, scale=0.85]{contours_lumbins.pdf} \caption{Conditions in the ISM as probed by the neutral [C\,I] (2-1)/[C\,I] (1-0) line ratio for the $0.005 < z < 0.05$ and $0.05 < z < 0.2$ redshift bins. RADEX contours for an array of theoretical [C\,I] (2-1)/[C\,I] (1-0) ratios are shown in black. The dashed lines represent the 1$\sigma$ uncertainty.} \label{temp_ci_lumbins} \end{figure*} \section{Stacking Analysis} The Level-2 FTS spectral data are procured from the {\it Herschel} Science Archive (HSA), where they have already been reduced using version SPGv14.1.0 of the \textit{Herschel} Interactive Processing Environment (HIPE; \citealt{Ott2010}) SPIRE spectrometer single pointing pipeline \citep{Fulton2016} with calibration tree \textsc{SPIRE\_CAL\_14\_2}. We use the point-source calibrated spectra. Additional steps are required to further reduce the data. An important step is the background subtraction.
While {\it Herschel}/SPIRE-FTS observations include blank sky dark observations taken on or around the same observing day as the source observations, these do not necessarily provide the best subtraction of the background \citep{Pearson2016}. The same study also showed that attempts to use a super-dark, by combining many dark-sky observations into an average background, do not always yield an acceptable removal of the background from science observations. Instead, the off-axis detectors present in each of the SPIRE arrays are used to construct a ``dark'' spectrum \citep{Polehampton2015}. These off-axis detectors provide multiple measurements of the sky and telescope spectra simultaneous with the science observations and are more effective at correcting the central spectrum. The background is constructed by taking the average of the off-axis detector spectra, but only after visually checking the spectra via HIPE's background subtraction script \citep{Polehampton2015} to ensure that the background detectors do not contain source emission. If any outliers are detected, they are removed from the analysis. Such outliers are mainly due to science observations that contain either an extended source or a random source that falls within the arrays. We use the average from all acceptable off-axis detectors from each science observation as the background to subtract from the central one. In a few unusual cases, a continuum bump from residual telescope emission was better subtracted using a blank sky dark observation rather than an off-axis subtraction. In these cases, background subtraction was performed using the blank sky dark observation. As part of the reduction, and similar to past analyses (e.g., \citealt{Rosenberg2015, Pearson2016}), we found a sizable fraction of the sources to show a clear discontinuity in flux between the continuum levels of the central SLW and SSW detectors in the overlap frequency interval between 944\,GHz and 1018\,GHz. If this discontinuity is still visible after the background subtraction (using either the off-axis detector background or the blank sky observation background) as discussed above, then we consider this offset to be an indication of extended source emission. For extended sources, we subtract a blank sky dark (and not an off-axis dark, as off-axis detectors may contain source emission) and correct for the source's size with HIPE's semiExtendedCorrector tool (SECT; \citealt{Wu2013}), following the \citet{Rosenberg2015} method of modeling the source as a Gaussian and normalizing the spectra to a Gaussian reference beam of 42$^{\prime\prime}$. There are two other sources of discontinuity between the SLW and SSW detectors: one from a flux droop in the central SLW detector due to the recycling of the SPIRE cooler \citep{Pearson2016}, and another due to potential pointing offsets \citep{Valtchanov2014}. Due to the differences in the size of the SLW and SSW SPIRE beams, a pointing offset can cause a larger loss of flux in the SSW beam than in the SLW beam. If an extended source correction is not able to fix the discontinuity between the SLW and SSW detectors, the discontinuity likely arises from the cooler recycling or from a pointing offset. We assume that these two effects are negligible, as we remove any continuum remaining in the central SLW and SSW detectors after the application of SECT by subtracting a second-order polynomial fit to the continuum.
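In outline, the per-observation cleaning step for a point source reduces to an off-axis average plus a low-order continuum removal. A minimal sketch, assuming the central and visually vetted off-axis spectra have been exported to arrays; the function and variable names below are ours for illustration, not HIPE's:

\begin{verbatim}
import numpy as np

def clean_central_spectrum(nu, central, offaxis):
    """Background-subtract and flatten one point-source FTS spectrum.

    nu:      (n_bin,) frequency grid in GHz
    central: (n_bin,) central-detector flux densities
    offaxis: (n_det, n_bin) spectra from vetted off-axis detectors
             (detectors showing source emission removed beforehand).
    """
    background = offaxis.mean(axis=0)   # average off-axis "dark"
    spec = central - background
    # Remove any residual continuum with a second-order polynomial fit;
    # line fitting is then done on the continuum-subtracted spectrum.
    coeffs = np.polyfit(nu, spec, deg=2)
    return spec - np.polyval(coeffs, nu)
\end{verbatim}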
Once the corrected individual spectra are obtained, the high-redshift lensed sources are corrected for lensing magnification. The magnification factors come from lens models based on Sub-millimeter Array (SMA) and Keck/NIRC2-LGS adaptive optics observations \citep{Bussmann2013, Calanog2014}. Though these are mm-wave and optical magnifications, while the present study involves far-IR observations, we ignore any effects of differential magnification \citep{Serjeant2014}. We simply make use of the best-determined magnification factor, mainly from the SMA analysis \citep{Bussmann2013}. For the overlapping lensed source sample with PACS spectroscopy, the lensing magnification factors used here are consistent with the values used in \citet{Wardlow2017}. Sources with PACS spectroscopy that appear in \citet{Wardlow2017} are marked in Table \ref{table:all_targets}. \begin{figure}[!th] \centering \includegraphics[trim=0.5cm 0cm 0cm 0cm, scale=0.65]{data_calibration.pdf} \caption{Line versus infrared luminosity (rest-frame $8-1000\,\mu$m), L$_{\rm IR}$, of star-forming galaxies for the [C\,II], [O\,I], and [O\,III] fine structure lines at high redshift. Background data are from the literature sources listed in the text. The solid green lines correspond to the average log$_{10}$(L$_{\rm line}$/L$_{\rm IR}$) values ($-3.03$, $-2.94$ and $-2.84$) for the [O\,I] 63.18\,$\mu$m, [O\,III] 88.36\,$\mu$m, and [C\,II] 157.7\,$\mu$m lines from the literature, respectively. The reason for the choice of a linear relation is explained in the text. The cyan stripes correspond to two times the dispersion around the mean relation ($\sigma = 0.35, 0.48$ and $0.43$, respectively). Also shown, for comparison, are the $L_{\rm line}$/L$_{\rm IR}$ relations found in the literature (see text).} \label{data_calibration} \end{figure} \begin{figure*}[!th] \centering \includegraphics[trim=1cm 0cm 0cm 0cm, scale=0.75]{cii_plus_oi.pdf} \includegraphics[trim=1cm 0cm 0cm 0cm, scale=0.75]{oi_over_cii.pdf}\\ \includegraphics[trim=1cm 0cm 0cm 1cm, scale=0.75]{oiii_over_cii.pdf} \includegraphics[trim=1cm 0cm 0cm 1cm, scale=0.75]{nii.pdf} \caption{{\it Top Left:} Ratio of ([C\,II]+[O\,I]) luminosity to total infrared luminosity (rest-frame $8-1000\,\mu$m) in three luminosity bins for sources with $0.8 < z < 4$, as a function of total infrared luminosity. The breakdown of the three luminosity bins is as follows: L$_{\rm IR} < 10^{12.5}\,$L$_{\odot}$, $10^{12.5}\,$L$_{\odot} \, < $L$_{\rm IR} < 10^{13}\,$L$_{\odot}$, and L$_{\rm IR} > 10^{13}\,$L$_{\odot}$; however, [O\,I] is only detected in the middle luminosity bin. For comparison, we show data from \citet{Cormier2015,Brauher2008,Farrah2013} and \citet{SHINING2011}. {\it Bottom Left and Top Right:} Line ratios as a function of total infrared luminosity in three luminosity bins for sources with $0.8 < z < 4$. For comparison, we show data from \citet{Cormier2015} and \citet{Brauher2008}. {\it Bottom Right:} Line luminosity of the [N\,II] transition in luminosity bins for sources at $0.005 < z < 0.05$. Background data were produced by fitting the [N\,II] lines in individual spectra in the HerCULES and GOALS samples.} \label{fig:cii_plus_oi} \end{figure*} To obtain the average stacked spectrum in each of the redshift bins, or in the luminosity bins discussed later, we follow the stacking procedure outlined by \citet{Spilker2014}.
It involves scaling the flux densities in the individual spectra in each redshift bin to the flux densities that the source would have were it located at some common redshift (which we take to be the mean redshift in each bin) and then scaling to a common luminosity, so that we can present an average spectrum of the sample. For simplicity, we take the mean redshift and median infrared luminosity in each bin and both scale up and scale down individual galaxy spectra in both redshift and luminosity to avoid introducing biases in the average stacked spectrum; however, we note that the sample does contain biases associated with the initial sample selections in the proposals that were accepted for {\it Herschel}/SPIRE-FTS observations. We discuss how such selections impact a precise interpretation of the spectra in the discussion. We now outline the process used in the scaling of spectra. The background-subtracted flux densities of the spectra are scaled to the flux values that they would have at the common redshift, taken to be the mean redshift in each of the redshift categories; namely, $z_{\rm com} = 0.02$ for the $0.005<z<0.05$ sources, $z_{\rm com} = 0.1$ for $0.05<z<0.2$ sources, $z_{\rm com} = 0.3$ for $0.2<z<0.5$ sources, $z_{\rm com} = 1.4$ for $0.8<z<2$ sources, and $z_{\rm com} = 2.8$ for $2<z<4$ sources. The choice between median or mean redshift does not significantly affect the overall spectrum or line fluxes. The flux density and error values (error values are obtained from the error column of the level-2 spectrum products from the {\it Herschel} Science Archive) of each spectrum are multiplied by the scaling factor given in \citet{Spilker2014}: \begin{equation} f=\bigg(\frac{D_{\rm L}(z_{\rm src})}{D_{\rm L}(z_{\rm com})}\bigg)^2 \times\bigg(\frac{1 + z_{\rm com}}{1 + z_{\rm src}}\bigg) \label{scale_factor} \end{equation} where $D_{\rm L}$ is the luminosity distance. The flux density and error values of each spectrum are then representative of the flux density and error values that the source would have were it located at $z_{\rm com}$. The frequency axes of the scaled spectra are then converted from observed-frame frequencies to rest-frame frequencies.
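A minimal sketch of this redshift scaling (Eq.~\ref{scale_factor}), using the cosmology adopted in this paper; the function name and example values are ours, for illustration only:

\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper: flat LCDM with Om0 = 0.27, H0 = 70.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

def flux_scale_factor(z_src, z_com):
    """Factor multiplying flux densities (and errors) to move a source
    from z_src to the common redshift z_com of its bin (Spilker et al.
    2014)."""
    dl_ratio = (cosmo.luminosity_distance(z_src)
                / cosmo.luminosity_distance(z_com))
    return float(dl_ratio**2 * (1.0 + z_com) / (1.0 + z_src))

# Example: a z = 0.031 source scaled to the z_com = 0.02 frame of the
# lowest-redshift bin; frequencies shift as nu_rest = nu_obs*(1 + z_src).
f = flux_scale_factor(0.031, 0.02)   # > 1: brighter at the nearer z_com
\end{verbatim}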
To normalize the spectra, all flux densities and errors are then scaled such that each source has the same total infrared luminosity (rest-frame $8-1000 \, \mu$m); namely, L$_{\rm IR} = 10^{11.35} \,$L$_{\odot}$, $10^{12.33}\,$L$_{\odot}$, $10^{11.89}\,$L$_{\odot}$, $10^{12.53}\,$L$_{\odot}$ and $10^{12.84}\,$L$_{\odot}$ in each of the five bins, respectively. In the two highest redshift bins, we calculate a total infrared luminosity by fitting a single-temperature, optically-thin, modified blackbody (i.e. greybody with $S({\nu})\propto\nu^{\beta}B_{\nu}(T)$, where $B_{\nu}(T)$ is the Planck function) spectral energy distribution (SED), commonly used in the literature (e.g. \citealp{Casey2012, Bussmann2013}), to the available photometry in the infrared from {\it Herschel} and public IRSA data. For this we use the publicly available code developed by \citet{Casey2012}, assuming a fixed emissivity ($\beta = 1.5$; e.g. \citealp{Bussmann2013}). The resulting infrared luminosities are presented in Table \ref{table:all_targets}, along with lensing magnification factors and references. Luminosities in the tables are corrected for lensing magnification (where applicable), and we ignore the uncertainty in magnification from existing lens models. Sources without a magnification factor $\mu$ are not affected by gravitational lensing. After the spectra are scaled to a common IR luminosity, a second-order polynomial is fit to the continuum of each source and is subsequently subtracted from each source spectrum. Instrumental noise impacts the continuum subtraction and leads to residuals in the continuum-subtracted spectrum. These residuals in turn impact the detection of faint lines. A number of objects have multiple FTS spectra, taken at multiple time intervals as part of the same program or in observations conducted in different programs. Multiples of the same object are combined into a single average spectrum by calculating the mean flux density at each frequency across the repeats. This mean spectrum is what is used in the stacking procedure. After the spectra are calibrated and scaled, the flux values at each frequency in the rest frame of the spectra are stacked using an inverse variance weighting scheme, with the inverse of the square of the flux errors as weights. In the $0.005 < z < 0.05$ stack, a minority of the sources (though still a significant subset of the total) have high signal-to-noise ratios and thus dominate over the other sources when using the inverse variance weighting scheme. To avoid this bias without throwing out sources, we stack the $0.005 < z < 0.05$ bin by calculating the mean stack without inverse variance weighting. The unweighted mean stack is shown in Figure \ref{fig:z0-005}. The inverse variance weighted stack for this redshift bin is presented in the Appendix for comparison. The noise level of the stacked spectrum in each of the five redshift bins is estimated using a jackknife technique in which we remove one source from the sample and then stack. The removed source is replaced, and this process is repeated for each source in the sample. The jackknife error in the mean of the flux densities at each frequency from the jackknifed stacks is taken to be the 1$\sigma$ noise level in the overall stacked spectrum in each redshift bin. The red curves in the upper panels of Figures \ref{fig:z0-005} - \ref{fig:z2-4} are found by smoothing the jackknife error curve.
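The weighting and jackknife steps can be summarized in a short sketch, assuming the scaled spectra have been resampled onto a common rest-frame grid; the array shapes and the particular jackknife convention below are our assumptions:

\begin{verbatim}
import numpy as np

def stack_spectra(flux, err, weighted=True):
    """flux, err: (n_src, n_bin) arrays on a common rest-frame grid,
    already scaled to z_com and to the common L_IR; NaN where a source
    does not cover a given frequency bin."""
    if weighted:
        w = 1.0 / err**2                 # inverse-variance weights
        return np.nansum(w * flux, axis=0) / np.nansum(w, axis=0)
    return np.nanmean(flux, axis=0)      # unweighted (lowest-z bin)

def jackknife_noise(flux, err, weighted=True):
    """Leave-one-out stacks; their spread sets the 1-sigma noise level."""
    n = flux.shape[0]
    loo = np.array([stack_spectra(np.delete(flux, i, axis=0),
                                  np.delete(err, i, axis=0), weighted)
                    for i in range(n)])
    # Standard jackknife variance: (n-1)/n * sum((theta_i - mean)^2)
    return loo.std(axis=0, ddof=0) * np.sqrt(n - 1)
\end{verbatim}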
\section{Stacking Results} The stacked spectra in each of the five redshift bins are shown in Figures \ref{fig:z0-005} - \ref{fig:z2-4}, while in Figure \ref{fig:lowz_lirbins} we show the mean stacks (no inverse-variance weighting) for the $0.005 < z < 0.05$ bin, sub-dividing the sample into five luminosity bins given by $10^{11.0}\,$L$_{\odot}<$ L$_{\rm IR}<10^{11.2}\,$L$_{\odot}$, $10^{11.2}\,$L$_{\odot}<$ L$_{\rm IR}<10^{11.4}\,$L$_{\odot}$, $10^{11.4}\,$L$_{\odot}<$ L$_{\rm IR}<10^{11.6}\,$L$_{\odot}$, $10^{11.6}\,$L$_{\odot}<$ L$_{\rm IR}<10^{12.0}\,$L$_{\odot}$, and L$_{\rm IR}>10^{12.0}\,$L$_{\odot}$. For the purposes of this study and for PDR model interpretations, we concentrate on lines that are detected at a signal-to-noise ratio greater than 3.5. The stacks do reveal detections with signal-to-noise ratios at the level of 2.5 to 3; we will return to those lines in future papers. The natural line shape of the SPIRE FTS is a sinc profile \citep{Swinyard2014}. A sinc profile is typically used to fit unresolved spectral lines. However, a sinc profile may be too thin to fully capture the width of broad, partially-resolved extragalactic spectral lines, in which case a sinc-Gauss (sinc convolved with a Gaussian) profile can provide a better fit\setcounter{footnote}{0}\footnote{\url{http://herschel.esac.esa.int/hcss-doc-15.0/index.jsp\#spire_drg:_start}}. For spectral lines with the same intrinsic line width, the sinc-Gauss fit gives a higher flux measurement than the sinc fit; the ratio of sinc-Gauss to sinc flux increases as a function of increasing spectral line frequency. For broad line widths, the sinc-Gauss fit contains significantly more flux than the pure sinc fit. Because the stacked SPIRE/FTS spectra contain a variety of widths for each spectral line, and because the width of each line is altered when scaling the frequency axis of the spectra to the common-redshift frame, the sinc profile appeared to under-fit all of the spectral lines in the stacked spectra, so a sinc-Gauss profile was used for flux extraction (see Figures \ref{fig:z0-005_post} - \ref{fig:high_z_post}). The width of the sinc component of the fit was fixed at the native SPIRE FTS resolution of 1.184\,GHz, and the width of the Gaussian component was allowed to vary. The integral of the fitted sinc-Gauss profile was taken to be the measured flux. The fluxes from the fits are presented in Tables \ref{table:linefluxes1} - \ref{table:linefluxes3}. In the case of an undetected line (i.e., the feature has less than 3.5$\sigma$ significance), we place an upper limit on its flux by injecting an artificial line with velocity width 300 km s$^{-1}$ (a typical velocity width for these lines; e.g., \citealt{Magdis2014}) into the stack at the expected frequency and varying the amplitude of this line until it is measured with 2$\sigma$ significance. The flux of this artificial line is taken to be the upper limit on the flux of the undetected line. The error on the fluxes includes a contribution from the uncertainty in the fits to the spectral lines as well as a 6$\%$ uncertainty from the absolute calibration of the FTS. The error due to the fit is estimated by measuring the ``bin-to-bin'' spectral noise of the residual spectrum in the region around the line of interest (see the SPIRE Data Reduction Guide). The residual spectrum is divided into bins with widths of 30 GHz, and the standard deviation of the flux densities within each bin is taken to be the noise level in that bin. Additionally, we incorporate a 15$\%$ uncertainty in the lowest redshift stack for corrections to the spectra for (semi-)extended sources \citep{Rosenberg2015}. This 15\% uncertainty is not included for sources with $z > 0.05$, as these are all point sources (as verified by inspection).
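A minimal sketch of this line fitting and flux extraction, with the sinc width fixed at 1.184\,GHz and the sinc-Gauss evaluated by numerical convolution; the implementation details, grids, and placeholder data below are our assumptions, not the HIPE fitter:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

SINC_W = 1.184  # GHz: fixed sinc width (native FTS resolution)

def sinc_line(nu, amp, nu0):
    # np.sinc(x) = sin(pi x)/(pi x): unresolved instrumental profile
    return amp * np.sinc((nu - nu0) / SINC_W)

def sinc_gauss_line(nu, amp, nu0, sig_g, step=0.02, half=40.0):
    """Sinc convolved numerically with a Gaussian of free width sig_g."""
    kern_x = np.arange(-half, half + step, step)
    kern = np.exp(-0.5 * (kern_x / sig_g) ** 2)
    kern /= kern.sum()
    fine = np.arange(nu.min() - half, nu.max() + half, step)
    prof = np.convolve(np.sinc((fine - nu0) / SINC_W), kern, mode="same")
    return amp * np.interp(nu, fine, prof)

# Placeholder spectrum: one partially resolved line plus noise.
rng = np.random.default_rng(1)
nu = np.arange(550.0, 600.0, 0.3)  # GHz
spec = (sinc_gauss_line(nu, 1.0, 576.3, 0.6)
        + rng.normal(0.0, 0.05, nu.size))

popt, _ = curve_fit(sinc_gauss_line, nu, spec, p0=[1.0, 576.3, 0.5])
grid = np.linspace(560.0, 590.0, 4000)
model = sinc_gauss_line(grid, *popt)
flux = model.sum() * (grid[1] - grid[0])  # integrated line flux
\end{verbatim}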
We now discuss our stacking results for the five redshift bins; for simplicity, we define low redshift as $0.005 < z < 0.2$, intermediate redshift as $0.2 < z < 0.5$, and high redshift as $0.8 < z < 4$; the low- and high-redshift ranges each comprise two of the bins defined above. Within these bins we also consider luminosity bins when adequate statistics allow us to further divide the samples. \subsection{Low-redshift stacks} Figures \ref{fig:z0-005} and \ref{fig:z005-02} show the stacked FTS spectra and corresponding uncertainties, along with major atomic and molecular emission and absorption lines, for the $0.005<z<0.05$ and $0.05<z<0.2$ bins, respectively. With the large number of galaxies in this sample, the stack in the lowest redshift bin yields a highly reliable average far-IR spectrum showing a number of ISM atomic and molecular emission lines. In particular, we detect all the CO lines with $J_{\rm upper}\ge5$ out to the high excitation line of $\rm CO(13-12)$. This allows us to construct the CO spectral line energy distribution (SLED) and to explore the ISM excitation state in DSFGs in comparison with other starbursts and with normal star-forming galaxies (see Section 5). We further detect multiple H$_2$O emission lines in these stacks, which arise from the very dense regions in starbursts. The strength of the rotational water lines rivals that of the CO transition lines. We additionally detect [C\,I] (1-0) at 609\,$\mu$m and [C\,I] (2-1) at 370\,$\mu$m, along with [N\,II] at 205\,$\mu$m, in both redshift bins. We will use these measured line intensity ratios in Section 5 to construct photodissociation region models of the ISM and to study the density and ionizing photon intensities. We note here that the [C\,I] line ratios are very sensitive to the ISM conditions and would therefore not always agree with more simplistic models of the ISM. We will discuss these further in Section 5. For comparison to Figure \ref{fig:z0-005}, which is stacked using an unweighted mean, Figure \ref{fig:z005-02_inv} shows the $0.005 < z < 0.05$ sources stacked with an inverse variance weighting. A few absorption lines also appear in the low-redshift stack. Despite Arp 220 \citep{Rangwala2011} being the only individual source with strong absorption features, many of the absorption features are still present in the stack due to the high signal-to-noise ratio of Arp 220 in conjunction with an inverse variance weighting scheme for stacking. The SPIRE FTS spectrum of Arp 220 has been studied in detail in \citet{Rangwala2011} and is characterized by strong absorption features in water and the related molecular ions OH$^+$ and H$_2$O$^+$, interpreted as a massive molecular outflow. The best-fit profiles of the detected lines in the low-redshift stacks are shown in Figures \ref{fig:z0-005_post} and \ref{fig:z005-02_post} for the $0.005<z<0.05$ and $0.05<z<0.2$ redshift bins, respectively. Fluxes in $\rm{W\,m}^{-2}$ are obtained by integrating the best-fit line profiles. Table \ref{table:linefluxes1} summarizes these line fluxes as well as velocity-integrated fluxes from the sinc-Gauss fits for detections with $\rm S/N > 3.5$ in these stacks. As discussed above, we further stack the lowest redshift bin ($0.005<z<0.05$) in five infrared luminosity bins. Figure \ref{fig:lowz_lirbins} shows the stacked FTS spectra in each of these luminosity bins; see the caption of Figure \ref{fig:lowz_lirbins} for the redshift and luminosity breakdown of the sample. By comparing these stacks we can look at the effects of infrared luminosity on emission line strengths. It appears from these stacked spectra that the high-$J$ CO lines are comparable in each of the luminosity bins. We explore the variation in the [N\,II] line in the discussion. Fluxes for the lines in each luminosity bin are tabulated in Table \ref{table:linefluxes2}. \subsection{Intermediate-redshift stacks} We show the intermediate-redshift ($0.2<z<0.5$) stack in Figure \ref{fig:z02-5}. Due to the limited number of galaxies observed with SPIRE/FTS in this redshift range, we detect only a bright [C\,II] line at our threshold signal-to-noise ratio of 3.5. The [C\,II] 158\,$\mu$m fine structure line is a main ISM cooling line and is the most pronounced ISM emission line detectable at high redshifts, when it moves into the mm bands, revealing valuable information on the state of the ISM.
We further discuss these points in Section 5. Figure \ref{fig:int_z_post} shows the best-fit profile to the [C\,II] line in the intermediate-redshift stack. The measured fluxes from this profile are reported in Table \ref{table:linefluxes1}. The average [C\,II] flux from the stack is lower than the measurements reported in \citet{Magdis2014} for individual sources (note that our $0.2 < z < 0.5$ sample is composed almost entirely of the sources from \citet{Magdis2014}, the exception being the source IRAS 00397-1312). Stacking without IRAS 00397-1312 leads to similar results. We attribute the deviation of the stacked [C\,II] flux toward lower values to the scalings we apply when shifting spectra to a common redshift and common luminosity during the stacking process. \subsection{High-redshift stacks} The high-redshift ($0.8<z<2$ and $2<z<4$) FTS stacks, comprising a total of 36 individual spectra for the sources in Table \ref{table:all_targets}, are shown in Figures \ref{fig:z08-2} and \ref{fig:z2-4}. The stack at $0.8<z<2$ also suffers from the limited number of galaxies observed with the FTS; in this bin, [C\,II] 158\,$\mu$m and [O\,III] 88\,$\mu$m appear. We detect the [C\,II] 158\,$\mu$m, [O\,III] 88\,$\mu$m and [O\,I] 63\,$\mu$m atomic emission lines with $\rm S/N>3.5$ in the stacked spectrum at $2<z<4$. The relative strengths of these main atomic fine structure cooling lines will be used to construct the photodissociation region model of the ISM of DSFGs at these extreme redshifts and to investigate the molecular density and radiation intensity. To study the strengths of spectral lines at different luminosities, all sources with $z>0.8$ were combined into a single sample and then divided into three luminosity bins with roughly the same number of sources in each bin. The median luminosities in the three bins are $10^{12.41}\,$L$_{\odot}$, $10^{12.77}\,$L$_{\odot}$, and $10^{13.24}\,$L$_{\odot}$. See Tables \ref{table:linefluxes3} and \ref{table:highz_flux} for the precise breakdown of the sample and the measured fluxes. Each of the subsamples is separately stacked, and the line fluxes are measured as a function of far-infrared luminosity. Figure \ref{fig:high_z_post} shows the best-fit line profiles for the three main detected emission lines in the three infrared luminosity bins. The ISM emission lines are more pronounced with increasing infrared luminosity. This agrees with results from individually detected atomic emission lines at high redshifts \citep{Magdis2014, Riechers2014}, although deviations from a linear L$_{\rm line}$--L$_{\rm IR}$ relation, in the form of emission line deficits, are often observed depending on the physics of the ISM \citep{Stacey2010}. These are further discussed in the next section. \section{Discussion} The ISM atomic and molecular line emission observed in the stacked spectra of DSFGs can be used to characterize the physical conditions of the gas and radiation in the ISM across a wide redshift range. This involves investigating the CO and water molecular line transitions and the atomic line diagnostic ratios with respect to the underlying galaxy infrared luminosity, comparing them to other populations, and modeling those line ratios to characterize the ISM. \subsection{The CO SLED} The CO molecular line emission intensity depends on the conditions in the ISM. Whereas the lower-$J$ CO emission traces the more extended cold molecular ISM, the high-$J$ lines are observational evidence of ISM in more compact starburst clumps (e.g., \citealt{Swinbank2011}). In fact, observations of the relative strengths of the various CO lines have been attributed to a multi-phase ISM with different spatial extensions and temperatures \citep{Kamenetzky2016}. The CO spectral line energy distribution (SLED), plotted as the relative intensity of the CO emission lines as a function of the rotational quantum number, $J$, hence reveals valuable information on the ISM conditions (e.g., \citealt{Lu2014}). Figure \ref{fig:co_lumbins} shows the high-$J$ CO SLED of the DSFGs for stacks in the two low redshift bins of $0.005<z<0.05$ and $0.05<z<0.2$. Here we are limited to the $J_{\rm upper} \geq 5$ CO SLED covered by the SPIRE/FTS in the redshift range probed. Combined {\it Herschel}/SPIRE and PACS stacked spectra of DSFGs and the corresponding full CO SLED will be presented in Wilson et al. (in prep.). The CO SLED is normalized to the CO (5-4) line flux density and plotted as a function of ${J_{\rm upper}}$.
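As a simple illustration of this normalization, the snippet below builds the SLED of the lowest redshift stack from the integrated fluxes in Table \ref{table:linefluxes1} (the dictionary layout is our own, and integrated fluxes are used here as a stand-in for line flux densities):

\begin{verbatim}
# CO fluxes from the 0.005 < z < 0.05 stack (Table 1), 1e-18 W m^-2
co_flux = {5: 15.0, 6: 14.0, 7: 12.0, 8: 11.0, 9: 9.7,
           10: 9.6, 11: 4.9, 12: 5.4, 13: 2.3}   # key: J_upper

# Normalize each line to CO(5-4) to obtain the SLED
sled = {j: f / co_flux[5] for j, f in co_flux.items()}
for j in sorted(sled):
    print(f"CO({j}-{j-1}) / CO(5-4) = {sled[j]:.2f}")
\end{verbatim}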
The background colored regions in Figure \ref{fig:co_lumbins} are from \citet{Rosenberg2015}, in which they determined a range of CO flux ratios for three classes of galaxies from the HerCULES sample: star-forming objects, starbursts and Seyferts, and ULIRGs and QSOs. The $0.005<z<0.05$ sample is consistent with the starburst and Seyfert regions, whereas line measurements from the stacked spectra in the $0.05<z<0.2$ redshift bin are more consistent with the ULIRG and QSO regions. Both measurements are higher than the expected region for normal star-forming galaxies, which indicates a heightened excitation state in DSFGs, specifically in the high-$J$ lines, linked to stronger radiation from starburst and/or QSO activity. Increased star-formation activity in galaxies is often accompanied by an increase in the molecular gas reservoirs. This is studied locally as a direct correlation between the observed infrared luminosity and CO molecular gas emission in individual LIRGs and ULIRGs \citep{Kennicutt2012}. To further investigate this correlation, we looked at the CO SLED in our low-$z$ ($0.005<z<0.05$) sample in bins of infrared luminosity (Figure \ref{fig:lowz_lirbins}). Figure \ref{fig:co_lumbins} further shows the CO SLED for the different luminosity bins. The stronger radiation present in the higher luminosity bins, as traced by the total infrared luminosity, is responsible for the increase in the CO line intensities. In the highest luminosity bin, the excitation of the high-$J$ lines could also partially be driven by AGN activity, given the larger fraction of QSO host galaxies among the most IR-luminous sources (e.g., \citealt{Rosenberg2015}). \subsection{ISM Emission Lines} \subsubsection{Atomic and Molecular Line Ratios} We detect several $\rm H_2O$ emission lines in the two lowest redshift bins of $0.005<z<0.05$ and $0.05<z<0.2$. Fluxes from the detected water rotational lines are plotted in Figure \ref{fig:co_lumbins}, along with data from fits made to individual spectra from the sample that exhibit strong water line emission. These include well-known sources such as Arp 220 at $z = 0.0181$ \citep{Rangwala2011} and Mrk 231 at $z = 0.0422$ \citep{Vanderwerf2010,Gonzalez2010}. $\rm H_2O$ lines are normally produced in the warm, dense regions of starbursts \citep{Danielson2011} and may indicate infrared pumping by AGN \citep{GonzalezAlfonso2010,Bradford2011}. Figure \ref{fig:co_lumbins} also shows the different water emission lines and the ISM temperatures required for their production.
As we see from the figure, at the highest-temperature end the emission is more pronounced in galaxies in the $0.05<z<0.2$ redshift range. These systems tend to have a higher median infrared luminosity (Figure \ref{fig:lumhist}) and hence hotter ISM temperatures, which are believed to drive the high-temperature water emission \citep{Takahashi1983}. Figure \ref{fig:co_lumbins} also shows the dependence of the water emission lines on the infrared luminosity for three of our five luminosity bins in the $0.005<z<0.05$ sample with the strongest H$_2$O detections. Using a sample of local {\it Herschel} SPIRE/FTS spectra with individual detections, \citet{Yang2013} showed a close-to-linear relation between the strength of the water lines and L$_{\rm IR}$. We observe a similar relation in our stacked, binned water spectra of DSFGs across all transitions, with higher water emission line intensities in the more IR-luminous samples. The first two neutral [C\,I] transitions ([C\,I] (1-0) at 609\,$\mu$m and [C\,I] (2-1) at 370\,$\mu$m) are detected in both low-$z$ stacks (see Figures \ref{fig:z0-005} and \ref{fig:z005-02}). We examine the [C\,I] line ratios in terms of gas density and kinetic temperature using the non-LTE radiative transfer code RADEX\footnote{\url{http://home.strw.leidenuniv.nl/~moldata/radex.html}} \citep{vanderTak2007}. To construct the RADEX models, we use the collisional rate coefficients of \citet{Schroder1991} and the same range of ISM physical conditions reported in \citet{Pereira2013} (with $T=10-1000\,{\rm K}$, $n_{\rm H_2}=10-10^8\,{\rm cm^{-3}}$ and $N_{\rm C}/\Delta v=10^{12}-10^{18}\,{\rm cm^{-2}/(km\,s^{-1})}$). Figure \ref{temp_ci_lumbins} shows the kinetic temperature and molecular hydrogen density derived by RADEX for the observed [C\,I] ratios in the low-$z$ stacks for the different infrared luminosity bins, with contours showing the different models. The [C\,I] emission is observed to originate from the colder ISM traced by CO (1-0), rather than the warm molecular gas component traced by the high-$J$ CO lines \citep{Pereira2013}; indeed, the temperature is well constrained by these diagrams at high gas densities. The relative strengths of the fine structure emission lines are important diagnostics of the physical conditions in the ISM. Here we focus on the three main atomic lines detected at $z>0.8$ ([C\,II] at 158\,$\mu$m, [O\,I] at 63\,$\mu$m and [O\,III] at 88\,$\mu$m) and study their relative strengths as well as their strength in comparison to the infrared luminosity of the galaxy. We break all sources with $z > 0.8$ into three smaller bins based on total infrared luminosity. Table \ref{table:highz_flux} lists the infrared luminosity bins used. The [C\,II] line is detected in each subset of the high-redshift stack, whereas [O\,I] and [O\,III] are only detected in the $10^{12.5}\,$L$_{\odot} < $ L $< 10^{13}\,$L$_{\odot}$ infrared luminosity bin. Figure \ref{data_calibration} shows the relation between emission line luminosity and total infrared luminosity. The total infrared luminosity is integrated over the rest-frame wavelength range $8-1000\,\mu$m.
Luminosities in different wavelength ranges in the literature have been converted to L$_{\rm IR}$ using the mean factors derived from Table 7 of \citet{Brisbin2015}: \begin{subequations}\label{grp} \begin{align} &{\rm log}({\rm L}_{\rm IR}) = {\rm log}({\rm L}(42.5\,\mu{\rm m} - 122.5\,\mu{\rm m})) + 0.30 \\ &{\rm log}({\rm L}_{\rm IR}) = {\rm log}({\rm L}(40\,\mu{\rm m} - 500\,\mu{\rm m})) + 0.145 \\ &{\rm log}({\rm L}_{\rm IR}) = {\rm log}({\rm L}(30\,\mu{\rm m} - 1000\,\mu{\rm m})) + 0.09 \end{align} \end{subequations}
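These conversions are simple additive offsets in $\log_{10}$ space; a small helper of the following form (a sketch; the band labels are our own) makes the bookkeeping explicit:

\begin{verbatim}
import math

# Additive log10 offsets from Table 7 of Brisbin et al. (2015)
LOG_OFFSET = {"42.5-122.5um": 0.30, "40-500um": 0.145,
              "30-1000um": 0.09}

def to_L_IR(L_band, band):
    """Convert a band-limited luminosity [L_sun] to L_IR(8-1000um)."""
    return 10.0 ** (math.log10(L_band) + LOG_OFFSET[band])

# Example: L(40-500um) = 1e12 L_sun corresponds to
# to_L_IR(1e12, "40-500um") ~ 1.4e12 L_sun
\end{verbatim}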
For the [C\,II] 158\,$\mu$m line we used data from the compilation by \citet{Bonato2014} (and references therein), \citet{Georgethesis}, \citet{Brisbin2015}, \citet{Oteo2016}, \citet{Gullberg2015}, \citet{Schaerer2015}, \citet{Yun2015}, \citet{Magdis2014}, \citet{Farrah2013}, \citet{Stacey2010}, \citet{Diaz2013}, and a compilation of data from SHINING \citep{SHINING2011}. For the [O\,I] 63\,$\mu$m line we used data from the compilation by \citet{Bonato2014} (and references therein), \citet{Ferkinhoff2014}, \citet{Brisbin2015}, \citet{Farrah2013}, and SHINING \citep{SHINING2011}. For the [O\,III] 88\,$\mu$m line we used data from the compilation by \citet{Bonato2014} (and references therein), \citet{Georgethesis}, and SHINING \citep{SHINING2011}. As in \citet{Bonato2014}, we excluded all objects for which there is evidence for a substantial AGN contribution. The line and continuum measurements of strongly lensed galaxies given by \citet{Georgethesis} were corrected using the gravitational magnifications, $\mu$, estimated by \citet{Ferkinhoff2014}, while those by \citet{Gullberg2015} were corrected using the magnification estimates from \citet{Hezaveh2013} and \citet{Spilker2016}, available for 17 out of the 20 sources. For the other three sources we used the median value of $\mu_{\rm med} = 7.4$. The solid green lines in Figure 15 correspond to average ${\rm log}({\rm L}_{\rm line}/{\rm L}_{\rm IR})$ values of $-3.03$, $-2.94$ and $-2.84$ for the [O\,I] 63\,$\mu$m, [O\,III] 88\,$\mu$m and [C\,II] 158\,$\mu$m lines from the literature, respectively. The [C\,II] line luminosity-to-IR luminosity ratio is at least an order of magnitude higher than the typical value of $10^{-4}$ quoted in the literature for local nuclear-starburst ULIRGs and high-$z$ QSOs. Since the data come from heterogeneous samples, a least-squares fit is susceptible to selection effects that may bias the results. To address this issue, \citet{Bonato2014} carried out an extensive set of simulations of the expected emission line intensities as a function of infrared luminosity for different properties (density, metallicity, filling factor) of the emitting gas, different ages of the stellar populations and a range of dust obscuration. For a set of lines, including those considered in this paper, the simulations were consistent with a direct proportionality between L$_{\rm line}$ and L$_{\rm IR}$. Based on this result, we have adopted a linear relation. The other lines show L$_{\rm line}-$L$_{\rm IR}$ relations found in the literature, namely: \begin{subequations} \begin{align} &{\rm log}({\rm L}_{[\rm O\,I]\,63\,\mu{\rm m}}) = {\rm log}({\rm L}_{\rm IR}) - 2.99,\\ &{\rm log}({\rm L}_{[\rm O\,III]\,88\,\mu{\rm m}}) = {\rm log}({\rm L}_{\rm IR}) - 2.87, \\ &{\rm log}({\rm L}_{[\rm C\,II]\,158\,\mu{\rm m}}) = {\rm log}({\rm L}_{\rm IR}) - 2.74, \end{align} \end{subequations} from \citet{Bonato2014}, \begin{subequations} \begin{align} &{\rm log}({\rm L}_{[\rm O\,I]\,63\,\mu{\rm m}}) = 0.98\times{\rm log}({\rm L}_{\rm IR}) - 2.95,\\ &{\rm log}({\rm L}_{[\rm O\,III]\,88\,\mu{\rm m}}) = 0.98\times{\rm log}({\rm L}_{\rm IR}) - 3.11, \\ &{\rm log}({\rm L}_{[\rm C\,II]\,158\,\mu{\rm m}}) = 0.89\times{\rm log}({\rm L}_{\rm IR}) - 2.67, \end{align} \end{subequations} from \citet{Spinoglio2014}, \begin{subequations} \begin{align} &{\rm log}({\rm L}_{[\rm O\,I]\,63\,\mu{\rm m}}) = 0.70\times{\rm log}({\rm L}_{\rm IR}) + 0.32,\\ &{\rm log}({\rm L}_{[\rm O\,III]\,88\,\mu{\rm m}}) = 0.82\times{\rm log}({\rm L}_{\rm IR}) - 1.40, \\ &{\rm log}({\rm L}_{[\rm C\,II]\,158\,\mu{\rm m}}) = 0.94\times{\rm log}({\rm L}_{\rm IR}) - 2.39, \end{align} \end{subequations} from \citet{Gruppioni2016}, and \begin{subequations} \begin{align} &{\rm log}({\rm L}_{[\rm O\,I]\,63\,\mu{\rm m}}) = 1.10\times{\rm log}({\rm L}_{\rm IR}) - 4.70,\\ &{\rm log}({\rm L}_{[\rm C\,II]\,158\,\mu{\rm m}}) = 1.56\times{\rm log}({\rm L}_{\rm IR}) - 10.52, \end{align} \end{subequations} from \citet{Farrah2013}, respectively. In the high-$z$ bins ($z > 0.8$), we find that [O\,III] and [O\,I] detections are limited to only one of the three luminosity bins. The ISM emission lines show a deficit (i.e., a deviation from a one-to-one relation) relative to the infrared luminosity. This is particularly pronounced in our stacked high-$z$ DSFG sample compared to local starbursts, and is similar to what is observed in local ULIRGs. This deficit further points towards an increase in the optical depth of the atomic ISM lines in these very dusty environments. Given the measured uncertainties, there is no clear trend of the measured lines with infrared luminosity, although there is some evidence pointing towards a further decrease with increasing IR luminosity. Figure 16 shows the [O\,I]/[C\,II] line ratio for the stacks of DSFGs compared to \citet{Brauher2008} and \citet{Cormier2015}. Although both lines trace neutral gas, they have different excitation energies (with that of [O\,I] being higher). Given the uncertainties, we do not see a significant trend in this line ratio with infrared luminosity. Due to the wavelength coverage of SPIRE/FTS, we are unable to study the [N\,II] 205\,$\mu$m line in the high-$z$ bin. Instead, we concentrate on the luminosity dependence of the [N\,II] 205\,$\mu$m line in the low-$z$ bin. This [N\,II] ISM cooling line is usually optically thin and suffers less dust attenuation than optical lines, and hence is a strong star-formation rate indicator \citep{Zhao2013,Herrera2016,Hughes2016, Zhao2016}. The [N\,II] line luminosity in fact shows a tight correlation with SFR for various samples of ULIRGs \citep{Zhao2013}. Given the 14.53\,eV ionization potential needed to produce N$^+$, this line is also a good tracer of the warm ionized ISM \citep{Zhao2016}. Figure 16 shows the [N\,II] emission for our low-$z$ stack ($0.005<z<0.05$) as a function of infrared luminosity for the five luminosity bins outlined in Figure 8.
The [N\,II] line luminosity probes the same range as observed for other samples of ULIRGs and consistently increases with infrared luminosity (a proxy for star-formation) \citep{Zhao2013}. The [N\,II]/L$_{\rm IR}$ ratio is $\sim 10^{-5}$ compared to the [C\,II]/L$_{\rm IR}$ at $\sim 10^{-3}$ \citep{Diaz2013, Ota2014,Herrera2015, Rosenberg2015}. \clearpage \begin{table} \rotatebox{90}{% \begin{minipage}{9in} \begin{center} \fontsize{7}{10}\selectfont \caption{Fluxes of observed spectral lines in each of the redshift bins.} \begin{tabular}{| c c | c c | c c | c c | c c | c c |} \hline\hline \\ [0.1ex] & & \multicolumn{2}{c|}{$0.005 < z < 0.05$} & \multicolumn{2}{c|}{$0.05 < z < 0.2$} & \multicolumn{2}{c|}{$0.2 < z < 0.5$} & \multicolumn{2}{c|}{$0.8 < z < 2$} & \multicolumn{2}{c|}{$2 < z < 4$}\\ \\ \hline \hline Line & Rest Freq. & Flux & Flux & Flux & Flux & Flux & Flux & Flux & Flux & Flux & Flux \\ [0.5ex] & [$\rm{GHz}$] & [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]\\ [0.5ex] \hline \hline &\\ CO (5-4) & 576.268 & 15 $\pm$ 3 & 790 $\pm$ 130 & 2.8 $\pm$ 0.4 & 160 $\pm$ 30 & - & - & - & - & - & - \\ CO (6-5) & 691.473 & 14 $\pm$ 3 & 620 $\pm$ 100 & 3.8 $\pm$ 0.4 & 180 $\pm$ 20 & $<$ 0.40 & $<$ 23 & - & - & - & - \\ CO (7-6) & 806.653 & 12 $\pm$ 2 & 440 $\pm$ 80 & 4.7 $\pm$ 0.4 & 190 $\pm$ 20 & $<$ 0.38 & $<$ 19 & - & - & - & - \\ CO (8-7) & 921.800 & 11 $\pm$ 2 & 360 $\pm$ 60 & 4.5 $\pm$ 0.4 & 160 $\pm$ 20 & $<$ 0.24 & $<$ 10 & - & - & - & - \\ CO (9-8) & 1036.914 & 9.7 $\pm$ 1.7 & 280 $\pm$ 50 & 4.0 $\pm$ 0.5 & 130 $\pm$ 20 & $<$ 0.21 & $<$ 7.7 & $<$ 0.48 & $<$ 33 & - & - \\ CO (10-9) & 1151.985 & 9.6 $\pm$ 1.7 & 250 $\pm$ 50 & 5.7 $\pm$ 0.6 & 160 $\pm$ 20 & $<$ 0.32 & $<$ 11 & $<$ 0.34 & $<$ 21 & - & - \\ CO (11-10) & 1267.016 & 4.9 $\pm$ 1.0 & 120 $\pm$ 30 & 3.9 $\pm$ 0.4 & 100 $\pm$ 20 & $<$ 0.50 & $<$ 16 & $<$ 0.21 & $<$ 12 & - & - \\ CO (12-11) & 1381.997 & 5.4 $\pm$ 1.1 & 120 $\pm$ 30 & 3.5 $\pm$ 0.5 & 84 $\pm$ 10 & $<$ 0.34 & $<$ 9.5 & $<$ 0.26 & $<$ 14 & - & - \\ CO (13-12) & 1496.926 & 2.3 $\pm$ 0.6 & 54 $\pm$ 13 & 2.7 $\pm$ 0.5 & 60 $\pm$ 9 & $<$ 0.37 & $<$ 9.7 & $<$ 0.33 & $<$ 16 & $<$ 0.38 & $<$ 29 \\ $\rm{H_2O}$ 211-202 & 752.032 & 1.9 $\pm$ 0.4 & 78 $\pm$ 17 & 1.1 $\pm$ 0.3 & 49 $\pm$ 9 & $<$ 0.49 & $<$ 26 & - & - & - & - \\ $\rm{H_2O}$ 202-111 & 987.927 & 5.5 $\pm$ 1.2 & 170 $\pm$ 40 & 2.3 $\pm$ 0.3 & 78 $\pm$ 9 & $<$ 0.30 & $<$ 12 & $<$ 0.50 & $<$ 37 & - & - \\ $\rm{H_2O}$ 312-303 & 1097.365 & 2.7 $\pm$ 0.7 & 75 $\pm$ 19 & 2.3 $\pm$ 0.3 & 70 $\pm$ 9 & $<$ 0.23 & $<$ 8.2 & $<$ 0.43 & $<$ 29 & - & - \\ $\rm{H_2O}$ 312-221 & 1153.128 & - & - & - & - & - & - & - & - & - & - \\ $\rm{H_2O}$ 321-312 & 1162.910 & 2.7 $\pm$ 0.7 & 72 $\pm$ 18 & 2.9 $\pm$ 0.3 & 82 $\pm$ 9 & $<$ 0.32 & $<$ 11 & $<$ 0.31 & $<$ 19 & - & - \\ $\rm{H_2O}$ 422-413 & 1207.638 & $<$ 1.2 & $<$ 30 & 1.6 $\pm$ 0.5 & 44 $\pm$ 12 & $<$ 0.42 & $<$ 14 & $<$ 0.25 & $<$ 15 & - & - \\ $\rm{H_2O}$ 220-211 & 1228.789 & 3.9 $\pm$ 1.0 & 96 $\pm$ 23 & 1.6 $\pm$ 0.4 & 43 $\pm$ 11 & $<$ 0.50 & $<$ 16 & $<$ 0.24 & $<$ 14 & - & - \\ $\rm{H_2O}$ 523-514 & 1410.615 & $<$ 1.4 & $<$ 30 & 1.8 $\pm$ 0.4 & 41 $\pm$ 9 & $<$ 0.35 & $<$ 9.7 & $<$ 0.36 & $<$ 19 & - & - \\ $[\rm C\,I] \, (1-0)$ & 492.161 & 9.2 $\pm$ 4.1 & 570 $\pm$ 250 & 2.5 
$\pm$ 0.8 & 170 $\pm$ 50 & - & - & - & - & - & - \\ $[\rm C\,I] \,(2-1)$ & 809.340 & 15 $\pm$ 3 & 570 $\pm$ 100 & 3.0 $\pm$ 0.3 & 120 $\pm$ 10 & $<$ 0.39 & $<$ 18 & - & - & - & - \\ $[\rm N\,II]$ & 1461.132 & 96 $\pm$ 16 & 2000 $\pm$ 400 & 5.4 $\pm$ 0.5 & 120 $\pm$ 10 & $<$ 0.39 & $<$ 11 & $<$ 0.14 & $<$ 6.9 & $<$ 0.52 & $<$ 41 \\ $[\rm C\,II]$ & 1901.128 & - & - & - & - & 4.0 $\pm$ 0.4 & 83 $\pm$ 7 & 1.3 $\pm$ 0.2 & 51 $\pm$ 5 & 0.22 $\pm$ 0.04 & 13 $\pm$ 2 \\ $[\rm N\,II]$ & 2461.250 & - & - & - & - & - & - & $<$ 0.17 & $<$ 4.8 & $<$ 0.048 & $<$ 2.2 \\ $[\rm O\,III]$ & 3393.006 & - & - & - & - & - & - & 1.1 $\pm$ 0.3 & 23 $\pm$ 6 & 0.14 $\pm$ 0.03 & 4.5 $\pm$ 1.0\\ $[\rm O\,I]$ & 4744.678 & - & - & - & - & - & - & - & - & 0.14 $\pm$ 0.05 & 3.5 $\pm$ 1.1\\ & \\[0.5ex] \hline\hline \end{tabular} \tablecomments{$\rm CO (10-9)$ is contaminated by emission from $\rm H_2O \, 312-221$, so we quote only the combined flux for the two emission lines in the $\rm CO (10-9)$ row. In the five redshift bins ($0.005 < z < 0.05$, $0.05 < z < 0.2$, $0.2 < z < 0.5$, $0.8 < z < 2$, and $2 < z < 4$), the mean redshifts are $z = 0.02$, $z = 0.1$, $z = 0.3$, $z = 1.4$, $z = 2.8$, respectively, and the median IR luminosities are $10^{11.35}$ L$_{\odot}$, $10^{12.33}$ L$_{\odot}$, $10^{11.89}$ L$_{\odot}$, $10^{12.53}$ L$_{\odot}$, and $10^{12.84}$ L$_{\odot}$, respectively.} \label{table:linefluxes1} \end{center} \end{minipage} } \end{table} \clearpage \clearpage \begin{table} \rotatebox{90}{% \begin{minipage}{9in} \begin{center} \fontsize{7}{10}\selectfont \caption{ Measured fluxes of observed spectral lines from sources with $0.005 < z < 0.05$ in five luminosity bins.} \begin{tabular}{| c c | c c | c c | c c | c c | c c |} \hline \hline \\ & & \multicolumn{2}{c|}{$10^{11.0} \,$L$_{\odot} \, < $ L $< 10^{11.2} \,$L$_{\odot}$} & \multicolumn{2}{c|}{$10^{11.2} \,$L$_{\odot} \, < $ L $< 10^{11.4} \,$L$_{\odot}$} & \multicolumn{2}{c|}{$10^{11.4} \,$L$_{\odot} \, < $ L $< 10^{11.6} \,$L$_{\odot}$} & \multicolumn{2}{c|}{$10^{11.6} \,$L$_{\odot} \, < $ L $< 10^{12.0} \,$L$_{\odot}$} & \multicolumn{2}{c|}{ L $> 10^{12.0} \,$L$_{\odot}$}\\ \\ \hline \hline \\ [0.1ex] Line & Rest Freq. 
& Flux & Flux & Flux & Flux & Flux & Flux & Flux & Flux & Flux & Flux \\ [0.5ex] & [$\rm{GHz}$] & [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]\\ [0.5ex] \hline \hline & \\ &\\ CO (5-4) & 576.268 & 22 $\pm$ 4 & 1200 $\pm$ 200 & 17 $\pm$ 3 & 880 $\pm$ 150 & 16 $\pm$ 3 & 840 $\pm$ 150 & 20 $\pm$ 4 & 1100 $\pm$ 200 & 18 $\pm$ 4 & 980 $\pm$ 170 \\ CO (6-5) & 691.473 & 16 $\pm$ 3 & 720 $\pm$ 120 & 16 $\pm$ 3 & 710 $\pm$ 120 & 18 $\pm$ 3 & 820 $\pm$ 150 & 20 $\pm$ 4 & 910 $\pm$ 200 & 22 $\pm$ 4 & 1000 $\pm$ 200 \\ CO (7-6) & 806.653 & 13 $\pm$ 3 & 480 $\pm$ 80 & 12 $\pm$ 3 & 470 $\pm$ 80 & 15 $\pm$ 3 & 580 $\pm$ 100 & 20 $\pm$ 4 & 760 $\pm$ 130 & 24 $\pm$ 4 & 910 $\pm$ 150 \\ CO (8-7) & 921.800 & 10 $\pm$ 2 & 330 $\pm$ 60 & 11 $\pm$ 2 & 370 $\pm$ 70 & 15 $\pm$ 3 & 500 $\pm$ 90 & 19 $\pm$ 3 & 630 $\pm$ 110 & 27 $\pm$ 5 & 930 $\pm$ 160 \\ CO (9-8) & 1036.914 & 8.5 $\pm$ 2.0 & 250 $\pm$ 60 & 7.7 $\pm$ 1.7 & 230 $\pm$ 50 & 14 $\pm$ 3 & 410 $\pm$ 80 & 16 $\pm$ 3 & 490 $\pm$ 90 & 24 $\pm$ 5 & 730 $\pm$ 130 \\ CO (10-9) & 1151.985 & 8.5 $\pm$ 1.9 & 230 $\pm$ 50 & 10 $\pm$ 2 & 260 $\pm$ 50 & 14 $\pm$ 4 & 380 $\pm$ 90 & 17 $\pm$ 3 & 460 $\pm$ 80 & 34 $\pm$ 6 & 930 $\pm$ 160 \\ CO (11-10) & 1267.016 & $<$ 7.0 & $<$ 170 & 3.4 $\pm$ 1.2 & 82 $\pm$ 27 & 12 $\pm$ 4 & 290 $\pm$ 100 & 10 $\pm$ 2 & 250 $\pm$ 50 & 21 $\pm$ 4 & 520 $\pm$ 90 \\ CO (12-11) & 1381.997 & $<$ 5.0 & $<$ 110 & 4.6 $\pm$ 1.5 & 100 $\pm$ 30 & 6.4 $\pm$ 1.9 & 140 $\pm$ 40 & 11 $\pm$ 2 & 250 $\pm$ 50 & 14 $\pm$ 3 & 320 $\pm$ 60 \\ CO (13-12) & 1496.926 & $<$ 3.9 & $<$ 80 & $<$ 4.8 & $<$ 97 & $<$ 9.3 & $<$ 190 & 11 $\pm$ 3 & 220 $\pm$ 50 & 15 $\pm$ 3 & 310 $\pm$ 60 \\ $\rm{H_2O}$ 211-202 & 752.032 & $<$ 1.5 & $<$ 59 & 2.4 $\pm$ 0.6 & 97 $\pm$ 25 & 5.2 $\pm$ 1.4 & 210 $\pm$ 60 & 3.0 $\pm$ 0.6 & 120 $\pm$ 30 & 9.3 $\pm$ 1.7 & 390 $\pm$ 70 \\ $\rm{H_2O}$ 202-111 & 987.927 & $<$ 3.2 & $<$ 99 & 5.4 $\pm$ 1.2 & 170 $\pm$ 40 & $<$ 6.1 & $<$ 190 & 4.8 $\pm$ 1.1 & 150 $\pm$ 40 & 18 $\pm$ 4 & 580 $\pm$ 110 \\ $\rm{H_2O}$ 312-303 & 1097.365 & $<$ 6.1 & $<$ 170 & $<$ 3.2 & $<$ 88 & $<$ 5.9 & $<$ 170 & $<$ 4.8 & $<$ 140 & 12 $\pm$ 3 & 350 $\pm$ 70 \\ $\rm{H_2O}$ 312-221 & 1153.128 & - & - & - & - & - & - & - & - & - & - \\ $\rm{H_2O}$ 321-312 & 1162.910 & $<$ 2.7 & $<$ 69 & 3.5 $\pm$ 1.1 & 93 $\pm$ 28 & $<$ 5.0 & $<$ 140 & 3.8 $\pm$ 1.1 & 100 $\pm$ 30 & 19 $\pm$ 4 & 520 $\pm$ 90 \\ $\rm{H_2O}$ 422-413 & 1207.638 & $<$ 2.7 & $<$ 67 & $<$ 2.4 & $<$ 60 & $<$ 2.7 & $<$ 68 & $<$ 3.4 & $<$ 87 & 8.6 $\pm$ 1.9 & 220 $\pm$ 50 \\ $\rm{H_2O}$ 220-211 & 1228.789 & $<$ 4.6 & $<$ 120 & 4.6 $\pm$ 1.5 & 110 $\pm$ 37 & 6.1 $\pm$ 1.9 & 150 $\pm$ 50 & 3.0 $\pm$ 0.9 & 75 $\pm$ 22 & 16 $\pm$ 3 & 400 $\pm$ 80 \\ $\rm{H_2O}$ 523-514 & 1410.615 & $<$ 3.0 & $<$ 65 & $<$ 3.6 & $<$ 77 & $<$ 2.8 & $<$ 61 & $<$ 1.8 & $<$ 40 & 7.5 $\pm$ 1.9 & 170 $\pm$ 40 \\ $[\rm C\,I] \, (1-0)$ & 492.161 & 14 $\pm$ 5 & 850 $\pm$ 250 & 11 $\pm$ 3 & 680 $\pm$ 140 & 10 $\pm$ 3 & 640 $\pm$ 150 & 9.6 $\pm$ 2.3 & 600 $\pm$ 140 & 8.8 $\pm$ 2.7 & 560 $\pm$ 170 \\ $[\rm C\,I] \,(2-1)$ & 809.340 & 21 $\pm$ 4 & 790 $\pm$ 130 & 19 $\pm$ 4 & 700 $\pm$ 120 & 20 $\pm$ 4 & 750 $\pm$ 130 & 17 $\pm$ 3 & 640 $\pm$ 110 & 16 $\pm$ 3 & 610 $\pm$ 110 \\ $[\rm N\,II]$ & 1461.132 & 160 $\pm$ 30 & 3300 $\pm$ 600 & 130 $\pm$ 20 
& 2600 $\pm$ 500 & 100 $\pm$ 20 & 2100 $\pm$ 400 & 73 $\pm$ 12 & 1500 $\pm$ 300 & 34 $\pm$ 6 & 730 $\pm$ 120 \\ $[\rm C\,II]$ & 1901.128 & - & - & - & - & - & - & - & - & - & - \\ $[\rm N\,II]$ & 2461.250 & - & - & - & - & - & - & - & - & - & - \\ $[\rm O\,III]$ & 3393.006 & - & - & - & - & - & - & - & - & - & - \\ $[\rm O\,I]$ & 4744.678 & - & - & - & - & - & - & - & - & - & - \\ & \\[0.5ex] \hline\hline \end{tabular} \tablecomments{$\rm CO (10-9)$ is contaminated by emission from $\rm H_2O \, 312-221$, so we quote only the combined flux for the two emission lines in the $\rm CO (10-9)$ row. In the luminosity ranges $10^{11.0-11.2}$L$_{\odot}$, $10^{11.2-11.4}$L$_{\odot}$, $10^{11.4-11.6}$L$_{\odot}$, $10^{11.6-12.0}$L$_{\odot}$, and L $ > \, 10^{12}$ L$_{\odot}$, the mean redshifts are $z = 0.015$, $z = 0.018$, $z = 0.021$, $z = 0.027$, and $z = 0.038$, respectively, and the median IR luminosities are $10^{11.12}$ L$_{\odot}$, $10^{11.32}$ L$_{\odot}$, $10^{11.49}$ L$_{\odot}$, $10^{11.69}$ L$_{\odot}$, and $10^{12.21}$ L$_{\odot}$, respectively.} \label{table:linefluxes2} \end{center} \end{minipage} } \end{table} \clearpage \clearpage \begin{table} \rotatebox{90}{% \begin{minipage}{9in} \begin{center} \fontsize{7}{10}\selectfont \caption{Measured fluxes of observed spectral lines from sources with $0.8 < z < 4$ in three luminosity bins.} \begin{tabular}{| c c | c c | c c | c c |} \hline \hline \\ & & \multicolumn{2}{c|}{$10^{11.5} \,$L$_{\odot} \, < $ L $< 10^{12.5} \,$L$_{\odot}$} & \multicolumn{2}{c|}{$10^{12.5} \,$L$_{\odot} \, < $ L $< 10^{13.0} \,$L$_{\odot}$} & \multicolumn{2}{c|}{$10^{13.0} \,$L$_{\odot} \, < $ L $< 10^{14.5} \,$L$_{\odot}$} \\ \\ \hline\hline\\ [0.1ex] Line & Rest Freq. & Flux & Flux & Flux & Flux & Flux & Flux \\ [0.5ex] & [$\rm{GHz}$] & [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]& [$10^{-18}$ $\rm{W}\rm{m^{-2}}$] & [$\rm{Jy}\,\rm{km}\,\rm{s^{-1}}$]\\ [0.5ex] \hline \hline & \\ &\\ CO (5-4) & 576.268 & - & - & - & - & - & - \\ CO (6-5) & 691.473 & - & - & - & - & - & - \\ CO (7-6) & 806.653 & - & - & - & - & - & - \\ CO (8-7) & 921.800 & - & - & - & - & - & - \\ CO (9-8) & 1036.914 & $<$ 1.5 & $<$ 130 & - & - & - & - \\ CO (10-9) & 1151.985 & $<$ 1.1 & $<$ 89 & $<$ 0.51 & $<$ 46 & - & - \\ CO (11-10) & 1267.016 & $<$ 0.66 & $<$ 50 & $<$ 0.21 & $<$ 17 & - & - \\ CO (12-11) & 1381.997 & $<$ 0.18 & $<$ 12 & $<$ 0.20 & $<$ 15 & - & - \\ CO (13-12) & 1496.926 & $<$ 0.11 & $<$ 6.8 & $<$ 0.16 & $<$ 11 & - & - \\ $\rm{H_2O}$ 211-202 & 752.032 & - & - & - & - & - & - \\ $\rm{H_2O}$ 202-111 & 987.927 & - & - & - & - & - & - \\ $\rm{H_2O}$ 312-303 & 1097.365 & $<$ 0.96 & $<$ 84 & $<$ 0.53 & $<$ 49 & - & - \\ $\rm{H_2O}$ 312-221 & 1153.128 & - & - & - & - & - & - \\ $\rm{H_2O}$ 321-312 & 1162.910 & $<$ 0.99 & $<$ 82 & $<$ 0.51 & $<$ 45 & - & - \\ $\rm{H_2O}$ 422-413 & 1207.638 & $<$ 0.92 & $<$ 73 & $<$ 0.31 & $<$ 26 & - & - \\ $\rm{H_2O}$ 220-211 & 1228.789 & $<$ 0.91 & $<$ 71 & $<$ 0.24 & $<$ 20 & - & - \\ $\rm{H_2O}$ 523-514 & 1410.615 & $<$ 0.14 & $<$ 9.4 & $<$ 0.18 & $<$ 13 & - & - \\ $[\rm C\,I] \, (1-0)$ & 492.161 & - & - & - & - & - & - \\ $[\rm C\,I] \,(2-1)$ & 809.340 & - & - & - & - & - & - \\ $[\rm N\,II]$ & 1461.132 & $<$ 0.12 & $<$ 7.5 & $<$ 0.18 & $<$ 13 & & - \\ $[\rm C\,II]$ & 1901.128 & 0.20 $\pm$ 0.02 & 10 $\pm$ 1 & 0.56 $\pm$ 0.06 & 30 $\pm$ 4 & 0.89 $\pm$ 0.25 & 55 $\pm$ 15 \\ $[\rm N\,II]$ & 2461.250 & $<$ 0.025 & $<$ 0.97 & $<$ 0.066 & $<$ 2.7 & $<$ 0.21 & 
$<$ 10 \\ $[\rm O\,III]$ & 3393.006 & $<$ 0.094 & $<$ 2.7 & 0.31 $\pm$ 0.09 & 9.2 $\pm$ 2.5 & $<$ 0.37 & $<$ 13 \\ $[\rm O\,I]$ & 4744.678 & $<$ 0.076 & $<$ 1.6 & 0.59 $\pm$ 0.15 & 13 $\pm$ 3 & $<$ 0.35 & $<$ 8.5 \\ & \\[0.5ex] \hline\hline \end{tabular} \tablecomments{$\rm CO (10-9)$ is contaminated by emission from $\rm H_2O \, 312-221$, so we quote only the combined flux for the two emission lines in the $\rm CO (10-9)$ row. In the luminosity ranges $10^{11.5-12.5}$L$_{\odot}$, $10^{12.5-13.0}$L$_{\odot}$, $10^{13.0-14.5}$L$_{\odot}$, the mean redshifts are $z = 2.19$, $z = 2.40$, and $z = 2.93$, respectively, and the median IR luminosities are $10^{12.41}$ L$_{\odot}$, $10^{12.77}$ L$_{\odot}$, and $10^{13.24}$ L$_{\odot}$, respectively.} \label{table:linefluxes3} \end{center} \end{minipage} } \end{table} \clearpage \begin{table*} \caption{Uncorrected line ratios used in PDR modeling for high-redshift sources in three luminosity bins based on lensing-corrected luminosity.} \begin{center} \begin{tabular}{c c c c c c c} \hline\hline\\ [0.1ex] Range & Median & Number of & $[\rm O\,I]/[\rm C\,II]$ & $[\rm C\,II]$/FIR & $[\rm O\,I]$/FIR &($[\rm O\,I]$+$[\rm C\,II]$)/FIR \\ [0.5ex] [$\rm{log_{10}(L_{\odot})}$] & [$\rm{log_{10}(L_{\odot})}$] & Sources & & ($\times\rm{10^{-4}}$) & ($\times\rm{10^{-4}}$) & ($\times \rm{10^{-4}}$)\\ [0.5ex] \hline 11.5 - 12.5 & 12.41 $\pm$0.12 & 11 & $<$ 0.38 [36] & 7.8$\pm$2.3 [1] & $<$ 3.0 [4] & $<$ 11 [1.8] \\ 12.5 - 13.0 & 12.77 $\pm$0.17 & 15 & 1.1$\pm$0.3 [36] & 12$\pm$5 [1] & 13 $\pm$ 6 [4] & 24$\pm$11 [2.6] \\ 13.0 - 14.5 & 13.24 $\pm$0.32 & 10 & $<$ 0.40 [36] & 11$\pm$9 [1] & $<$ 4.1 [4] & $<$ 15 [1.8] \\ & & \\[0.5ex] \hline\hline\\ [0.1ex] \end{tabular} \end{center} \tablecomments{The median luminosities in each bin are $10^{12.41}\,$L$_{\odot}$, $10^{12.77}\,$L$_{\odot}$, and $10^{13.24}\,$L$_{\odot}$, and the mean redshifts are 2.19, 2.40, and 2.93. These ratios are uncorrected for [O\,I] optical thickness, filling factors, and non-PDR [C\,II] emission, or for a plane-parallel PDR model FIR. The total correction factor (i.e., ([A]/[B])$_{\rm corrected}$/([A]/[B])$_{\rm uncorrected}$) for each ratio is given in brackets. The plots in Figure \ref{n_go} do take these correction factors into account.} \label{table:highz_flux} \end{table*} \begin{table*} \caption{Uncorrected line ratios used in the PDR modeling of the observed lines in the $0.005<z<0.05$ and $0.05<z<0.2$ redshift bins.} \begin{center} \begin{tabular}{c c c c c c c c c} \hline\hline\\ [0.1ex] Range & Median & Number of & $\frac{[\rm C\,I](2-1)}{[\rm C\,I](1-0)}$ & $\frac{[\rm C\,I] (1-0)}{\rm{CO (7-6)}}$ & $\frac{[\rm C\,I] (2-1)}{\rm{CO (7-6)}}$ & $\frac{[\rm C\,I] (2-1)}{\rm{FIR}}$ &$\frac{[\rm C\,I] (1-0)}{\rm{FIR}}$ &$\frac{\rm{CO} (7-6)}{\rm{FIR}}$ \\ [0.5ex] & [$\rm{log_{10}(L_{\odot})}$] & Sources & & & & ($\times \rm{10^{-5}}$)& ($\times \rm{10^{-5}}$)& ($\times \rm{10^{-5}}$)\\ [0.5ex] \hline $0.005<z<0.05$ & 11.35 $\pm$1.03 & 115 & 1.6$\pm$0.8 [1] & 0.77$\pm$0.37 [1] & 1.3$\pm$0.4 [1] & 1.6$\pm$3.7 [0.5]& 0.97$\pm$2.29 [0.5]& 1.3 $\pm$ 2.9 [0.5] \\ $0.05<z<0.2$ & 12.33$\pm$0.23 & 34 & 1.2$\pm$0.4 [1] & 0.53$\pm$0.18 [1]& 0.63$\pm$0.09 [1]& 0.93$\pm$0.51 [0.5] & 0.78$\pm$0.48 [0.5]& 1.5$\pm$0.8 [0.5]\\ & & \\[0.5ex] \hline\hline\\ [0.1ex] \end{tabular} \end{center} \tablecomments{The median luminosities of sources in these bins are L$_{\rm IR} = 10^{11.35} \,$L$_{\odot}$ and $10^{12.33}\,$ L$_{\odot}$, and the mean redshifts are $z=0.02$ and $z=0.1$, respectively. 
These ratios do not account for the corrections given in the text. The total correction factor (i.e., ([A]/[B])$_{\rm corrected}$/([A]/[B])$_{\rm uncorrected}$) for each ratio is given in brackets, where applicable. The large uncertainties reported in the $0.005 < z < 0.05$ bin stem from the large standard deviation of source FIR luminosities.} \label{table:lowz_pdr_ratios} \end{table*} \subsubsection{PDR Modeling} The average gas number density and radiation field strength in the interstellar medium can be inferred using photodissociation region (PDR) models. Far-ultraviolet (FUV) photons from young stars strip electrons off small dust grains and polycyclic aromatic hydrocarbons via the photoelectric effect, and these electrons transfer some of their kinetic energy to the gas, heating it; roughly 1\% of the FUV energy goes into the gas in this way. The gas is subsequently cooled by the emission of the far-infrared lines that we observe. The remaining fraction of the UV light is reprocessed in the infrared by large dust grains via thermal continuum emission \citep{Hollenbach1999}. Understanding the balance between the input radiation source and the underlying atomic and molecular cooling mechanisms is essential in constraining the physical properties of the ISM. We use the online PDR Toolbox\footnote{\url{http://dustem.astro.umd.edu/pdrt/}} \citep{Poundwolfire2008,Kaufman2006} to infer the average conditions in the interstellar medium that correspond to the measured fluxes of both the stacked low-redshift ($0.005<z<0.05$ and $0.05<z<0.2$) and high-redshift ($0.8<z<4$) spectra. The PDR Toolbox uses the ratios between the fluxes of fine structure lines and of the FIR continuum to constrain the PDR gas density and the strength of the incident FUV radiation (given in units of the Habing field, $1.6\times10^{-3}\,\rm{erg}\,\rm{cm}^{-2}\,\rm{s}^{-1}$). At low redshifts, the PDR models take into account the lines [C\,I] (1-0), [C\,I] (2-1), CO (7-6), and the FIR continuum; at high redshifts, the models use [C\,II] 158 $\mu$m, [O\,I] 63 $\mu$m, and the FIR continuum. We do not attempt PDR models of the intermediate-redshift sample, as we only detect the [C\,II] line in that redshift bin, which would not allow us to constrain the parameters characterizing the ISM (in particular, the radiation field-gas density parameter space). As previously discussed, all sources with $z > 0.8$ are divided into three smaller bins based on total infrared luminosity. The [C\,II] line is detected in each subset of the high-redshift stack. In the high-redshift stacks, we observed emission from singly-ionized carbon ([C\,II] at 158\,$\mu$m) as well as some weak emission from neutral oxygen ([O\,I] at 63\,$\mu$m). We perform PDR modeling for only one of the three luminosity bins. In this bin ($10^{12.5}\,$L$_{\odot} < $ L $< 10^{13.0}\,$L$_{\odot}$), the [C\,II] and [O\,I] detections were the strongest, while in the other two bins, the detections were either too weak or nonexistent. Before applying measured line ratios to the PDR Toolbox, we must make a number of corrections to the measured fluxes. First, the PDR models of \citet{Kaufman1999} and \citet{Kaufman2006} assume a single, plane-parallel, face-on PDR. However, if there are multiple clouds in the beam or if the clouds are in the active regions of galaxies, there can be emission from the front and back sides of the clouds, requiring the total infrared flux to be cut in half in order to be consistent with the models (e.g., \citealt{Kaufman1999,DeLooze2016}).
Second, [O\,I] can be optically thick and suffers from self-absorption, so the measured [O\,I] is assumed to be only half of the true [O\,I] flux; i.e., we multiply the measured [O\,I] flux by two (e.g., \citealt{DeLooze2016,Contursi2013}). [C\,II] is assumed to be optically thin, so no correction is applied. Similarly, no correction is applied for [C\,I] and CO at low redshifts. Third, the different line species considered will have different beam filling factors for the SPIRE beam. We follow the method used in \citet{Wardlow2017} and apply a correction to only the [O\,I]/[C\,II] ratio using a relative filling factor for M82 from the literature. Since the large SPIRE beam size prevents measurement of the relative filling factors, the [O\,I]/[C\,II] ratio is corrected by a factor of 1/0.112, which is the measured relative filling factor for [O\,I] and [C\,II] in M82 \citep{Stacey1991, Lord1996, Kaufman1999, Contursi2013}. \citet{Wardlow2017} note that the M82 correction factor is large, so the corrected [O\,I]/[C\,II] ratio represents an approximate upper bound. Lastly, it is possible that a significant fraction of the [C\,II] flux can come from ionized gas in the ISM and not purely from the neutral gas in PDRs (e.g., \citealt{Abel2006,Contursi2013}). As a limiting case, we assume that 50\% of the [C\,II] emission comes from ionized regions. This correction factor is equivalent to the correction for ionized gas emission used in \citet{Wardlow2017} and is consistent with the results of \citet{Abel2006}, who finds that the ionized gas component makes up between 10 and 50\% of the [C\,II] emission. To summarize: a factor of 0.5 is applied to the FIR flux to account for the plane-parallel model of the PDR Toolbox, a factor of 2 is applied to the [O\,I] flux to account for optical thickness, a factor of 0.5 is applied to the [C\,II] flux to account for ionized gas emission, and lastly, a correction factor of 1/0.112 is applied to the [O\,I]/[C\,II] ratio to account for relative filling factors. We do not apply any corrections to the [C\,I] (1-0), [C\,I] (2-1), or CO (7-6) fluxes used in the PDR modeling of the lower-redshift stacks. These correction factors can significantly alter the flux ratios; for example, the ratio ([O\,I]/[C\,II])$_{\rm corrected}$ = 36$\times$([O\,I]/[C\,II])$_{\rm uncorrected}$.
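The bookkeeping of these corrections is summarized in the sketch below (the function and variable names are our own); note how the individual factors combine to the quoted factor of 36 for [O\,I]/[C\,II]:

\begin{verbatim}
# PDR-input corrections (see text): halve FIR (plane-parallel model),
# double [OI] (self-absorption), halve [CII] (ionized-gas fraction),
# divide [OI]/[CII] by the M82 relative filling factor of 0.112.
def corrected_ratios(f_oi, f_cii, f_fir):
    fir = 0.5 * f_fir
    oi = 2.0 * f_oi
    cii = 0.5 * f_cii
    oi_over_cii = (oi / cii) / 0.112   # filling-factor correction
    return {"OI/CII": oi_over_cii,
            "CII/FIR": cii / fir,      # net factor of 1
            "OI/FIR": oi / fir}        # net factor of 4

# Total [OI]/[CII] correction: (2 / 0.5) / 0.112 = 35.7 ~ 36, i.e.,
# ([OI]/[CII])_corrected = 36 x ([OI]/[CII])_uncorrected.
\end{verbatim}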
Tables \ref{table:highz_flux} and \ref{table:lowz_pdr_ratios} contain the uncorrected line ratios, with the total correction factor for each ratio given in brackets. Naturally, these corrections introduce a large amount of uncertainty into our estimated line ratios. To demonstrate the effects that these corrections have on the results, we include contours from uncorrected and corrected line ratios in Figures \ref{n_go_ci} and \ref{n_go}. In Figure \ref{n_go_ci} (low redshifts), the only flux correction carried out is the correction to the FIR flux. This correction is indicated by the dashed line in each of the plots. In Figure \ref{n_go}, the left-hand plot displays the constraints on gas density and radiation field intensity (n,$\,$G$_0$) for high-redshift sources in the luminosity bin $10^{12.5}\,$L$_{\odot} < $ L $< 10^{13.0}\,$L$_{\odot}$ determined from the uncorrected line ratios. The right-hand plot shows the same contours but with the aforementioned correction factors taken into account. Clearly, the corrections can shift the intersection locus (the gray regions) to very different parts of n-G$_0$ parameter space. However, the correction factors should be treated with caution and represent limiting cases. The most variation is observed in the [O\,I]/[C\,II] ratio (shown in red), so the [O\,I]/[C\,II] contours in the left-hand and right-hand plots of Figure \ref{n_go} represent the two extreme locations that this contour can occupy. The uncorrected line ratios are summarized in Tables \ref{table:highz_flux} and \ref{table:lowz_pdr_ratios}. These tables include line ratios that are not included in Figures \ref{n_go_ci} and \ref{n_go} (for example, Table \ref{table:highz_flux} contains the ratio [O\,I]/FIR, which does not appear in Figure \ref{n_go}). The figures contain only the independent ratios; the tables list additional ratios (not all of which are independent) for completeness. \begin{figure*}[!th] \centering \begin{minipage}{\columnwidth} \includegraphics[width=\columnwidth,trim=3cm 0cm 0cm 1cm, scale=0.7]{pdr_ci_z0-005.pdf} \end{minipage} \hfill \begin{minipage}{\columnwidth} \includegraphics[width=\columnwidth,trim=3cm 0cm 0cm 1cm, scale=0.7]{pdr_ci_z005-02.pdf} \end{minipage} \caption{PDR modeling of the observed fluxes in the $0.005 < z < 0.05$ bin (left) and the $0.05 < z < 0.2$ bin (right). The solid lines are constraint contours determined from the modeling, and the dotted lines are the 1$\sigma$ uncertainties. The dashed lines indicate the changes in the line flux ratios when the FIR correction (see text) is applied. The gray regions indicate the most likely values of $n$ and $G_0$ determined from a likelihood analysis using the corrected FIR flux values. Table \ref{table:lowz_pdr_ratios} lists the flux values for these two redshift bins before the FIR corrections were applied. The line fluxes are in units of $\rm{W m^{-2}}$, and L$_{\rm IR}$ is the far-infrared flux, where the wavelength range that defines L$_{\rm IR}$ is converted to 30-1000\,$\mu$m \citep{Farrah2013}.} \label{n_go_ci} \end{figure*} \begin{figure*}[!th] \centering \begin{minipage}{\columnwidth} \includegraphics[width=\columnwidth,trim=3cm 0cm 0cm 1cm, scale=0.7]{pdr_bin2_uncorr.pdf} \end{minipage} \hfill \begin{minipage}{\columnwidth} \includegraphics[width=\columnwidth,trim=3cm 0cm 0cm 1cm, scale=0.7]{pdr_bin2_corr.pdf} \end{minipage} \caption{\textit{Left:} PDR modeling of observed fluxes for sources with $0.8 < z < 4$ in the luminosity bin $10^{12.5}\,L_{\odot} < L_{\rm IR} < 10^{13}\,L_{\odot}$. No correction factors (see text) are applied to the line and line-FIR ratios in this plot. The gray regions indicate the most likely values of $n$ and $G_0$ determined from a likelihood analysis. The uncorrected ratios used for PDR modeling are given in Table \ref{table:highz_flux}. The line fluxes are in units of $\rm{W m^{-2}}$, and the FIR is the far-infrared flux, where the wavelength range that defines L$_{\rm IR}$ is converted to 30-1000\,$\mu$m \citep{Farrah2013}. Though sources in this redshift range are split into three bins based on total infrared luminosity in the text (L$_{\rm IR} < 10^{12.5}\,$L$_{\odot}$, $10^{12.5}\,$L$_{\odot} \, <\, $L$_{\rm IR} < 10^{13}\,$L$_{\odot}$, and L$_{\rm IR} > 10^{13}\,$L$_{\odot}$), the lack of [O\,I] detections in the first and third bins means that PDR models for only the second bin are presented. \textit{Right:} Same PDR model as on the left but with the correction factors discussed in the text taken into account.
The most variation appears in the [O\,I]/[C\,II] ratio, which shifts the intersection region from log($n$) $\sim$ 2.5 and log($G_0$) $\sim$ 2.5 to log($n$) $\sim$ 5 and log($G_0$) $\sim$ 4.} \label{n_go} \end{figure*} \begin{figure*}[!th] \centering \includegraphics[trim=3cm 0cm 0cm 1cm, scale=1]{patches.pdf} \caption{The results of PDR modeling compared to results from the literature. The light blue region represents the derived n-G$_0$ for sources with $0.8 < z < 4$ and $12.5 < {\rm log}(L/L_{\odot}) < 13.0$. The orange and green regions represent the derived quantities for the $0.005 < z < 0.05$ and $0.05 < z < 0.2$ subsamples, respectively. The regions shown here take into account the correction factors discussed in the text. For comparison, the conditions for local spiral galaxies, molecular clouds, local starbursts, and Galactic OB star-forming regions from \citet{Stacey1991} are shown, as well as data points for local star-forming galaxies from \citet{Malhotra2001} and for SMGs from \citet{Wardlow2017, Sturm2010,Cox2011,Danielson2011,Valtchanov2011,Alaghband-Zadeh2013,Huynh2014}, and \citet{Rawle2014}.} \label{fig:patches} \end{figure*} The gray shaded regions in Figures \ref{n_go_ci} and \ref{n_go} represent the most likely values of n and G$_0$ given the measured line flux ratios. To generate these regions, we perform a likelihood analysis using a method adapted from \cite{Ward2003}. The density n and radiation field strength G$_0$ are taken as free parameters. For measured line ratios $\vec{R}$ with errors $\vec{\sigma}$, we take a Gaussian form for the probability distribution; namely, \begin{equation} \mathrm{P(}\vec{\mathrm{R}}\,|\,\mathrm{n,G_0,}\, \vec{\sigma}) = \prod\limits_{i=1}^{\mathrm{N}} \frac{1}{\sqrt{2\pi}\sigma_i} \exp{\bigg\{-\frac{1}{2}\bigg[\frac{\mathrm{R}_i - \mathrm{M}_i}{\sigma_i}\bigg]^2\bigg\}} \end{equation} where the R$_i$ are the measured line ratios (i.e., [O\,I]/[C\,II], [C\,II]/FIR, etc.), N is the number of independent line ratios, and the M$_i$ are the theoretical line ratios from the PDR Toolbox models. A grid of discrete points in n--G$_0$ space ranging over $1 < \log_{10}(n) < 7$ and $-0.5 < \log_{10}(G_0) < 6.5$ is constructed. To compute the most likely values of n and G$_0$, we use Bayes' theorem: \begin{equation} \mathrm{P(n,G_0}\,|\,\vec{\mathrm{R}},\vec{\sigma}) = \frac{\mathrm{P(n,G_0)\,P(}\vec{\mathrm{R}}\,|\,\mathrm{n,G_0},\vec{\sigma})}{\sum\limits_{\mathrm{n,G_0}} \, \mathrm{P(n,G_0)\,P(}\vec{\mathrm{R}}\,|\,\mathrm{n,G_0},\vec{\sigma})} \end{equation} The prior probability density function, P(n,G$_0$), is set equal to 1 for all points in the grid with G$_0>10^{2}$; points with G$_0<10^{2}$ are given a prior probability of 0. The reason for this choice of prior is that, given the intrinsic luminosities of our sources ($\sim 10^{11.5-13.5} $L$_{\odot}$), low values of G$_0$ (which include, for example, the value of G$_0$ at the line convergence in the high-$z$ PDR plot at $\rm {log(n/cm^{-3}) \sim 4.5}$ and $\rm{log(G_0) \sim 0.2}$) would correspond to galaxies with sizes on the order of hundreds of kpc or greater \citep{Wardlow2017}. Such sizes are expected to be unphysical, as typical measurements put galaxies at these luminosities at sizes of $\sim 0.5-10\,$kpc (see \citealt{Wardlow2017} and references therein). P(n, G$_0\,|\,\vec{\rm R},\vec{\sigma})$ gives, for each point in the n-G$_0$ grid, the probability that the point represents the actual conditions in the PDR, given the measured flux ratios.
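A minimal sketch of this grid-based likelihood analysis, including the credible-region selection used for the gray contours, is given below; the theoretical ratio grids would in practice be interpolated from the PDR Toolbox and are simply inputs here, and all names are our own:

\begin{verbatim}
import numpy as np

# Grid: 1 < log10(n) < 7 and -0.5 < log10(G0) < 6.5
log_n, log_g0 = np.meshgrid(np.linspace(1.0, 7.0, 121),
                            np.linspace(-0.5, 6.5, 141),
                            indexing="ij")

def posterior(ratios, sigmas, models):
    # ratios/sigmas: measured line ratios R_i and errors sigma_i;
    # models: theoretical ratio grids M_i(n, G0) on the same grid.
    log_like = np.zeros_like(log_n)
    for R, s, M in zip(ratios, sigmas, models):
        log_like += -0.5 * ((R - M) / s) ** 2     # Gaussian form
    prior = (log_g0 > 2.0).astype(float)          # P = 0 if G0 < 1e2
    post = prior * np.exp(log_like - log_like.max())
    return post / post.sum()                      # Bayes' theorem

def credible_region(post, level=0.682):
    # Sort points high-to-low; keep those whose cumulative
    # probability is below `level` (the gray regions).
    order = np.argsort(post, axis=None)[::-1]
    csum = np.cumsum(post.ravel()[order])
    mask = np.zeros(post.size, dtype=bool)
    mask[order[csum <= level]] = True
    return mask.reshape(post.shape)
\end{verbatim}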
The gray regions in Figures \ref{n_go} and \ref{n_go_ci} are 68.2\% confidence regions. The relative likelihoods of the points in the grid are sorted from highest to lowest, and the cumulative sum for each grid point (the likelihood associated with that grid point summed with the likelihoods of the points preceding it in the high-to-low ordering) is computed. Grid points with a cumulative sum less than 0.682 represent the most likely values of the density n and UV radiation intensity G$_0$, given the measured fluxes, with a total combined likelihood of 68.2\%. These points constitute the gray regions. The data constrain the interstellar gas density to be in the range $\rm{log(n/cm^{-3}) \sim 4.5 - 5.5}$ for both low-$z$ and high-$z$, where these values are estimated from the PDR models with the correction factors taken into account. The FUV radiation is constrained to be in the range of $\rm{log(G_0) \sim 3 - 4}$ and $\rm{log(G_0) \sim 3 - 5}$ for low-$z$ and high-$z$, respectively. The [C\,I] (2-1)/[C\,I] (1-0) line ratio is observed to deviate from the region of maximum likelihood (shaded in gray) on the $\rm G_0$-density diagram (Figure \ref{n_go_ci}). In fact this ratio is very sensitive to the conditions in the ISM, such that a modest change in the radiation strength or density would shift the line towards the expected locus \citep{Danielson2011}. The PDR models also restrict the production of [C\,I] to a thin layer on the surface of the far-UV heated molecular ISM, whereas several studies \citep{Papadopoulos2004} point to neutral carbon coexisting with CO in the same volume. These assumptions could also result in the deviations observed in the PDR models. Figure \ref{fig:patches} summarizes the main results of our PDR modeling based on the low- and high-redshift ISM emission lines from the stacked FTS spectra. We compare these measurements with those of local star-forming galaxies \citep{Malhotra2001}, local starbursts \citep{Stacey1991} and archival SMGs. We see from Figure \ref{fig:patches} that local DSFGs are on average subject to stronger UV radiation than local star-forming galaxies and are more consistent with local starbursts. Our measured density and radiation field strengths are further in agreement with the results reported in \citet{Danielson2011} for a single DSFG at $z\sim2$. Given the uncertainty in the filling factors and in the fraction of non-PDR [C\,II] emission, the [O\,I]/[C\,II] ratio contour in Figure \ref{n_go} may shift downward and to the left, toward smaller density and radiation field strength, where it would be more consistent with the results in \citet{Wardlow2017} for {\it Herschel}/PACS stacked spectra of DSFGs. \section{Summary} \begin{itemize} \item We have stacked a diverse sample of \textit{Herschel} dusty, star-forming galaxies spanning redshifts $0.005<z<4$, with total infrared luminosities ranging from LIRG levels up to luminosities in excess of $10^{13}\,$L$_{\odot}$. The sample is heterogeneous, consisting of starbursts, QSOs, and AGN, among other galaxy types. With this large sample, we present a stacked statistical analysis of the archival spectra in redshift and luminosity bins. \item We present the CO and H$_2$O spectral line energy distributions for the stacked spectra. \item Radiative transfer modeling with RADEX places constraints on the gas density and temperature based on the [C\,I] (2-1) 370\,$\mu$m and [C\,I] (1-0) 609\,$\mu$m measurements.
\item We use PDR modeling in conjunction with the measured average fluxes to constrain the interstellar gas density to the range $\rm{log(n/cm^{-3}) \sim 4.5 - 5.5}$ for the stacks at both low and high redshifts. The FUV radiation field is constrained to the range $\rm{log(G_0) \sim 3 - 4}$ and $\rm{log(G_0) \sim 3 - 5}$ for low and high redshifts, respectively. Large uncertainties remain, especially from effects such as non-PDR contributions to the [C\,II] line flux, for which we can only estimate correction factors to the observed line fluxes. Such uncertainties may lead to further discrepancies between the gas conditions at high and low redshifts, which may be understood in terms of the nuclear starbursts of local DSFGs and luminous and ultra-luminous infrared galaxies, compared to the $\sim$10 kpc-scale massive starbursts of high-$z$ DSFGs. \end{itemize} \section*{Acknowledgments} The authors thank an anonymous referee for his/her helpful comments and suggestions. The authors also thank Rodrigo Herrera-Camus, Eckhard Sturm, Javier Gracia-Carpio, and SHINING for sharing a compilation of [C\,II], [O\,III], and [O\,I] line measurements as well as FIR data to which we compare our results. We wish to thank Paul Van der Werf for the very useful suggestions and recommendations. Support for this paper was provided in part by NSF grant AST-1313319, NASA grant NNX16AF38G, GAANN P200A150121, HST-GO-13718, HST-GO-14083, and NSF Award \#1633631. JLW is supported by a European Union COFUND/Durham Junior Research Fellowship under EU grant agreement number 609412, and acknowledges additional support from STFC (ST/L00075X/1). GDZ acknowledges support from the ASI/INAF agreement n.~2014-024-R.1. The \textit{Herschel} spacecraft was designed, built, tested, and launched under a contract to ESA managed by the \textit{Herschel}/Planck Project team by an industrial consortium under the overall responsibility of the prime contractor Thales Alenia Space (Cannes), and including Astrium (Friedrichshafen) responsible for the payload module and for system testing at spacecraft level, Thales Alenia Space (Turin) responsible for the service module, and Astrium (Toulouse) responsible for the telescope, with in excess of a hundred subcontractors. SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA). HIPE is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortia. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \bibliographystyle{apj}
\section*{\refname} \def\@biblabel##1{##1.} \small \list{\@biblabel{\@arabic\c@enumiv}}% {\settowidth\labelwidth{\@biblabel{#1}}% \leftmargin\labelwidth \advance\leftmargin\labelsep \if@openbib \advance\leftmargin\bibindent \itemindent -\bibindent \listparindent \itemindent \parsep \z@ \fi \usecounter{enumiv}% \let\p@enumiv\@empty \renewcommand\theenumiv{\@arabic\c@enumiv}}% \if@openbib \renewcommand\newblock{\par}% \else \renewcommand\newblock{\hskip .11em \@plus.33em \@minus.07em}% \fi \sloppy\clubpenalty4000\widowpenalty4000% \sfcode`\.=\@m} {\def\@noitemerr {\@latex@warning{Empty `thebibliography' environment}}% \endlist} \makeatother \usepackage{physics} \section{Introduction} \label{sec:intro} \subfile{sections_Paper/1_intro} \section{Related Work} \label{sec:related_work} \subfile{sections_Paper/2_related} \section{Method} \label{sec:method} \subfile{sections_Paper/3_method} \section{Experiments} \label{sec:experiments} \subfile{sections_Paper/4_experiments} \section{Conclusion} \label{sec:conclusion} \subfile{sections_Paper/5_conclusion} \clearpage \bibliographystyle{config/splncs04} \subsection{Experiment setup} \subsubsection{Dataset and metrics.} For the purpose of our experiments, we use the two-stream architecture of \cite{kosti2020context, Kosti2017EmotionRI} as our base implementation. We use their EMOTIC database~\cite{Kosti2017EMOTICEI}, which is composed of images from MS-COCO~\cite{DBLP:journals/corr/LinMBHPRDZ14}, ADE20K~\cite{zhou2017scene} along with images downloaded from the web. The database offers two emotion representation labels; a set of 26 discrete emotional categories (Cat), and a set of three emotional dimensions, Valence, Arousal and Dominance from the VAD Emotional State Model~\cite{Mehrabian1995FrameworkFA}. Valence (V), is a measure of how positive or pleasant an emotion is (negative to positive); Arousal (A), is a measure of the agitation level of the person (non-active, calm, agitated, to ready to act); and Dominance (D) is a measure of the control level of the situation by the person (submissive, non-control, dominant, to in-control). The continuous dimensions (Cont) annotations of VAD are in a 1-10 scale. \subsubsection{Loss function.} A dynamic weighted MSE loss $L_{cat}$ is used on the Category classification output layer (Cat) of the model. \begin{equation} L_{cat} = \sum_{i=1}^{26}w_i(\hat{y}_i^{cat} - y_i^{cat})^2 \end{equation} where $i$ corresponds to 26 discrete categories shown in \cref{table:category}. $\hat{y}_i^{cat}$ and $y_i^{cat}$ are the prediction and ground-truth for the $i^{th}$ category. The dynamic weight $w_i$ are computed per batch and is based on the number of occurrences of each class in the batch. Since the occurrence of a particular class can be 0, \cite{Kosti2017EmotionRI, kosti2020context} defined an additional hyper-parameter $c$. The constant $c$ is added to the dynamic weight $w_i$ along with $p_i$, which is the probability of the $i^{th}$ category. The final weight is defined as $w_i=\frac{1}{ln(p_i+c)}$. For the continuous (Cont) output layer, an L1 loss $L_{cont}$ is employed. \begin{equation} L_{cont} = \frac{1}{C}\sum_{i=1}^{C}|\hat{y}_i^{cont} - y_i^{cont}| \end{equation} Here $i$ represents one of valence, arousal, and dominance ($C$). $\hat{y}_i^{cont}$ and $y_i^{cont}$ are the prediction and ground-truth for the $i^{th}$ metric (VAD). 
\subsubsection{Baselines.} We compare \nameCOLOR{\mbox{PERI}}\xspace to SOTA baselines, including \textit{Emotic} by Kosti~\mbox{et al.}\xspace~\cite{Kosti2017EmotionRI, kosti2020context}, Huang~\mbox{et al.}\xspace~\cite{9607417}, Zhang~\mbox{et al.}\xspace~\cite{Zhang2019ContextAwareAG}, Lei~\mbox{et al.}\xspace~\cite{9008268}, and Mittal~\mbox{et al.}\xspace~\cite{Mittal2020EmotiConCM}. We reproduce the three-stream architecture in \cite{9607417} based on their proposed method. For a fair comparison, we compare \nameCOLOR{\mbox{PERI}}\xspace's image-based model results with EmotiCon's~\cite{Mittal2020EmotiConCM} image-based GCN implementation. \subsubsection{Implementation details.} We use the two-stream architecture from Kosti~\mbox{et al.}\xspace~\cite{Kosti2017EmotionRI, kosti2020context}. Here, both the image and body feature extraction streams are ResNet-18~\cite{DBLP:journals/corr/HeZRS15} networks pre-trained on ImageNet~\cite{ILSVRC15}. All \mbox{PAS}\xspace images are resized to $128\times128$, matching the input of the body feature extraction stream. The \mbox{PAS}\xspace image is created by plotting the $N+M=501$ landmarks on the base mask and passing it through a Gaussian filter with $\sigma=3$. We use the same train, validation, and test splits provided by the EMOTIC~\cite{Kosti2017EMOTICEI} open repository. \input{TEX_tables/cat} \input{TEX_tables/VAD} \subsection{Quantitative results} \cref{table:category} and \cref{table:vad} show quantitative comparisons between \nameCOLOR{\mbox{PERI}}\xspace and state-of-the-art approaches. \cref{table:category} compares the average precision (AP) for each discrete emotion category in the EMOTIC dataset~\cite{Kosti2017EMOTICEI}. \cref{table:vad} compares the valence, arousal and dominance $L1$ errors. Our model consistently outperforms existing approaches on both metrics. We achieve a significant $6.3\%$ increase in mean AP (mAP) over our base network~\cite{Kosti2017EmotionRI,kosti2020context} and a $1.8\%$ improvement in mAP over the closest competing method~\cite{Mittal2020EmotiConCM}. Compared to methods that report VAD errors, \nameCOLOR{\mbox{PERI}}\xspace achieves lower mean and individual $L1$ errors, including a $2.6\%$ improvement in VAD error over our baseline~\cite{Kosti2017EmotionRI, kosti2020context}. Our results thus show that while using only pose or only facial landmarks might lead to noisy gradients, especially in images with an unreliable or occluded body or face, adding cues from both facial and body pose features, where available, leads to better emotional context. We further note that our proposed Cont-In blocks are effective in reasoning about emotion context when comparing \nameCOLOR{\mbox{PERI}}\xspace with recent methods that use both body pose and facial landmarks~\cite{Mittal2020EmotiConCM}. \subsection{Qualitative results} To understand the results further, we look at several visual examples, a subset of which are shown in \cref{figure:visual_result}. We choose Kosti~\mbox{et al.}\xspace~\cite{Kosti2017EmotionRI,kosti2020context} and Huang~\mbox{et al.}\xspace~\cite{9607417} as our baselines as they are the closest SOTA methods. We derive several key insights from our results. In comparison to Kosti~\mbox{et al.}\xspace~\cite{Kosti2017EmotionRI,kosti2020context} and Huang~\mbox{et al.}\xspace~\cite{9607417}, \nameCOLOR{\mbox{PERI}}\xspace fares better on examples where the face is clearly visible.
This is expected, as \nameCOLOR{\mbox{PERI}}\xspace specifically brings greater attention to facial features. Interestingly, our model also performs better on images where either the face or the body is only partially visible (occluded or blurred). This supports our hypothesis that partial body poses and partial facial landmarks can supplement one another through our \mbox{PAS}\xspace image representation. \subsection{Ablation study} As shown in \cref{table:abalation}, we conduct a series of ablation experiments to find an optimal part-aware representation (\mbox{PAS}\xspace) and to use this information effectively in our base model. For all experiments, we treat the implementation from Kosti~\mbox{et al.}\xspace~\cite{Kosti2017EmotionRI, kosti2020context} as our base network and build upon it. \input{TEX_tables/abalation} \textbf{\mbox{PAS}\xspace images.} To find the best \mbox{PAS}\xspace representation, we vary the standard deviation $(\sigma)$ of the Gaussian kernel applied to our \mbox{PAS}\xspace image. We show that $\sigma = 3$ gives the best overall performance, with a $5.9\%$ increase in mAP and a $2.5\%$ decrease in mean VAD error over the base network (\cref{table:abalation}: \mbox{PAS}\xspace image experiments). The \mbox{PAS}\xspace experiments show that retrieving context from input images in a way that is aware of facial landmarks and body poses is critical to improving the emotion recognition performance of the base network. \textbf{Experimenting with Cont-In blocks.} To show the effectiveness of Cont-In blocks, we compare their performance with early and late fusion in \cref{table:abalation}. For early fusion, we concatenate the \mbox{PAS}\xspace image as an additional channel of the body-crop image in the body feature extraction stream. For late fusion, we concatenate the fused output of the body and image feature extraction streams with the downsampled \mbox{PAS}\xspace image. As opposed to \nameCOLOR{\mbox{PERI}}\xspace, we see a decline in performance on both mAP and VAD error for early and late fusion. From this we conclude that context infusion at intermediate blocks is important for accurate emotion recognition. Additionally, we considered concatenating the \mbox{PAS}\xspace images directly to the intermediate features instead of using a Cont-In block. However, feature concatenation in the intermediate layers changes the backbone ResNet architecture, severely limiting the gains from ImageNet~\cite{ILSVRC15} pretraining. This is apparent in the decrease in performance relative to early fusion, which may be explained, in part, by the inability to load ImageNet weights in the input layer of the backbone network. In contrast, Cont-In blocks are fully compatible with any emotion recognition network and do not alter the network backbone. In the final experiment, we added Cont-In blocks to both the image feature extraction stream and the body feature extraction stream. Here we found that regulating the intermediate features of both streams, as opposed to just the body stream, degrades performance. A possible reason is that contextual information from a single person does not generalise well to an entire image containing multiple people. \textbf{\nameCOLOR{\mbox{PERI}}\xspace.} From our ablation experiments, we find that \nameCOLOR{\mbox{PERI}}\xspace works best overall: it has the highest mAP among the ablation experiments as well as the lowest mean $L1$ error for VAD.
While other hyper-parameter settings (different Gaussian standard deviations $\sigma$) achieve better $L1$ errors on Valence, Arousal, or Dominance individually, these settings tend to perform worse overall compared to \nameCOLOR{\mbox{PERI}}\xspace. \subsection{MediaPipe Holistic model} \label{sec:mediapipe_holistic} To obtain the body poses and facial landmarks, we use the MediaPipe Holistic pipeline~\cite{DBLP:journals/corr/abs-1906-08172}. It is a multi-stage pipeline with separate models for body pose and facial landmark detection. The body pose estimation model is trained on $224\times224$ input resolution. However, detecting faces and fine-grained facial landmarks requires higher-resolution inputs. Therefore, the MediaPipe Holistic pipeline first estimates the human pose and then finds the region of interest around the face keypoints detected in the pose output. The region of interest is upsampled, and the facial crop extracted from the original-resolution input image is sent to a separate model for fine-grained facial landmark detection. \subsection{The Emotic Model} The baseline of our paper is the two-stream CNN architecture from Kosti~\mbox{et al.}\xspace~\cite{kosti2020context, Kosti2017EmotionRI}. That work defines the task of \textit{emotion recognition in context}, which considers both body pose and scene context for emotion detection. The architecture takes as input the body crop image, which is sent to the body feature extraction stream, and the entire image, which is sent to the image feature extraction stream. The outputs of the two streams are concatenated and combined through linear classification layers. The model outputs labels for 26 discrete emotion categories and 3 continuous emotion dimensions, \textit{Valence}, \textit{Arousal} and \textit{Dominance}~\cite{Mehrabian1995FrameworkFA}. The two-stream architecture is visualized within our pipeline in \cref{figure:our_arch}. To demonstrate our idea, we keep the basic ResNet-18~\cite{DBLP:journals/corr/HeZRS15} backbone for both streams. \subsection{Part aware spatial image} \begin{figure}[t] \centering \includegraphics[scale=0.5]{images/method/Contextblock.pdf} \caption{ (Left) An input image along with the mask ($\mathbf{B'}$) created by applying a Gaussian function with $\sigma=3$; the mask is binarised and used to create the PAS image ($\mathbf{P}$) in the middle. (Right) Architecture of the \mbox{Cont-In}\xspace block, which uses the PAS image ($\mathbf{P}$) to modulate the ResNet features between intermediate blocks. The input features from the $(n-1)^{th}$ ResNet block are passed in, and the modulated features passed on to the $n^{th}$ block are shown in purple. } \label{figure:context_block} \end{figure} One contribution of our framework is how we combine body pose information with facial landmarks such that we can leverage both sets of features and allow them to complement each other, subject to their availability. Our pipeline has three main stages. First, we use the MediaPipe Holistic model to extract keypoints, as described in \refsec{sec:mediapipe_holistic}. Here we obtain two sets of keypoint coordinates for each body crop image $\mathbf{I}$: the first set of $N$ coordinates describes the body landmarks $\textbf{b}_i$, where $i \in (0, N)$; the second set of $M$ coordinates describes the locations of the facial landmarks $\textbf{f}_j$, where $j \in (0, M)$.
For simplicity, we combine all detected landmarks and denote them as $\textbf{b}_k$, where $k \in (0, M+N)$. We take an all-black mask $\mathbf{B}\in\mathbb{R}^{1\cp H\cp W}$ of the same size as the body crop and fit a Gaussian kernel to every landmark in the original image, \begin{equation} \mathbf{b}_k'(\mathbf{x})= \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{\|\mathbf{x}-\mathbf{b}_k\|^2}{2\sigma^2}}\ . \label{equation:1} \end{equation} The part-aware mask $\mathbf{B}'\in\mathbb{R}^{(1\cp H\cp W)}$ is created by binarizing $\mathbf{b}_k'$ using a constant threshold $\rho$, such that \begin{eqnarray} \mathbf{B}'(\mathbf{x}) &=& \begin{cases} 1 & \text{if~} \| \mathbf{x} - \mathbf{b}_k \; \|_2 \leq \rho \text{,} \\ 0 & \text{if~} \| \mathbf{x} - \mathbf{b}_k \; \|_2 > \rho \text{,} \end{cases} \end{eqnarray} where $\mathbf{x}$ ranges over the pixel coordinates of $\mathbf{B}$. The distance threshold $\rho$ is determined empirically. Finally, to obtain the part aware spatial (\mbox{PAS}\xspace) image $\mathbf{P}\in\mathbb{R}^{3\cp H\cp W}$, the part-aware mask is applied to the input body crop $\mathbf{I}$ using a channel-wise Hadamard product, \begin{equation} \mathbf{P} = \mathbf{I} \otimes \mathbf{B}' \end{equation} This process is visualized in \cref{figure:context_block} (left). \subsection{Context Infusion Blocks} To extract information from PAS images, we explore \textit{early fusion}, which simply concatenates PAS with the body crop image $\mathbf{I}$ in the body feature extraction stream of our network. We also explore \textit{late fusion}, concatenating feature maps derived from PAS images before the fusion network. However, both of these approaches fail to improve performance. Motivated by the above, we present our second contribution, the Context Infusion Block (\mbox{Cont-In}\xspace), an architectural block that uses the PAS contextual image to condition the base network. We design \mbox{Cont-In}\xspace blocks such that they can be easily introduced into any existing emotion recognition network. \cref{figure:context_block} shows the architecture of a \mbox{Cont-In}\xspace block in detail. In \nameCOLOR{\mbox{PERI}}\xspace, the body feature extraction stream uses \mbox{Cont-In}\xspace blocks to attend to part-aware context in the input image. Our intuition is that the pixel-aligned \mbox{PAS}\xspace images and the \mbox{Cont-In}\xspace blocks enable the network to determine the body part regions most salient for detecting emotion. \mbox{Cont-In}\xspace learns to modulate the network features by fusing the features of the intermediate layer with feature maps derived from PAS. Let $\textbf{X} \in \mathbb{R}^{H\cross W\cross C}$ be the intermediate features from the $(n-1)^{th}$ block of the base network. The PAS image $\textbf{P}$ is first passed through a series of convolution and activation operations, denoted by $g(\cdot)$, to get an intermediate representation $\mathcal{G} = g(\textbf{P})$, where $\mathcal{G} \in \mathbb{R}^{H\cross W\cross C}$. These feature maps are then concatenated with $\textbf{X}$ to get a fused representation $\textbf{F} = \mathcal{G} \oplus \textbf{X}$. $\textbf{F}$ is then passed through a second series of convolutions and activations, followed by batch normalization, to obtain the feature map $\textbf{X}' \in \mathbb{R}^{H\cross W\cross C'}$, which is passed on to the $n^{th}$ block of the base network (see \cref{figure:our_arch}).
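To make the construction concrete, the following is a minimal PyTorch sketch of the \mbox{PAS}\xspace computation and a \mbox{Cont-In}\xspace block. The channel widths, kernel sizes, and the exact number of layers in $g(\cdot)$ and the fusion stage are our own assumptions; the text above specifies only their convolution/activation/batch-norm structure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as Fn

def make_pas(image, landmarks, rho=3.0):
    """Build the part-aware spatial (PAS) image from a body crop.

    image: (3, H, W) tensor; landmarks: (K, 2) float pixel coordinates (x, y).
    The binary mask B' is 1 within distance rho of any landmark, which is
    equivalent to thresholding the Gaussian bumps of Eq. (1).
    """
    _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()                  # (H, W, 2)
    d2 = ((grid[None] - landmarks[:, None, None]) ** 2).sum(-1)   # (K, H, W)
    mask = (d2.min(dim=0).values <= rho ** 2).float()             # mask B'
    return image * mask                                           # P = I (Hadamard) B'

class ContIn(nn.Module):
    """Context Infusion block: modulates backbone features with the PAS image."""

    def __init__(self, c_feat, c_out):
        super().__init__()
        # g(.): convolution + activation lifting PAS to the feature width
        self.g = nn.Sequential(nn.Conv2d(3, c_feat, 3, padding=1), nn.ReLU())
        # fusion: convolution + activation + batch norm, as described above
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * c_feat, c_out, 3, padding=1), nn.ReLU(),
            nn.BatchNorm2d(c_out),
        )

    def forward(self, x, pas):
        # resize PAS to the spatial size of the intermediate features X
        pas = Fn.interpolate(pas, size=x.shape[-2:])
        f = torch.cat([self.g(pas), x], dim=1)   # F = concat(G, X) along channels
        return self.fuse(f)                      # X', passed to the next block
```

Because the block consumes the backbone's feature maps and returns maps of a compatible shape, it can be dropped between consecutive ResNet blocks without modifying any pretrained weights.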
\section{Introduction} Population games were introduced as a framework to model population behaviors and study strategic interactions in populations by extending finite-player games \cite{nash1950equilibrium, sigmund1999evolutionary, von2007theory}. They have a fundamental impact on game theory applied to social networks, the evolution of biological species, viruses, cancer, etc.\ \cite{social, learning, shah2010dynamics, cancer}. A Nash equilibrium (NE) describes a status in which no player in the population is willing to change his/her strategy unilaterally. To investigate the stability of NEs, evolutionary game theory \cite{nowak2006evolutionary, san2012, sigmund1999evolutionary} has been developed over the last several decades. Researchers from various fields (economics, biology, etc.) have designed different dynamics, called mean dynamics or evolutionary dynamics \cite{hofbauer2003, san2009}, under various assumptions (protocols) to describe population behaviors. Important examples include the Replicator, Best-response, Logit and Smith dynamics \cite{matsui1992best, shah2010dynamics, smith1984}, to name a few. A special class of games, named potential games \cite{hofbauer1988theory, monderer1996potential, san2010}, is widely considered. Heuristically, potential games describe the situation in which all players face the same payoff function, called the potential; maximizing each player's own payoff is then equivalent to maximizing the potential. In this case, NEs correspond to maximizers of the potential, which gives natural connections between mean dynamics and gradient flows obtained from minimizing the negative potential. An important example is the Replicator dynamics, which is a gradient flow of the negative potential on the probability space (simplex) equipped with the Shahshahani metric \cite{akin1979geometry, RM, Shahshahani}. Recently, a new viewpoint has been brought into the realm of population games based on optimal transport, see, e.g., the books \cite{am2006,vil2008}, and on mean field games, following the series of works by Lasry and Lions \cite{MFG,de2014,lasry2007}. Mean field games have continuous strategy sets and infinitely many players \cite{bt2012, blanchet2014nash}. Each player is assumed to make decisions according to a stochastic process instead of making a one-shot decision. More specifically, individual players change their pure strategies \textit{locally} and simultaneously in a continuous fashion, following the direction that maximizes their own payoff functions most rapidly. Randomness is also introduced in the form of white noise perturbations. The resulting dynamics for individual players forms a mean-field-type stochastic differential equation, whose probability density function evolves according to a Fokker-Planck equation. Here the mean field serves as a mediator for aggregating individual players' behaviors. For potential games \cite{de2014}, Fokker-Planck equations can also be viewed as gradient flows of free energies in the probability space. Here the free energy refers to the negative expected payoff plus a linear entropy term, which models the risks that players take. Moreover, the probability space is treated as a Riemannian manifold endowed with the optimal transport metric \cite{am2006, vil2003, vil2008}. The aim of this paper is to propose a mean dynamics on a discrete strategy set that possesses the same connections with optimal transport theory as those enjoyed by mean field games. It should be noted that it is not a straightforward task to transfer the theory for games with a continuous strategy set directly to discrete settings.
This is because the discrete strategy set is not a length space, i.e., a space in which one can define the length of curves and continuously morph one curve into another. To proceed, we employ key tools developed in \cite{li-theory, li-finite, li-thesis} (similar topics are discussed in \cite{chow2012, erbar2012ricci, maas2011gradient}). More specifically, we introduce an optimal transport metric on the probability space over the strategy set. With such a metric, we derive the gradient flow of the discrete free energy as the mean dynamics. In detail, consider a population game with finite discrete strategy set $S=\{1,\cdots, n\}$. Denote the set of population states \begin{equation*} \mathcal{P}(S)=\{(\rho_i)_{i=1}^n\in \mathbb{R}^n~:~ \sum_{i=1}^n\rho_i=1\ ,~\rho_i\geq 0\ ,~i\in S\}\ , \end{equation*} and the payoff functions $F_i\colon \mathcal{P}(S)\rightarrow \mathbb{R}$, for any $i\in S$. The derived mean dynamics is given by \begin{equation}\label{a1} \begin{split} \frac{ d\rho_i}{dt}&=\sum_{j\in N(i)} \rho_j[ F_i(\rho)- F_j(\rho)+\beta(\log\rho_j-\log\rho_i)]_+\\ &-\sum_{j\in N(i)}\rho_{i}[ F_j(\rho)- F_i(\rho)+\beta(\log\rho_i-\log\rho_j)]_+ \ ,\\ \end{split} \end{equation} where $\beta\geq 0$ is the strength of uncertainty, $\rho_i(t)$ is the probability of strategy $i\in S$ at time $t$, $[\cdot]_+=\max\{\cdot,0\}$, and $j\in N(i)$ if strategy $j$ can be reached by players changing their strategies from $i$. We call \eqref{a1} the Fokker-Planck equation of the game. Dynamics \eqref{a1} can be viewed from several perspectives. First, if the game under consideration is a potential game, i.e., a game for which there exists a potential $\mathcal{F}~:~\mathcal{P}(S)\rightarrow \mathbb{R}$ such that $ \frac{\partial}{\partial\rho_i}\mathcal{F}(\rho)=F_i(\rho) $, then equation \eqref{a1} is the gradient flow of the free energy $$ -\mathcal{F}(\rho)+\beta\sum_{i=1}^n\rho_i\log\rho_i $$ on the Riemannian manifold $(\mathcal{P}(S), \mathcal{W})$. Here $\sum_{i=1}^n\rho_i\log\rho_i$ is the discrete entropy term and $\mathcal{W}$ is an optimal transport metric defined on the simplex. Second, equation \eqref{a1} can be regarded as the transition equation of a nonlinear Markov process. This Markov process models an individual player's decision-making process, which is \textit{local, myopic, greedy and irrational}. Locality refers to the behavior that a player only compares his/her current strategy with neighboring strategies, instead of the entire strategy set. Myopicity means that a player makes his/her decision solely based on the currently available information. Greediness reflects the behavior that a player always selects the strategy that improves his/her payoff most rapidly at the current time. Lastly, and most importantly, by introducing white noise through the so-called log-Laplacian term in \eqref{a1}, the Markov process models players' uncertainty in the decision-making process. This uncertainty may be due to players making mistakes or to risk-taking behavior. The risk-taking interpretation allows us to define the noisy payoff $\bar F_i\colon\mathcal{P}(S)\rightarrow \mathbb{R}$ for each strategy $i$, \begin{equation}\label{noisy-payoff} \bar F_i(\rho):=F_{i}(\rho)-\beta\log\rho_{i}\ . \end{equation} Intuitively, the monotonicity of the $\log$ term implies that the fewer players currently select strategy $i$, the more likely a player is to take the risk of switching to strategy $i$.
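To make the dynamics concrete, the following is a minimal numerical sketch of \eqref{a1}. It is our own illustration; the payoff matrix, graph, and solver settings are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fokker_planck_rhs(rho, F, beta, neighbors):
    """Right-hand side of the discrete Fokker-Planck equation (a1).

    rho: probability vector over the n strategies; F: rho -> payoff vector;
    neighbors[i]: list of strategies reachable from i; beta: noise level.
    """
    rho = np.maximum(rho, 1e-12)          # guard the log against round-off
    barF = F(rho) - beta * np.log(rho)    # noisy payoff barF_i = F_i - beta*log(rho_i)
    drho = np.zeros_like(rho)
    for i, Ni in enumerate(neighbors):
        for j in Ni:
            drho[i] += rho[j] * max(barF[i] - barF[j], 0.0)   # inflow from j
            drho[i] -= rho[i] * max(barF[j] - barF[i], 0.0)   # outflow to j
    return drho

# Rock-Scissors-Paper (see the examples section) on a complete strategy graph:
A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
nbrs = [[1, 2], [0, 2], [0, 1]]
sol = solve_ivp(lambda t, r: fokker_planck_rhs(r, lambda x: A @ x, 0.1, nbrs),
                (0.0, 50.0), np.array([0.6, 0.3, 0.1]))
print(sol.y[:, -1])   # settles near the equilibrium (1/3, 1/3, 1/3)
```

With $\beta=0.1$ the trajectory settles near $(1/3,1/3,1/3)$, matching the Rock-Scissors-Paper example in Section \ref{examples}.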
If the strength of the noise (the $\beta$ term) is sufficiently large, the equilibrium deviates considerably from the noise-free one. Dynamics \eqref{a1} has many appealing features. For potential games, since the dynamics is a gradient flow, the stationary points of the free energy, named Gibbs measures, are equilibria of \eqref{a1}. Their stability properties can be studied by leveraging two key notions, namely the {\em relative entropy} and the {\em relative Fisher information} \cite{Fisher, vil2008}. Through their relations with the optimal transport metric, we show that the relative entropy converges to 0 as $t$ goes to infinity, and that the solution converges to the Gibbs measure exponentially fast. For general games, \eqref{a1} is not a gradient flow and may exhibit complicated limiting behaviors, including Hopf bifurcations; the noise level provides a natural parameter for such bifurcations. The arrangement of this paper is as follows. In section \ref{Game}, we give a brief introduction to population games on discrete strategy sets. In section 3, we derive \eqref{a1} from an optimal transport metric defined on the simplex, and introduce the Markov process associated with \eqref{a1} from the modeling perspective. In section 4, we study the long-time behavior of \eqref{a1} via relative entropy and relative Fisher information. In section \ref{examples}, we illustrate our dynamics on some well-known population games. \section{Preliminaries}\label{Game} In this paper we focus on population games. Consider a game played by countably infinitely many players. Each player in the population selects a pure strategy from the discrete strategy set $S=\{1,\cdots, n\}$. The aggregate state of the population is described by the population state $\rho=(\rho_i)_{i=1}^n\in \mathcal{P}(S)$, where $\rho_i$ represents the proportion of players choosing pure strategy $i$ and $\mathcal{P}(S)$ is the probability space (simplex): \begin{equation*}\label{probs} \mathcal{P}(S)=\{(\rho_i)_{i=1}^n\in\mathbb{R}^n~:~\sum_{i=1}^n \rho_i=1\ , ~0\leq \rho_i\leq 1\ ,~i\in S\}\ . \end{equation*} The game assumes that each player's payoff is independent of his/her identity (autonomous game). Thus all players choosing strategy $i$ have the same continuous payoff function $F_{i}: \mathcal{P}(S)\rightarrow \mathbb{R}$. A population state $\rho^*\in \mathcal{P}(S)$ is a Nash equilibrium of the population game if \begin{equation*} \rho_i^*>0 ~\textrm{implies that} ~F_i(\rho^*)\geq F_j(\rho^*)\ ,\quad\textrm{for all $j\in S$\ .} \end{equation*} The following type of population game is of particular importance, as its NEs enjoy various prominent properties. {A population game is called a {\em potential game} if there exists a differentiable potential function $\mathcal{F}: \mathcal{P}(S)\rightarrow \mathbb{R}$ such that $\frac{\partial}{\partial\rho_i}\mathcal{F}(\rho)=F_i(\rho)$, for all $i\in S$.} It is a well-known fact that the NEs of a potential game are the stationary points of $\mathcal{F}(\rho)$. \noindent{\em Example:} Suppose that a unit mass of agents is randomly matched to play a symmetric normal-form game with payoff matrix $A\in \mathbb{R}^{n\times n}$. At population state $\rho$, a player choosing strategy $i$ receives the expected payoff of a random match, i.e. $F_i(\rho)=\sum_{j\in S}a_{ij}\rho_j$.
In particular, if the payoff matrix $A$ is symmetric, then the game is a potential game with potential function $\mathcal{F}(\rho)=\frac{1}{2}\rho^TA\rho$, since $\frac{\partial}{\partial \rho_i}\mathcal{F}(\rho)=F_i(\rho)$. Given a potential game with potential $\mathcal{F}$, define the {\em noisy potential} \begin{equation*} \mathcal{\bar F}(\rho):=\mathcal{F}(\rho)-\beta \sum_{i=1}^n\rho_i\log\rho_i\ ,\quad \beta>0\ , \end{equation*} which is the sum of the potential and the Shannon-Boltzmann entropy. In information theory, it has long been known that entropy is a way to model uncertainties \cite{Fisher}. In the context of population games, such uncertainties may refer to players' irrational behaviors, mistakes, or risk-taking. In optimal transport theory, the negative noisy potential is usually called the {\em free energy} \cite{vil2003, vil2008}. The problem of maximizing each player's payoff under uncertainty is equivalent to maximizing the noisy potential (minimizing the free energy) \begin{equation*} \min\{-\mathcal{\bar F}(\rho)~:~\rho\in\mathcal{P}(S)\}\ . \end{equation*} We call the stationary points $\rho^*$ of the above minimization the discrete Gibbs measures, i.e. $\rho^*$ solves the following fixed point problem \begin{equation}\label{gibbs} \rho_i^*=\frac{1}{K}e^{\frac{F_i(\rho^*)}{\beta}}\ ,~\textrm{for any $i\in S$\ , where}\quad K=\sum_{j=1}^n e^{\frac{F_j(\rho^*)}{\beta}}\ . \end{equation} \section{Evolutionary dynamics via optimal transport}\label{derivation} In this section, we first introduce an optimal transport metric for population games. Based on this distance, we propose another approach to evolutionary dynamics via optimal transport theory; see the references in the books \cite{vil2003, vil2008}. For potential games, the resulting dynamics can be viewed as a gradient flow of the free energy. \subsection{Optimal transport metric for games} To introduce the optimal transport metric, we start with the construction of strategy graphs. A strategy graph $G=(S,E)$ is a {\em neighborhood} structure imposed on the strategy set $S=\{1,\cdots, n\}$. Two vertices $i,j\in S$ are connected in $G$ if players who currently choose strategy $i$ are able to switch to strategy $j$. Denote the neighborhood of $i$ by \begin{equation*} N(i)=\{j\in S\mid (i,j)\in E \}\ . \end{equation*} For many games, every two strategies are connected, making $G$ a complete graph; in other words, $N(i)=S\setminus \{i\}$ for any $i\in S$. For example, the strategy set of the Prisoner's Dilemma game is either Cooperation (C) or Defection (D), i.e. $S=\{C, D\}$, and the strategy graph is \begin{center} \begin{tikzpicture}[->,shorten >=1pt,auto,node distance=3cm, thick,main node/.style={circle,fill=blue!20,draw,minimum size=1cm,inner sep=0pt}] \node[main node] (1) {$D$}; \node[main node] (2) [left of=1] {$C$}; \path[-] (2) edge node {} (1); \node[anchor=south] at ( 0,0.5) {$F_D(\rho)$}; \node[anchor=south] at ( -3,0.5) {$F_C(\rho)$}; \end{tikzpicture} \end{center} For any given strategy graph $G$, we can introduce an optimal transport metric on the simplex $\mathcal{P}(S)$. Denote the interior of $\mathcal{P}(S)$ by $\mathcal{P}_o(S)$. Given a function $\Phi\colon S\to \mathbb{R}$, define $\nabla\Phi\colon S\times S\to \mathbb{R}$ as \begin{equation*} \nabla\Phi_{ij}=\begin{cases} \Phi_i-\Phi_j\quad &\textrm{if $(i,j)\in E$;}\\ 0\quad &\textrm{otherwise}.
\end{cases} \end{equation*} Let $m\colon S\times S\to \mathbb{R}$ be an anti-symmetric flux function, i.e. $m_{ij} = -m_{ji}$. The divergence of $m$, denoted $\textrm{div}(m)\colon S\rightarrow \mathbb{R}$, is defined by \begin{equation*} \textrm{div}(m)_i = -\sum_{j\in N(i)}m_{ij}\ . \end{equation*} For the purpose of defining our distance function, we will use the particular flux function \begin{equation*} m_{ij}=(\rho \nabla\Phi)_{ij}:=g_{ij}(\rho)\nabla\Phi_{ij}\ , \end{equation*} where $g_{ij}(\rho)$ represents the discrete probability (weight) on edge $(i,j)$, defined by \begin{equation*} g_{ij}(\rho)= \begin{cases} \rho_j & \bar F_j(\rho)<\bar F_i(\rho)\ ;\\ \rho_i& \bar F_j(\rho)>\bar F_i(\rho)\ ;\\ \frac{\rho_i+\rho_j}{2}& \bar F_j(\rho)=\bar F_i(\rho)\ .\\ \end{cases} \end{equation*} Here $\bar F_i(\rho)=F_i(\rho)-\beta\log\rho_i$ is the noisy payoff defined in \eqref{noisy-payoff}. We can now define the discrete inner product of $\nabla\Phi$ on $\mathcal{P}_o(S)$, \begin{equation*} (\nabla\Phi,\nabla\Phi )_\rho:=\frac{1}{2}\sum_{(i,j)\in E} (\Phi_i-\Phi_j)^2g_{ij}(\rho)\ , \end{equation*} where the factor $\frac{1}{2}$ appears because each edge is counted twice, i.e. $(i,j)$, $(j, i)\in E$. The above definitions provide the following distance on $\mathcal{P}_o(S)$. \begin{definition} Given two discrete probability functions $\rho^0$, $ \rho^1\in\mathcal{P}_o(S)$, the Wasserstein metric $\mathcal{W}$ is defined by \begin{equation*}\label{metric} \mathcal{W}(\rho^0,\rho^1)^2=\inf \{\int_0^1(\nabla\Phi,\nabla\Phi)_\rho dt~:~ \frac{d\rho}{dt}+\mathrm{div}(\rho\nabla\Phi)=0\ ,~\rho(0)=\rho^0,~\rho(1)=\rho^1\}\ . \end{equation*} \end{definition} It is known that $(\mathcal{P}_o(S), \mathcal{W})$ is a finite-dimensional Riemannian manifold \cite{chow2012, maas2011gradient}, and that the metric $\mathcal{W}$ depends on the graph structure of the strategy set. \subsection{Evolutionary dynamics}\label{dynamics-derivation} We shall derive \eqref{a1} as the gradient flow of the free energy on the Riemannian manifold $(\mathcal{P}_o(S), \mathcal{W})$. \begin{theorem}\label{th12} Given a potential game with strategy graph $G=(S, E)$, potential $\mathcal{F}(\rho)\in C^2(\mathbb{R}^n)$ and a constant $\beta\geq 0$, the gradient flow of the free energy \begin{equation*} -\mathcal{F}(\rho)+\beta\sum_{i=1}^n\rho_i\log\rho_i \end{equation*} on the Riemannian manifold $(\mathcal{P}_o(S), \mathcal{W})$ is the Fokker-Planck equation \begin{equation*} \begin{split} \frac{ d\rho_i}{dt}&=\sum_{j\in N(i)} \rho_j[ F_i(\rho)- F_j(\rho)+\beta(\log\rho_j-\log\rho_i)]_+\\ &-\sum_{j\in N(i)}\rho_{i}[ F_j(\rho)- F_i(\rho)+\beta(\log\rho_i-\log\rho_j)]_+ \ ,\\ \end{split} \end{equation*} for any $i\in S$. In addition, for any initial condition $\rho^0\in\mathcal{P}_o(S)$, there exists a unique solution $\rho(t): [0,\infty)\rightarrow \mathcal{P}_o(S)$, and the free energy is a Lyapunov function. Moreover, if $\rho^{\infty}=\lim_{t\rightarrow \infty }\rho(t)$ exists, then $\rho^{\infty}$ is one of the Gibbs measures satisfying \eqref{gibbs}. \end{theorem} \begin{remark} We note that if $\beta=0$ and $G$ is a complete graph, the derived Fokker-Planck equation is the Smith dynamics \cite{smith1984}. \end{remark} \begin{remark} The strategy graph $G$ is different from the graphs in evolutionary graph games studied in \cite{allen2014games, lieberman2005evolutionary, graph}: there, the graph mainly encodes a spatial structure, while our graph describes the strategy set.
\end{remark} The proof of Theorem \ref{th12} is given in \cite{li-theory, li-thesis}; see the details there. We can further extend \eqref{a1} as a mean dynamics to model general population games without a potential. Although \eqref{a1} can no longer be viewed as a gradient flow in this case, it is still a system of well-defined ordinary differential equations on $\mathcal{P}(S)$. \begin{corollary} Given a population game with strategy graph $G=(S, E)$ and a constant $\beta\geq 0$, assume that the payoff function $F:~\mathcal{P}(S)\rightarrow \mathbb{R}^n$ is continuous. For any initial condition $\rho^0\in \mathcal{P}_o(S)$, the Fokker-Planck equation \begin{equation*} \begin{split} \frac{ d\rho_i}{dt}&=\sum_{j\in N(i)} \rho_j[ F_i(\rho)- F_j(\rho)+\beta(\log\rho_j-\log\rho_i)]_+\\ &-\sum_{j\in N(i)}\rho_{i}[ F_j(\rho)- F_i(\rho)+\beta(\log\rho_i-\log\rho_j)]_+ \ ,\\ \end{split} \end{equation*} is a well-defined flow in $\mathcal{P}_o(S)$. \end{corollary} The proof is similar to that of Theorem \ref{th12} and hence omitted. It is worth mentioning that, for potential games, there may exist multiple Gibbs measures as equilibria of \eqref{a1}. For non-potential games, more complicated phenomena than equilibria can occur, for example invariant sets. We illustrate this with a modified Rock-Scissors-Paper game in Section \ref{examples}, for which a Hopf bifurcation exists with respect to the parameter $\beta$. \subsection{Markov process}\label{Markov_process} In this subsection, we look at the Fokker-Planck equation \eqref{a1} from the probabilistic viewpoint. More specifically, we present a Markov process whose transition function is given by \eqref{a1}. From the modeling perspective, this Markov process describes an individual player's decision process, which is myopic, irrational and locally greedy. The Markov process $X_{\beta}(t)$ is defined by \begin{equation}\label{Markov} \begin{split} &\textrm{Pr}(X_\beta(t+h)=j\mid X_\beta(t)=i)\\ =&\begin{cases} ( \bar F_j(\rho)- \bar F_i(\rho))_+h+o(h)\ , \quad&\textrm{if}~ j\in N(i)\ ;\\ 1-\sum_{j\in N(i)}(\bar F_j(\rho)-\bar F_i(\rho))_+h+o(h)\ ,\quad &\textrm{if}~ j=i\ ;\\ 0\ ,\quad &\textrm{otherwise}\ , \end{cases} \end{split} \end{equation} where $\bar F_i(\rho)=F_i(\rho)-\beta\log\rho_i$ and $\lim_{h\rightarrow 0}\frac{o(h)}{h}=0$. It can easily be seen that the probability evolution equation of $X_{\beta}(t)$ is exactly \eqref{a1}. The process $X_{\beta}(t)$ characterizes players' decision making. Intuitively, players compare their current strategy with its neighbors on the strategy graph; if a neighboring strategy has a higher payoff than the current one, they switch to it with probability proportional to the difference between the two payoffs. In addition, $X_\beta(t)$ captures an individual player's irrational behavior. This irrationality may be due to players' mistakes or willingness to take risks. The uncertainty of strategy $i$ is quantified by the term $\log\rho_{i}$. The monotonicity of this term intuitively implies that {\em the fewer players currently select strategy $i$, the more likely players are to take the risk of switching to strategy $i$.} For this reason, we call $F_{i}(\rho)-\beta\log\rho_{i}$ the noisy payoff of strategy $i$, where $\beta$ is the noise level. \section{Stability via Entropy and Fisher information} In this section, we discuss the long-time behavior of \eqref{a1} for potential games and study its convergence properties.
Our derivation depends on two concepts, which extend the discrete relative entropy and relative Fisher information \cite{convergence}. They are used to measure the closeness between a discrete measure $\rho$ and the Gibbs measure $\rho^\infty$ defined by \eqref{gibbs}. The first is the discrete relative entropy ($\mathcal{H}$) \begin{equation*} \mathcal{H}(\rho|\rho^\infty):=\beta(\mathcal{\bar F}(\rho^\infty)-\mathcal{\bar F}(\rho))\ . \end{equation*} The other is the discrete relative Fisher information ($\mathcal{I}$) \begin{equation*} \mathcal{I}(\rho|\rho^\infty):=\sum_{(i,j)\in E}[(\log\frac{\rho_i}{e^{F_i(\rho)/\beta}}-\log\frac{\rho_j}{e^{F_j(\rho)/\beta}})_+]^2\rho_i\ . \end{equation*} We remark that in finite-player games, where the potential is a linear function (not of mean-field type), $\mathcal{H}$ and $\mathcal{I}$ coincide with the classical relative entropy (Kullback-Leibler divergence) and relative Fisher information, respectively; see \cite{li-finite, li-thesis}. We shall show that $\mathcal{H}(\rho(t)|\rho^\infty)$ converges to 0 as $t$ goes to infinity. We will also estimate the speed of convergence and characterize the stability properties of the Gibbs measures. Before that, we state a theorem that connects $\mathcal{H}$ and $\mathcal{I}$ via the gradient flow \eqref{a1}. \begin{theorem}\label{H} Suppose $\rho(t)$ is the transition probability of $X_\beta(t)$ for a potential game. Then the relative entropy decreases as a function of $t$; in other words, \begin{equation*} \frac{d}{dt}\mathcal{H}(\rho(t)|\rho^\infty)<0\ . \end{equation*} Moreover, the dissipation of relative entropy is $\beta$ times the relative Fisher information, \begin{equation}\label{Fisher} \frac{d}{dt}\mathcal{H}(\rho(t)|\rho^\infty)=-\beta\mathcal{I}(\rho(t)|\rho^\infty)\ . \end{equation} \end{theorem} The proof is based on the fact that $\mathcal{H}$ (the difference between noisy potentials) decreases along the gradient flow with respect to time. Namely, \begin{equation}\label{lyapunov} \begin{split} \frac{d}{dt}\mathcal{H}(\rho|\rho^\infty)=&-\beta\frac{d}{dt}\mathcal{\bar F}(\rho(t))=-\beta(\nabla\bar F, \nabla \bar F)_\rho\\ =&-\beta\sum_{ (i,j)\in E}[(\bar F_j(\rho)-\bar F_i(\rho))_+]^2\rho_i\\ =&-\beta\sum_{ (i,j)\in E}[(\log\frac{\rho_i}{e^{F_i(\rho)/\beta}}-\log \frac{\rho_j}{e^{F_j(\rho)/\beta}} )_+]^2\rho_i\ .\\ \end{split} \end{equation} This shows that the noisy potential grows at a rate equal to the relative Fisher information. In other words, the population as a whole always improves its average noisy payoff at a rate equal to the expected squared payoff improvements. Based on Theorem \ref{H}, we show that the dynamics converges to the equilibrium exponentially fast. Here the convergence is in the sense of $\mathcal{H}$ going to zero; such a phenomenon is called entropy dissipation. \begin{theorem}[Entropy dissipation]\label{main-theorem} Let $\mathcal{F}\in C^2(\mathcal{P}(S))$ be a concave potential function (not necessarily strictly concave) for a given game. Then there exists a constant $C=C(\rho^0,G)>0$ such that \begin{equation}\label{exp} \mathcal{H}(\rho(t)|\rho^\infty)\leq e^{-Ct}\mathcal{H}(\rho^0|\rho^\infty)\ . \end{equation} \end{theorem} The proof of Theorem \ref{main-theorem} follows by noticing that $$\beta\,\mathcal{I}(\rho|\rho^\infty)\geq C\,\mathcal{H}(\rho|\rho^\infty)\ ,$$ and applying Gronwall's inequality. See \cite{li-thesis, li-theory} for details.
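These monotonicity statements are straightforward to sanity-check numerically. A minimal sketch of our own, reusing the \texttt{fokker\_planck\_rhs} helper from the sketch in Section \ref{derivation} (the payoff matrix and noise level are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Potential game: symmetric payoff matrix A, potential F(rho) = rho^T A rho / 2
A = np.array([[2.0, 0.0], [0.0, 3.0]])   # a two-strategy coordination game
beta = 0.5
payoff = lambda r: A @ r
noisy_potential = lambda r: 0.5 * r @ A @ r - beta * np.sum(r * np.log(r))

sol = solve_ivp(lambda t, r: fokker_planck_rhs(r, payoff, beta, [[1], [0]]),
                (0.0, 30.0), np.array([0.9, 0.1]),
                dense_output=True, rtol=1e-8)
vals = [noisy_potential(sol.sol(t)) for t in np.linspace(0.0, 30.0, 200)]
# barF increases along the flow, hence H = beta*(barF(rho_inf) - barF(rho)) decreases
assert all(b >= a - 1e-6 for a, b in zip(vals, vals[1:]))
```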
In fact, the exponential convergence is naturally expected, because \eqref{a1} is a gradient flow on the Riemannian manifold $(\mathcal{P}_o(S), \mathcal{W})$. Moreover, a more precise characterization of the convergence rate $C$ in \eqref{exp} can be given, which also enables us to address the stability of Gibbs measures. Define \begin{equation}\label{def} \lambda(\rho)=\min_{\Phi}~~-\textrm{div}(\rho\nabla\Phi)^T\cdot\textrm{Hess}\mathcal{\bar F}(\rho)\cdot \textrm{div}(\rho\nabla\Phi) \ , \end{equation} where the minimum is taken over all $(\Phi_i)_{i=1}^n\in \mathbb{R}^n$ such that $(\nabla\Phi, \nabla\Phi)_\rho=1$, and $\textrm{Hess}$ denotes the Hessian operator in $\mathbb{R}^n$. \begin{theorem}[Stability and asymptotic convergence rate]\label{stability} Consider a potential game with potential $\mathcal{F}(\rho)\in C^2$ and denote its Gibbs measure, defined by \eqref{gibbs}, as $\rho^\infty$. If $\lambda(\rho^\infty)>0$, then $\rho^{\infty}$ is an asymptotically stable equilibrium of \eqref{a1}. In addition, for any sufficiently small $\epsilon>0$, there exists a time $T>0$ such that, for $t>T$, \begin{equation*} \mathcal{H}(\rho(t)|\rho^\infty)\leq e^{-2(\lambda( \rho^{\infty})-\epsilon)(t-T)} \mathcal{H}(\rho^0|\rho^\infty)\ . \end{equation*} \end{theorem} For more details, see \cite{li-theory}. The above convergence results, including the quadratic minimization \eqref{def}, share many properties with the continuous case. For example, Ricci curvature lower bounds and Yano's formula are well defined on the discrete strategy set; see \cite{li-theory, li-finite, erbar2012ricci, vil2008} for details. \section{Examples}\label{examples} In this section, we investigate \eqref{a1} by applying it to several well-known population games. \noindent{\em Example 1: Stag Hunt.} The point we seek to convey in this example is that the noisy payoff reflects the \textit{rationality} of the population. The symmetric normal-form game with payoff matrix \begin{equation*} A=\begin{pmatrix} h&h\\ 0&s \end{pmatrix} \end{equation*} is known as the Stag Hunt game. Each player in a random match decides whether to hunt a hare (h) or a stag (s). Assume $s\geq h$, meaning that the payoff of a stag is larger than that of a hare. This population game has three Nash equilibria: two pure equilibria $(0,1)$ and $(1,0)$, and one mixed equilibrium $(1-\frac{h}{s}, \frac{h}{s})$. In particular, let $h=2$ and $s=3$. The population state is $\rho=(\rho_H,\rho_S)^T$ with payoffs $F_H(\rho)=2$ and $F_S(\rho)=3\rho_S$. Then the Fokker-Planck equation \eqref{a1} becomes \begin{equation*} \begin{cases} \dot \rho_H= \rho_S[2-3\rho_S+\beta\log\rho_S-\beta\log\rho_H]_+-\rho_{H}[-2+3\rho_S+\beta\log\rho_H-\beta\log\rho_S]_+\\ \dot\rho_S=\rho_H[3\rho_S-2+\beta\log\rho_H-\beta\log\rho_S]_+-\rho_{S}[-3\rho_S+2+\beta\log\rho_S-\beta\log\rho_H]_+\ . \end{cases} \end{equation*} The numerical results are shown in Figure \ref{stag-hare}. One can see that if the noise level $\beta$ is sufficiently small, the perturbation does not affect the limiting behavior of the mean dynamics. On the other hand, if the noise level $\beta$ is large enough, \eqref{a1} settles around $(\frac{1}{2}, \frac{1}{2})$. Lastly, if the noise level is moderate, Equation \eqref{a1} has $(1,0)$ as its unique equilibrium. This observation has a practical meaning. Namely, under a moderately large perturbation, it turns out that people always choose to hunt hare (NE $(1,0)$). This is a safe choice, as players get at least a hare no matter how the others behave.
This appears even more clearly when compared with the state $(0,1)$, for which a player may receive nothing. If the perturbation is small and the initial population is more cooperative, people will choose to hunt the stag. This is a rational move because a stag is definitely better than a hare. \begin{figure}[H] \subfloat[$\beta=5$] {\includegraphics[scale=0.23]{7.eps}}\hspace{0cm} \subfloat[$\beta=0.5$] {\includegraphics[scale=0.15]{8.eps}}\\ \subfloat[$\beta=0.1$] {\includegraphics[scale=0.23]{9.eps}}\hspace{0cm} \subfloat[$\beta=0$] {\includegraphics[scale=0.23]{10.eps}}\\ \caption{Stag and Hare} \label{stag-hare} \end{figure} \noindent{\em Example 2: Rock-Scissors-Paper game}. The Rock-Scissors-Paper game has payoff matrix \begin{equation*} A=\begin{pmatrix} 0&1&-1\\ -1&0&1\\ 1&-1&0\\ \end{pmatrix}. \end{equation*} The strategy set is $S=\{r, s, p\}$. The population state is $\rho=(\rho_r,\rho_s,\rho_p)^T$ and the payoff functions are $F_r(\rho)=\rho_s-\rho_p$, $F_s(\rho)=-\rho_r+\rho_p$ and $F_p(\rho)=\rho_r-\rho_s$. By solving \eqref{a1}, we find a unique Nash equilibrium near $\rho^*=(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$ for various values of $\beta$. The results are shown in Figure \ref{Rock-Scissors-Paper}. \begin{figure}[H] \subfloat[$\beta=0$] {\includegraphics[scale=0.3]{5.eps}}\hspace{1cm} \subfloat[$\beta=0.1$] {\includegraphics[scale=0.3]{6.eps}}\\ \caption{Rock-Scissors-Paper} \label{Rock-Scissors-Paper} \end{figure} \noindent{\em Example 3.} We show an example with a Hopf bifurcation. Consider a modified Rock-Scissors-Paper game with payoff matrix \begin{equation*} A=\begin{pmatrix} 0&2&-1\\ -1&0&2\\ 2&-1&0\\ \end{pmatrix} \end{equation*} The strategy set is $S=\{r, s, p\}$. The population state is $\rho=(\rho_r,\rho_s,\rho_p)^T$ and the payoff functions are $F_r(\rho)=2\rho_s-\rho_p$, $F_s(\rho)=-\rho_r+2\rho_p$ and $F_p(\rho)=2\rho_r-\rho_s$. We find that Equation \eqref{a1} exhibits a Hopf bifurcation: if $\beta$ is large, there is a unique equilibrium near $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})^T$; as $\beta$ goes to $0$, the solution approaches a limit cycle. The results are shown in Figure \ref{Bad-Rock-Scissors-Paper}. \begin{figure}[H] \subfloat[$\beta=0.5$] {\includegraphics[scale=0.3]{11.eps}} \subfloat[$\beta=0.1$] {\includegraphics[scale=0.3]{12.eps}}\\ \subfloat[$\beta=0$] {\includegraphics[scale=0.3]{13.eps}} \caption{Modified Rock-Scissors-Paper} \label{Bad-Rock-Scissors-Paper} \end{figure} \noindent{\em Example 4}. We show an example with multiple Gibbs measures. Consider a potential game with payoff matrix \begin{equation*} A=\begin{pmatrix} 1&0& 0\\ 0&1&1\\ 0&1&1\\ \end{pmatrix} \end{equation*} Denote the strategy set by $S=\{1, 2, 3\}$. The population state is $\rho=(\rho_1,\rho_2,\rho_3)^T$ and the payoff functions are $F_1(\rho)=\rho_1$, $F_2(\rho)=\rho_2+\rho_3$ and $F_3(\rho)=\rho_2+\rho_3$. There are three sets of Nash equilibria: \begin{equation*} \{\rho\mid \rho_1=\frac{1}{2}\}\cup \{(1,0,0)\}\cup \{\rho\mid\rho_1=0\}\ , \end{equation*} where the first and third sets are lines on the probability simplex $\mathcal{P}(S)$. By applying \eqref{a1}, we obtain two Gibbs measures \begin{equation*} \{(0,\frac{1}{2},\frac{1}{2})\} \cup \{ (1,0,0)\} \end{equation*} as $\beta\rightarrow 0$. The vector field is shown in Figure \ref{multiple-gibbs}. \begin{figure}[t!]
\subfloat[$\beta=0$] {\includegraphics[scale=0.3]{1.eps}}\hspace{1cm} \subfloat[$\beta=0.1$] {\includegraphics[scale=0.3]{2.eps}}\\ \caption{Multiple Gibbs measures} \label{multiple-gibbs} \end{figure} \noindent{\em Example 5}. For completeness, we present a game with a unique Gibbs measure. Consider another potential game with payoff matrix \begin{equation*} A=\begin{pmatrix} \frac{1}{2}&0&0\\ 0&1&1\\ 0&1&1\\ \end{pmatrix}\ . \end{equation*} Here the strategy set is $S=\{1, 2, 3\}$, the population state is $\rho=(\rho_1,\rho_2,\rho_3)^T$ and the payoff functions are $F_1(\rho)=\frac{1}{2}\rho_1$, $F_2(\rho)=\rho_2+\rho_3$ and $F_3(\rho)=\rho_2+\rho_3$. There are three sets of Nash equilibria, \begin{equation*} \{\rho \mid \frac{1}{2}\rho_1=\rho_2+\rho_3\}\cup\{(1,0,0)\} \cup \{\rho\mid 1=\rho_2+\rho_3\}\ . \end{equation*} By applying the Fokker-Planck equation \eqref{a1}, we obtain a unique Gibbs measure \begin{equation*} (0,\frac{1}{2},\frac{1}{2}) \end{equation*} as $\beta\rightarrow 0$. See Figure \ref{unique-gibbs} for the vector fields. \begin{figure}[H] \subfloat[$\beta=0$] {\includegraphics[scale=0.3]{3.eps}}\hspace{0.45cm} \subfloat[$\beta=0.1$] {\includegraphics[scale=0.3]{4.eps}} \caption{Unique Gibbs measures} \label{unique-gibbs} \end{figure} \section{Conclusion} In this paper, we proposed a dynamics for population games by utilizing optimal transport theory and mean field games. Compared to existing models, it has the following prominent features. Firstly, the dynamics is the gradient flow of the noisy potential on the probability space endowed with the optimal transport metric, and can also be seen as a mean-field-type Fokker-Planck equation. Secondly, the dynamics is the probability evolution equation of a Markov process. Such a process models players' myopicity, greediness and irrationality. In particular, irrational behaviors, or uncertainties, are introduced via the notion of noisy payoff, which shares many similarities with the diffusion or white noise perturbation in the continuous case. Last but not least, for potential games, Gibbs measures are equilibria of the dynamics, and their stability properties are obtained through the relations among the optimal transport metric, entropy and Fisher information. In general, the dynamics may exhibit more complicated limiting behaviors, including Hopf bifurcations. \textbf{Acknowledgement}: This paper is mainly based on Wuchen Li's thesis.
\section[Supplementary material]{Appendix: Calculation of decay rate and momentum change integrals} \setcounter{equation}{0} \setcounter{section}{1} In equs.~(11) and~(12) we show that the decay rate and the change in momentum of an excited atom with initial momentum~$\mathbf{p}_0$ are given by \begin{gather} \Gamma = 2 \pi \sum_{{\mathbf{k},\lambda}} \Omega_k^2 g_{\mathbf{k},\lambda}^2(\mathbf{p}_0) \delta\big(\omega_A-\omega_k + \tfrac{1}{M}\mathbf{k}\cdot(\mathbf{p}_0-\hbar \mathbf{k}/2)\big) \,, \label{eq:_decay_rate_appendix} \\ \tder{t}{}\ew{\vct{P}(t)} = - 2 \pi \sum_{\mathbf{k},\lambda} \hbar \mathbf{k} \Omega_k^2 g_{\mathbf{k},\lambda}^2(\mathbf{p}_0) \delta\big(\omega_A-\omega_k + \tfrac{1}{M}\mathbf{k}\cdot(\mathbf{p}_0-\hbar \mathbf{k}/2)\big) \,. \label{eq:_Pcan_dot_appendix} \end{gather} We shall assume that the atom is heavy and expand only to first order in $(Mc)^{-1}$ such that, for $\mathbf{d} = d \mathbf{e}_d$, \begin{equation} d^2 g_{\mathbf{k},\lambda}^2(\vct{p_0}) \simeq (\mathbf{d}\cdot\boldsymbol{\epsilon}_{\mathbf{k},\lambda})^2 +\tfrac{2}{Mc} (\mathbf{d} \cdot\boldsymbol{\epsilon}_{\mathbf{k},\lambda}) \big(\mathbf{p}_0-\tfrac{\hbar \omega}{2 c}\boldsymbol{\kappa}\big)\cdot\big( (\boldsymbol{\kappa}\times\boldsymbol{\epsilon}_{\mathbf{k},\lambda})\times\mathbf{d}\big) \,. \end{equation} The two polarisation directions~$\boldsymbol{\epsilon}_{\mathbf{k},\lambda}$ and the unit wave vector~$\boldsymbol{\kappa} = \mathbf{k} c/\omega_k$ are mutually orthonormal. Hence summing over the polarisations $\lambda=1,2$ we get $\sum_\lambda (\mathbf{d}\cdot\boldsymbol{\epsilon}_{\mathbf{k},\lambda})^2 = d^2 - (\mathbf{d} \cdot\boldsymbol{\kappa})^2$ and \begin{equation} \sum_\lambda (\mathbf{d}\cdot\boldsymbol{\epsilon}_{\mathbf{k},\lambda}) \vct{a}\cdot\big( (\boldsymbol{\kappa}\times\boldsymbol{\epsilon}_{\mathbf{k},\lambda})\times\mathbf{d}\big) = (\mathbf{d} \cdot \boldsymbol{\kappa})(\vct{a}\cdot\mathbf{d}) - d^2 (\vct{a}\cdot\boldsymbol{\kappa}) \end{equation} for any vector $\vct{a}=(\vct{a}\cdot\boldsymbol{\kappa}) \boldsymbol{\kappa} + \sum_\lambda (\vct{a}\cdot\boldsymbol{\epsilon}_{\mathbf{k},\lambda}) \boldsymbol{\epsilon}_{\mathbf{k},\lambda}$, but here specifically for $\vct{a}=\tfrac{2}{M c} \big(\mathbf{p}_0-\tfrac{\hbar \omega}{2 c}\boldsymbol{\kappa}\big)$. 
If we now change the sum over~$\mathbf{k}$ to an integral of continuous modes $\sum_\mathbf{k} \Omega_k^2 \rightarrow \tfrac{d^2}{2 (2\pi c)^3 \hbar \varepsilon_0} \int \text{d}\boldsymbol{\kappa} \int \text{d}\omega\; \omega^3$ we obtain \begin{gather} \Gamma = \frac{\pi}{(2 \pi c)^3 \hbar \varepsilon_0} \int_{4\pi} \text{d} \boldsymbol{\kappa} \int_0^\infty \text{d} \omega\; f_\Gamma(\omega) \delta\big(\omega_A-\omega_k + \tfrac{1}{M}\mathbf{k}\cdot(\mathbf{p}_0-\hbar \mathbf{k}/2)\big) \\ % \tder{t}{}\ew{\vct{P}(t)} = - \frac{\pi}{(2 \pi c)^3 c \varepsilon_0} \int_{4\pi} \text{d} \boldsymbol{\kappa} \int_0^\infty \text{d} \omega\; \boldsymbol{\kappa} \; f_{\dot{\vct{P}}}(\omega) \delta\big(\omega_A-\omega_k + \tfrac{1}{M}\mathbf{k}\cdot(\mathbf{p}_0-\hbar \mathbf{k}/2)\big) \end{gather} with the solid angle integral $\int_{4\pi} \text{d}\boldsymbol{\kappa} := \int_{-1}^1 \text{d}\cos\theta \int_0^{2\pi} \text{d}\phi$ for $\boldsymbol{\kappa}=(\sin\theta \cos \phi, \sin\theta \sin\phi, \cos\theta)$ and \begin{equation} f_\Gamma(\omega) \simeq \omega^3 \left(1+\tfrac{\hbar \omega}{Mc^2}\right) \left(d^2 - (\mathbf{d} \cdot \boldsymbol{\kappa})^2\right) + \tfrac{2}{M c} \omega^3 \left( (\mathbf{p}_0\cdot\mathbf{d}) (\mathbf{d}\cdot \boldsymbol{\kappa}) - \mathbf{p}_0\cdot\boldsymbol{\kappa} d^2 \right) \end{equation} and $f_{\dot{\vct{P}}}(\omega) = \omega f_\Gamma(\omega)$. Generally we have $\int_a^b f(x) \delta(h(x)) \text{d} x = \sum_{x_0} f(x_0)/\abs{h'(x_0)}$ for smooth functions $f$ and $h$ where $x_0$ are all zeros of $h$ within the interval $(a,b)$ and $h'(x_0)\neq 0$. In our case $h(\omega) \equiv \widetilde{\omega}_\mathbf{k} = \omega_A-\omega (1- \boldsymbol{\kappa}\cdot\mathbf{p}_0/(Mc)) - \hbar \omega^2/(2 M c^2)$ has only one positive root at \begin{equation} \omega_+ = \frac{M c^2}{\hbar}\left[ -\left(1-\frac{\boldsymbol{\kappa}\cdot\mathbf{p}_0}{Mc}\right) + \sqrt{\left(1-\frac{\boldsymbol{\kappa}\cdot\mathbf{p}_0}{Mc}\right)^2 + 2\frac{\hbar \omega_A}{M c^2}}\right] = \omega_A + \frac{\omega_A}{Mc}\left(\boldsymbol{\kappa}\cdot\mathbf{p}_0 - \frac{\hbar \omega_A}{2c}\right) + \mathcal{O}\Big( \big(\tfrac{\hbar \omega_A}{Mc^2}\big)^2 \Big)\,, \end{equation} with $\abs{h'(\omega_+)}\simeq 1-\boldsymbol{\kappa}\cdot\mathbf{p}_0/(Mc) + \hbar \omega_A/(Mc^2)$. 
We thus expand, again to first order in $(Mc)^{-1}$, \begin{equation} \frac{f(\omega_+)}{\abs{h'(\omega_+)}} \simeq f(\omega_A) + \frac{\boldsymbol{\kappa}\cdot \mathbf{p}_0}{M c} \big( f(\omega_A) + \omega_A f'(\omega_A) \big) - \frac{\hbar \omega_A}{2 Mc^2} \big( 2 f(\omega_A) + \omega_A f'(\omega_A) \big) \end{equation} to obtain \begin{gather} \int_0^\infty \text{d} \omega\; f_\Gamma(\omega) \delta(\widetilde{\omega}_\mathbf{k}) = \omega_A^3 \left( 1 - 3 \tfrac{\hbar \omega_A}{2 M c^2} + 4 \tfrac{(\mathbf{p}_0 \cdot \boldsymbol{\kappa})}{Mc} \right) \left( d^2 - (\mathbf{d}\cdot\boldsymbol{\kappa})^2\right) - \tfrac{2 \omega_A^3}{Mc} \left( d^2 (\mathbf{p}_0 \cdot \boldsymbol{\kappa}) - (\mathbf{p}_0 \cdot \mathbf{d})(\mathbf{d}\cdot\boldsymbol{\kappa})\right) \,, \\ \int_0^\infty \text{d} \omega\; \boldsymbol{\kappa}\; f_{\dot{\vct{P}}}(\omega) \delta(\widetilde{\omega}_\mathbf{k}) = \omega_A^4 \boldsymbol{\kappa} \left( 1 - 4 \tfrac{\hbar \omega_A}{2 M c^2} + 5 \tfrac{(\mathbf{p}_0 \cdot \boldsymbol{\kappa})}{Mc} \right) \left( d^2 - (\mathbf{d}\cdot\boldsymbol{\kappa})^2\right) - \tfrac{2 \omega_A^4}{Mc} \boldsymbol{\kappa} \left( d^2 (\mathbf{p}_0 \cdot \boldsymbol{\kappa}) - (\mathbf{p}_0 \cdot \mathbf{d})(\mathbf{d}\cdot\boldsymbol{\kappa})\right) \,. \end{gather} Terms odd in $\boldsymbol{\kappa}$ vanish in the following integration $\int_{4\pi} d\boldsymbol{\kappa}$, where we get \begin{align} \int_{4\pi} \text{d}\boldsymbol{\kappa} \, \big(d^2 - (\mathbf{d} \cdot\boldsymbol{\kappa})^2 \big) &= \tfrac{8\pi}{3} d^2 \,, \\ 5 \int_{4\pi} \text{d}\boldsymbol{\kappa} \, \boldsymbol{\kappa} (\boldsymbol{\kappa} \cdot \mathbf{p}_0) \big(d^2 - (\mathbf{d} \cdot\boldsymbol{\kappa})^2 \big) &= \tfrac{8\pi}{3} \big( 2 d^2 \mathbf{p}_0 - (\mathbf{p}_0\cdot \mathbf{d}) \mathbf{d} \big) \,, \\ 2 \int_{4\pi} \text{d}\boldsymbol{\kappa} \, \boldsymbol{\kappa} \left( d^2 (\boldsymbol{\kappa} \cdot \mathbf{p}_0) - (\boldsymbol{\kappa} \cdot \mathbf{d}) (\mathbf{d} \cdot \mathbf{p}_0)\right) &= \tfrac{8\pi}{3} \big( d^2 \mathbf{p}_0 - (\mathbf{p}_0\cdot \mathbf{d}) \mathbf{d} \big) \,, \end{align} which can be checked, for instance, by choosing $\mathbf{p}_0 = (0,0,p_0)$ and $\mathbf{d}=(d_x,d_y,d_z)$, as in the sketch below. We thus end up with \begin{gather} \Gamma = \frac{\omega_A^3 d^2}{3 \pi \varepsilon_0 \hbar c^3} \left(1-3\frac{\hbar \omega_A}{2 M c^2}\right) \,, \\ \tder{t}{}\ew{\vct{P}(t)} = - \frac{\omega_A^3 d^2}{3 \pi \varepsilon_0 \hbar c^3} \frac{\hbar \omega_A}{M c^2} \mathbf{p}_0 \,. \end{gather} This shows that the decay rate is independent of the velocity (at least to first order in $v_0/c = \mathbf{p}_0/(Mc)$) while the momentum changes as given in equ.~(16).
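As an independent cross-check, the three solid-angle integrals above can be verified symbolically. The following SymPy sketch is our own addition, not part of the derivation; it uses exactly the parametrization suggested in the text, $\mathbf{p}_0 = (0,0,p_0)$ and $\mathbf{d}=(d_x,d_y,d_z)$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi', real=True)
dx, dy, dz, p0 = sp.symbols('d_x d_y d_z p_0', real=True)

kappa = sp.Matrix([sp.sin(th) * sp.cos(ph), sp.sin(th) * sp.sin(ph), sp.cos(th)])
d = sp.Matrix([dx, dy, dz])
p = sp.Matrix([0, 0, p0])      # p_0 along z, as suggested in the text
d2 = d.dot(d)

def sphere_int(expr):
    """Integrate over the unit sphere with measure sin(theta) dtheta dphi."""
    f = lambda e: sp.integrate(sp.integrate(e * sp.sin(th), (th, 0, sp.pi)),
                               (ph, 0, 2 * sp.pi))
    return expr.applyfunc(f) if isinstance(expr, sp.MatrixBase) else f(expr)

# int dkappa (d^2 - (d.kappa)^2) = (8 pi / 3) d^2
I1 = sphere_int(d2 - d.dot(kappa)**2)
assert sp.simplify(I1 - sp.Rational(8, 3) * sp.pi * d2) == 0

# 5 int dkappa kappa (kappa.p)(d^2 - (d.kappa)^2) = (8 pi / 3)(2 d^2 p - (p.d) d)
I2 = 5 * sphere_int(kappa * kappa.dot(p) * (d2 - d.dot(kappa)**2))
assert sp.simplify(I2 - sp.Rational(8, 3) * sp.pi * (2 * d2 * p - p.dot(d) * d)) == sp.zeros(3, 1)

# 2 int dkappa kappa (d^2 (kappa.p) - (kappa.d)(d.p)) = (8 pi / 3)(d^2 p - (p.d) d)
I3 = 2 * sphere_int(kappa * (d2 * kappa.dot(p) - kappa.dot(d) * d.dot(p)))
assert sp.simplify(I3 - sp.Rational(8, 3) * sp.pi * (d2 * p - p.dot(d) * d)) == sp.zeros(3, 1)
```

\end{document}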
\section{Introduction} \label{sec:intro} Radio emission from the immediate vicinity of the supermassive black hole M87*\xspace\ was used to reconstruct the first-ever image of a black hole, estimate its mass, and interpret its theoretical environment by the Event Horizon Telescope (EHT) Collaboration \citep[][hereafter Papers~I-VI]{M87_PaperI,M87_PaperII,M87_PaperIII,M87_PaperIV,M87_PaperV,M87_PaperVI}. The analyses behind these findings included multiple image-reconstruction algorithms, model fitting in the visibility domain employing various geometric shapes, and an extensive investigation of physical emission models. The latter are based on a large library of simulations that model the source as a hot, magnetized accretion flow; these simulations form the input for theoretical model images constructed via ray-tracing and solving the equations of radiative transfer for a thermal population of relativistic electrons emitting synchrotron radiation. Typically, these models predict images with both a sharp ring component -- i.e., associated with the location of photon rings in the underlying spacetime -- and a comparatively diffuse but still compact emission structure \citepalias[see, e.g.,][]{M87_PaperV}. Image reconstructions, model fitting to geometric shapes, and direct fitting to general relativistic magnetohydrodynamical (GRMHD) synchrotron models all yielded an inferred mass for M87*\xspace\ of $(6.5\pm0.7)\times10^9 M_\odot$, which agrees with stellar-dynamical mass inferences of $(6.14_{-0.62}^{+1.07})\times10^9 M_\odot$ (\citealt{Gebhardt2011}; \citetalias{M87_PaperVI}). The error in each method was estimated based on the variable emission structure in the GRMHD-based model images. In the meantime, novel imaging schemes have been developed that can either approximate \citep{Arras_2019,Sun_2020} or directly sample \citep{Themaging,DMC} the posterior distribution over possible image structures. The Bayesian nature of these schemes yields meaningful posterior distributions for the images. These posteriors permit a more rigorous characterization of the credibility of image features and a measure of image consistency. In addition, they permit a hybrid approach that combines image reconstruction with modeling of specific expected features. This is demonstrated in \citet{Themaging}, where imaging is accomplished with a ``nonparametric'' model comprising a rectilinear raster of control points, together with additional geometric components (Gaussians, rings, etc.). Of particular interest here is the ability to reconstruct ring features in the data beyond the diffraction limit, provided the signal-to-noise ratios ($S/N$s) of the measured complex visibilities are sufficiently high. Such ring features are theoretically expected in black hole images due to the propagation of photons in close proximity to the black hole and the associated strong gravitational lensing predicted by general relativity. It is useful to think of the resulting image measured at infinity as being composed of a {\em direct} emission component, dominated by the typically nontrivial and uncertain astrophysical environment, plus a series of {\em ring} components that are confined to distinct narrow regions (far less influenced by the details of the overall flow structure), which arise from photons that circled the black hole once, twice, and so on before reaching a distant observer \citep{Bardeen1973,spin}.
Thus far, the M87*\xspace\ images present the total image structure, i.e., the sum of all of those components; the diffraction-limited image reconstructions in \citetalias{M87_PaperIV} cannot directly distinguish the ring components from the rest of the emission. In all of the theoretical models presented in \citetalias{M87_PaperV} that were consistent with the data, the direct emission provided the majority of the flux, with a substantial minority arising from the first higher-order image component; in what follows we will refer to these as the $n=0$ and $n=1$ ``photon rings,'' respectively, despite the former often not forming a ring. Based on inspection of the GRMHD library presented in \citetalias{M87_PaperV}, the $n=1$ ring typically contains $10\%-30\%$ of the total compact flux, depending on the model \citep[e.g.,][]{Gralla_2019,Johnson_2019}. Here, we demonstrate that statistically preferred reconstructions are achieved when a thin ring is included in addition to the standard nonparametric image-reconstruction component. The thin ring is then identified with the $n=1$ photon ring of the underlying spacetime. Interpreting the separate emission components as the $n=1$ and $n=0$ photon rings provides a powerful mapping of spacetime that is far less sensitive to the astrophysical processes in the emitting region than previous analyses \citep{spin}. In this paper we report the results of applying the hybrid image model from \citet{Themaging} to the M87*\xspace 2017 EHT dataset. By fitting a geometric ring model component simultaneously with a flexible image component, we are able to isolate the $n=1$ photon ring from the surrounding diffuse emission. We demonstrate that our reconstructed emission structure agrees with prior EHT imaging results, and that the properties of the $n=1$ ring yield novel constraints on the black hole mass with significantly reduced systematic uncertainties. Furthermore, because the bright ring emission is effectively extracted by the geometric model component, the image component is free to capture subtler details within the remaining diffuse emission than would otherwise be possible. We find that the removal of the bright foreground ring uncovers additional low-brightness image structures that are consistent with originating from the base of the forward jet; the observed brightness asymmetry, taken together with previous EHT constraints on the black hole spin orientation \citepalias{M87_PaperV}, aligns with expectations for a jet driven by the Blandford-Znajek mechanism \citep{BZ77}. The structure of the paper is as follows. In \autoref{sec:themaging} we summarize the algorithm used to reconstruct the images. We present the reconstructed, structure-agnostic images in \autoref{sec:themages} before presenting the hybrid reconstructions in \autoref{sec:rings}. In \autoref{sec:bhparams} we describe how our findings are related to the black hole parameters. The implications of the structure and evolution of the diffuse emission within the broader context of M87*\xspace are collected in \autoref{sec:discussion}. We summarize and conclude in \autoref{sec:conclusions}. \section{Imaging Algorithm Summary} \label{sec:themaging} We employ the forward-modeling Markov Chain Monte Carlo (MCMC) algorithm presented in \citet{Themaging} and implemented in the {\sc Themis}\xspace\ analysis framework \citep{THEMIS-CODE-PAPER}. 
In this scheme, the image is forward-modeled by a rectilinear grid of control points, at which the intensity is a free parameter, and between which the intensity is modeled via an approximate cubic spline. Station gains are simultaneously modeled, and are assumed to be fixed in time over a single scan. For the current work, this algorithm has been supplemented in four ways. First, we have included the field of view (FOV) as two additional parameters in the underlying image models (one for each dimension in the image plane). This change promotes the two FOVs from hyperparameters that must be manually surveyed to parameters that are continuously explored in a fashion more consistent with the general Bayesian approach employed. It also permits efficient exploration of asymmetric FOVs, i.e., different FOVs in the two directions of the rectilinear grid. While this flexibility makes only a small difference for the results presented here, it does enable analysis of highly asymmetric and extended systems (e.g., active galactic nuclei jets). Second, we permit a rotation of the rectilinear grid. This change permits the control points to optimally arrange themselves within the confines of the grid. Typically, this freedom permits smaller grid dimensions than grids that are fixed to be oriented along the cardinal directions. Again, this additional flexibility makes only a modest difference in the applications here. Third, we make use of an updated set of samplers implemented within {\sc Themis}\xspace. Details on the sampler, including demonstrations, may be found in \citet{TiedeThesis}. In summary, the improved sampler makes use of a deterministic even-odd swap tempering scheme \citep{DEO:2019} using the Hamiltonian Monte Carlo sampling kernel from the Stan package \citep{Stan:2017}. MCMC chain convergence is assessed using standard criteria, including integrated autocorrelation time, approximate split $\hat{R}$, and visual inspection of individual traces. Fourth, we fit to complex visibilities rather than some combination of visibility or closure amplitudes and phases. This requires that we reconstruct the time-variable station gain phases in addition to their amplitudes; both are assumed to be stable across a $\sim$10 minute scan. Fitting the complex visibilities simplifies the treatment of the errors at low $S/N$: thermal errors are strictly Gaussian \citep{TMS}. It also improves the structure of the likelihood surface to be explored, which in this case is smoother and has fewer modes. Fit quality is assessed using the $\chi^2$ statistic, by comparing log-likelihoods, and by inspecting the distribution of residuals. 
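To make the fourth point concrete, the following is a minimal sketch of a Gaussian log-likelihood for complex visibilities with per-station complex gains; the function and array names are hypothetical stand-ins, not the {\sc Themis}\xspace implementation:
\begin{verbatim}
# Sketch: Gaussian log-likelihood for complex visibilities with
# per-station complex gains (hypothetical names, not the Themis API).
import numpy as np

def log_likelihood(V_obs, sigma, V_model, gains, st1, st2):
    """V_obs, V_model : complex arrays of length N (same (u,v) points)
    sigma             : real thermal noise std. dev. per visibility
    gains             : complex gain per station
    st1, st2          : station indices forming each baseline"""
    # Gain-corrupted model: V_ij -> g_i conj(g_j) V_ij
    V_pred = gains[st1] * np.conj(gains[st2]) * V_model
    r = (V_obs - V_pred) / sigma
    # Real and imaginary parts are independent Gaussian deviates,
    # which is what keeps this likelihood well behaved at low S/N.
    return -0.5 * np.sum(r.real**2 + r.imag**2)
\end{verbatim}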
\begin{deluxetable*}{lccccccccccc} \tablecaption{Fit Quality Assessment \label{tab:chi2}} \tablehead{ \colhead{Day} & \colhead{Model\tablenotemark{a}} & \colhead{$N_{\rm params}$} & \colhead{$N_{\rm data, HI}$\tablenotemark{b}} & \colhead{$N_{\rm data, LO}$\tablenotemark{b}} & \colhead{$N_{\rm g,HI}$\tablenotemark{b}} & \colhead{$N_{\rm g,LO}$\tablenotemark{b}} & \colhead{$\chi_{\rm HI}^2$} & \colhead{$\chi_{\rm LO}^2$} & \colhead{$\chi^2$\tablenotemark{c}} & \colhead{$\Delta$BIC\tablenotemark{d}} & \colhead{$\Delta$AIC\tablenotemark{e}} } \startdata April 5 & I$_{5\times5}$+A & 34 & 336 & 336 & 162 & 162 & 295.2 & 246.4 & 628.0 & -- & --\\ & I$_{5\times5}$+A+X & 41 & 336 & 336 & 162 & 162 & 219.6 & 196.2 & 454.7 & -127.7 & -107.4 \\ April 6 & I$_{5\times5}$+A & 34 & 548 & 568 & 226 & 243 & 436.3 & 376.2 & 872.1 & -- & --\\ & I$_{5\times5}$+A+X & 41 & 548 & 568 & 226 & 243 & 356.8 & 344.0 & 756.2 & -66.8 & -68.9 \\ April 10 & I$_{5\times5}$+A & 34 & 182 & 192 & 79 & 73 & 87.9 & 96.7 & 216.5 & -- & --\\ & I$_{5\times5}$+A+X & 41 & 182 & 192 & 79 & 73 & 80.5 & 93.2 & 194.0 & 18.9 & 35.5 \\ April 11 & I$_{5\times5}$+A & 34 & 432 & 446 & 185 & 190 & 388.9 & 338.1 & 800.6 & -- & --\\ & I$_{5\times5}$+A+X & 41 & 432 & 446 & 185 & 190 & 365.6 & 313.6 & 742.6 & -10.5 & -8.0 \\ \enddata \tablenotetext{a}{Model components are as follows: an $N\times M$ dimensional image raster (I$_{N\times M}$), a large-scale asymmetric Gaussian (A), and a slashed ring (X). Detailed descriptions of the model components and priors can be found in the main text and \autoref{app:model}.} \tablenotetext{b}{Each complex visibility is counted as two data points ($N_{\rm data,HI/LO}$). Similarly, each complex gain is counted as two gain parameters ($N_{\rm g,HI/LO}$).} \tablenotetext{c}{Includes contributions from the gain priors.} \tablenotetext{d}{Differences between the Bayesian information criterion (BIC) with and without the slashed ring. The BIC is defined by $\chi^2 + k\ln(N)$ where $k\equiv N_{\rm params}+N_{\rm g,HI}+N_{\rm g,LO}$ is the total number of model parameters and $N\equiv N_{\rm data,HI}+N_{\rm data,LO}$.} \tablenotetext{e}{Differences between the Akaike information criterion (AIC) with and without the slashed ring. The AIC is defined by $\chi^2 + 2k + 2k(k+1)/(N-k-1)$, where $k$ and $N$ are defined as they are for the BIC.} \end{deluxetable*} \section{Image Reconstructions} \label{sec:themages} Prior to applying the complete hybrid imaging model, we first perform nonhybrid image reconstructions to more easily enable comparison with previous imaging work and to provide context for subsequent interpretation of the hybrid images. We reconstruct images for each of the four EHT observations of M87*\xspace taken in 2017 April. The observations were carried out using two frequency bands \citepalias{M87_PaperIII}, which we refer to as high and low band, and we image both bands simultaneously. We assume that the image structure is shared across bands, such that a single image is produced for each day, but we permit the station gains to be completely independent among bands. Prior to fitting, we preprocess the visibility data as described in \citet{Themaging}: we coherently time-average the complex visibilities within each scan, and we add a systematic uncertainty of 1\% in quadrature to the thermal uncertainties to account for residual calibration errors. 
This additional error budget is motivated by the analysis of nonclosing errors in \citetalias{M87_PaperIII} and is the same as that found in \citetalias{M87_PaperV} and \citetalias{M87_PaperVI} to be sufficient to produce high-quality fits. The model we employ uses a $5 \times 5$ grid of control points to capture the image structure, and a large-scale asymmetric Gaussian (a major-axis FWHM above 0.2~\mas; see \autoref{app:model} for details) to accommodate structure seen on only the shortest baselines (again utilizing the hybrid modeling+imaging approach). \begin{figure} \centering \includegraphics[width=\columnwidth]{fig1a.png} \includegraphics[width=\columnwidth]{fig1b.png} \caption{Direct comparisons between the model and measured values of the complex visibilities for two representative days for the combined high- and low-band data. In each plot, the upper panel shows the maximum-likelihood I$_{5\times5}$+A model predictions (red dots) for the real (filled) and imaginary (open) components of the complex visibilities (blue points). Normalized residuals are shown in the bottom panel with $\pm1\sigma$ indicated for reference (red dotted lines). To the right of the residuals, the distributions of normalized residuals (blue histograms) are shown in comparison to a unit-variance normal distribution (red) and the Gaussian with the same mean and variance as the residuals (green dotted).} \label{fig:imgres} \end{figure} \autoref{tab:chi2} provides an accounting of the parameters and data quantities used for each of the fits, and it lists various fit statistics for each of our reconstructions. The quantity most comparable to the fit statistics reported in Table 5 of \citetalias{M87_PaperIV} and Table 2 of \citetalias{M87_PaperVI} is $\chi^2/(N_{\rm data,HI}+N_{\rm data,LO})$, which here ranges from 0.58 to 0.93, comparable to the values reported previously. Direct comparisons with the complex visibility data and the corresponding residuals are shown for representative days in \autoref{fig:imgres}. \autoref{fig:imgtot} shows the resulting image reconstructions for each of the four days. The top row shows the maximum-likelihood samples from each chain, the middle row shows the posterior means, and the bottom row shows the posterior standard deviations. We find that the posterior mean images show a qualitatively similar ringlike structure to image reconstructions produced using regularized maximum-likelihood (RML) methods in \citetalias{M87_PaperIV}, though we note that we have not imposed any comparable regularization (e.g., maximum entropy, total variation) on our likelihood function. The general shape, size, and total flux of the emission structure are similar across all four days, as is the pronounced north-south asymmetry in the brightness distribution. We find that the image control point raster prefers a FOV of $\sim$50\,$\muas$ in both axis directions and a modest rotation with respect to the equatorial coordinate system (see \autoref{app:model}). The bottom row of \autoref{fig:imgtot} illustrates a measure of the uncertainty in the image reconstructions, as previously demonstrated in \citet{Themaging}. We find that the uncertainty is not uniform across the image, nor does it seem to be proportional to the image intensity. Rather, the uncertainty tends to be lowest within an approximately circular region running azimuthally around the ring, and it increases both radially inward and outward of this region. 
We note that this behavior contrasts with the appearance of the image uncertainty reported in \citet{Sun_2020}; we do not explore these differences in this paper, but we expect that they arise primarily from the large differences in likelihood and prior specification between these two algorithms. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{fig2.png} \end{center} \caption{Brightness temperature maps of M87 based on the raster image model. Shown are the maximum-likelihood sample (top), average image (middle), and standard deviation with contours from the average image overlaid ranging from $2\times10^9$~K to $8\times10^9$~K in steps of $2\times10^9$~K (bottom). Each map has been smoothed by a $15~\muas$ Gaussian beam, shown in the lower left of each panel, resulting in a combined effective resolution of approximately $20~\muas$ to make these more directly comparable with the results in \citetalias{M87_PaperIV}.}\label{fig:imgtot} \end{figure*} \section{Ring Reconstructions} \label{sec:rings} \begin{deluxetable*}{lcccc} \tablecaption{Hybrid Image-ring Fit Parameters \label{tab:ringparams}} \tablehead{ \colhead{Day} & \colhead{$I_{\rm diff}~({\rm Jy})$} & \colhead{$\theta_{\rm diff}~(\muas)$} & \colhead{$I_{\rm ring}~({\rm Jy})$} & \colhead{$\theta_{\rm ring}~(\muas)$} } \startdata April 5 & $0.252^{+0.019+0.045}_{-0.017-0.044}$ & $17.0^{+1.1+3.4}_{-1.2-2.6}$ & $0.301^{+0.007+0.022}_{-0.007-0.017}$ & $21.88^{+0.13+0.36}_{-0.14-0.37}$ \\ April 6 & $0.190^{+0.014+0.037}_{-0.012-0.041}$ & $20.2^{+1.0+2.6}_{-1.3-5.2}$ & $0.276^{+0.010+0.049}_{-0.008-0.019}$ & $21.44^{+0.09+0.21}_{-0.10-0.23}$ \\ April 10 & $0.176^{+0.017+0.047}_{-0.019-0.052}$ & $20.4^{+4.2+7.0}_{-4.7-6.7}$ & $0.302^{+0.016+0.038}_{-0.021-0.054}$ & $21.89^{+0.27+0.51}_{-0.39-0.90}$ \\ April 11 & $0.193^{+0.013+0.035}_{-0.018-0.070}$ & $20.8^{+0.7+2.8}_{-0.8-6.0}$ & $0.246^{+0.020+0.058}_{-0.013-0.026}$ & $22.51^{+0.16+0.37}_{-0.17-0.47}$ \\ \enddata \tablenotetext{}{Values quoted are the median and the 50th- and 90th-percentile ranges.} \end{deluxetable*} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig3a.png} \includegraphics[width=\columnwidth]{fig3b.png} \caption{Similar to \autoref{fig:imgres} for the I$_{5\times5}$+A+X model.} \label{fig:imgreswX} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{fig4.png} \end{center} \caption{Brightness temperature maps of M87 based on the hybrid ring+image model. Shown are the maximum-likelihood sample (top), average image (middle), and standard deviation with contours from the average image overlaid ranging from $2\times10^9$~K to $8\times10^9$~K in steps of $2\times10^9$~K (bottom). Each map has been smoothed by a $15~\muas$ Gaussian beam, shown in the lower left of each panel, resulting in a combined effective resolution of approximately $20~\muas$ to make these more directly comparable with the results in \citetalias{M87_PaperIV}.}\label{fig:imgwX} \end{figure*} The image model described in the previous section is incapable of reconstructing features on scales much smaller than the raster spacing, which is comparable to the nominal array resolution of $\sim$20\,$\muas$. However, the lowest-order lensed image around the black hole -- i.e., the $n=1$ photon ring -- is expected to have a thickness of only $\sim$1\,$\muas$ \citep{Johnson_2019}. While we may not expect to be able to spatially resolve the thickness of this ring with the 2017 EHT array, its $\sim$40\,$\muas$ diameter should still imprint itself on the visibility data. 
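The reason the diameter remains accessible is that an idealized, infinitesimally thin uniform ring of diameter $d$ has visibility amplitude $|J_0(\pi q d)|$, whose null spacing in baseline length $q$ is set by $1/d$ regardless of whether the thickness itself is resolved. A short illustrative sketch follows (this toy ring is not the slashed-ring model used in the fits):
\begin{verbatim}
# Sketch: visibility amplitude of an idealized thin ring (illustrative).
import numpy as np
from scipy.special import j0

uas = np.pi / (180 * 3600 * 1e6)     # one microarcsecond in radians
d = 40 * uas                         # ring diameter ~ 40 uas
q = np.linspace(0.0, 8e9, 2000)      # baseline lengths out to 8 Glambda

V = j0(np.pi * q * d)                # thin-ring visibility, J0(pi q d)

# Nulls of J0 are spaced by ~1/d in q; their spacing fixes the diameter
# even though the ~1 uas thickness is far below the array resolution.
print("approximate null spacing: %.1f Glambda" % (1 / d / 1e9))
\end{verbatim}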
By enforcing the prior expectation that a putative $n=1$ component of the observed emission structure should originate from a thin ring (i.e., one having a thickness that is much less than its diameter), \citet{Themaging} showed that the diameter of the photon ring could be reliably recovered from EHT-like synthetic datasets generated from input images produced from GRMHD simulations. Following \citet{Themaging}, we perform hybrid imaging of the four M87*\xspace datasets, in which we fit for a ``slashed ring'' model component alongside the image and large-scale Gaussian described in the previous section. For the M87*\xspace black hole, which has a spin-axis inclination of ${\lesssim}20^\circ$ with respect to the line of sight (\citealt{Walker_2018}; \citetalias{M87_PaperV}), the photon ring is expected to have a nearly circular geometry with only small ($\lesssim$2\%; \citealt{Johnson_2019}) deviations from circularity even for large spin values. We thus model the ring as a thin circular annulus with a linear brightness gradient (``slash''), and we permit the diameter, flux, and slash magnitude and orientation to be free parameters; the fractional thickness is restricted by a tight prior that forces it to be $<$5\% of the diameter (see \autoref{app:model}). Additionally, we permit the center coordinates of the ring model component to drift with respect to the center of the image model component. We describe the model and prior distribution specification in more detail in \autoref{app:model}. Direct comparisons with the complex visibility data and the corresponding fit residuals for representative days are presented in \autoref{fig:imgreswX}, and indicate high-quality fits across the entire range of baselines probed by the EHT. As tabulated in \autoref{tab:chi2}, we find lower $\chi^2$ values when fitting a hybrid image to the data relative to fitting the image model described in \autoref{sec:themages}. For comparison with Table 5 of \citetalias{M87_PaperIV} and Table 2 of \citetalias{M87_PaperVI}, the analogous $\chi^2$ quantities range from 0.52 to 0.86, again a roughly comparable fit quality. Such improved fit quality is expected given the increased complexity of the hybrid image model, but information criteria considerations indicate that the fit improvement outweighs the additional model complexity for all but the April 10 dataset. The April 10 dataset is the sparsest of the four, containing a factor of ${\sim}$2--3 fewer data points than any of the other datasets, and the information criteria indicate that the increased complexity of the hybrid images is not statistically necessary in this case\footnote{Note that this does not preclude the possibility of more complex structure, as is preferred on other days. Rather, this only indicates that more complex structures are not required to explain the data on April 10.}. However, we note that the statistical preference for a ring component in the other three datasets---for which the coverage is more complete and the emission structure correspondingly better-constrained---implies its presence in the April 10 dataset as well. 
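As a concrete cross-check, the information criteria defined in the notes to \autoref{tab:chi2} can be reproduced directly from the tabulated quantities; a minimal sketch using the April 5 row (values copied from the table):
\begin{verbatim}
# Sketch: reproduce the Delta(BIC) and Delta(AIC) entries of the table
# from the tabulated quantities (April 5 row).
import numpy as np

def bic(chi2, k, N):
    return chi2 + k * np.log(N)

def aic(chi2, k, N):
    return chi2 + 2*k + 2*k*(k + 1) / (N - k - 1)

N = 336 + 336                  # N_data,HI + N_data,LO
k_img = 34 + 162 + 162         # I5x5+A:   N_params + N_g,HI + N_g,LO
k_hyb = 41 + 162 + 162         # I5x5+A+X

dBIC = bic(454.7, k_hyb, N) - bic(628.0, k_img, N)
dAIC = aic(454.7, k_hyb, N) - aic(628.0, k_img, N)
print("dBIC = %.1f, dAIC = %.1f" % (dBIC, dAIC))   # -127.7, -107.4
\end{verbatim}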
\begin{figure*} \begin{center} \includegraphics[width=\textwidth]{fig5.png} \end{center} \caption{Brightness temperature maps of M87 on each day with fitted slashed ring (color scale) separated from the average (top) and standard deviation (bottom) of the reconstructions of the more diffuse background image (contours) produced from $10^4$ samples drawn from the posterior. The ring color scale is linear and the ring flux map is smoothed with a circular beam with a FWHM of $0.5~\muas$. The background emission is shown in contours --- thin contours are located at $(0.25,0.5,1,2)\times10^8$~K, thick contours are linearly spaced beginning at $4\times10^8$~K in steps of $4\times10^8$~K --- and smoothed with a circular beam with a FWHM of $15~\muas$.}\label{fig:imgsep} \end{figure*} Our hybrid image reconstructions for each of the four days are shown in \autoref{fig:imgwX}. We find that after convolution with a 15\,$\muas$ Gaussian beam, the gross structural properties of the emission qualitatively match those recovered from the imaging in \autoref{sec:themages} (see \autoref{fig:imgtot}) and image reconstructions in \citetalias{M87_PaperIV}, though with an evident preference for a smoother and more uniformly circular emission structure in the hybrid images. The ability of the thin ring model component to capture aspects of the source structure frees the remaining image model component to devote its flexibility to recovering fainter features. This increased focus on fainter structures can also be seen in the behavior of the image control point raster, which shows a broader east-west extent for the hybrid image fits than for the image-only fits (see \autoref{app:model}). \autoref{fig:imgsep} shows the hybrid image reconstructions with the ring and image components separated. The slashed ring model component is shown at nearly its native resolution; it is smoothed by $0.5~\muas$ for visualization purposes only. The diffuse emission map is displayed without any additional convolution (in contrast to \autoref{fig:imgwX} and \autoref{fig:imgtot}). In this figure, the slashed ring model component is visually distinct from the much-lower-brightness diffuse emission associated with the image model component. Though the model permits both the ring and image components to drift freely with respect to one another, we find that the data prefer reconstructions in which the two components are nearly concentric. We see that the diffuse emission is primarily concentrated along the southern portion of the ring, helping to define several ``knots'' of emission; a by-eye decomposition indicates that there are approximately two such knots on April 5 and 6, with a third knot appearing at the southernmost point of the ring on April 10 and 11. Additionally, we find that all four days show a feature that is significantly detected to the southwest of the ring. This southwestern component is present in some of the reconstructions from \citetalias{M87_PaperIV} and features more prominently in \citet{Arras_2020} and \citet{Carilli_2021}, but only the latter two works provided any comment; we discuss the potential origin of and implications for this feature in \autoref{sec:SWemission}. The reconstructed flux densities in both the image and ring model components are listed in \autoref{tab:ringparams}, and we find that the ring component contains between ${\sim}54\%$ and $64\%$ of the total flux in the image. 
This range exceeds the fraction of flux contained in the narrow ringlike features in GRMHD simulations, from which we anticipate only ${\sim}$10\%-30\% of the total image flux to be contained in the narrow ring. The measured fluxes are, however, consistent with the results from \citet{Themaging} for hybrid image reconstructions of simulated data. The excess ring flux appears to be a consequence of the absorption of a portion of the surrounding direct emission into the ring component. By virtue of their sparse $(u,v)$ coverage and finite $S/N$, the EHT observations have an effective angular resolution limit of approximately ${\sim}10~\muas$ \citepalias{M87_PaperIV}, which is smaller than the nominal beam size of ${\sim}20~\muas$ but still much larger than the anticipated $n=1$ ring thickness of $\sim$1--2$~\muas$. Image structures on scales smaller than this ${\sim}10~\muas$ threshold are effectively unresolved by the array. Our hybrid image model priors confine the ring thickness to be $\lesssim$5\% of the ring radius (roughly $1~\muas$; see \autoref{app:model}), but source flux contained within an annulus of thickness ${\sim}10~\muas$ around the ring radius is structurally indistinguishable from flux residing within the ring itself. In GRMHD simulations, such an annulus contains roughly 50\%--80\% of the total source flux, consistent with the results in \autoref{tab:ringparams}. \citet{Themaging} have demonstrated that this excess flux capture does not appear to substantially bias the recovered $n=1$ ring radius. \autoref{fig:RingRadii} shows the posterior distributions for the ring component radius parameter on all four days. The radius measurements are consistent with being constant across the week; the apparent evolution from April 6 to April 11 is a modest $2\sigma$ tension after inclusion of the appropriate trials factor. Combining the ring radius measurements across all days yields $\theta_{\rm ring} = 21.74\pm0.10~\muas$. Posteriors for other ring properties (flux, position angle, thickness) can be found in \autoref{app:ancillary_posteriors}, and are similarly consistent among days. The radius of the direct emission region, $\theta_{\rm diff}$, is measured from the ring center by computing the radial location of the brightness peak in the image component. Though we note that the direct emission does not necessarily form a closed ring on any day, this definition nevertheless provides the measure of the direct emission ring size most consistent with that invoked in \citet{spin}. We find that the direct emission radius is a factor of ${\sim}10$ more uncertain than that of the ring component; we generate posteriors for $\theta_{\rm diff}$ on each day from the hybrid image reconstructions, and the resulting values are listed in \autoref{tab:ringparams}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig6.png} \end{center} \caption{Posterior distributions for the radius of the thin ring component on each day. The variance-weighted estimates from all days and their uncertainties are indicated by the black vertical lines and gray bands, respectively. 
The range of ring diameters from \citetalias{M87_PaperVI} is indicated by the orange bands.}\label{fig:RingRadii} \end{figure} \section{Physical Parameters of the Black Hole} \label{sec:bhparams} \begin{deluxetable}{lcc} \tablecaption{M87*\xspace Mass Estimates \label{tab:masses}} \tablehead{ \colhead{Method/Origin} & \colhead{$\theta_M~(\muas)$} & \colhead{$M~(10^9M_\odot)$} } \startdata \S5.1 Direct $n=1$ photon ring & $4.15 \pm 0.74$ & $7.06 \pm 1.26$\\ \S5.2 Corrected $n=\infty$ photon ring & $4.22 \pm 0.17$ & $7.18 \pm 0.29$ \\ \S5.3 Joint $M$/$a$ reconstruction & $4.20^{+0.12}_{-0.06}$ & $7.13^{+0.20}_{-0.11}$\\ \hline \citetalias{M87_PaperVI} & $ 3.8\pm0.4$ & $6.5\pm0.7$\\ \citet{Gebhardt2011} & $3.62_{-0.34}^{+0.60}$ & $6.14_{-0.62}^{+1.07}$\\ \citet{Walsh2013} & $2.05_{-0.16}^{+0.48}$ & $3.45_{-0.26}^{+0.85}$\\ \enddata \tablenotetext{}{Errors indicate $1\sigma$ statistical and systematic errors, added in quadrature. See the relevant sections for more detailed error budgets.} \end{deluxetable} Most directly constrained, and most comparable to other measurements of the mass, is the angular scale $\theta_M\equiv GM/(c^2D)$, where $D$ is the distance to M87*\xspace. This is related to the angular radius of the bright ring. However, it is subject to additional systematic uncertainties associated with the nature of this relationship, dominated by a systematic bias associated with the location of the emission region and the dependence on black hole spin. Here we consider three methods for estimating $\theta_M$ and the corresponding mass, $M = M_9\times10^9 M_\odot$. All of these identify the bright ringlike structure in the image with the $n=1$ photon ring, produced by photons that execute a half-orbit about the black hole prior to reaching the distant image plane. This is motivated both geometrically --- higher-order photon rings are suppressed exponentially \citep{Johnson_2019} --- and by astrophysical predictions ranging from semi-analytical modeling \citep{Broderick2016} to GRMHD simulations (\citetalias{M87_PaperV}; \citealt{Porth2019,Abramowicz:2013,Gammie2003}). They differ in the manner in which they attempt to systematically address the dependence on spin and the relationship to the asymptotic ($n=\infty$) photon ring, which defines the edge of the black hole shadow. In all cases where we transform from an angular measurement of the mass, $\theta_M$, to a physical measurement, we assume a distance of $D=16.8\pm0.8~{\rm Mpc}$ \citepalias[see Appendix I of][and references therein]{M87_PaperVI}. These measurements and relevant comparisons are collected in \autoref{tab:masses}. \subsection{Direct Mass Estimates} \label{sec:mass} As described in \citet{spin}, the sensitivity of the size of the $n=1$ photon ring to the location of an equatorial emission region is bounded. This is in contrast to the $n=0$ photon ring, which can grow arbitrarily large with more distant equatorial emission. As a result, with the detection of the $n=1$ photon ring, it is now possible to place a limit on the mass that is weakly dependent on the physics of the emission region, subject to the assumption that this emission is confined to near-equatorial regions, e.g., as anticipated by MAD models.\footnote{This condition may be violated by distant emission along the line of sight behind the black hole. 
In such a case, however, it would be difficult to understand why the interior of the shadow does not exhibit a bright feature from a presumably foreground partner region.} The size of the $n=1$ photon ring from equatorial emission is given by \begin{equation} \theta_{n=1} = \vartheta_{n=1}(a,r_{\rm em},i)\, \theta_M \end{equation} where $\vartheta_{n=1}(a,r_{\rm em},i)$ is a dimensionless function that ranges from $4.30$ to $6.17$ for polar observers, depending on the black hole spin, $a$, and the radius of the emission peak, $r_{\rm em}$. Therefore, from the measurement of $\theta_{n=1}$ it is possible to generate an {\em astrophysics-independent} limit on the mass of M87*\xspace: \begin{equation} \begin{aligned} \theta_M &= \vartheta_{n=1}^{-1} \theta_{n=1}\\ &= 4.15\pm 0.02 \pm \left. 0.74 \right|_{\vartheta_{n=1}} ~ \muas, \end{aligned} \end{equation} where we have separately indicated the statistical uncertainty and the systematic uncertainty in the relationship between the $n=1$ ring and the mass. The corresponding mass estimate is \begin{equation} M_9 = 7.06 \pm 0.03 \pm \left. 1.26 \right|_{\vartheta_{n=1}} \pm \left. 0.34 \right|_{D}, \end{equation} where the systematic uncertainty associated with the distance is also stated separately. \subsection{Corrected Mass Estimates} \label{sec:grmhd_mass} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig7.png} \end{center} \caption{Absolute shift in the radius of the $n=1$ photon ring (bottom) for a distant polar observer in comparison to the radius of the asymptotic photon ring ($n=\infty$) as functions of the apparent diameter of the direct image ($n=0$). The orange region indicates the $1\sigma$ range of ring sizes implied by \citetalias{M87_PaperVI}, $42\pm3~\mu$as.}\label{fig:geometric_bias_summary} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig8.png} \end{center} \caption{Absolute shift in the radius of the $n=1$ photon ring relative to the radius of the asymptotic ($n=\infty$) photon ring from the GRMHD simulation library presented in \citetalias{M87_PaperV}, where we have identified the $n=1$ ring with the bright ringlike feature seen in the GRMHD images. The distribution of all models \citepalias[including those excluded in][]{M87_PaperV} is shown in gray and those models that are not excluded by the 2017 EHT observations in black. The ranges inferred from the geometric arguments, assuming emission from only the equatorial plane and using the size constraint from \citetalias{M87_PaperVI}, are shown by the vertical ranges for $a=0$ (red), $a=0.75$ (green), and $a=0.998$ (blue).}\label{fig:GRMHD_bias_summary} \end{figure} The radius of the $n=1$ photon ring is biased relative to that of the asymptotic (i.e., $n=\infty$) photon ring associated with the boundary of the black hole shadow. The degree to which it is biased depends on the spatial distribution of the emission region \citep{Themaging,spin}. In \autoref{app:bias} we describe two attempts to estimate the degree of this bias. The first, described in detail in \autoref{app:geometric_bias}, is based solely on geometric arguments, assumes a polar observer, and is subject to the size constraints reported in \citetalias{M87_PaperVI}. The resulting limits are shown in \autoref{fig:geometric_bias_summary}; they indicate that this bias is robustly limited to less than $1.3~\muas$, and is typically significantly smaller. A potentially more relevant estimate, based on GRMHD simulations, is summarized in \autoref{fig:GRMHD_bias_summary}. 
We make use of the GRMHD simulations reported in \citetalias{M87_PaperV}. These incorporate two important additional effects: the small but nonzero inclination ($i$ ranges from $12^\circ$ to $22^\circ$) and emission from above and below the equatorial plane. Both of these tend to shrink the size of the $n=1$ photon ring, as seen in \autoref{fig:GRMHD_bias_summary}. Models that exhibit extended emission, e.g., the $R_{\rm high}=1$ simulations, can have biases that exceed $1\,\muas$. However, these are excluded by the M87*\xspace size constraints \citepalias{M87_PaperV}. When only models that are consistent with the source size and ancillary limits described in \citetalias{M87_PaperV} are considered, the size of the bias is below $0.8~\muas$; the typical shift is $\Delta\theta_{n=\infty}=0.56\pm0.32~\muas$. Like that of the $n=1$ photon ring, the angular size of the asymptotic photon ring is given by \begin{equation} \theta_{n=\infty} = \vartheta_{n=\infty}(a,i) \theta_M, \end{equation} where $\vartheta_{n=\infty}(a,i)$ is another dimensionless function, ranging from $4.90$ to $5.20$ depending on spin for the inclinations relevant for M87*\xspace. This range is considerably smaller than that for $\vartheta_{n=1}$, corresponding to a significantly reduced dependence on the details of the emission region, which is otherwise encoded in $\Delta\theta$. Thus, the uncertain spin introduces only an additional 3\% systematic uncertainty in the relationship between the asymptotic photon ring radius and the mass. Therefore, the mass of M87*\xspace in angular units is given by \begin{equation} \begin{aligned} \theta_M &= \vartheta_{n=\infty}(a,i)^{-1} \left( \theta_{n=1} - \Delta\theta_{n=\infty}\right)\\ &= 4.22 \pm 0.02 \pm \left. 0.06 \right|_{\Delta\theta} \pm \left. 0.16 \right|_{\vartheta} ~ \muas, \end{aligned} \end{equation} where again we have separately listed the random measurement error and the systematic errors associated with the emission region ($\Delta\theta$) and spin ($\vartheta$). This corresponds to a mass estimate of \begin{equation} M_9 = 7.18 \pm 0.04 \pm \left. 0.11 \right|_{\Delta\theta} \pm \left. 0.27 \right|_{\vartheta} \pm \left. 0.34 \right|_{D}. \end{equation} \subsection{Joint Mass/Spin Estimate} \label{sec:massspin} Finally, following \citet{spin}, we attempt to jointly reconstruct the size of the $n=0$ and $n=1$ photon rings, as listed in \autoref{tab:ringparams}. This leverages the additional information presented by the diffuse emission to constrain the location of the emission region. However, because there is limited evolution in the maps during the 2017 EHT observing campaign, in contrast to the simulated analyses in \citet{spin}, the constraint on spin will be weak, rendering this primarily a demonstration of principle. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig9.png} \end{center} \caption{Joint posterior on the spin and mass of M87*\xspace from the measurements of $\theta_{n=0}$ and $\theta_{n=1}$ across the four observation days in the 2017 EHT campaign. Contours indicate cumulative 50\%, 90\%, and 99\% regions. For comparison, the one-dimensional marginalized posteriors on $M$ and $a$ are shown by the dashed lines for simulated ring sizes generated assuming a single emission radius and Gaussian errors with the sizes quoted in \autoref{tab:ringparams}.}\label{fig:Matri} \end{figure} Details for the model, likelihood, and method of sampling are collected in \autoref{app:massspin}. 
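Before turning to the joint posterior, the arithmetic behind the angular-to-physical conversions of \autoref{sec:mass} and \autoref{sec:grmhd_mass} can be sketched numerically. The following back-of-the-envelope script uses the combined ring radius and the quoted range of $\vartheta_{n=1}$; it employs a crude midpoint-of-range treatment rather than the full error propagation, so it only approximately reproduces the quoted values:
\begin{verbatim}
# Back-of-the-envelope sketch of the angular-to-physical conversion.
import numpy as np

theta_ring = 21.74e-6 / 206265.0       # combined n=1 radius, in radians
vth_lo, vth_hi = 4.30, 6.17            # quoted range of vartheta_{n=1}

# Crude midpoint-of-range treatment of the systematic uncertainty
theta_M = 0.5 * (theta_ring/vth_lo + theta_ring/vth_hi)
theta_M_sys = 0.5 * (theta_ring/vth_lo - theta_ring/vth_hi)

# M = theta_M * D * c^2 / G, with D = 16.8 Mpc (SI units throughout)
G, c, Msun, Mpc = 6.674e-11, 2.998e8, 1.989e30, 3.086e22
D = 16.8 * Mpc
M9 = theta_M * D * c**2 / G / Msun / 1e9

to_uas = 206265.0 * 1e6
print("theta_M ~ %.2f +/- %.2f uas" % (theta_M*to_uas, theta_M_sys*to_uas))
print("M ~ %.1f x 10^9 Msun" % M9)
# Midpoint gives ~4.3 uas and ~7.3e9 Msun, consistent with the quoted
# 4.15 +/- 0.74 uas and 7.06 +/- 1.26 given the crude treatment above.
\end{verbatim}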
The joint posterior on $\theta_M$ and $a$ resulting from this analysis is shown in \autoref{fig:Matri}. Marginalizing over spin, we find for the mass \begin{equation} \theta_M = 4.20^{+0.12+0.23}_{-0.06-0.10}~\muas, \end{equation} and \begin{equation} M_9 = 7.13^{+0.20+0.39}_{-0.11-0.17} \pm \left. 0.34 \right|_{D}, \end{equation} where the indicated errors correspond to 1$\sigma$ and 2$\sigma$. Note that because the spin is simultaneously reconstructed, these include what have previously been identified as the systematic uncertainties associated with the $n=1$ photon ring size bias and spin. After marginalizing over mass, the spin estimate is \begin{equation} a = 1.0^{+0+0}_{-0.5-1}, \end{equation} where again the indicated errors correspond to 1$\sigma$ and 2$\sigma$. As anticipated, the spin is effectively unconstrained, although there appears to be a very weak preference for high spin. To assess its significance, we repeated the analysis with synthetic size measurements constructed from the equatorial emission model for two sets of $(\theta_M,a)$. The first was set to the average values from the joint posteriors, $(\theta_M,a)=(4.32~\muas,0.63)$. The second was set to $(\theta_M,a)=(4.2~\muas,0.0)$, exploring the posterior associated with a true value of zero spin. These are shown in \autoref{fig:Matri} by the dashed red and green lines, respectively, in the one-dimensional, marginalized posteriors for $\theta_M$ and $a$. Both exhibit the same posterior excess near $a=1$, implying that it is not significant. \subsection{Synthesis and Discussion} The error budgets of previous EHT estimates of $\theta_M$ for M87*\xspace are dominated by systematic uncertainties associated with the astrophysics of the emission region (\citetalias{M87_PaperVI}; \citealt{Gralla_2019}). However, the detection of the bright ring reduces a number of these uncertainties, both qualitatively and quantitatively. The mass estimate presented in \autoref{sec:mass}, in which the bright ring is identified with the $n=1$ photon ring, is independent of even pathological near-equatorial emission distributions. As a result, the direct detection of the bright ring has effectively produced a mass estimate in which the impacts of astrophysical uncertainties are strictly bounded. While the systematic uncertainty associated with this detection is nearly twice that reported in \citetalias{M87_PaperVI}, it is no longer dependent on the astrophysical calibration procedure used there, eliminating a key astrophysical uncertainty. The combined image reconstructions confirm that the image morphology on each day is similar to those produced by GRMHD simulations, consisting of a bright ring and a diffuse, more variable surrounding emission structure. This provides a strong conceptual foundation for the calibrated mass estimates using GRMHD simulations presented in \citetalias{M87_PaperVI} and revised in \autoref{sec:grmhd_mass}. The inclusion of a prior expectation on the size of the emission region, inherent in the GRMHD simulations, results in significant reductions in the systematic uncertainties. As a result, the effective systematic uncertainty is reduced to roughly half of that in \citetalias{M87_PaperVI}. Finally, the diffuse emission provides a direct, astrophysics-independent estimate of the emission region location. 
Joint modeling of the bright ring and the diffuse component as the $n=1$ and $n=0$ photon rings, respectively, generates an astrophysics-independent estimate of the mass that incorporates the remaining systematic uncertainties directly as statistical errors. The half-range is approximately a quarter of that quoted in \citetalias{M87_PaperVI}. This implies that the position of the $n=0$ photon ring provides a stronger constraint on the location of the emission region than the prior inferred from the GRMHD simulations of \citetalias{M87_PaperV}. All of the mass estimates presented here are consistent with one another within their respective systematic errors. They are also consistent with the combined mass estimates presented in \citetalias{M87_PaperVI}, though they lie at the high end of the mass range listed there. This suggests that those GRMHD simulations with more compact emission regions are more consistent with the diffuse emission maps reconstructed here. These mass estimates are also consistent with those arising at scales of $10^2$~pc from the dynamics of stars \citep{Gebhardt2011}. They remain inconsistent, however, with the gas-dynamical mass estimate reported in \citet{Walsh2013}. The significance of this discrepancy has now grown to more than $4\sigma$. This inconsistency may be ameliorated by adjustments in the underlying gas disk model \citep{Jeter2019,Jeter2020b}. \section{Origin and Evolution of the Diffuse Component} \label{sec:discussion} The detection of a bright ringlike feature, and its separation from the diffuse component, has a number of immediate implications for the origin of the emission and the properties of the central black hole. As shown explicitly in \citet{Themaging}, the morphology of the diffuse emission is accurately recovered despite the absorption of excess flux into the thin ring component. Thus, here we discuss the structure of the diffuse component in the broader context of the environment and properties of M87*\xspace. \subsection{Evolution of M87 from 2017 April 5-11} There are clear signatures of evolution across the EHT observation campaign. The diffuse emission maps on neighboring days are similar. This is consistent with expectations based on the dynamical timescales: $GM/c^3\approx9$~hr for M87*\xspace, and GRMHD simulations indicate little evolution on timescales shorter than $10\,GM/c^3$ \citep{Porth2019}. In contrast, significant evolution in the diffuse emission occurs between the first two days (April 5, 6) and the last two days (April 10, 11). This evolution seems to manifest primarily as the addition in the later two days of a distinct component to the southern region of the diffuse ring. The absence of an intervening observation leaves the origin of this southern component unclear; it could be a new component appearing, or it could be a growing extension of the western component. The latter interpretation would be consistent with the clockwise rotation of the black hole inferred from the orientation of the diffuse ring \citepalias{M87_PaperIV} and possibly with outflowing features within the jet \citep{Jeter2020}. Note that the sense of this rotation is opposite to the predominantly counter-clockwise evolution identified in the total emission map, exhibited in \autoref{fig:imgtot} and \autoref{fig:imgwX}, as well as other analyses (\citetalias{M87_PaperIV}; \citealt{Arras_2020}; \citealt{Carilli_2021}). Equally important, if not more so, is what does {\em not} evolve. 
No significant changes in the angular size of the narrow ring component are detected. This is consistent with the interpretation of this feature as primarily gravitational, and thus dependent only weakly on the details of the otherwise evolving emission region. \subsection{Extended Southwestern Emission} \label{sec:SWemission} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{fig10.png} \end{center} \caption{Comparison of the stacked image of the diffuse emission produced by the variance-weighted mean (upper left) and its standard deviation (lower left) with the stacked GMVA map at 3~mm from \citet{M873mm} (right). In the GMVA map, the extended jet is clearly visible, and the contours are located at $(-1,1,1.414,2,\dots)\times0.47~{\rm mJy~beam^{-1}}$; the contours of the diffuse emission map are the same as in \autoref{fig:imgsep}. The fits to the ridgelines of the limb-brightened emission from \citet{M873mm} are shown by white solid (top) and dashed (bottom) lines in each panel, with the origin shown by the open green circle. A ring with the mean radius of $22.22~\muas$ is indicated by the dashed red line. Boundaries of neighboring panels are indicated in green dashed lines. Shown by green circles are the measured core shifts at 2.3~GHz (rightmost), 5~GHz, 8.4~GHz, 15.2~GHz, 23.8~GHz and 43.2~GHz (leftmost closed) from \citet{Hada2011} referenced to the anticipated location of the 3~mm core (right open). The expected location of the 1.3~mm core (left open) matches the peak of the diffuse emission.}\label{fig:EHTGMVA} \end{figure*} In all of the diffuse emission maps shown in \autoref{fig:imgsep}, an extension to the southwest is visible. The ability to produce a statistically meaningful image posterior enables the assignment of a significance to these features, which ranges from $4\sigma$ to $12\sigma$ on the individual days. On three of the four days there is a matching northwestern extension, though this is less significant ($1\sigma$-$2\sigma$). While such features appear similar to the dirty beams seen in some M87*\xspace images produced by other algorithms \citepalias[see Figures~7 and 8 of][]{M87_PaperIV}, none have been seen at statistically significant levels (as characterized by the image posteriors) in the various simulated data tests performed with the Bayesian scheme employed here \citep[see, e.g., Figure~4 of][]{Themaging}. Similar claims of a statistically significant detection have been made by other groups \citep{Arras_2020,Carilli_2021}. It is suggestive that the orientation of these diffuse extensions aligns with the limb-brightened jet seen at 3~mm, shown in \autoref{fig:EHTGMVA}. We align the center of light of the mean 1.3~mm maps, averaged over the four observation days in 2017, with the centroid of the core component of the 3~mm maps. The ridgeline fits from \citet{M873mm} are also shown in \autoref{fig:EHTGMVA}, assuming a jet position angle of $69^\circ$ east of north and a width, $W\propto z^{0.498}$, where $z$ is the projected core distance. Additional core shifts along the jet of $25~\muas$ and transverse to the jet of $-10~\muas$ are applied. Extrapolating the core shift power law determined from longer wavelengths by \citet{Hada2011} places the anticipated 1.3~mm core on top of the brightest diffuse component.\footnote{The positions of the 1.3~mm and 3~mm cores are strongly correlated in the fits reported in \citet{Hada2011}. 
Thus, the {\em relative} positions of the 1.3~mm and 3~mm cores are much better constrained than the absolute position of either in relation to the 7~mm core. We estimate the uncertainty in the location of the 1.3~mm core via Monte Carlo sampling of the power-law fit parameters reported in \citet{Hada2011}, assuming independent Gaussian errors in the fit parameters.} With these shifts, the ridgelines connect to the southwestern and northwestern extensions. We caution that the apparent structure of the jet base on horizon scales may depart substantially from the power-law behavior at large scales due to the combined effects of inclination, gravitational lensing at small radii, inhomogeneous evolution in the optical depth across the image, and relativistic motion \citep[see, e.g.,][]{Broderick2009, Moscibrodzka2016, Chael19, Davelaar19}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig11.png} \end{center} \caption{Bottom: flux profiles along the top and bottom jet ridgelines shown in \autoref{fig:EHTGMVA} for the stacked 1.3~mm and 3~mm emission maps. The latter have been smoothed over a scale of $100~\muas$ to remove the clear beam features. The shaded regions indicate a combination of the intrinsic 1$\sigma$ errors and that associated with nearby ridgelines. Top: implied brightness ratio between the two ridgelines. A brightness ratio of unity is shown by the green dotted line.}\label{fig:beaming} \end{figure} The dominance of the southwestern extension, in comparison to the more marginal northwestern extension, is consistent with a rapidly rotating jet structure near the black hole, aligned with the black hole spin. Brightness temperature profiles are shown in \autoref{fig:beaming} along the corresponding paths shown in \autoref{fig:EHTGMVA}. The brightness ratio between these two components reaches values as low as ${\mathcal R}\approx0.2$ at similar projected distances from the black hole. The uncertainty of these profiles is a combination of the intrinsic uncertainty in the reconstructions and of the ridgelines themselves; we show the uncertainty associated with a Gaussian error in the position of the parabola apex (the rightmost open green point in \autoref{fig:EHTGMVA}) of $2~\muas$ for the EHT profiles and $10~\muas$ for the 3~mm profiles. In \autoref{fig:beaming} the difference in the brightness temperature profiles is naturally explained by black hole spin-driven jets like those first described in \citet{BZ77}. Relativistic rotational motions in the emitting plasma near the black hole, with velocities that could reach as high as $c/2$, are a consequence of the twisted magnetic field lines that penetrate the horizon and extract the rotational energy of the black hole. These motions are seen explicitly in GRMHD simulations \citep[see, e.g.,][]{Wong2021}. At larger distances, this rotation is suppressed due to angular momentum conservation, at which point the jet plasma motions become predominantly poloidal. As a result, even modest increases in the jet height result in drastic reductions in the beaming-induced brightness asymmetry, naturally explaining its absence at 3~mm and longer wavelengths. In both \autoref{fig:EHTGMVA} and \citetalias{M87_PaperV} the implied spin of the black hole is oriented into the sky, i.e., away from Earth; the black hole rotates clockwise on the sky, dragging the emitting plasma with it. As such, this provides a striking, direct confirmation of black hole spin as the driver of the jet in M87*\xspace. 
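A toy estimate illustrates how rotational beaming can produce an asymmetry of this magnitude. For plasma rotating with speed $\beta c$ about an axis inclined by $i$ to the line of sight, the approaching and receding limbs are boosted by Doppler factors $\delta_\pm = 1/[\Gamma(1 \mp \beta\sin i)]$, and for optically thin emission with spectral index $\alpha$ the brightness ratio scales as $(\delta_-/\delta_+)^{3+\alpha}$. The parameter values in the sketch below are purely illustrative assumptions, not fitted quantities:
\begin{verbatim}
# Toy estimate of the rotational-beaming brightness ratio (illustrative).
import numpy as np

def brightness_ratio(beta, inc_deg, alpha=1.0):
    # Receding/approaching Doppler factors for line-of-sight speed
    # beta*sin(i); intensity ratio for I_nu ~ nu^-alpha.
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    b_los = beta * np.sin(np.radians(inc_deg))
    d_app = 1.0 / (gamma * (1.0 - b_los))
    d_rec = 1.0 / (gamma * (1.0 + b_los))
    return (d_rec / d_app) ** (3.0 + alpha)

# Rotation near c/2 viewed at i ~ 17 deg gives a ratio of ~0.3,
# comparable in magnitude to the observed R ~ 0.2.
print("R ~ %.2f" % brightness_ratio(0.5, 17.0))
\end{verbatim}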
In practice, the angular scale over which the limbs become symmetric depends on the detailed jet structure, and thus on black hole spin \citep{Takahashi_etal:2018}; however, we leave further astrophysical interpretation to future work. \section{Conclusions} \label{sec:conclusions} We have applied the hybrid imaging algorithm outlined in \citet{Themaging} to the 2017 EHT observations of M87*\xspace. This method is a Bayesian imaging and modeling scheme that reconstructs the brightness map from visibility data, accounting for station gains and atmospheric phase delays, and produces statistically meaningful posteriors for both components. We considered both imaging alone and imaging with a narrow ring, finding that information criteria indicate a preference for the latter on three of the four observation days. We demonstrate that the EHT observations of the horizon scale emission of M87*\xspace support the presence of a narrow ringlike feature. Its radius is consistent across the 2017 EHT observing campaign (2017 April 5-11). The size and structure of this ring is consistent with the prominent lensed structure anticipated in horizon-resolving images of M87*\xspace. We associate the ring emission with the strong gravitational lensing that produces the $n=1$ photon ring; this decomposition thus represents the first direct detection of the ``back of the emission region.'' It also provides an important confirmation of the key role played by strong lensing in the formation of the images of M87*\xspace presented in \citetalias{M87_PaperI}. Evolving, extended diffuse emission is clearly present in addition to the bright, narrow ring. The extended image is consistent among neighboring days, as anticipated by the long dynamical timescales in M87*\xspace. However, it differs from the beginning of the week to the end of the week. This may be due either to the appearance of a new southern component or to the shearing of a western component southeastward. The latter is consistent with the direction of motion expected from the black hole spin orientation presented in \citetalias{M87_PaperV}. The diffuse emission is dominated by compact components that surround the bright ring, similar to the morphology in many GRMHD simulation images. The observed structure is more compact than in many such images, suggesting that additional constraints on the library in \citetalias{M87_PaperV} can be made based on the compactness of this portion of the diffuse emission. Extended components within the diffuse emission are also detected at statistically significant levels. These include a southwestern extension, detected at between $4\sigma$ and $8\sigma$ across all days, and a northwestern extension that is only marginally detected on two days (April 5 and 11). Both of these components are reminiscent of the larger-scale limb-brightened features seen at 3~mm; their orientations match those extrapolated from longer wavelengths. These may be tentatively identified with the rapidly rotating jet footprint. The difference in the luminosity of the southern and northern components would then be naturally explained by relativistic jet rotation at the jet base. Both of these would support the conclusion that the jet in M87*\xspace\ is driven by the black hole rotation, as described in \citet{BZ77}. The size of the bright ring and its relation to the diffuse emission presents a number of ways to estimate the black hole mass-to-distance ratio with varying degrees of astrophysical inputs. 
All of these estimates are consistent, with $\theta_M=4.15\pm0.74~\muas$ independent of the extent of the emission region. Our most precise estimate arises from simultaneously reconstructing the ring and diffuse emission scales, and yields $\theta_M=4.20^{+0.12}_{-0.06}~\muas$, which improves on the fractional uncertainty presented in \citetalias{M87_PaperVI} by a factor of more than four. The resulting mass estimate, after folding in a distance of $16.8~{\rm Mpc}$, is $7.13^{+0.20}_{-0.11}\times10^9~M_\odot$. The uncertainty in the mass estimate is now dominated by the systematic uncertainty in the distance of $0.35\times10^9~M_\odot$. These mass-to-distance ratio estimates are consistent with those from the variety of methods presented in \citetalias{M87_PaperVI} and from stellar dynamics \citep{Gebhardt2011}. Because the latter is estimated from the dynamics of what are effectively test particles at distances four orders of magnitude larger than the photon orbit, the comparison of these mass measurements provides a direct test of general relativity, as described in \citetalias{M87_PaperVI}. In practice, this test remains limited by the uncertainty in the stellar dynamics measurement. Nevertheless, the mass estimates presented here lie at the high end of the ranges presented in \citetalias{M87_PaperVI}. This suggests that the set of GRMHD simulations used to calibrate the mass estimates in \citetalias{M87_PaperVI} were more extended than the observed emission, biasing the calibration factor $\alpha$ toward high values and thus the mass toward low values. This is only partly ameliorated by selecting only MAD models in the calibration process. In comparison, no such calibration is required for two of the three mass estimates made here; in the one instance where calibration from simulations is performed, the measured systematic modification is small due to the much more robust size of the bright rings in simulated images. Additional epochs of horizon-resolving observations will prove particularly useful in confirming the existence and nature of the bright ring. If the ring structure detected here is, in fact, the $n=1$ photon ring, it should persist in future observations; only small variations in its location are anticipated, even for (potentially large) variations in the location of the emission region. The evolution of the diffuse emission map will also be diagnostic of the location and origin of the emission in M87*\xspace. Similar-quality observations that extend over observation epochs as short as two weeks will permit the conclusive differentiation between orbiting and outflowing features \citep{Jeter2020}. The ability to resolve dim, variable structures thus motivates longer-duration observing campaigns. The constraints on black hole spin obtained here are inconclusive. However, the ability to constrain $(\theta_M,a)$ to a band in the mass-spin parameter space indicates that even a single, fortuitous future EHT observation of M87*\xspace may provide a measurement of black hole spin from gravitational lensing alone \citep{spin}. The strength of potential spin constraints depends on the degree to which the emission region location differs during future observations. Multiple additional measurements provide a lensing-only test of general relativity \citep{spin}. These provide a strong motivation for including M87*\xspace in future EHT campaigns and those of subsequent instruments. 
The prospects for spin constraints also suggest that future space-based millimeter very-long-baseline-interferometry experiments may be able to detect the next-order lensed image, i.e., that associated with the $n=2$ ring, via a similar method to that presented here \citep{spin}. The astrophysics-independent mass measurements become substantially better constrained in this instance, and immediate tests of general relativity become possible. Finally, this provides a direct demonstration of the ability to leverage high $S/N$ data to estimate image features with precisions that significantly exceed those implied by the ostensible observing beam. This effective super-resolution suggests that a similar effort applied to more distant sources may yield practical mass estimates even in the absence of resolved ring structures in images. \mbox{}\\ \indent This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET:www.sharcnet.ca) and Compute/Calcul Canada (www.computecanada.ca). Computations were made on the supercomputer Mammouth Parall\`ele 2 from the University of Sherbrooke, managed by Calcul Qu\'ebec and Compute Canada. The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), the minist\`ere de l'\'Economie, de la science et de l'innovation du Qu\'ebec (MESI) and the Fonds de recherche du Qu\'ebec - Nature et technologies (FRQ-NT). This work was supported in part by the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. A.E.B. thanks the Delaney Family for their generous financial support via the Delaney Family John A. Wheeler Chair at Perimeter Institute. A.E.B. and P.T. receive additional financial support from the Natural Sciences and Engineering Research Council of Canada through a Discovery grant. R.G.\ receives additional support from the ERC synergy grant “BlackHoleCam: Imaging the Event Horizon of Black Holes” (grant No.\ 610058). D.W.P. is supported by the NSF through grant Nos.\ AST-1952099, AST-1935980, AST-1828513, and AST-1440254; by the Gordon and Betty Moore Foundation through grant No.\ GBMF-5278; and in part by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation to Harvard University. Furthermore, we thank Ivar Coulson for useful contributions during this project. H.-Y.P.\ acknowledges the support of the Ministry of Education (MoE) Yushan Young Scholar Program, the Ministry of Science and Technology (MOST) under the grant No. 110-2112-M-003-007-MY2, and National Taiwan Normal University. I.M.V.\ acknowledges support from Research Project PID2019-108995GB-C22 of Ministerio de Ciencia e Innovaci\'on (Spain) and from the GenT Project CIDEGENT/2018/021 of Generalitat Valenciana (Spain).
\section{Introduction} The study of the implications of quantum mechanics applied to multipartite correlations has brought many apparent paradoxes \cite{Schrodinger35,Einstein35} and, later, many useful applications in information processing \cite{Nielsen2000}, communication \cite{Bennett93}, cryptography \cite{Ekert91} and metrology \cite{Lloyd06}. One of the simplest systems able to exhibit many of the complexities and beautiful subtleties of quantum mechanics is a pair of two-level systems (or qubits). As simple as it may look, many important questions regarding this bipartite system have not been, or cannot be, answered \cite{Girolami11}. While the situation for the description of the correlations of pure quantum states is reasonably well understood, extending the treatment to encompass mixed states, in which quantum and classical correlations both play roles, has led to a veritable menagerie of metrics intended to gauge the ``quantumness'' of a state. In order to clarify things, in this paper we aim to provide a compendium of the most relevant properties of a certain type of two-qubit states known as $X$ states. These states are ubiquitous in the literature, as they generalize many important classes of mixed quantum states, such as maximally entangled states (like the singlet state or the Bell states), partially entangled and quantum-correlated states (like the Werner states), the maximally entangled mixed states \cite{Munro01}, as well as non-entangled, non-quantum-correlated states. The states that concern us here take their name from the form of their density matrix: \begin{eqnarray}\label{def:matrix} \rho_X = \left( \begin{array}{cccc} a & 0 & 0 & w \\ 0 & b & z & 0 \\ 0 & z^* & c & 0 \\ w^* & 0 & 0 & d \end{array} \right), \end{eqnarray} written in the basis $\ket{00}, \ket{01},\ket{10}, \ket{11}$ ($\ket{\alpha \beta} \equiv \ket{\alpha}_A \otimes \ket{\beta}_B$). They have been generated in different physical systems. Two such systems are the polarizations of a pair of photons generated by a non-linear crystal \cite{Kwiat95} and the electronic levels of a pair of cold ions in a trap \cite{Monroe95}. In both cases pure states are generated. Nevertheless, one can imagine that through a desired process \cite{Altepeter03} (such as having the photons pass through a decoherer) or an undesired one \cite{Roos06} (like stray magnetic fields that randomly shift the local energy levels of the cold ions) only certain coherences between basis elements are preserved. \\ For instance, it might happen that a certain pure two-qubit state vector $\ket{\psi}=\alpha \ket{00}+\beta \ket{01}+\gamma \ket{10}+\delta \ket{11}$ is prepared. These qubits can be encoded in the levels of a pair of trapped cold ions. In this case, the levels will be subject to the Zeeman effect induced by stray magnetic fields and will suffer a shift \begin{equation} \Delta E = \mu_B g_J m_J B(t) \end{equation} where $\mu_B$ is the Bohr magneton, $g_J$ is the gyromagnetic factor, $m_J$ is the magnetic quantum number of the level and $B(t)$ is the fluctuating magnetic field. 
Under these circumstances the relative phase between the excited ($\ket{1}$) and ground ($\ket{0}$) states will be given by: \begin{equation} \phi(t)=\frac{\mu_B}{\hbar} \left\{ g_J^{(1)} m_J^{(1)}-g_J^{(0)} m_J^{(0)} \right\} \int_0^t B(t')\, dt' \end{equation} Typically the fluctuations of the magnetic field will occur on a time scale that is too short to be resolved by any measuring apparatus, and thus one will have to resort to a time-averaged description \cite{Omar11}. The fluctuations in the phase will induce a decay in certain coherences of the time-averaged density matrix. The evolution of the state vector will be as follows: $\ket{\psi(t)}=\alpha \ket{00}+\beta e^{i \phi(t)} \ket{01}+ e^{i \phi(t)}\gamma \ket{10}+\delta e^{i 2\phi(t)} \ket{11}$. However, this holds for a single realization, and one needs to consider the average density operator, which is given by: \begin{eqnarray*} \bar \rho = \left( \begin{array}{cccc} |\alpha|^2 & \overline{ e^{-i \phi(t) }} \alpha \beta ^* & \overline{ e^{-i \phi(t) }} \alpha \gamma ^* & \overline{ e^{-2 i \phi(t) }} \alpha \delta ^* \\ \overline{ e^{i \phi(t) }} \beta \alpha ^* & |\beta| ^2 & \beta \gamma ^* & \overline{ e^{-i \phi(t) }} \beta \delta ^* \\ \overline{ e^{i \phi(t) }} \gamma \alpha ^* & \gamma \beta ^* & |\gamma| ^2 & \overline{ e^{-i \phi(t) }} \gamma \delta ^* \\ \overline{ e^{2 i \phi(t) }} \delta \alpha ^* &\overline{ e^{i \phi(t) } }\delta \beta ^* & \overline{ e^{i \phi(t) }} \delta \gamma ^* & |\delta| ^2 \end{array} \right), \end{eqnarray*} where the overbar denotes an ensemble average (and the ergodic hypothesis has been invoked to switch from a time to an ensemble average). It is reasonable to assume that $\phi$ is normally distributed and thus $\overline{ e^{n i \phi(t) }}=e^{i n \overline{\phi(t)}-n^2 \overline{\phi(t)^2}/2}$. The magnetic field $B(t)$ can be modelled as white noise with zero mean, which implies that $\overline{\phi(t)}=0$ and $\overline{\phi(t)^2}=t/T_2$, where $T_2$ characterizes the variance of the random process $B(t)$. Under this assumption it is clear that all coherences except those proportional to $\beta \gamma^*$ (and its conjugate) approach zero for $t \gg T_2$, and the resultant state is an $X$ state, with $a=|\alpha|^2$, $b=|\beta|^2$, $c=|\gamma|^2$, $d=|\delta|^2$, $z=\beta \gamma ^*$ and $w=0$. The above example illustrates one of many possible ways in which an initially arbitrary pure state decoheres into an $X$ state. In the following sections we investigate general properties of these states. First, in section (\ref{def}), we introduce and relate two convenient parameterizations for $X$ states. In section (\ref{corr}) we calculate several important measures of quantum correlations and comment on the type of $X$ states that maximize and minimize such correlations. We also include a measure of classical correlations for these states. Finally, in section (\ref{dyna}) we study the types of dynamics that generate and preserve the shape of $X$ states. \section{Definitions and Parameterizations}\label{def} For a pair of two-level systems, $A$ and $B$, one can define local bases with states that we label $\ket{0}_A, \ket{1}_A$ for system $A$ and $\ket{0}_B, \ket{1}_B$ for system $B$. By taking the tensor product of the basis elements one can construct a basis for the two-qubit Hilbert space. 
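As an aside, the convergence of this dephasing process to an $X$ state is easy to see numerically. The following Python sketch (ours, purely illustrative; the amplitudes and the noise strength $t/T_2$ are arbitrary choices) averages the density matrix over Gaussian random phases:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Amplitudes of |psi> = alpha|00> + beta|01> + gamma|10> + delta|11>
# (normalized), and the dephasing strength t/T2 (both arbitrary here).
alpha, beta, gamma, delta = 0.5, 0.5, 0.5, 0.5
t_over_T2 = 5.0

# Ensemble average over Gaussian phases phi ~ N(0, t/T2), mimicking
# the magnetic-field average described in the text.
n_samples = 20000
rho_avg = np.zeros((4, 4), dtype=complex)
for _ in range(n_samples):
    phi = rng.normal(0.0, np.sqrt(t_over_T2))
    psi = np.array([alpha,
                    beta * np.exp(1j * phi),
                    gamma * np.exp(1j * phi),
                    delta * np.exp(2j * phi)])
    rho_avg += np.outer(psi, psi.conj())
rho_avg /= n_samples

# Only the |01><10| coherence (beta * gamma^*) and its conjugate keep
# their full magnitude; the others decay as exp(-n^2 t / (2 T2)).
print(np.round(np.abs(rho_avg), 3))
\end{verbatim}

\noindent With $t/T_2=5$ the singly dephased coherences are suppressed by $e^{-5/2}\approx 0.08$ and the doubly dephased one by $e^{-10}$, while the $\ket{01}\bra{10}$ element retains its full value, as expected.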
In the product basis constructed above we write a general $X$ state of the system as: \begin{eqnarray}\label{def:braket} \rho_X& =& a \ket{00} \bra{00}+b \ket{01} \bra{01} + c \ket{10} \bra{10} +d \ket{11} \bra{11} \nonumber \\ && +z \ket{01} \bra{10}+z^* \ket{10}\bra{01}+w \ket{00}\bra{11}+w^* \ket{11}\bra{00}. \end{eqnarray} In the ordered basis $\{\ket{00},\ket{01},\ket{10},\ket{11} \}$ the operator $ \rho_X$ takes the matrix form shown in (\ref{def:matrix}). Normalized density operators are positive semidefinite and have unit trace, which for the above parameterization implies the following constraints: \begin{eqnarray}\label{constraints} a+b+c+d=1, \nonumber \\ a,b,c,d \geq 0, \nonumber \\ |z| \leq \sqrt{b c}, \quad \mbox{and} \quad |w| \leq \sqrt{a d}. \end{eqnarray} It is always possible to apply a pair of local unitary transformations to make all the coefficients in the definitions (\ref{def:braket}) and (\ref{def:matrix}) non-negative. For instance, the phases of $z$ and $w$ can be absorbed into $\ket{0}_A$ and $\ket{0}_B$, \emph{i.e.}, one redefines $e^{i \arg(z)} \ket{0}_A$ as $\ket{0}_A$ and $e^{i \arg(w)} \ket{0}_B$ as $\ket{0}_B$. Because the correlations of a system do not change when local unitaries are applied to the subsystems, such correlations will depend only on the absolute values of the coherences $z$ and $w$, and not on their phases. From now on we assume that such local unitary transformations have been applied and thus all the coefficients in (\ref{def:matrix}) are real and non-negative. An equivalent and rather useful way of writing the operator (\ref{def:braket}) is by using the Fano parameterization \cite{Fano57}. To this end, one defines the usual one-qubit Pauli operators, \begin{eqnarray} \sigma_1= \sigma_x=\ket{0}\bra{1}+\ket{1}\bra{0} ;&\quad& \sigma_2= \sigma_y=i \left(\ket{1}\bra{0}-\ket{0}\bra{1}\right); \nonumber\\ \sigma_3= \sigma_z=\ket{0}\bra{0}-\ket{1}\bra{1}; &\quad& \sigma_0= \mathbb{I}=\ket{0}\bra{0}+\ket{1}\bra{1}. \nonumber \end{eqnarray} With these definitions one can write: \begin{eqnarray}\label{fano} \rho_X=\frac{1}{4}\left\{ \mathbb{I} \otimes \mathbb{I}+A_3 \sigma_3 \otimes \mathbb{I} + B_3 \mathbb{I} \otimes \sigma_3+ \sum_{i=1}^3 C_i \sigma_i \otimes \sigma_i \right \} . \nonumber \end{eqnarray} The two parameterizations can be related as follows: \begin{equation} \begin{array}{rclrclrcl} A_3&=&(a+b)-(c+d) \quad & B_3&=&(a+c)-(b+d) \quad & \\ C_1&=&2 (z+w) & C_2&=&2(z-w) & C_3&=&(a+d)-(b+c). \end{array} \end{equation} Notice also that from the Fano parameterization the reduced density matrices can be obtained straightforwardly: \begin{eqnarray} \label{marginals} \rho_A=\text{tr}_B\left( \rho_X\right)=\frac{1}{2}\left\{ \mathbb{I}+A_3 \sigma_3 \right\}, \quad \rho_B=\text{tr}_A\left( \rho_X\right)=\frac{1}{2}\left\{ \mathbb{I}+B_3 \sigma_3 \right\}. 
\end{eqnarray} Both density matrices are diagonal in the $\ket{0},\ket{1}$ basis.\\ Finally, notice that $X$ states contain as particular instances many important states, like the four maximally entangled Bell states, \begin{eqnarray}\label{bells} \ket{\phi_0}=\frac{1}{\sqrt{2}}\left(\ket{00}+\ket{11} \right), \quad \ket{\phi_i}=( \mathbb{I} \otimes \sigma_i) \ket{\phi_0} \end{eqnarray} the maximally mixed state, mixtures of maximally entangled and maximally mixed states like the Werner states \begin{eqnarray} \rho_W=(1-\epsilon) \frac{\mathbb{I}}{4}+\epsilon \ket{\phi_i} \bra{\phi_i}, \label{Wern} \end{eqnarray} and all the states that, for a given value of their mixedness (or purity), maximize their entanglement \cite{Munro01}. \section{Quantum and Classical Correlations}\label{corr} In this section we provide expressions for several quantum correlations for two-qubit $X$ states. We also include an expression for the classical correlations of these states. Before calculating any relevant measure of correlations between the two parties in the bipartite system we write its state in two canonical forms. The first one is the eigenvalue decomposition $ \rho=\sum_i \lambda _i \ket{\lambda_i} \bra{\lambda_i}$ with: \begin{eqnarray}\label{ev} \lambda_{1/2}&=&u_+ \pm \sqrt{u_-^2+w^2} \\ \lambda_{3/4}&=&r_+ \pm \sqrt{r_-^2+z^2} \nonumber\\ \ket{\lambda_{1/2}} &=& \frac{1}{N_{1/2}}\left(\left\{u_- \pm \sqrt{u_-^2+w^2} \right\}\ket{00}+w\ket{11} \right) \nonumber\\ \ket{\lambda_{3/4}} &=& \frac{1}{M_{1/2}}\left(\left\{r_- \pm \sqrt{r_-^2+z^2} \right\}\ket{01}+z\ket{10} \right) \nonumber \end{eqnarray} where $u_{\pm}=(a\pm d)/2$, $r_{\pm}=(b\pm c)/2$ and $N_{1/2}$ and $M_{1/2}$ are normalization constants. Notice that (\ref{constraints}) ensures that all the $\lambda_i$ are non-negative. From the above it is straightforward to obtain the von Neumann entropy as $S( \rho_X)=-\sum_i \lambda_i \log(\lambda_i)$.\\ Using the eigenvalues of $\rho_X$, or directly calculating the trace of the square of the density matrix, one obtains the purity of the state: \begin{eqnarray} \text{tr}{\rho_X^2}=\sum_i \lambda_i^2=a^2+b^2+c^2+d^2+2w^2+2z^2. \nonumber \end{eqnarray} The purity will only be equal to one if either $a=d=w=0$ and $b c =z^2$ or $b=c=z=0$ and $a d =w^2$.\\ From the definitions in this section, we calculate several relevant quantum correlations.\\ \subsection{Entanglement} For decades one correlation, quantum entanglement, was considered the defining property that distinguishes quantum systems from classical ones. In simple terms, it is present in a system whenever its state cannot be factored into a product of states of the subsystems that compose it. For pure states of two qubits, entanglement is quantified by the von Neumann entropy of the reduced density matrix of either subsystem. For mixed states, this method cannot be applied directly. First, the mixed state is written in a pure-state decomposition as follows: \begin{equation} { \rho}=\sum_{i}p_{i}|\psi_{i}\rangle\langle \psi_{i}|. \label{G} \end{equation} Entanglement for this mixed state is then defined in terms of the entanglement of the pure states involved in the decomposition \emph{and} minimized over all decompositions (since decomposition (\ref{G}) is by no means unique): \begin{equation} E({ \rho})=\min\sum_{i} p_{i} E(\psi_{i}). \label{H} \end{equation} Building on this concept, Wootters defined the famous entanglement measure known as concurrence \cite{Wootters98}. 
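As a brief aside, the spectral expressions above are easy to verify numerically. A minimal Python sketch (ours, illustrative; the parameter values are an arbitrary valid choice satisfying (\ref{constraints})):

\begin{verbatim}
import numpy as np

def x_state(a, b, c, d, z, w):
    # X-state density matrix in the basis |00>,|01>,|10>,|11> (real z, w).
    return np.array([[a, 0, 0, w],
                     [0, b, z, 0],
                     [0, z, c, 0],
                     [w, 0, 0, d]], dtype=float)

a, b, c, d, z, w = 0.4, 0.3, 0.2, 0.1, 0.15, 0.1
rho = x_state(a, b, c, d, z, w)

# Closed-form eigenvalues.
up, um = (a + d) / 2, (a - d) / 2
rp, rm = (b + c) / 2, (b - c) / 2
lam = sorted([up + np.sqrt(um**2 + w**2), up - np.sqrt(um**2 + w**2),
              rp + np.sqrt(rm**2 + z**2), rp - np.sqrt(rm**2 + z**2)])
print(np.allclose(lam, np.linalg.eigvalsh(rho)))                 # True

# Purity: direct trace versus the closed form.
print(np.isclose(np.trace(rho @ rho),
                 a**2 + b**2 + c**2 + d**2 + 2*w**2 + 2*z**2))   # True
\end{verbatim}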
In the case of our $X$ states, which in the most general case are mixed, the concurrence is given by: \begin{equation}\label{conc} \mathcal{C}( \rho_X)=2 \max\left\{0,z-\sqrt{a d},w-\sqrt{b c} \right\} \end{equation} Note that this quantity takes values between 0 and 1, the former corresponding to no entanglement in the system, the latter to maximal entanglement, and intermediate values corresponding to partially entangled states. \subsection{Partial Transpose and Negativity} One of the most powerful techniques for entanglement detection is the use of positive but not completely positive (PNCP) maps. A positive linear map $\Lambda$ between the spaces of operators acting on two Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$ satisfies \cite{Guhne2009,bruss2002}: \begin{eqnarray*} \text{If} \ \Lambda(X)=Y \ \text{then} \ \Lambda(X^\dagger)=Y^\dagger \\ \text{If} \ X \geq 0 \ \text{then} \ \Lambda(X)=Y \geq 0 \end{eqnarray*} A positive map $\Lambda$ is completely positive if for an arbitrary Hilbert space $\mathcal{H}_C$ the map $\mathcal{I}_C \otimes \Lambda$ is positive; otherwise it is termed positive but not completely positive. Notice that a PNCP map will always map separable density operators to separable density operators. Hence, if applying a PNCP map to a density operator fails to yield a positive operator, the given density operator must be entangled. One very useful example of a PNCP map is the transpose. In \cite{Peres} the partial transposition of a density operator with respect to one of its subsystems (which is a PNCP map) is used to give a very powerful condition for entanglement detection. In \cite{Horo96} it is shown that this condition is necessary and sufficient for entanglement in qubit-qubit and qubit-qutrit systems. Finally, in \cite{VidalWerner}, this concept is used to quantify entanglement. A partial transpose is performed over the first subsystem (system A here), and the measure called Negativity, $\mathcal{N}(\rho)$, is the sum of the absolute values of the negative eigenvalues of the partially transposed matrix. For arbitrary dimensions, if $\mathcal{N}(\rho)> 0$ there is entanglement in the system, but $\mathcal{N}(\rho)= 0$ gives us no definite answer as to whether the system is entangled or separable in the general case. Nevertheless, as already mentioned, for qubit-qubit and qubit-qutrit systems $\mathcal{N}(\rho)=0$ necessarily implies that $\rho$ is separable. The partial transposition amounts to swapping the first set of labels in each bra and ket in equation (\ref{def:braket}), which for $X$ states amounts to exchanging $z$ and $w$. Thus the conditions for having a positive partially transposed $X$ state are given by \cite{Ali09}: \begin{eqnarray} z\leq\sqrt{ad} \text{ and } w\leq\sqrt{bc} \end{eqnarray} which is precisely what the concurrence (\ref{conc}) tells us, since if either of these conditions is violated the partially transposed $X$ state will not be positive or, equivalently, the state itself will be entangled. As for the precise eigenvalues of the partial transpose of an $X$ state, these are given by equation (\ref{ev}) after the exchange $w \longleftrightarrow z$. 
In this case only the eigenvalues with a minus sign in front of the square root can be negative, and thus the negativity is given by: \begin{equation*} \mathcal{N}(\rho)=-\min\left\{0,u_+ - \sqrt{u_-^2+z^2} ,r_+ - \sqrt{r_-^2+w^2} \right\} \end{equation*} where $u_\pm$ and $r_\pm$ are the same quantities that appear in (\ref{ev}).\\ Another interesting property of a partially transposed qubit-qubit density matrix is that it will have at most one negative eigenvalue, and thus its determinant gives a necessary and sufficient condition for entanglement. In \cite{Horo08} this is shown with complete generality; here we show it for $X$ states by elementary means. Notice that if, for instance, $z \geq \sqrt{a d}$, then from (\ref{constraints}) one has $\sqrt{b c}\geq z$ and $\sqrt{a d} \geq w$, which automatically implies that $\sqrt{b c} \geq w$. The argument is reversed if $w\geq\sqrt{bc}$. \subsection{Fully Entangled Fraction} The fully entangled fraction (FEF) is defined as the maximum fidelity that a given quantum state has with a maximally entangled state \cite{Bennett96}. The fidelity between a (generally) mixed state and a pure state is defined as \cite{Jozsa94}: \begin{eqnarray} \mathcal{F}( \rho,\ket{\psi}\bra{\psi})=\bra{\psi} \rho\ket{\psi} \end{eqnarray} For the fully entangled fraction one calculates the above expectation value with the four Bell states (\ref{bells}). For $\rho_X$, the maximal Bell-state fidelity, expressed through the rescaled quantity $\mathcal{E}\equiv 2\mathcal{F}_{\max}-1$, reads: \begin{equation}\label{FEF} \mathcal{E}(\rho_X)=\max(a+d+2w-1,b+c+2z-1). \end{equation} For a state with a fixed value of $\mathcal{E}$ it is easily found that the concurrence of such a state is bounded by: \begin{equation} \mathcal{E} \leq \mathcal{C} \leq \frac{\mathcal{E}+1}{2} \end{equation} In particular, the states that saturate the upper and lower bounds of the inequality are of the $X$ type \cite{Grond02}. \subsection{The Schmidt Number of the State} An important characteristic of a bipartite quantum state is its ability to encode information about a local quantum process that acts on only one of the qubits. In particular, one asks whether, knowing an initial state $\rho_{\text{in}} $ and the state obtained by applying an unknown quantum process to one (say the second) qubit, $\rho_{\text{out}}=\left(\mathcal{I} \otimes \mathcal{L}\right)\left\{\rho_{\text{in}} \right\}$, it is possible to determine what the process $\mathcal{L}$ was. In \cite{Alte03} it is shown that the necessary and sufficient condition to provide complete information about the local process $\mathcal{L}$ is that the Schmidt number of the density matrix equals the square of the dimensionality of the system on which the process acts, in our case $2^2=4$. The Schmidt number, which is a familiar concept for pure states, can be found for bipartite density operators as follows. For any state $\rho$ one can define the matrix $\Gamma_{\mu,\nu} = {\rm Tr}\{\rho(\sigma_{\mu}\otimes\sigma_{\nu})\}/2$\footnote{The factor of $1/2$ is inserted because the Pauli matrices are orthogonal with respect to the Hilbert-Schmidt inner product $\text{tr}(\sigma_\mu\sigma_\nu)=2 \delta_{\mu,\nu}$ but are obviously not normalized. To account for this fact each Pauli matrix is multiplied by $1/\sqrt{2}$ which for the tensor product of two of them will give rise to the $1/2$ factor. In particular this normalization will guarantee that any pure separable state will have only one non-zero singular value equal to 1.}, where $\mu$ and $\nu$ take the values $(0,1,2,3)$ and $\sigma_{0}$ is the identity. 
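The closed forms for the concurrence (\ref{conc}) and the negativity can likewise be cross-checked against their general definitions. A short Python sketch (ours, illustrative; it assumes the real, non-negative parameterization adopted above):

\begin{verbatim}
import numpy as np

def x_state(a, b, c, d, z, w):
    return np.array([[a, 0, 0, w],
                     [0, b, z, 0],
                     [0, z, c, 0],
                     [w, 0, 0, d]], dtype=complex)

def concurrence(rho):
    # Wootters' concurrence for an arbitrary two-qubit density matrix.
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.sort(np.abs(np.linalg.eigvals(R)))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def negativity(rho):
    # Partial transpose over the first qubit, then sum of |negative eigenvalues|.
    pt = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    ev = np.linalg.eigvalsh(pt)
    return float(-ev[ev < 0].sum())

a, b, c, d, z, w = 0.35, 0.05, 0.05, 0.35, 0.0, 0.3   # an entangled example
rho = x_state(a, b, c, d, z, w)

C_closed = 2 * max(0.0, z - np.sqrt(a*d), w - np.sqrt(b*c))
up, um, rp, rm = (a + d)/2, (a - d)/2, (b + c)/2, (b - c)/2
N_closed = -min(0.0, up - np.sqrt(um**2 + z**2), rp - np.sqrt(rm**2 + w**2))

print(np.isclose(concurrence(rho), C_closed))   # True (C = 0.5)
print(np.isclose(negativity(rho), N_closed))    # True (N = 0.25)
\end{verbatim}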
We may use the singular value decomposition (SVD) to rewrite the matrix $\Gamma_{\mu,\nu}$ defined above in the form $\sum_i s_i f_{\mu}^{(i)} g_{\nu}^{(i)}$, where the $s_i$ are non-negative real numbers and $f_{\mu}^{(i)}$ and $g_{\nu}^{(i)}$, ($i=1,2,3,4$) are two sets of orthonormal vectors (i.e. $\sum_\mu f_{\mu}^{(i)}f_{\mu}^{(j)}= \sum_\nu g_{\nu}^{(i)}g_{\nu}^{(j)}=\delta_{i,j}$). Defining the operators $ F_i= \sum_\mu f_{\mu}^{(i)}\sigma_\mu$ and $ G_i=\sum_\nu g_{\nu}^{(i)}\sigma_\nu$, we find $\rho=\sum_i s_i F_i \otimes G_i$, which is the Schmidt decomposition of $\rho$. The operators $ F_i$ and $ G_i$ are orthonormal, \emph{i.e.}, $\text{tr}( F_i ^\dagger F_j)=\text{tr}( G_i ^\dagger G_j)=\delta_{i,j}$ (they are not, however, positive, hence they do not represent states). The number of non-zero $s_i$ is the Schmidt number of $\rho$. For $X$ states, one finds that the singular values are related to the coefficients appearing in the Fano decomposition as follows: \begin{eqnarray}\label{svd} s_1 &=&\frac{C_1}{2},\nonumber\\ s_2 &=&\frac{|C_2|}{2},\nonumber\\ s_{3/4} &=& \frac{\sqrt{1+A_3^2+B_3^2+C_3^2\pm D}}{2 \sqrt{2}}, \end{eqnarray} where $D =\sqrt{\left(1+A_3^2+B_3^2+C_3^2\right)^2-4 \left(C_3-A_3 B_3\right)^2}$. \subsection{Quantum Discord and Classical Correlations} As described in an earlier subsection, the quantification of entanglement is done using the von Neumann entropy. In short, the idea is to use the fact that randomness is introduced into the system when a quantumly correlated particle is ignored (by taking the partial trace over it). Another method to quantify the strength of the quantum correlations of the system is to use the difference between the effect of measurement on a classical system compared to a quantum system; \emph{i.e.}, to use the fact that measurements disturb quantum systems, but not classical ones. This is the idea on which Ollivier and Zurek's quantum discord is based \cite{Ollivier01}. Here, as with Wootters' concurrence, we consider the discord of two-qubit systems. Labeling the two subsystems by $A$ and $B$, a measurement is performed on $B$. If a disturbance is detected, then that indicates the existence of quantum correlations; otherwise it implies that they are absent. The disturbance is quantified by using the mutual information function, which gives an indication of how much information is shared between $A$ and $B$. The difference between the mutual information function before the measurement and after the measurement defines the discord. However, the set of projectors that are applied on $B$ has to be chosen so that they give the maximal value for the measurement-induced mutual information function. Notice that there is, in general, a more complex hierarchy of quantifiers of quantum correlations, in which different types of measurement schemes are applied, and of which the discord is a particular instance \cite{Lang11}. Although there is an analytic expression for the concurrence of $X$ states, it is not possible to find an analytic expression for the discord of the general $X$ state \cite{Lu11}. In general, the problem of the calculation of the discord can be cast into the solution of two transcendental equations, as shown in \cite{Girolami11}. However, Luo \cite{Luo08} was able to find one subclass of $X$ states for which an analytical expression can be given. This class has maximally mixed marginals (MMM): \begin{eqnarray} \rho_A = \rho_B=\frac{1}{2} \mathbb{I} \end{eqnarray} which implies from equation (\ref{marginals}) that $A_3=B_3=0$. 
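A quick numerical check of (\ref{svd}) can be performed with a few lines of Python (ours, illustrative; the Pauli matrices follow the definitions of section (\ref{def}) and the parameter values are arbitrary):

\begin{verbatim}
import numpy as np

sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
       np.array([[0, -1j], [1j, 0]]),                  # sigma_y
       np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

a, b, c, d, z, w = 0.4, 0.3, 0.2, 0.1, 0.15, 0.1
rho = np.array([[a, 0, 0, w], [0, b, z, 0],
                [0, z, c, 0], [w, 0, 0, d]], dtype=complex)

# Gamma_{mu,nu} = Tr[rho (sigma_mu x sigma_nu)] / 2 and its singular values.
G = np.array([[np.trace(rho @ np.kron(sig[m], sig[n])).real / 2
               for n in range(4)] for m in range(4)])
s_num = np.linalg.svd(G, compute_uv=False)

# Closed forms quoted in the text.
A3, B3 = (a + b) - (c + d), (a + c) - (b + d)
C1, C2, C3 = 2*(z + w), 2*(z - w), (a + d) - (b + c)
D = np.sqrt((1 + A3**2 + B3**2 + C3**2)**2 - 4*(C3 - A3*B3)**2)
s_closed = sorted([C1/2, abs(C2)/2,
                   np.sqrt(1 + A3**2 + B3**2 + C3**2 + D) / (2*np.sqrt(2)),
                   np.sqrt(1 + A3**2 + B3**2 + C3**2 - D) / (2*np.sqrt(2))],
                  reverse=True)
print(np.allclose(s_num, s_closed))   # True
\end{verbatim}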
For the MMM states just defined, the discord is given by \cite{Luo08}: \begin{eqnarray} Q(\rho_{MMM})&=&\frac{1}{4} \left\{(1-C_1-C_2-C_3) \log (1-C_1-C_2-C_3) \right. \nonumber\\ &&+(1-C_1+C_2+C_3) \log (1-C_1+C_2+C_3) \nonumber\\ &&+(1+C_1-C_2+C_3) \log (1+C_1-C_2+C_3) \nonumber\\ &&\left. +(1+C_1+C_2-C_3) \log (1+C_1+C_2-C_3) \right\} \nonumber\\ &&-\frac{1-C}{2}\log(1-C)-\frac{1+C}{2}\log(1+C) \end{eqnarray} with $C=\max\{ |C_1|,|C_2|,|C_3| \}$. Although obtaining an analytical expression for the discord has been shown to be impossible for states more complex than the MMM states, a great deal of progress can be achieved by characterizing the set of states that have zero discord. This was done in \cite{Vedral10}. Following the convention used in \cite{Vedral10}, we assume that measurements are made on subsystem $A$ instead of subsystem $B$. Then it can be shown that the set of states $\Omega$ that have zero discord is given by \cite{Vedral10}: \begin{eqnarray} \rho_{CL}=\sum_k p_k \ket{\psi_k} \bra{\psi_k} \otimes \rho_k^{(B)} \end{eqnarray} where $\{ \ket{\psi_k}\}$ is an orthonormal basis set for subsystem $A$. Since the set $\Omega$ is known, it is possible to define a geometric measure of discord by simply measuring the distance (squared norm in the Hilbert-Schmidt space) of a given state to the closest state in the set $\Omega$. This was done in \cite{Vedral10}, and the result for $X$ states is simply: \begin{eqnarray} \mathcal{D}_A^{(G)}( \rho_X)&=&\frac{1}{4} \min \left\{C_{1}^2+C_{2}^2,C_{1}^2+C_{3}^2+A_3^2\right\} \\ &=& \frac{1}{2} \min \left\{ 4 \left(w^2+z^2\right), (a-c)^2+(b-d)^2+2(w+z)^2\right\} \nonumber \end{eqnarray} The geometric discord when $B$ is measured is simply obtained by replacing $A_3$ by $B_3$ in the first equality or swapping $b$ and $c$ in the second one. Finally, we also point out that an $X$ state is non-discordant if and only if it is fully diagonal, which is equivalent to saying that it only has two non-zero singular values (see (\ref{svd})). Although for general $X$ states there does not exist an analytic formula for discord, it has been shown that there exists a set of projectors that will give accurate results \cite{Lu11}. These projectors are labelled by the term maximal-correlation-direction measurement (MCDM). Moreover, in \cite{LFFH}, an expression for discord in the case of $b=c$ is derived. Here, we derive a very similar result, but for the slightly more general case in which $b$ and $c$ are not necessarily equal. Using an approach related to these studies, we find an expression that is a very good approximation to the discord: \begin{eqnarray} Q(\rho_X)&\approx&S(\rho_{B})-S(\rho_X)+ \min\left\{N_{1},N_{2}\right\}, \label{Qap} \end{eqnarray} where $S(\rho_X)$ is the von Neumann entropy of the general $X$ state density matrix, $S(\rho_{B})$ is the von Neumann entropy of the reduced density matrix of the second qubit (labelled B, on which the measurement is made) and, \begin{eqnarray} N_{1}&=& H\left(\left[\frac{1}{2}+\frac{1}{2}\sqrt{\left(a-d+b-c\right)^{2}+4(z+w)^{2}}\right]\right) \nonumber\\ N_{2}&=&-a \log_{2}\left[\frac{a}{a+c}\right]-b \log_{2}\left[\frac{b}{b+d}\right] - c \log_{2}\left[\frac{c}{a+c}\right]-d \log_{2}\left[\frac{d}{b+d}\right]. \label{N1N2} \end{eqnarray} where $H(y)=-y \ \log_2(y)-(1-y) \ \log_2(1-y)$ is the binary entropy function. In \cite{AsmaDiscord}, a parameterization in terms of variables $\theta$ and $\phi$ is defined and used to find the discord. 
In the language of this reference, $N_{1}$ is found for ${\theta = \pi/4, \phi=0}$, and $N_{2}$ is found for ${\theta = \pi/2, \phi=0}$. To assess the accuracy of this approximate expression for $Q$, we analyze $1 \times 10^{5}$ randomly generated $X$ states (see \cite{AsmaDiscord} for a description of the method) and find the following: 0 \% of the points have error $> 10^{-3}$, 0.001 \% have error $> 10^{-4}$, 31.44 \% have error $> 10^{-5}$, 86.10 \% have error $> 10^{-6}$, and 90.78 \% have error $> 10^{-7}$. This shows that if we are only interested in accuracy up to the fourth decimal place, then this is a very good approximation for the discord of an $X$ state. Notice that equation (\ref{Qap}) also gives an approximate expression for the amount of classical correlations that a given state has. Recall that the quantum discord as defined by Ollivier and Zurek is $Q(\rho)=\textbf{I}(\rho)-\textbf{C}(\rho)$, where $\textbf{I}(\rho)=S(\rho_A)+S(\rho_B)-S(\rho)$ is the mutual information function, and $\textbf{C}(\rho)$ is the measurement-induced mutual information function maximized over all measurements on subsystem B. One can also interpret these quantities as follows: $\textbf{I}(\rho)$ represents the \emph{total} (classical and quantum) correlations present in the system, $Q(\rho)$ represents the \emph{quantum} correlations, and $\textbf{C}(\rho)$ represents the \emph{classical} correlations (see \cite{Vedral03,Luo08}, for example). In light of this interpretation, the $\textbf{C}(\rho)$ computed along the way to the $Q$ in (\ref{Qap}) represents the classical correlations in the $X$ states. It is approximately given by: \begin{equation} \textbf{C}(\rho_X) \approx S(\rho_A)-\min\left\{N_{1},N_{2}\right\}, \end{equation} \noindent where $S(\rho_{A})$ is the von Neumann entropy of the reduced density matrix of subsystem A, and $N_{1}$ and $N_{2}$ are defined in (\ref{N1N2}). \subsection{Measurement-Induced Disturbance} Quantum discord involves finding a local projector (on subsystem B) that minimizes its value. Moreover, discord is not symmetric, meaning that the discord of A with respect to B is not the same as that of B with respect to A. That is why one has to decide which subsystem is more appropriate to perform the local measurement on when defining the discord. These \emph{deficiencies} inspired Luo to introduce the Measurement-Induced Disturbance, or simply MID \cite{MIDLuo}. Like discord, MID exploits the fact that measurements of quantum systems disturb them in order to capture quantum correlations. However, there are two differences. First, the local measurements are performed on \emph{both} subsystems A and B. Second, MID does not require searching for the optimal set of local projectors. Instead, the chosen projectors are constructed using the eigenvectors of the reduced density matrices of A and B. MID is defined as follows \cite{MIDLuo,MIDDatta}: \begin{equation} MID= I(\rho)-I(P(\rho)), \label{MID} \end{equation} \noindent where \begin{equation} P(\rho)=\sum^{m}_{i=1}\sum^{n}_{j=1}\left(\Pi^{A}_{i}\otimes\Pi^{B}_{j}\right)\rho\left(\Pi^{A}_{i}\otimes\Pi^{B}_{j}\right), \label{Pr} \end{equation} \noindent and $\Pi^{A}_{i}$ and $\Pi^{B}_{j}$ are projectors constructed using the eigenvectors of the reduced density matrices of systems A and B, respectively. Note that for two-qubit systems, and hence the $X$ states we are focusing on in this paper, $m,n=2$. 
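As with the other measures, the approximate discord (\ref{Qap}) is straightforward to implement. A minimal Python sketch (ours, purely illustrative; a Bell-state test, whose discord should equal one, is included as a sanity check):

\begin{verbatim}
import numpy as np

def entropy(p):
    # Shannon entropy (base 2) of a probability vector, ignoring zeros.
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def approx_discord(a, b, c, d, z, w):
    # Approximate discord of an X state: Q ~ S(rho_B) - S(rho_X) + min(N1, N2).
    up, um = (a + d) / 2, (a - d) / 2
    rp, rm = (b + c) / 2, (b - c) / 2
    lam = [up + np.sqrt(um**2 + w**2), up - np.sqrt(um**2 + w**2),
           rp + np.sqrt(rm**2 + z**2), rp - np.sqrt(rm**2 + z**2)]
    S_rho = entropy(lam)
    S_B = entropy([a + c, b + d])            # marginal of qubit B
    y = 0.5 + 0.5 * np.sqrt((a - d + b - c)**2 + 4 * (z + w)**2)
    N1 = entropy([y, 1 - y])                 # binary entropy H(y)
    terms = [(a, a + c), (b, b + d), (c, a + c), (d, b + d)]
    N2 = -sum(x * np.log2(x / t) for x, t in terms if x > 1e-12)
    return S_B - S_rho + min(N1, N2)

print(approx_discord(0.5, 0.0, 0.0, 0.5, 0.0, 0.5))   # ~1.0 for |phi_0>
\end{verbatim}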
For $X$ states, the projected state $P(\rho)$ of (\ref{Pr}) is given by: \begin{eqnarray}\label{def:matrixPr} P_{X}(\rho) = \left( \begin{array}{cccc} a & 0 & 0 & 0 \\ 0 & b & 0 & 0 \\ 0 & 0 & c & 0 \\ 0 & 0 & 0 & d \end{array} \right), \end{eqnarray} \noindent and the MID is given by: \begin{equation} MID_{X}=-S(\rho_X)+S(P_{X}(\rho)), \label{MIDX} \end{equation} \noindent where $S(\rho_X)$ is the von Neumann entropy of the $X$ matrix before the measurement, and $S(P_{X}(\rho))$ is the von Neumann entropy of the density matrix after the measurement, as given in (\ref{def:matrixPr}). Note, however, that although MID is easier to calculate than discord, for several cases in which there is no quantum advantage it predicts maximal quantum correlations \cite{AsmaDQCQ1, MIDX}. The approach taken in \cite{MIDX}, in which Ameliorated MID is introduced, addresses this issue. For the $X$ states, it is also shown that MID is ambiguous for the states classified as MMM. However, a simple calculation reveals that if $\Pi^{A}_{i}$ and $\Pi^{B}_{j}$ are constructed using the eigenvectors of $\sigma_{z}$, then $MID=Q$, where $Q$ is given by eq.~(\ref{Qap}). It is also shown in \cite{MIDX} that MID and discord are the same for the Werner states and for pure states. This implies that the result in (\ref{MIDX}) can be used in these two cases without overestimating the strength of quantum correlations. Another related observation \cite{eric2011} is the equality between discord and MID in a class of states also named Werner states. Note, however, that the latter are not the same as the Werner states we discuss in this work, which are states that are a combination of the identity and a Bell state (see (\ref{Wern})). In \cite{eric2011}, the Werner states are defined to be those that satisfy $\rho=U \otimes U \ \rho \ U^{\dagger} \otimes U^{\dagger}$ for any unitary operator $U$. Nevertheless, states in the two different classes have equal discord and MID. \section{Dynamics}\label{dyna} In this section we study the types of dynamics and quantum channels that preserve the shape of an $X$ state. One of the reasons why $X$ states became so popular in the study of the dynamics of quantum correlations is the relatively simple form that the concurrence takes for such states. Moreover, since in such studies \cite{Yu042,Yu04,Asma08} the states stay in the $X$ form at all times, the simple expression (\ref{conc}) is valid throughout their evolution. To study what types of dynamics preserve the shape of $X$ states we employ the very elegant algebraic characterization of $X$ states presented in \cite{Rau09}. The key ingredient of the characterization is to notice that the set of operators \begin{eqnarray} \mathcal{S}=&&\left\{ \mathbb{I} \otimes \mathbb{I}, \sigma_3 \otimes \mathbb{I}, \mathbb{I} \otimes \sigma_3, \sigma_1 \otimes \sigma_1, \sigma_2 \otimes \sigma_2, \sigma_3 \otimes \sigma_3, \sigma_1 \otimes \sigma_2, \sigma_2 \otimes \sigma_1 \right\} \end{eqnarray} is closed under multiplication, \emph{i.e.}, the product of two of them will be proportional to another element of the set $\mathcal{S}$. 
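This closure is easy to confirm by brute force; the following Python sketch (ours, illustrative) multiplies every ordered pair of elements of $\mathcal{S}$ and verifies that each product is proportional to some element of $\mathcal{S}$:

\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# The set S spanning the X-shaped operators.
S = [np.kron(p, q) for p, q in
     [(I2, I2), (sz, I2), (I2, sz), (sx, sx),
      (sy, sy), (sz, sz), (sx, sy), (sy, sx)]]

def proportional_to_member(P, basis):
    # True if P = lambda * C for some C in `basis` and scalar lambda.
    for C in basis:
        lam = np.trace(C.conj().T @ P) / np.trace(C.conj().T @ C)
        if np.allclose(P, lam * C):
            return True
    return False

print(all(proportional_to_member(A @ B, S) for A, B in product(S, S)))  # True
\end{verbatim}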
The other 8 operators that complete the 16-element basis for the two-qubit Hilbert-Schmidt space are simply: \begin{eqnarray} \mathcal{S'}=&&\left\{ \mathbb{I} \otimes \sigma_1, \mathbb{I} \otimes \sigma_2, \sigma_1 \otimes \mathbb{I}, \sigma_2 \otimes \mathbb{I}, \sigma_1 \otimes \sigma_3, \sigma_2 \otimes \sigma_3, \sigma_3 \otimes \sigma_2, \sigma_3 \otimes \sigma_1\right\} \end{eqnarray} The products of operators belonging to $\mathcal{S}$ and $\mathcal{S'}$ satisfy the following properties: let $A, B\in \mathcal{S}$ and $C, D \in \mathcal{S'}$; then $A C \in \mathcal{S'}$, $A B \in \mathcal{S}$ and $C D \in \mathcal{S}$ (up to proportionality constants). With these elementary observations in mind we characterize the types of unitary and non-unitary evolution that map $X$ states to $X$ states.\\ For the Hamiltonian dynamics given by the von Neumann equation, $\frac{d}{dt} \rho=i [ \rho, H]$, it is easily shown that $ \rho$ will remain $X$-shaped if $ H$ is also $X$-shaped. Equivalently, $\rho$ will remain $X$-shaped if and only if $H$ lies in the span of $\mathcal{S}$. For non-unitary evolution we first study the case of continuous time evolution. In this case the dynamics is given by a Markovian master equation in the Lindblad form \cite{petruccione}: \begin{eqnarray}\label{master} \frac{d}{dt} \rho&=&i[ \rho, H]+\sum_{n,m = 1}^{N^2-1} h_{n,m}\big( 2 L_n \rho L_m^\dagger - \rho L_m^\dagger L_n- L_m^\dagger L_n\rho\big), \nonumber \end{eqnarray} where $N$ is the dimensionality of the Hilbert space in which $ \rho$ acts, $N=4$ in our case. $ H$ dictates the Hamiltonian dynamics that was studied in the previous paragraph. The $ L_n$ are a set of orthonormal operators in the Hilbert-Schmidt space to which $ \rho$ belongs. Notice that there are $N^2$ such operators, but because one of them can always be chosen to be the identity (which will not cause non-unitary dynamics) the sum can be restricted to $N^2-1$. Finally, $h_{n,m}$ are the complex entries of a positive semi-definite matrix. In the non-unitary part of the master equation the density operator appears multiplied by two different operators, $ L_n$ and $ L_m$. These operators will preserve the $X$ shape if and only if $ L_n, L_m \in \text{span}\{ \mathcal{S} \}$ or $ L_n, L_m \in \text{span}\{ \mathcal{S'} \}$. If either of the operators $ L_m, L_n$ contains elements of both $\mathcal{S}$ and $\mathcal{S'}$, or if they belong to the spans of different sets (one to $\text{span}\{\mathcal{S}\}$ and the other to $\text{span}\{\mathcal{S'}\}$), then there will be products involving two operators from $\mathcal{S}$ and one from $\mathcal{S'}$, which, in general, will be outside the space of $X$-shaped operators. Finally, as for quantum channels, these are defined in the operator sum representation as \cite{Nielsen2000}: \begin{eqnarray} \rho \rightarrow \rho'=\mathcal{L}\{\rho \}=\sum_i X_i \rho X_i^\dagger \end{eqnarray} with $\sum_i X_i^\dagger X_i= \mathbb{I}$. In this case the same argument applied to the master equation can be used; that is, $\mathcal{L}$ will map $X$ states to $X$ states if and only if the $X_i$ do not mix operators from $\mathcal{S}$ and $\mathcal{S'}$. \section{Conclusions} In this paper, we aimed to present some of the most interesting and useful quantum properties in the literature, calculated for $X$ states. This work serves as a reminder of already known results for these states, such as those pertaining to concurrence. We also presented some new results. First, by uncovering the singular values for these states, we have provided a tool to determine whether they can be used in ancilla-assisted state tomography. 
We also calculated measures of entanglement other than concurrence which can be of interest. These include the Negativity and the Fully Entangled Fraction. Moreover, we derived results about the quantum discord of these states, one with regard to the geometric discord and the other an approximate analytic expression for the discord. The latter is shown to be accurate up to the fourth decimal place. Since it has been shown that an exact analytic expression for the discord of general $X$ states does not exist, this approximate (analytic) result for the discord provides an easy and reliable tool in situations where optimization is neither practical nor necessary. Moreover, the measurement-induced disturbance is calculated for $X$ states, with the added caution that it can only be used for certain states, such as the Werner and pure states, and that for the MMM states the projectors have to be chosen carefully. The $X$-state MID turns out to be a simple expression, dependent on the entropies of the total system before and after the measurement is performed. Finally, we discussed the dynamics that preserve the form of $X$ states by providing an exhaustive classification of the types of quantum master equations and quantum channels that preserve the shape of an initial $X$ state. \section*{Acknowledgements} The authors would like to thank Andrew G. White, Eric Chitambar, and Christian Weedbrook for valuable discussions. This work was funded by NSERC.
\section{Introduction} Despite their well-known security issues, passwords are still the most popular method of end-user authentication. Guessing and offline dictionary attacks on user-generated passwords are often possible due to their limited entropy. According to Ashlee Vance~\cite{hackme}, 20\% of passwords are covered by a list of only 5,000 passwords. Therefore, aiming to increase security, complex password policies are often enforced, by requiring, e.g., a minimum number of characters, inclusion of non-alphanumeric symbols, or frequent password expiration. This often creates an undesirable conflict between security and usability -- as highlighted in the context of password selection~\cite{egelman2013does}, management~\cite{karole2011comparative,weiss2008passshapes} and composition~\cite{komanduri2011passwords,von2013survival} -- and drives users to find the easiest password that is policy-compliant~\cite{adams1999users}. Multi-factor authentication has emerged as an alternative way to improve security by requiring the user to provide more than one authentication {\em factor}, as opposed to only a password. Authentication factors are usually of three kinds: \begin{enumerate} \item {\em Knowledge} -- something the user knows, e.g., a password; \item {\em Possession} -- something the user has, e.g., a security token (also known as hardware token); \item {\em Inherence} -- something the user is, e.g., a biometric characteristic. \end{enumerate} \renewcommand{\thefootnote}{\arabic{footnote}} In this paper, we concentrate on the most common instantiation of multi-factor authentication, i.e., the one based on two factors, which we denote as 2F. Historically, 2F has been deployed mostly in enterprise, government, and financial sectors, where sensitivity of information and services has driven institutions to accept increased implementation/maintenance costs, and/or to impose additional actions on authenticating users. In 2005, the United States' Federal Financial Institutions Examination Council officially recommended the use of multi-factor authentication~\cite{council2005}, thus pressuring most institutions to adopt some form of 2F authentication for online banking. Similarly, government agencies and enterprises often require employees to use 2F for, e.g., VPN authentication or B2B transactions. More recently, an increasing number of service providers, such as Google, Facebook, Dropbox, Twitter, GitHub, have also begun to provide their users with the option of enabling 2F, arguably motivated by the increasing number of hacked password databases. Recent highly publicized incidents affected, among others, Dropbox, Twitter, LinkedIn, and RockYou. Alas, security of 2F also suffers from a few limitations: 2F technologies, including recently proposed ones based on fingerprints~\cite{iphone}, are often vulnerable to man-in-the-middle, forgery, or Trojan-based attacks, and are not completely effective against phishing~\cite{schneier2005two}. Furthermore, 2F systems introduce non-negligible costs for service providers and require users to carry out additional actions in order to authenticate, e.g., entering a one-time code and/or carrying an additional device with them. A common assumption in the IT sector, partially supported by prior work~\cite{bauer2007lessons,bonneau2012quest,braz2006security,gunson2011user,sabzevar2008universal,strouble2009productivity}, is that 2F technologies have low(er) usability compared to authentication based only on passwords, and this likely hinders larger adoption. 
In fact, a few start-up companies (e.g., PassBan, Duo Security, Authy, Encap) aim to innovate the 2F landscape by introducing more usable solutions to the market. However, little research has actually studied the usability of different 2F technologies. We begin to address this gap by presenting an exploratory comparative usability study. First, we conduct a pre-study interview, involving 9 participants, to identify popular 2F technologies as well as the contexts and motivations in which they are used. Then, we present the design and the results of a quantitative study (based on a survey involving 219 Mechanical Turk users) that aims to measure the usability of a few second-factor solutions: one-time codes generated by security tokens, one-time PINs received via SMS or email, and dedicated smartphone apps (such as Google Authenticator). Note that all our participants make use of 2F (i.e., had been forced to or had chosen to), and thus might already have a reasonable mental model of how 2F works. Our comparative analysis of 2F usability yields some interesting findings. We show how users' perception of 2F usability is often correlated with their individual characteristics (such as age, gender, and background), rather than with the actual technology or the context in which it is used. We find that, overall, 2F technologies are perceived as highly usable, with little difference among them, not even when they are used with different motivations and in different contexts. We also present an exploratory {\em factor analysis}, which demonstrates that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting the usability of 2F technologies. Finally, we pave the way for future qualitative studies, based on our factor analysis, to further analyze our findings and confirm their generalizability. \section{Related Work}\label{sec:rw} In this section, we review prior work on the usability of single- and multi-factor authentication technologies. \subsection{Usability of Single Factor Technologies} Adams and Sasse~\cite{adams1999users} showed that, for users, security is not a primary task; thus, users feel under attack by ``capricious'' password policies. Password policies often mandate the use of long (and hard-to-remember) passwords, frequent password changes, and using different passwords across different services. This ultimately drives the user to find the simplest password that barely complies with requirements~\cite{adams1999users}. Inglesant and Sasse~\cite{inglesant2010true} analyzed ``password diaries'', i.e., they asked users to record the times they authenticated via passwords, and found that frequent password changes are a burden, that users do not change passwords unless forced to, and that it is difficult for them to create memorable, secure passwords adhering to the policy. They also concluded that context of use has a significant impact on the ability of users to become familiar with complex passwords and, essentially, on their usability. Bardram et al.~\cite{bardram2005trouble} discussed burdens on nursing staff created by hard-to-remember passwords in conjunction with frequent logouts required by healthcare security standards, such as the Health Insurance Portability and Accountability Act (HIPAA). The impact on usability and security of password composition policies has also been studied. 
For instance, Komanduri et al.~\cite{komanduri2011passwords} showed that complex password policies can actually \textit{decrease} average password entropy, and that a 16-character minimum length with no additional requirements provided the highest average entropy per password. Egelman et al.~\cite{egelman2013does} found that for ``important'' accounts, a password meter (i.e., a visual clue on password's strength) successfully helps increase entropy. Another line of work has focused on {\em password managers}. Chiasson et al.~\cite{chiasson2006usability} compared the usability of two password managers (PwdHash and Password Multiplier), pointing to a few usability issues in both implementations and showing that users were often uncomfortable ``relinquishing control'' to password managers. Karole et al.~\cite{karole2011comparative} studied the usability of three password managers (LastPass, KeePassMobile, and Roboform2Go), with a focus on mobile phone users. They concluded that users preferred portable, stand-alone managers over cloud-based ones, despite the better usability of the latter, as they were not comfortable giving control of their passwords to an online entity. Finally, Bonneau et al.~\cite{bonneau2012quest} evaluated, without conducting any user study, authentication schemes including: plain passwords, OpenID~\cite{recordon2006openid}, security tokens, phone-based tokens, etc. They used a set of 25 subjective factors: 8 measuring usability, 6 measuring deployability, and 11 measuring security. The authors concluded that: (i) no existing authentication scheme does best in all metrics, and (ii) technologies that one could classify as 2F do better than passwords in security but worse in usability. Although not directly related to our 2F study, we will use in our factor analysis some metrics introduced in the context of password replacements~\cite{bonneau2012quest} and password managers~\cite{karole2011comparative}. \subsection{Usability of Multi-Factor Authentication Technologies} Previous work has suggested that security via 2F decreases the usability of end-user authentication. For instance, Braz et al.~\cite{braz2006security} showed that 2F increases ``redundancy'', thus augmenting security but decreasing usability. Along similar lines, Strouble et al.~\cite{strouble2009productivity} analyzed the effects of implementing 2F on productivity, focusing on the ``Common Access Card'' (CaC), a combined smart card/photo ID card used (at that time) by US Department of Defense (DoD) employees. They reported that users stopped checking emails at home (due to the unavailability of card readers) and that many employees accidentally left their card in the reader. The authors also estimated that the DoD spent about \$10.4M on time lost (e.g., when employees left the base without their card and were unable to re-enter) and concluded that the CaC increased security at the expense of productivity. Gunson et al.~\cite{gunson2011user} focused on the usability of single- and two-factor authentication in automated telephone banking. They presented a survey involving $62$ users of telephone banking, where participants were asked to rate their experience using a proposed set of $22$ usability-related questions. According to their analysis, 2F was perceived to be more secure, but again less usable, than simple passwords and PINs. Weir et al.~\cite{weir2009user} compared the usability of push-button tokens, card-activated tokens, and PIN-activated tokens. 
They measured usability in terms of efficiency (time needed for authentication), as well as in terms of satisfaction, by asking users to rate their experience using a set of $30$ questions. In addition to usability, they measured quality, convenience, and security. They showed that users value convenience and usability over security, and thus quality and usability are sacrificed when increasing layers of security are required. Somewhat closer to our work is another study by Weir et al.~\cite{weir2010usable}, which analyzed the usability of passwords and two methods of 2F: codes generated by token and PINs received via SMS. They performed a lab study where $141$ participants were asked to report on the usability of the three technologies using $30$ proposed questions. The authors concluded that familiarity with a technology (rather than perceived usability) impacted user willingness to use a given authentication technology. Their results showed that users perceived the 1-factor method (with which the average user had the most experience) as being the most secure and most convenient option. Our work differs from that of Weir et al. in several key aspects. We compare a larger diversity of 2F technologies (security tokens, codes received via SMS/email, and dedicated apps) and do not study the trade-off between security and usability. In contrast, we provide a comparative study among different technologies, aiming to understand how each 2F technology performs compared to others. Specifically, we study the relation between 2F technologies and the contexts in which they are used, as well as the motivation driving the users to adopt them, partially motivated by previous work by Goffman~\cite{goffman1959presentation} and Nissenbaum~\cite{nissenbaum2004privacy}, who showed that human behavior often significantly differs based on context. Finally, we consider a larger pool of participants, measure an extensive list of factors beyond Weir's work, and conduct an exploratory factor analysis to determine key factors that affect the usability of 2F. \section{Pre-Study Interviews with 2F Users} Our first step is to determine broad trends and attitudes of 2F users: we aim to obtain a general understanding of the 2F technologies in use, the contexts in which these technologies are deployed, and why they are adopted. To this end, we interviewed 9 participants, and designed a larger quantitative study (detailed in the next section) based on the findings. Both user studies were approved by PARC's Institutional Review Board. \subsection{Methodology} We recruited participants by posting to local mailing lists and social media (Google+ and Facebook), announcing paid interviews for a user study on security and authentication technologies. Interested users were invited to complete an online screening survey to assess eligibility to participate. We collected basic demographic information such as age, gender, education level, familiarity with Computer Security, and asked potential participants whether or not they had previously used 2F. Users without 2F familiarity were not invited to participate. The screening survey was completed by 29 people, and we selected 9 participants, most of them from the Silicon Valley, with a wide range of ages (21 to 49), genders (5 men, 4 women), and educational backgrounds (ranging from high school to Ph.D. degrees). 5/9 users reported having a background in Computer Security. We interviewed users in one-on-one meetings, either face to face or via Skype. 
Before each interview, users were given a consent form, indicating the interview procedure and data confidentiality. Each participant was compensated with a \$10 Amazon Gift Card. We started the interviews by reading from a list of 2F technologies, asking participants if they had used them: \begin{compactitem} \item PIN from a paper/card (one-time PIN) \item A digital certificate \item An RSA token code \item A Verisign token code \item A Paypal token code \item Google Authenticator \item A PIN received by SMS/email \item A USB token \item A smartcard \end{compactitem} To assess users' understanding of and familiarity with 2F, we asked them to provide a brief description of two-factor authentication and to explain the difference from password-based authentication. (Obviously, we did not provide users with a 2F definition prior to this question, nor did we mention that the study was about 2F.) Then, we asked participants {\em why} they used 2F and why they thought other people would; this helped us understand the motivation and the context in which they used 2F. Users were also asked to recall the last time they had used any 2F technology, report any encountered issues, and say whether or not they wanted to change the technology (and, if so, how). If users had used multiple technologies, we also asked them to compare them; this helped us understand how participants use and perceive 2F technologies. \subsection{Findings} We found that the most commonly used 2F technologies included: codes generated by a \emph{security token}, codes received via \emph{SMS or email}, and codes generated by a dedicated \emph{smartphone app}, entered along with username and password. Participants used 2F technology in three contexts: \emph{work} (e.g., to log into their company's VPN), \emph{personal} (e.g., to protect a social networking account), or \emph{financial} (e.g., to gain access to online banking). Study participants used 2F because they either were \emph{forced} to, \emph{wanted} to, or \emph{had an incentive} to. Most users adopted security tokens because an employer or bank had forced them to. Some were unhappy about this: a participant mentioned 2F was not ``worth spending 5 minutes for \$1.99 purchases''. Two participants (customers of different banks) reported adopting 2F in order to ``obtain higher limits on online banking transactions.'' Other users used 2F to ``avoid getting hacked.'' Some users of tokens complained that it was annoying to have to remember to carry security tokens. One user recommended to ``store the token in the laptop bag'' to avoid this issue. Some users experienced delays from SMS-based codes, and were ``annoyed, especially when paying for incoming texts.'' One user pointed out that (s)he ``preferred text messages'', since (s)he ``did not have a smartphone.'' Others preferred not to use security tokens as they ``can be lost.'' Some participants preferred tokens as they are easier to use compared to mobile applications, where one has to ``look down to unlock screen, find app, open app, and read the code.'' \section{Quantitative Analysis of 2F Users' Preferences}\vspace{0.2cm} Our second and main study consists of a quantitative analysis of 2F users' preferences. Inspired by the results of our pre-study interviews, we designed and conducted a survey involving 219 2F users, recruited on Mechanical Turk (MTurk). \subsection{Methodology} We initially recruited 268 U.S.-based MTurk users. All MTurk users had to have a 95\% or higher approval rating. 13 of them were not eligible as they had not used any 2F technology. 
36 users abandoned the survey prior to completion. The remaining 219 MTurk users were asked to complete an online survey about 2F technologies. Study participants received \$2.00 for no more than 30 minutes of survey taking. \descr{On MTurk studies.} Previous research showed that MTurk users are a valid alternative to traditional human subject pools in social sciences. For example, Jakobsson~\cite{jakobsson2009experimenting} compared the results of a study conducted on MTurk with results conducted by an independent survey company, and found that both results were statistically indistinguishable. Furthermore, MTurk users are often more diverse in terms of age, income, education level, and geographic location than the traditional pool for social science experiments~\cite{henrich2010weirdest}. However, research has also highlighted that MTurkers are often younger and more computer savvy~\cite{Christenson:2013}. As we will discuss in Sec.~\ref{sec:discussion}, our work is intended to serve as a preliminary study, which should guide and inform the design of a qualitative study. \descr{Data sanitization.} Kittur et al.~\cite{kittur2008crowdsourcing} point out that MTurk users often try to cheat at tasks. Therefore, we designed the survey to include several sanity-check questions, such as simple math questions (in the form of Likert questions), in order to verify that participants were paying attention (and were not answering randomly). We also introduced some contrasting Likert questions (e.g., ``I enjoyed using the technology'' and ``I did not enjoy using the technology'') and verified that answers were consistent. Users who did not answer all sanity checks correctly were to be discarded from the analysis (but still compensated); in the event, all users answered the sanity checks correctly. Also note that analysis of the time spent by each survey participant showed completion times in line with those of test runs done by experimenters ($\sim$15--30 minutes, depending on the number of 2F technologies used). \descr{Recruitment.} We screened potential participants by asking whether they had used 2F, and presented a list of examples: security tokens, codes received via SMS/email, and dedicated smartphone apps. Users who reported to have never used any of these technologies were told that they were not eligible to participate in our survey, and blocked from proceeding further or going back to change their answer. Also note that the MTurk task announcement did not state that users were required to have used 2F; the screening question was merely presented alongside other basic demographic questions such as age and gender. \descr{Demographics.} The demographics of the 219 study participants are reported in Table~\ref{demo}. Our population included 135 (61.6\%) males and 84 (38.4\%) females. 50/219 (22.8\%) users reported a background in computer science, and 12/219 (5.4\%) users reported a background in computer security. Education levels ranged from high school diploma to PhD degrees. Ages ranged from 18 to 66, with an average age of 32 and a standard deviation of 10.2.
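As a concrete illustration of the sanitization procedure described above, the following minimal sketch shows the kind of consistency screen one can apply to contrasting Likert pairs; the function names, tolerance, and example items are ours and only illustrate the approach, rather than being the exact code used in the study.
\begin{verbatim}
# Illustrative consistency screen for contrasting Likert pairs
# (1..7 scale); names and tolerance are ours, not the study's.

def consistent(pos, neg, tol=2):
    """On a 1..7 scale, mirrored answers to a contrasting pair
    ('I enjoyed...' vs 'I did not enjoy...') satisfy pos + neg ~ 8."""
    return abs((pos + neg) - 8) <= tol

def passes_sanity_checks(answers):
    # A 'simple math question' posed in Likert form, e.g. "3 + 4 = ?"
    math_ok = answers["three_plus_four"] == 7
    pair_ok = consistent(answers["enjoy"], answers["no_enjoy"])
    return math_ok and pair_ok

print(passes_sanity_checks(
    {"three_plus_four": 7, "enjoy": 6, "no_enjoy": 2}))  # True
\end{verbatim}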
\begin{table}[ttt] \centering \begin{tabular}{|l r|} \hline \textbf{Gender} & \\ \hline Male & 61.6\% \\ Female & 38.4\% \\ \hline \hline \textbf{Age}& \\ \hline 18--24 & 22.4 \%\\ 25--34 & 48.4 \%\\ 35--44 & 17.8 \%\\ 45--54 & 5.4 \%\\ 55--65 & 5.4 \%\\ 65+ & 0.5 \%\\ \hline \hline \textbf{Income}& \\ \hline Less than \$10,000 & 15.5 \%\\ \$10,000 -- \$20,000 & 14.6 \%\\ \$20,001 -- \$35,000 & 25.5 \%\\ \$35,001 -- \$50,000 & 18.3 \%\\ \$50,001 -- \$75,000 & 18.7 \%\\ \$75,001 -- \$90,000 & 3.6 \%\\ \$90,001 -- \$120,000 & 2.7 \%\\ \$120,001 -- \$200,000 & 0.9 \%\\ \hline \hline \textbf{Education}& \\ \hline Less than high school & 0.46 \%\\ Some college & 32 \%\\ Undergrad & 37.4 \%\\ Some grad school & 3.1\% \\ Master's degree & 5.9 \%\\ PhD & 0.9 \%\\ \hline \hline \textbf{Familiar with Computer Science?} & \\ \hline Yes & 22.8\%\\ No & 77.2\%\\ \hline \hline \textbf{Familiar with Computer Security?}& \\ \hline Yes & 5.4\%\\ No & 94.6\%\\ \hline \end{tabular} \vspace{0.2cm} \caption{Participants' demographics (Total n = 219).\label{demo}} \vspace{0.1cm} \end{table} \begin{figure*}[ttt] \centering \subfigure[]{ \includegraphics[width=.295\linewidth]{token.jpg} \label{fig:token} } \subfigure[]{ \includegraphics[width=.375\linewidth]{text.png} \label{fig:text} } \subfigure[]{% \includegraphics[scale=.47]{app2.png} \label{fig:app} } \caption{Examples of 2F technologies: (a) codes generated by a security token, (b) codes received via SMS, (c) codes generated by a dedicated smartphone app.} \vspace{0.1cm} \end{figure*} \subsection{Study Design} \noindent\textbf{Technologies, Context, and Motivation.} The first question in the survey asked users if they had used any of the following 2F technologies (for each of them, we displayed an example picture): \begin{compactitem} \item {\em Token:} Standalone pieces of hardware which display a code, Figure \ref{fig:token}. \item {\em Email/SMS:} A code received via email or SMS (also known as ``text messages''), Figure \ref{fig:text}. \item {\em App:} Codes delivered via an app running on a smartphone or other portable electronic device, such as an iPad or Android tablet, Figure \ref{fig:app}. \end{compactitem} Next, the survey branched depending on how many and which technologies had been selected. Specifically, users were asked to answer the same set of questions for each technology they had used. One of our main objectives was to measure and compare in which context and with what motivation users were exposed to 2F technologies. Specifically, for each technology we asked users in which of the following {\em context(s)} they used the technology: \begin{compactitem} \item {\em Financial:} While doing online banking or other financial transactions (e.g., bill payment, checking credit card balance, doing taxes). \item {\em Work:} While performing work duties (e.g., logging into a company VPN). \item {\em Personal:} While accessing a personal account not used for work or finance (e.g., Facebook, Twitter, Google, etc.). \item {\em Other:} Open-ended. \end{compactitem} Also, we asked users {\em why} they had been using 2F. Possible motivations included: \begin{compactitem} \item {\em Voluntary:} The participant voluntarily adopted 2F. \item {\em Incentive:} The participant got an incentive to adopt 2F (e.g., extra privileges/functionality, such as increased bank transfer limits). \item {\em Forced:} The participant had no choice (e.g., employer policy forcing adoption). \item {\em Other:} Open-ended.
\end{compactitem} \descr{System Usability Score and Other Likert Questions.} For each 2F technology they employed, participants were asked to rate the usability of the technology using $10$ Likert questions from the System Usability Scale (SUS)~\cite{brooke1996sus}. Previous research has shown SUS is a fairly accurate measure of usability~\cite{bangor2008empirical}. Note that, in order to be consistent with other Likert questions in our survey, we modified the SUS questionnaire to use a 7-point range, rather than the more common 5-point range, with 1 being ``Strongly Disagree'' and 7 being ``Strongly Agree''. Next, for each employed 2F technology, participants were asked a series of 7-point Likert questions (with 1 being ``Strongly Disagree'' and 7 being ``Strongly Agree'') about the following statements: \begin{compactitem} \item \textsf{\small Convenient}: I thought (technology) was convenient. \item \textsf{\small Quick}: Using (technology) was quick. \item \textsf{\small Enjoy}: I enjoyed using (technology). \item \textsf{\small Reuse}: I would be happy to use (technology) again. \item \textsf{\small Helpful}: I found using (technology) helpful. \item \textsf{\small No Enjoy}: I did not enjoy using (technology). \item \textsf{\small User Friendly}: I found (technology) user friendly. \item \textsf{\small Need Instructions}: I needed instructions to use (technology). \item \textsf{\small Concentrate}: I had to concentrate when using (technology). \item \textsf{\small Stressful}: Using (technology) was stressful. \item \textsf{\small Match}: (technology) did not match my expectations regarding the steps I had to follow to use it. \item \textsf{\small Frustrating}: Using (technology) was frustrating. \item \textsf{\small Trust}: I found using (technology) trustworthy. \item \textsf{\small Secure}: How secure did you feel to authenticate using (technology) instead of just username \& password? (1: ``Not at All Secure'', 7: ``Very Secure'') \item \textsf{\small Easy}: Knowing how to get the code from (technology) was easy. \end{compactitem} The above questions are inspired by metrics used in previous work~\cite{bonneau2012quest,karole2011comparative} and findings from our pre-study interviews. They are meant to be extensive and measure factors beyond the System Usability Score, such as trustworthiness, convenience, ease of use, reuse, enjoyment, concentration, portability, etc. \subsection{Results} We first analyze how 2F technologies are used by investigating the relation between independent factors such as context, motivation, technologies, and gender. We then provide an exploratory factor analysis of users' perceptions of 2F technologies (Likert questions), aiming to understand which factors best capture the usability of 2F. We then provide a comparative analysis of the usability of 2F technologies using those factors, and conclude with a discussion of our findings, highlighting some issues with 2F.
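As an aside on scoring: the standard 5-point SUS maps odd (positively worded) items to $\mathrm{score}-1$ and even items to $5-\mathrm{score}$, then rescales the sum to 0--100. The rescaling for a 7-point variant is not uniquely fixed, so the sketch below shows one plausible convention (each item spans 0--6, normalised by 60); treat it as an assumption rather than our exact scoring code.
\begin{verbatim}
# One plausible SUS scoring for a 7-point variant of the scale.

def sus_7pt(responses):
    """responses: 10 answers in 1..7, in SUS item order (odd items
    positively worded, even items negatively worded)."""
    assert len(responses) == 10 and all(1 <= r <= 7 for r in responses)
    raw = sum((r - 1) if i % 2 == 0 else (7 - r)
              for i, r in enumerate(responses))  # i = 0 is item 1
    return 100.0 * raw / 60.0                    # rescale to 0..100

print(sus_7pt([6, 2, 6, 2, 7, 1, 6, 2, 7, 2]))   # ~88.3
\end{verbatim}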
\begin{table} \begin{center} \small \begin{tabular}{| c | c | c |} \hline \textbf{Group} & \textbf{2F Technologies} & \textbf{\# of Participants} \\ \hline 1 & Token & 11 \\\hline 2 & Email/SMS & 77 \\\hline 3 & App & 7 \\\hline 4 & Token \& Email/SMS & 29 \\\hline 5 & Token \& App & 3 \\\hline 6 & Email/SMS \& App & 50 \\\hline 7 & Token, Email/SMS \& App & 41 \\\hline \textbf{Total} & & \textbf{219}\\ \hline \end{tabular} \end{center} \vspace{0.2cm} \caption{Usage of 2F technologies among survey participants.\label{tab:usage}} \vspace{-0.2cm} \end{table} \descr{Use of 2-Factor.} Recall from our study design that participants were asked to identify the different 2F technologies they use, in which context, and why. Almost half of the participants ($43\%$) used only one technology, while $37\%$ used two, and $20\%$ all three technologies. Table~\ref{tab:usage} summarizes the use of the three 2F technologies among the $219$ participants. We observe that ``Email/SMS'' (i.e., one-time codes received via SMS or email) is the most used technology, as $89.95\%$ ($197/219$) of participants used it as a second factor. Also, $45.20\%$ ($99/219$) of participants used ``App'' (i.e., codes generated by a dedicated smartphone app, such as Google Authenticator). ``Token'' (i.e., codes generated by a hardware/security token) is the least common technology, used by only $24.20\%$ ($53/219$). It is interesting to observe that App, despite being the most recent technology, has a higher adoption rate than Token, one of the oldest technologies. This evolution might be related to the fast-increasing number of users owning smartphones, which can serve as a second-factor device that is always with the user. \descr{Different Technologies in Different Contexts.} The three 2F technologies are used differently depending on context (Figure~\ref{fig:technologyvscontext}). In the financial context, Email/SMS is the most popular 2F ($69.42\%$), followed by App ($20.39\%$) and Token ($10.19\%$). In the personal context, Email/SMS is also the most popular ($54.48\%$), followed by App ($29.75\%$) and Token ($15.77\%$). In the work context, Token is the most popular ($45.36\%$), followed by Email/SMS ($39.18\%$) and App ($15.46\%$). A $\chi^2$-test shows that the differences are significant ($\chi^2(4, N=582)=65.18$, $p<0.0001$). No participant reported using 2F in any context other than work/financial/personal (participants could do so via an open-ended question). It is relatively unsurprising that Token is most popular in the work context---an environment with high inertia---while it is noticeable that many users also adopt tokens in the personal context. The analysis of open-ended questions suggests that online gaming is the main field of adoption for Token in the personal context. \begin{figure} \centering \includegraphics[scale=.7]{technologyVsContext.pdf} \caption{Distribution of the use of 2F technologies across contexts.} \label{fig:technologyvscontext} \end{figure} \descr{Different Motivations for Different Technologies.} We find that few participants are incentivized to use 2F -- see Figure~\ref{fig:technologyvsmotivation}. Only $19.73\%$ of Token users, $11.65\%$ of Email/SMS users and $9.25\%$ of App users are incentivized. Indeed, $44.90\%$ of Token users were forced, while $53.18\%$ of App users adopted the technology voluntarily. A $\chi^2$-test shows that the differences are significant ($\chi^2(4, N=775)=14.68$, $p<0.001$).
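For concreteness, the snippet below reproduces the technology-vs-context test of independence with scipy; the contingency counts are reconstructed from the percentages and totals reported above (rounded to integers), so the statistic matches the reported value up to rounding. The other $\chi^2$-tests in this section follow the same recipe.
\begin{verbatim}
# Chi-square test of independence (technology x context), scipy.
from scipy.stats import chi2_contingency

#          financial  work  personal   (counts reconstructed from
counts = [[ 21,  44,  44],             # the reported percentages)
          [143,  38, 152],
          [ 42,  15,  83]]             # rows: Token, Email/SMS, App

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2({dof}, N={sum(map(sum, counts))}) = {chi2:.2f}, p = {p:.2g}")
# -> chi2(4, N=582) ~ 65, p < 0.0001
\end{verbatim}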
No participant reported using 2F for any motivation other than forced/incentive/voluntary (participants could do so via an open-ended question). \begin{figure} \centering \includegraphics[scale=.7]{technologyVsMotivation.pdf} \caption{Distribution of the motivation across 2F technologies.} \label{fig:technologyvsmotivation} \end{figure} \descr{Different Motivations in Different Contexts.} We find that in the work context, $60.84\%$ of participants were forced to use 2F, versus $27.97\%$ of participants using 2F voluntarily. In the personal context, more than half of the participants ($51.26\%$) use 2F voluntarily and $34.73\%$ are forced to. In the financial context, about $45.45\%$ of participants use 2F voluntarily and $42.91\%$ are forced to. Distributions are plotted in Figure~\ref{fig:motivationvscontext}. A $\chi^2$-test shows that the differences are significant ($\chi^2(4, N=775)=29.76$, $p<0.0001$). This result is expected, as users tend to be forced to use 2F at work, and tend to use it voluntarily (opt-in) for personal use. In the financial context, the distribution is roughly even. \begin{figure} \centering \includegraphics[scale=.8]{motivationVsContext.pdf} \caption{Distribution of the motivation across contexts.} \label{fig:motivationvscontext} \end{figure} \begin{table}[ttt] \begin{center} \small \begin{tabular}{| c | c | c |} \hline & \textbf{Female} & \textbf{Male} \\ \hline App Users & 31 & 71 \\\hline Non-App & 53 & 64 \\\hline \end{tabular} \end{center} \vspace{0.1cm} \caption{Distribution of gender across use of App-based 2F.}\label{tab:gender} \end{table} \descr{Gender Differences.} While there is no gender difference in terms of adoption rate for Token and Email/SMS, male users adopt App-based 2F more than female users -- see Table~\ref{tab:gender}. A $\chi^2$-test shows that the difference is significant ($p<0.05$). \descr{2F for Online Gaming.} As mentioned earlier, we also asked participants to list services and websites for which they used 2F. Surprisingly, we find that in the personal context, in addition to personal email, document sharing, and social networking sites, participants also used 2F for online gaming, e.g., on Battle.net, Diablo 3, World of Warcraft, Blizzard Entertainment, and swtor.com.
\subsection{Exploratory Factor Analysis} \begin{table*}[ttt] \begin{center} \small \begin{tabular}{| c | | c | c | c | c |} \hline & \multicolumn{3}{c|}{\textbf{Loadings}} & \\ \hline & \textbf{Factor 1: Ease of Use} & \textbf{Factor 2: Cognitive Efforts} & \textbf{Factor 3: Trust} & \textbf{Communality} \\ \hline \hline \textsf{\small Convenient} & \textbf{0.91} & 0.05 & -0.02 & 0.77 \\ \hline \textsf{\small Quick} & \textbf{0.84} & -0.12 & -0.15 & 0.67 \\ \hline \textsf{\small Enjoy} & \textbf{0.77} & 0.15 & 0.12 & 0.63 \\ \hline \textsf{\small Reuse} & \textbf{0.75} & 0.04 & 0.19 & 0.75 \\ \hline \textsf{\small Helpful} & \textbf{0.72} & 0.02 & 0.17 & 0.69\\ \hline \textsf{\small No Enjoy} & \textbf{-0.52} & 0.22 & -0.16 & 0.55 \\ \hline \textsf{\small User Friendly} & \textbf{0.42} & -0.19 & 0.37 & 0.74 \\ \hline \textsf{\small Need Instructions} & 0.15 & \textbf{0.80} & -0.12 & 0.60\\ \hline \textsf{\small Concentrate} & 0.03 & \textbf{0.64} & 0.14 & 0.38 \\ \hline \textsf{\small Stressful} & -0.41 & \textbf{0.51} & 0.01 & 0.59 \\ \hline \textsf{\small Match} & -0.30 & \textbf{0.42} & -0.15 & 0.47 \\ \hline \textsf{\small Frustrating} & -0.47 & \textbf{0.47} & 0.00 & 0.63 \\ \hline \textsf{\small Trust} & 0.08 & -0.04 & \textbf{0.80} & 0.74\\ \hline \textsf{\small Secure} & -0.02 & 0.03 & \textbf{0.82} & 0.82\\ \hline \textsf{\small Easy} & 0.27 & -0.28 & 0.31 & 0.44 \\ \hline \hline Eigenvalues & 7.52 & 1.78 & 1.03 & \\ \hline \% of Variance & 32 & 15 & 14 & \\ \hline Total Variance & & 61\% & &\\ \hline \end{tabular} \end{center} \vspace{0.2cm} \caption{Factor Analysis Table.}\label{tab:factor} \end{table*} While SUS is a generic usability measure, we argue that 2F technologies rely on a unique combination of hardware and software that SUS may fail to capture. Motivated by this shortcoming, previous work~\cite{bonneau2012quest,braz2006security,gunson2011user,karole2011comparative,weir2009user,weir2010usable} considered a series of questions and parameters to evaluate the usability of 2F schemes. In order to obtain key elements central to the understanding of the usability of 2F, we perform an exploratory factor analysis (using the aforementioned $15$ Likert questions). We factor-analyze our questions using Principal Component Analysis (PCA) with Varimax (orthogonal) rotation. Items with loadings $< 0.4$ are excluded. The analysis yields three factors explaining a total of $61\%$ of the variance for the entire set of variables. These factors are independent of each other (i.e., they are not correlated). Factor $1$ is labeled \textbf{\em Ease of Use} (EaseUse for short) due to the high loadings of the following items: \textsf{\small Convenient}, \textsf{\small Quick}, \textsf{\small Enjoy}, \textsf{\small Reuse}, \textsf{\small Helpful}, \textsf{\small No Enjoy} (with a negative loading), and \textsf{\small User Friendly}. This first factor explains $32\%$ of the variance. The second derived factor is labeled \textbf{\em Cognitive Efforts} (CogEfforts for short). This factor is labeled as such due to the high loadings of the following items: \textsf{\small Need Instructions}, \textsf{\small Concentrate}, \textsf{\small Stressful}, \textsf{\small Match}, and \textsf{\small Frustrating}. The variance explained by this factor is $15\%$. The third derived factor is labeled \textbf{\em Trustworthiness}. This factor is labeled as such due to the high loadings of the following items: \textsf{\small Trust} and \textsf{\small Secure}. The variance explained by this factor is $14\%$ (Table~\ref{tab:factor}).
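A sketch of this factor-analysis pipeline, using the third-party \texttt{factor\_analyzer} package, is shown below; the data-frame layout (one Likert item per column, one row per participant--technology pair) is assumed for illustration.
\begin{verbatim}
# EFA sketch: principal components with Varimax rotation, |loading|
# >= 0.4 retained. Requires `pip install factor_analyzer pandas`.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

def run_efa(likert: pd.DataFrame, n_factors: int = 3):
    _, kmo_model = calculate_kmo(likert)   # sampling adequacy (KMO)
    print(f"KMO = {kmo_model:.2f}")
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                        method="principal")
    fa.fit(likert)
    loadings = pd.DataFrame(fa.loadings_, index=likert.columns,
                            columns=[f"F{i+1}" for i in range(n_factors)])
    print(loadings.where(loadings.abs() >= 0.4))  # suppress small loadings
    print("proportion of variance:", fa.get_factor_variance()[1])
    return fa
\end{verbatim}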
The communalities of the variables included are rather low overall, with one variable having only a small amount of variance (\textsf{\small Concentrate}, $38\%$) in common with the other variables in the analysis. This may indicate that the variables chosen for this analysis are only weakly related to one another. However, the KMO test indicates that the set of variables is at least adequately related for factor analysis. In conclusion, we have identified three clear factors among participants: ease of use, required cognitive efforts, and trustworthiness. \begin{figure}[ttt] \centering \includegraphics[width=0.7\linewidth]{lines.pdf} \caption{Overview of usability measures of different 2F technologies. The x-axis lists the different factors considered and the y-axis gives the average score on the 7-point Likert scale.} \label{fig:overview} \end{figure} \descr{Overview of Usability Measures.} In Figure~\ref{fig:overview}, we show the average usability measures for different 2F technologies. We obtain similar ratings for different technologies. The average SUS score is around 5.8, EaseUse is about 5.4, CogEfforts is 2.4, and Trustworthiness is around 6. We observe that SUS is high for all 2F technologies: converting the SUS score to a percentage scale, overall SUS is more than $80\%$, which is considered ``Grade A'' usability~\cite{sauro2012quantifying}. In addition, SUS is correlated with EaseUse ($r=0.8$). Next, we look into factors that influence 2F usability measures. \descr{Comparison among Different 2F Technologies.} We now compare the usability of different 2F technologies, taking into consideration the context in which they are used and the characteristics of the individuals who used them, including age, gender and whether they have a computer science background (``CS\_Back''). Some participants only use one of the 2F technologies and others use more than one. To compare the usability of different technologies, we split participants into $7$ subgroups (Table~\ref{tab:usage}) and performed the analysis on each subgroup. Since there are not enough participants in Groups $1$, $3$, and $5$~(Table~\ref{tab:usage}), these groups were not analyzed. For usability measures, we use the three newly derived factors (introduced above): ``EaseUse'' ($\alpha = 0.92$), ``CogEfforts'' ($\alpha = 0.74$), and ``Trustworthiness'' ($\alpha = 0.81$). \descr{Email/SMS Users.} 77 participants only use Email/SMS as their 2F technology, with 13 of them having a CS background. To compare the usability measures for participants who only use Email/SMS (Group 2), we ran a MANOVA with one between-subjects factor, computer science background (CS\_Back vs.\ non-CS\_Back), and age, gender and context as covariates. The dependent variables were the three usability measures. Using Pillai's trace, CS\_Back ($V= 0.07, F(3,124)=3.04, p=0.02$) was a significant factor. Age, gender and context were not significant. We conduct further analysis to test the effects of computer science background on the 3 usability dimensions using the Mann-Whitney U test (a Shapiro-Wilk test shows that our data are not normally distributed). We used Bonferroni-adjusted alpha levels of $0.0167$ per test ($0.05/3$). Results indicate that participants without a computer science background (EaseUse $Md= 5.88$) find Email/SMS to be easier to use than participants with a computer science background (EaseUse $Md = 4.88$, $U = 1792$, $p = 0.001$).
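The shape of these follow-up tests is sketched below (scipy); the data structures and group contents are illustrative.
\begin{verbatim}
# Shapiro-Wilk normality check, then Mann-Whitney U per usability
# dimension with Bonferroni-adjusted alpha = 0.05 / 3.
from scipy.stats import shapiro, mannwhitneyu

def compare_groups(group_a, group_b,
                   dims=("EaseUse", "CogEfforts", "Trustworthiness"),
                   alpha=0.05):
    adj_alpha = alpha / len(dims)              # 0.0167 per test
    for dim in dims:
        a, b = group_a[dim], group_b[dim]      # lists of scores
        if min(shapiro(a).pvalue, shapiro(b).pvalue) < 0.05:
            # data not normal -> non-parametric test
            u, p = mannwhitneyu(a, b, alternative="two-sided")
            print(f"{dim}: U={u:.0f}, p={p:.3f}, "
                  f"significant={p < adj_alpha}")
\end{verbatim}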
\descr{Token \& Email/SMS Users.} 29 participants used both Token and Email/SMS 2F technologies, with 7 having a computer science background. To compare the usability measures for participants who use both Token and Email/SMS (Group 4), we ran a one-way (Technology: Token vs. Email/SMS) within-subjects MANOVA, with age, gender and context as covariates. CS\_Back was not included in the analysis because there were not enough participants with a CS background. No main effect of technology was found. Age was a significant covariate ($V= 0.13, F(3,63)=3.12, p=0.03$). Similarly, we test the effects of age on the 3 usability dimensions using the Mann-Whitney U test and Bonferroni-adjusted alpha levels of $0.0167$ per test ($0.05/3$). Results show that older participants in Group 4 (age above the median age of $33$; $N = 12$, CogEfforts $Md = 3$) require more cognitive effort to use 2F technology than younger participants ($N=17$, CogEfforts $Md = 2, U = 873, p = 0.003$). \descr{Email/SMS \& App Users.} 50 participants used both Email/SMS and App (Group 6), with only 5 having a computer science background. To compare the usability measures for participants who use both Email/SMS and App (Group 6), we ran a one-way (Technology: Email/SMS vs. App) within-subjects MANOVA, with age, gender and context as covariates. CS\_Back was not included in the MANOVA because there were not enough participants. Similar to the results in Group 4, no main effect of technology was found. Age was a significant covariate ($V= 0.13, F(3,63)=3.12, p=0.03$). Similarly, we test the effects of age on the 3 usability dimensions using the Mann-Whitney U test and Bonferroni-adjusted alpha levels of $0.0167$ per test ($0.05/3$). Results show that older participants find 2F technology less trustworthy (Trustworthiness $Md = 5.5$) than younger participants (Trustworthiness $Md=6.0$, $U=2755$, $p = 0.007$). \descr{Token, Email/SMS \& App Users.} 41 participants used all of Token, Email/SMS and App, with 17 having a CS background. To compare the usability measures for participants in this group, we ran a 3 (Technology: Token vs. Email/SMS vs. App) $\times$ 2 (CS\_Back vs. non-CS\_Back) MANOVA, with Technology as a within-subjects variable and CS\_Back as a between-subjects variable, and age, gender and context as covariates. Technology and CS\_Back were not significant. Gender was a significant factor ($V= 0.12, F(3,168) = 7.44, p = 0.0001$). Similarly, we test the effects of gender on the 3 usability dimensions using the Mann-Whitney U test and Bonferroni-adjusted alpha levels of $0.0167$ per test ($0.05/3$). Female users (CogEfforts $Md = 2.75, N = 12$) required more cognitive effort than male users (CogEfforts $Md = 2.00, U= 4124, p = 0.001, N = 29$). \subsection{Analysis of Open-Ended Questions} For each 2F technology, we asked users to answer a few open-ended questions about the services/websites where they used 2F and the issues they encountered. As mentioned earlier, security tokens tend to be used for work, finance, and personal websites. Interestingly, users often rely on tokens to protect their online gaming accounts: the fear of losing their gaming profile is high enough for users to adopt 2F. Users complain that the authentication process is often prone to failure (``The authentication to the server was down.''), is time sensitive (``Sometimes, during the code rollover, you'd end up with a mismatch and have to start the whole process over''), and that problem resolution is complicated (``If I made three mistakes entering my code, I had to call the state help desk to have my PIN reset'').
Email/SMS codes have, overall, a wide variety of use cases, but were most frequently used with banks as well as with Facebook, Google, and Paypal. People complained about specific issues with codes expiring or failing to be received, especially while traveling abroad. For instance, a number of users complained about SMS not working abroad (``Sometimes it wouldn't send'', ``My husband changed his phone number when moving to the US and had a lot of problems getting things.'', ``Sometimes I am unable to receive a code if I am overseas. In that case, I have to call a toll free number or e-mail customer support to receive the code via e-mail instead of text.''), and, again, about difficult problem resolution (``The passcode they sent me didn't work and I had to call them to get a new one. It was very frustrating.''). Finally, we noticed that enterprises rely on (mostly proprietary) security tokens (e.g., RSA/Verisign tokens) for authentication to corporate networks in the workplace. Also, smartphone apps (e.g., Google Authenticator) are mostly used by customers who opt in to 2F with online service providers, such as Google, Dropbox, or Facebook. \section{Discussion}\label{sec:discussion} We now discuss the findings drawn from our exploratory study, and highlight items for future work. \descr{Adoption.} 2F technologies are adopted at different rates, depending on {\em contexts} and {\em motivations}. Specifically, in the work environment, codes generated by security tokens constitute the most used second factor of authentication. Codes received via email or SMS are most popular in the financial and personal contexts. Also, few users receive incentives to adopt 2F, while many utilize security tokens because they are forced to, or decide to opt in to dedicated smartphone apps. \descr{Usability.} User perceptions of the usability of 2F are often correlated with their individual characteristics (e.g., age, gender, background), rather than with the actual technology or the context/motivation in which it is used. We find that, overall, 2F technologies are perceived as highly usable, with little difference among them, even when they are used with different motivations and in different contexts. This seems to contrast with prior work on password policies~\cite{inglesant2010true}, which showed that context of use impacts the ability of users to become familiar with complex passwords and, ultimately, affects their usability. One possible explanation, supported by participants' responses to open-ended questions, is that most 2F users do not need to provide the second authentication factor very often. For instance, some financial institutions (e.g., Chase and Bank of America) or service providers (such as Google and Facebook) only require the second factor to be entered if a user is authenticating from an unrecognized device (e.g., from a new location or after clearing cookies). \descr{Trustworthiness.} Another relevant finding is that users' perception of trustworthiness is not negatively correlated with ease of use and required cognitive effort, somewhat in contrast to prior work~\cite{braz2006security,gunson2011user}. We find that 2F technologies perceived as more trustworthy are not necessarily less usable. One possible explanation is that prior work mostly compared 2F with passwords.
\descr{Impact.} We argue that our comparative analysis is essential to begin assessing attitudes and perceptions of 2F users, identifying causes of friction, driving user-centered design of usable 2F technologies, and informing future usable security research. Note that, in many cases, authentication based on passwords only is actually not an option (e.g., for corporate VPN access, or for some financial services), and thus more usable 2F technologies in that context should be favored to avoid friction~\cite{adams1999users}, negative impact on productivity~\cite{strouble2009productivity,inglesant2010true}, as well as driving users to circumvent authentication policies they perceive as unnecessarily stringent~\cite{inglesant2010true}. Similarly, when users have the choice to opt in, adoption rates will likely depend on 2F usability. \descr{Limitations and future work.} We acknowledge that our work presents some limitations and leaves a few items to future work. First, it is based on a survey of 219 MTurk users, who, arguably, might be more computer savvy and might adopt 2F more than the general population. Second, some of the points raised by our analysis -- such as the non-correlation of usability and context/motivation of use, as well as the usability metrics derived by our factor analysis -- should be validated by open-ended interviews and qualitative studies. Indeed, our current and future work includes the design of a real-world user study building on the experience and the findings from this work. \section{Conclusions} This paper presented an exploratory comparative study of two-factor authentication (2F) technologies. First, we reported on pre-study interviews involving 9 participants, intended to identify popular 2F technologies as well as how they are used, when, where, and why. Next, we designed and administered an online survey to 219 Mechanical Turk users, aiming to measure the usability of a few popular 2F technologies: one-time codes generated by security tokens, one-time PINs received via SMS or email, and dedicated smartphone apps. We also recorded contexts and motivations, and studied their impact on the perceived usability of different 2F technologies. We considered participants who used specific 2F technologies, either because they were forced to or because they wanted to. We presented an exploratory factor analysis of a series of parameters, including some suggested by previous work, and showed that ease of use, trustworthiness, and required cognitive effort are the three key aspects defining 2F usability. Finally, we showed that differences in the usage and perception of 2F depend more on individual characteristics than on the actual technologies or contexts of use. We considered a few characteristics, such as age, gender and computer science background, and obtained a few insights into user preferences. Our preliminary study is essential to guide and inform the design of follow-up qualitative studies, which we plan to conduct as part of future work. \balance \bibliographystyle{abbrv}
\section{Introduction} Over the last two years, as part of our Domain Wall Fermion (DWF) physics programme, we have been studying the $K\rightarrow \pi \ell \nu_\ell$ ($K_{\ell 3}$) form factor at zero momentum transfer. Since the experimental rate for $K_{\ell 3}$ decays is proportional to $|V_{us}|^2 |f_{K\pi}^+(0)|^2$, a lattice calculation of the form factor, $f_{K\pi}^+(q^2)$ at $q^2=0$, provides an excellent avenue for the determination of the Cabibbo-Kobayashi-Maskawa (CKM) \cite{Cabibbo:1963yz} quark mixing matrix element, $|V_{us}|$. The uncertainty in the unitarity relation of the CKM matrix $ \left|V_{ud}\right|^2 + \left|V_{us}\right|^2 = 1 $ (we ignore $\left|V_{ub}\right|$ since it is very small) is dominated by the precision of $\left|V_{us}\right|$. In Fig.~\ref{fig:vusvud} we show the latest determinations of $|V_{ud}|$ \cite{Yao:2006px} and $\left|V_{us}\right|$ \cite{Boyle:2007qe}. For comparison, we also show the unitarity relation. Since it is important to establish unitarity with the best precision possible, it is essential that we decrease the error in $|V_{us}|$. \begin{wrapfigure}{r}{0.45\textwidth} \begin{center} \includegraphics[width=0.45\textwidth]{VusVudplot_3_edited.eps} \caption{Bands showing the current limits on $|V_{ud}|$ \cite{Yao:2006px}, and $|V_{us}|$\cite{Boyle:2007qe}.} \label{fig:vusvud} \end{center} \end{wrapfigure} The value of $f_{K\pi}^+(0)$ used in determining $\left|V_{us}\right|$ in Fig.~\ref{fig:vusvud} was determined using standard methods \cite{Becirevic:2004ya,Dawson:2006qc} involving periodic boundary conditions in the recent paper \cite{Boyle:2007qe}. There, the $K_{\ell 3}$ form factor is calculated at $q^2_{max}=(m_K - m_{\pi})^2$ and several negative values of $q^2$ for a variety of quark masses. This allows for an interpolation of the results to $q^2=0$. The form factor is then chirally extrapolated to the physical pion and kaon masses. The final result for $f_{K\pi}^+(0)$ quoted is then \cite{Boyle:2007qe} $ f_{K\pi}^+(0) = 0.9644(33)(34)(14) $ where the first error is statistical, and the second and third are estimates of the systematic errors due to the choice of parametrisation for the interpolation and lattice artefacts, respectively. This gives us a value of $\left|V_{us}\right|=0.2249(14)$. More recently, we have developed a method that uses partially twisted boundary conditions to calculate the $K_{\ell 3}$ form factor directly at $q^2=0$ \cite{Boyle:2007wg}, thereby removing the systematic error due to the choice of parametrisation for the interpolation in $q^2$. We have also used partially twisted bc's to calculate the pion form factor at values of $q^2$ below the minimum value obtainable with periodic bc's. In contrast to recent studies, this allows for a direct evaluation of the charge radius of the pion. The method was developed and tested in \cite{Boyle:2007wg} and is now applied in a simulation with parameters much closer to the physical point. In this paper we discuss our findings for the pion form factor from \cite{Boyle:2008yd} and our progress in improving the precision of our result for $f_{K\pi}^+(0)$ from \cite{Boyle:2007qe} using partially twisted boundary conditions.
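The arithmetic of the extraction itself is simple enough to script; in the sketch below the experimental product $|V_{us}|\,f_{K\pi}^+(0)$ is inferred from the numbers quoted above (it is not an independent input here), and $|V_{ud}|$ is the 2006 PDG value.
\begin{verbatim}
# |V_us| from the lattice f+(0), plus a first-row unitarity check.
f_plus = 0.9644      # lattice f_{K pi}^+(0), from Boyle et al. 2007
Vus_fp = 0.2169      # |V_us| f+(0) implied by |V_us| = 0.2249
V_ud   = 0.97377     # PDG 2006

V_us = Vus_fp / f_plus
print(f"|V_us| = {V_us:.4f}")                            # 0.2249
print(f"|V_ud|^2 + |V_us|^2 = {V_ud**2 + V_us**2:.4f}")  # ~0.999
\end{verbatim}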
\section{Simulation Parameters} The computations are performed using an ensemble with light quark mass $am_u=am_d=0.005$ and strange quark mass $am_s=0.04$ from a set of $N_f=2+1$ flavour DWF configurations with $(L/a)^3\times T/a\times L_s=24^3\times 64\times 16$, which were jointly generated by the UKQCD/RBC collaborations \cite{Allton:2008pn} using the QCDOC computer. The gauge configurations were generated with the Iwasaki gauge action with an inverse lattice spacing of $a^{-1}=1.729(28)\,\mathrm{GeV}$. The resulting pion and kaon masses are $m_\pi \approx 330\,\mathrm{MeV}$ and $m_K \approx 575\,\mathrm{MeV}$, respectively. In this work we use single time-slice stochastic sources \cite{Boyle:2008rh}, for which the elements of the source are randomly drawn from a distribution $\mathcal{D}=\mathbb{Z}(2)\otimes \mathbb{Z}(2)$, which contains random $\mathbb{Z}(2)$ numbers in both its real and imaginary parts. With sources of this form we find that the computational cost of calculating quark propagators is reduced by a factor of 12. For more details on the simulations, see \cite{Boyle:2008yd}. \section{The Form Factors} \label{sec:ff} Here we briefly outline the main features of our method and we refer the reader to our earlier papers for more details \cite{Boyle:2007qe,Boyle:2007wg,Boyle:2008yd}. The matrix element of the vector current between initial and final state pseudoscalar mesons $P_i$ and $P_f$ is in general decomposed into two invariant form factors: \begin{equation} \langle {P_f(p_f)}|V_{\mu}| {P_i(p_i)}\rangle = f^+_{P_iP_f}(q^2)(p_i+p_f)_{\mu}+f^-_{P_iP_f}(q^2)(p_i-p_f)_{\mu}, \label{eq:me} \end{equation} where $q^2=-Q^2=(p_i-p_f)^2$. For $K \rightarrow \pi$, $V_{\mu} = \bar{s}\gamma_{\mu}u$, $P_i = K$ and $P_f = \pi$. For $\pi \rightarrow \pi$, $V_{\mu} = \frac{2}{3}\bar{u}\gamma_{\mu}u-\frac{1}{3}\bar{d}\gamma_{\mu}d$, $P_i = P_f= \pi$ and from vector current conservation, $f^-_{\pi\pi}(q^2)=0$. The form factors $f^+_{P_iP_f}(q^2)$ and $f^-_{P_iP_f}(q^2)$ contain the non-perturbative QCD effects and hence are ideally suited for a determination in lattice QCD. In a finite volume with spatial extent $L$ and periodic boundary conditions for the quark fields, momenta are discretised in units of $2\pi/L$. As a result, the minimum non-zero value of $Q^2=-q^2$ for the pion form factor in our simulation is $Q^2_{\rm min}\approx 0.15\ {\rm GeV}^2$, while for the $K\to\pi$ form factor \begin{equation} q^2=(E_K(\vec{p}_i)-E_\pi(\vec{p}_f))^2 - (\vec{p}_i - \vec{p}_f)^2\ . \end{equation} For $\vec{p}_i=0$ and $2\pi/L$ with $\vec{p}_f=0$, we have $q^2\approx 0.06\ {\rm GeV}^2$ and $-0.05\ {\rm GeV}^2$, respectively, necessitating an interpolation in order to extract the form factor, $f_{K\pi}^+$, at $q^2=0$. In order to reach small momentum transfers for the pion form factor and $q^2=0$ for the $K\to\pi$ form factors, we use partially twisted boundary conditions \cite{Sachrajda:2004mi,Bedaque:2004ax}, combining gauge field configurations generated with sea quarks obeying periodic boundary conditions with valence quarks obeying twisted boundary conditions \cite{Sachrajda:2004mi,Bedaque:2004ax,Bedaque:2004kc,deDivitiis:2004kq,Tiburzi:2005hg,Flynn:2005in,Guadagnoli:2005be}. The valence quarks, $q$, satisfy \begin{equation} q(x_k+L) = e^{i\theta_k}q(x_k),\qquad (k=1,2,3)\,, \end{equation} where $\vec{\theta}$ is the twisting angle.
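The kinematics quoted above are easily checked numerically; the short script below (a sketch, using this ensemble's parameters and the continuum dispersion relation) reproduces the quoted $q^2$ values.
\begin{verbatim}
# Fourier-momentum kinematics for this ensemble.
import math

a_inv, L  = 1.729, 24             # GeV; spatial sites
m_pi, m_K = 0.330, 0.575          # GeV
p_unit = 2 * math.pi / L * a_inv  # one unit of lattice momentum, GeV

def E(m, p):                      # continuum dispersion relation
    return math.sqrt(m * m + p * p)

def q2(m_i, p_i, m_f, p_f):       # momentum transfer squared
    return (E(m_i, p_i) - E(m_f, p_f)) ** 2 - (p_i - p_f) ** 2

print(q2(m_K, 0.0,     m_pi, 0.0))  # q^2_max = (m_K - m_pi)^2 ~ 0.060
print(q2(m_K, p_unit,  m_pi, 0.0))  # ~ -0.04 GeV^2 (quoted ~ -0.05)
print(q2(m_pi, p_unit, m_pi, 0.0))  # pion: q^2_min ~ -0.15 GeV^2
\end{verbatim}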
\begin{center}\begin{picture}(120,60)(-60,-30) \ArrowLine(-50,0)(-25,0) \ArrowLine(25,0)(50,0)\Oval(0,0)(12,25)(0) \GCirc(-25,0){3}{0.5}\GCirc(25,0){3}{0.5} \GCirc(0,12){3}{0.5} \Text(-19,12)[b]{$q_2$}\Text(19,12)[b]{$q_1$} \Text(0,-15)[t]{$q_3$}\Text(0,17)[b]{$V_\mu$}\Text(-54,0)[r]{$P_i$} \Text(54,0)[l]{$P_f$}\ArrowLine(0.5,-12)(-0.5,-12) \end{picture}\end{center} Our method is described in detail in \cite{Boyle:2007wg,Boyle:2008yd} and proceeds by setting $\vec{\theta}=0$ for the spectator quark, denoted by $q_3$ in the above diagram. We are then able to vary the twisting angles, $\vec{\theta}_i$ and $\vec{\theta}_f$, of the quarks before $(q_2)$ and after $(q_1)$ the insertion of the current, respectively. The momentum transfer between the initial and final state mesons is now \begin{equation} q^2=(E_i(\vec{p}_i,\vec{\theta}_i)-E_f(\vec{p}_f,\vec{\theta}_f))^2 - ((\vec{p}_i+\vec{\theta}_i/L) - (\vec{p}_f+\vec{\theta}_f/L))^2\ , \end{equation} where $E(\vec{p},\vec{\theta})=\sqrt{m^2+(\vec{p}+\vec{\theta}/L)^2}$. Hence it is possible to choose $\vec{\theta}_i$ and $\vec{\theta}_f$ such that $q^2=0$; from now on we refer to these as $\vec{\theta}_K$ and $\vec{\theta}_\pi$ when we twist a quark in the kaon or the pion, respectively. In order to extract the matrix elements (\ref{eq:me}) from a lattice simulation, we consider ratios of three- and two-point correlation functions. For the pion form factor, we consider the ratios given in Eqs.~(3.4) and (3.5) in \cite{Boyle:2008yd}, while for the $K\to\pi$ form factors, we consider the following ratios \begin{equation}\label{eq:ratios} \begin{array}{rcl} R_{1,\,P_iP_f}(\vec{p}_i,\vec{p}_f)&=& 4\sqrt{E_i E_f}\, \sqrt{\frac {C_{P_iP_f}(t,\vec p_i,\vec p_f)\,C_{P_fP_i}(t,\vec p_f,\vec p_i)} {C_{P_i}(t_{\rm sink},\vec p_i)\,C_{P_f}(t_{\rm sink},\vec p_f)}}, \\[4mm] R_{3,\,P_iP_f}(\vec{p}_i,\vec{p}_f)&=& 4{\sqrt{E_i E_f}}\, \frac{C_{P_iP_f}(t,\vec p_i,\vec p_{f})}{C_{P_f}(t_{\rm sink},\vec p_f)}\, \sqrt{ \frac{C_{P_i}(t_{\rm sink}-t,\vec p_i)\,C_{P_f}(t,\vec p_f)\,C_{P_f}(t_{\rm sink},\vec p_f)} {C_{P_f}(t_{\rm sink}-t,\vec p_f)\,C_{P_i}(t,\vec p_i)\,C_{P_i}(t_{\rm sink},\vec p_i)}}\,. \end{array} \end{equation} We deviate slightly from the method outlined in \cite{Boyle:2007wg} for extracting $f_{K\pi}^0(0)$ from the ratios. Previously we considered only the time-component of the vector current and solved for $f_{K\pi}^0(0)=f_{K\pi}^+(0)$ via the linear combination \begin{equation}\label{eq:lin_comb} f_{K\pi}^0(0)=\frac{ R_{\alpha,K\pi}(\vec{p}_K,\vec{0})(m_K-E_\pi) - R_{\alpha,K\pi}(\vec{0},\vec{p}_\pi)(E_K-m_\pi) }{ (E_K+m_\pi)(m_K-E_\pi)-(m_K+E_\pi)(E_K-m_\pi) }\qquad(\alpha=1,3)\,. \end{equation} This, however, is just one of many expressions obtained by solving the system of simultaneous equations that results when we consider all components of the vector current, $V_\mu$, rather than just the $V_4$ component considered in \cite{Boyle:2007wg} \begin{eqnarray} R_{\alpha,K\pi}(\vec{\theta}_K,\vec{0},V_4) &=& f_{K\pi}^+(0)\,(E_K+m_\pi) + f_{K\pi}^-(0)\,(E_K-m_\pi)\nonumber\\ R_{\alpha,K\pi}(\vec{0},\vec{\theta}_\pi,V_4) &=& f_{K\pi}^+(0)\,(m_K+E_\pi) + f_{K\pi}^-(0)\,(m_K-E_\pi)\nonumber\\ R_{\alpha,K\pi}(\vec{\theta}_K,\vec{0},V_i) &=& f_{K\pi}^+(0)\,\theta_{K,i} + f_{K\pi}^-(0)\,\theta_{K,i}\nonumber\\ R_{\alpha,K\pi}(\vec{0},\vec{\theta}_\pi,V_i) &=& f_{K\pi}^+(0)\,\theta_{\pi,i} - f_{K\pi}^-(0)\,\theta_{\pi,i}\ . \end{eqnarray} We can now proceed to solve this overdetermined system of equations via $\chi^2$ minimisation.
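A minimal numerical sketch of this procedure is given below: the energies of the twisted mesons follow from the $q^2=0$ condition, and the overdetermined system is solved by weighted least squares. The ratio values and errors are illustrative placeholders, not our data.
\begin{verbatim}
# chi^2 (weighted least-squares) solution for f+(0) and f-(0).
import numpy as np

m_pi, m_K = 0.330, 0.575                    # GeV (this ensemble)

# q^2 = 0 fixes the energy of the moving meson:
# (E - m_rest)^2 = |p|^2  =>  E = (m_K^2 + m_pi^2) / (2 m_rest)
E_K  = (m_K**2 + m_pi**2) / (2 * m_pi)      # kaon twisted, pion at rest
E_pi = (m_K**2 + m_pi**2) / (2 * m_K)       # pion twisted, kaon at rest
pK  = np.full(3, np.sqrt(E_K**2  - m_K**2)  / np.sqrt(3))  # theta_K / L
pPi = np.full(3, np.sqrt(E_pi**2 - m_pi**2) / np.sqrt(3))  # theta_pi / L

A = np.vstack([[E_K + m_pi, E_K - m_pi],       # V_4, kaon twisted
               [m_K + E_pi, m_K - E_pi],       # V_4, pion twisted
               np.column_stack([pK,   pK]),    # V_i, kaon twisted
               np.column_stack([pPi, -pPi])])  # V_i, pion twisted

R   = np.array([0.93, 0.91, 0.17, 0.17, 0.17, 0.12, 0.12, 0.12])
err = np.full_like(R, 0.02)                 # placeholder ratios/errors

f, *_ = np.linalg.lstsq(A / err[:, None], R / err, rcond=None)
print(f"f+(0) = {f[0]:.3f}, f-(0) = {f[1]:.3f}")
\end{verbatim}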
\section{Pion form factor results} In Fig.~\ref{fig:fpipi} we show our results for the form factor $f^{\pi\pi}(q^2)$ for a pion with $m_\pi=330\,\mathrm{MeV}$ for a range of values of $q^2$, both using periodic bc's and partially twisted bc's (set A and sets B\&C, respectively, in the left plot of the figure). The vertical dashed line indicates the smallest momentum transfer available on this lattice with periodic bc's. The (blue) dashed line is the result of a pole-dominance fit to our data points, while the (red) dot-dashed curve is obtained from the result of QCDSF \cite{Brommel:2006ww} evaluated at $m_\pi=330$~MeV. \begin{figure} \begin{tabular}{lcr} \hspace*{-1mm} \psfrag{xlabel}[c][b][1][0]{\small $Q^2[\,\mathrm{GeV}^2]$} \psfrag{ylabel}[c][t][1][0]{\small $f^{\pi\pi}(q^2)$} \includegraphics[width=0.35\textwidth,angle=-90]{fpipi_pole.eps} && \hspace*{-6mm} \psfrag{xlabel}[c][bc][1][0]{\small $Q^2[\,\mathrm{GeV}^2]$} \psfrag{ylabel}[c][t][1][0]{\small $f^{\pi\pi}(q^2)$} \psfrag{exp}[l][lc][1][0]{\tiny experimental data NA7} \psfrag{330MeV}[l][lc][1][0]{\tiny lattice data for $m_\pi=330\,\mathrm{MeV}$} \psfrag{NLO330MeV}[l][ll][1][0]{\tiny $\mathrm{SU}(2)$ NLO lattice-fit; $m_\pi=330\,\mathrm{MeV}$} \psfrag{NLO139.57MeV}[l][ll][1][0]{\tiny $\mathrm{SU}(2)$ NLO lattice-fit; $m_\pi=139.57\,\mathrm{MeV}$} \psfrag{444OOOOOOOOOOOOOOOOOOOO}[l][lc][1][0]{\tiny $1+\frac 16 \langle r^2_\pi\rangle^{\rm PDG}Q^2$} \includegraphics[width=0.35\textwidth,angle=-90]{final.eps} \end{tabular} \caption{$f^{\pi\pi}(q^2)$ from a $24^3\times 64$ lattice with $m_\pi=330$~MeV using partially twisted bc's.} \label{fig:fpipi} \end{figure} On the right of Fig.~\ref{fig:fpipi} we show a zoom into the low $Q^2=-q^2$ region. The triangles are our lattice data points for a pion with $m_\pi=330\,\mathrm{MeV}$, and the magenta diamonds are experimental data points for the physical pion. Because our values of $Q^2$ are very small, we can apply NLO chiral perturbation theory (ChPT). In NLO ChPT, the pion form factor depends only on a single low energy constant (LEC) ($L_9^r$ for SU(3), or $l_6^r$ for SU(2)) \begin{eqnarray} f^{\pi\pi}_{\mathrm{SU}(2),\mathrm{NLO}}(q^2) &=& 1+\frac1{f^2}\left[ -2l_6^r \,q^2 + 4\tilde{\mathcal{H}}(m_\pi^2,q^2,\mu^2)\right] \label{eq:fpipiSU2}\\ f^{\pi\pi}_{\mathrm{SU}(3),\mathrm{NLO}}(q^2) &=& 1+\frac1{f_0^2}\left[ 4L_9^r \,q^2 + 4\tilde{\mathcal{H}}(m_\pi^2,q^2,\mu^2) + 2\tilde{\mathcal{H}}(m_K^2,q^2,\mu^2)\right] \label{eq:fpipiSU3} \end{eqnarray} where \begin{equation} \tilde{\mathcal{H}}(m^2,q^2,\mu^2) = \frac{m^2 H(q^2/m^2)}{32\pi^2} - \frac{q^2}{192\pi^2}\log\frac{m^2}{\mu^2} \end{equation} and \begin{equation}\label{eq:Hdef} H(x) \equiv -\frac43 + \frac5{18}x - \frac{(x-4)}6 \sqrt{\frac{x-4}x} \log\left(\frac{\sqrt{(x-4)/x}\,+1}{\sqrt{(x-4)/x}\,-1}\right) \end{equation} with $H(x) = -x/6 + O(x^{3/2})$ for small $x$. Provided our pion mass is light enough, we can use the $q^2$ dependence of $f^{\pi\pi}(q^2)$ to extract this LEC. The grey dashed curve on the right-hand side of Fig.~\ref{fig:fpipi} shows our SU(2) fit to the $m_\pi=330\,\mathrm{MeV}$ pion form factor data. Once the LEC is determined from this fit, we insert the physical pion mass in (\ref{eq:fpipiSU2}) to obtain the solid blue curve. We also show the PDG world average \cite{Yao:2006px} for the charge radius as the black dashed line. Our best estimate for the pion charge radius comes from the SU(2) NLO ChPT fit to the three lowest $Q^2$ points and is \begin{equation} \langle r_\pi^2\rangle=0.418(31) \,\rm{fm^2}\ . 
\end{equation} The fact that our result is in agreement with experiment, $\langle r_\pi^2\rangle=0.452(11) \,\rm{fm^2}$ \cite{Yao:2006px}, gives us confidence that we are in a regime where chiral perturbation theory is applicable. \section{$K_{\ell 3}$ form factor results} As explained in Sec.~\ref{sec:ff}, we calculate the $K \rightarrow \pi$ form factor directly at $q^2=0$ by setting the kaon and pion in turn to be at rest, while twisting the other one such that $q^2=0$. We refer to these twist angles as $\theta_\pi$ and $\theta_K$, respectively. We then get the following equations: \begin{eqnarray} \langle K(p_K)|V_\mu| \pi(0)\rangle &=& f_{K\pi}^+(0)p_{K,\mu} - f_{K\pi}^-(0)p_{K,\mu} \nonumber\\ \langle K(0)|V_\mu| \pi(p_{\pi})\rangle &=& f_{K\pi}^+(0)p_{\pi,\mu} + f_{K\pi}^-(0)p_{\pi,\mu} \label{eq:simeq} \end{eqnarray} \begin{wrapfigure}{r}{0.48\textwidth} \includegraphics[width=0.5\textwidth]{f_zero-q2fitdep0.005_64.eps} \caption{$K_{\ell 3}$ form factor, $f_{K\pi}^0(q^2)$, evaluated at $q^2=0$ directly using twisted boundary conditions. Results are compared with data at $q^2\ne 0$ and fits from \cite{Boyle:2007qe}.} \label{fig:kl3} \end{wrapfigure} By simply solving the simultaneous equations for each of the $\mu$ components separately, we find that the errors in $f_{K\pi}^+(0)$ and $f_{K\pi}^-(0)$ are much larger than the errors in the matrix elements. We have managed to circumvent this by looking at all the $\mu$ components simultaneously, and then performing a $\chi^2$ minimisation on the overdetermined system of equations to find the values of $f_{K\pi}^+(0)$ and $f_{K\pi}^-(0)$ that best fit the equations. To obtain the matrix elements (\ref{eq:simeq}), we consider different combinations of $R_1$ and $R_3$ (\ref{eq:ratios}). We find that all combinations lead to consistent results, the best combination using $R_3$ for all matrix elements except the case where the pion is twisted and we are considering the $4^{\rm th}$ component of the vector current. Using this setup, we obtain our preliminary results for $f^+_{K\pi}(0)$ and $f^-_{K\pi}(0)$ (for a pion mass of $m_{\pi}=330\,\mathrm{MeV}$) \begin{equation} f^+_{K\pi}(0) = 0.9742(41)\,,\quad f^-_{K\pi}(0)=-0.113(12)\ . \end{equation} Our result for $f^+_{K\pi}(0)=f^0_{K\pi}(0)$ is shown in Fig.~\ref{fig:kl3}, where we compare with the previous determinations in \cite{Boyle:2007qe}, which used pole $f^+_{pole}(0) = 0.9774(35)$ and quadratic $f^+_{quad}(0) =0.9749(59)$ functions to interpolate between $q^2_{max}$ and negative values of $q^2$. In our previous result, $f_{K\pi}^+(0) = 0.9644(33)(34)(14)$, these were combined, taking a systematic error of (34) for the model dependence. This contribution to the error has been eliminated in our new calculation. We conclude that using partially twisted bc's for the $K_{\ell 3}$ form factor is an improvement on the conventional method, as it removes a source of systematic error while keeping comparable statistical errors. Another source of systematic error in our result in \cite{Boyle:2007qe} is due to the slight difference between our simulated strange quark mass ($am_s+am_{\rm res}\simeq 0.043$) and the physical strange quark mass ($am_s+am_{\rm res}\simeq 0.037$) \cite{Allton:2008pn}, and we are in the process of determining the effect this has on our result through a simulation with a partially quenched strange quark mass of $am_s+am_{\rm res}\simeq 0.033$.
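As an illustration of the chiral fits discussed in the previous section, the NLO expressions (\ref{eq:fpipiSU2})--(\ref{eq:Hdef}) are straightforward to evaluate numerically; in the sketch below the decay constant and the LEC are illustrative placeholders (the LEC is tuned merely to land near the fitted radius), not our fit results.
\begin{verbatim}
# SU(2) NLO ChPT form factor and <r^2> = 6 df/dq^2 at q^2 = 0.
import math

def H(x):                          # Eq. for H(x); spacelike q^2, x < 0
    s = math.sqrt((x - 4.0) / x)
    return (-4.0/3.0 + 5.0/18.0 * x
            - (x - 4.0)/6.0 * s * math.log((s + 1.0) / (s - 1.0)))

def Htilde(m2, q2, mu2):
    return (m2 * H(q2 / m2) / (32 * math.pi**2)
            - q2 / (192 * math.pi**2) * math.log(m2 / mu2))

def f_pipi_su2(q2, m_pi, f, l6r, mu=0.77):  # mu: renorm. scale, GeV
    return 1.0 + (-2.0*l6r*q2 + 4.0*Htilde(m_pi**2, q2, mu**2)) / f**2

m_pi, f, l6r = 0.330, 0.0924, -0.007        # GeV, GeV, illustrative LEC
h = -1e-5                                   # small spacelike q^2, GeV^2
r2 = 6.0 * (f_pipi_su2(h, m_pi, f, l6r) - 1.0) / h   # in GeV^-2
print(r2 * 0.1973**2, "fm^2")               # 1 GeV^-1 = 0.1973 fm
\end{verbatim}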
We also plan to combine our results with the latest expressions from chiral perturbation theory \cite{Flynn:2008tg}. \section*{Acknowledgements} We thank our colleagues in RBC and UKQCD within whose programme this calculation was performed. We thank the QCDOC design team for developing the QCDOC machine and its software. This development and the computers used in this calculation were funded by the U.S. DOE grant DE-FG02-92ER40699, PPARC JIF grant PPA/J/S/1998/0075620 and by RIKEN. We thank the University of Edinburgh, PPARC, RIKEN, BNL and the U.S. DOE for providing the QCDOC facilities used in this calculation. We are very grateful to the Engineering and Physical Sciences Research Council (EPSRC) for a substantial allocation of time on HECToR under the Early User initiative. We thank Arthur Trew, Stephen Booth and other EPCC HECToR staff for assistance and EPCC for computer time and assistance on BlueGene/L. JMF, AJ, HPdL and CTS acknowledge support from STFC Grant PP/D000211/1 and from EU contract MRTN-CT-2006-035482 (Flavianet). PAB, CK, CMM acknowledge support from STFC grant PP/D000238/1. JMZ acknowledges support from STFC Grant PP/F009658/1.
\section{Introduction} \label{introduction} The majority of stellar-mass black holes residing in low mass binary systems spend most of their lives in quiescence. This explains the empirical fact that out of the tens of thousands of such systems predicted to exist throughout our Galaxy \citep[e.g.][]{Yungelson2006}, only about 50 have been discovered. For a number of such systems, it is well established that the outburst -- which resulted in their discovery -- evolves through a number of spectral states characterised by the relative strength of their thermal and non-thermal X-ray emission, with possible differences in the accretion geometry and reflection attributes. These active states can be roughly separated into four semi-distinct states, which are phenomenologically described below and extensively discussed in \citet{Remillard06} and \citet{Bellonibook2010}. At the outset of the outburst, the system goes through what has been traditionally dubbed the low-hard state (LHS), where the X-ray spectrum is dominated by a non-thermal component often simply described by a power-law spectrum (photon index $\Gamma$ between $\sim1.4-2$, with an exponential cut-off at $\sim100$\hbox{$\rm\thinspace keV$}) of relatively low luminosity ($\sim 0.05 L_{\rm Edd}$). The energy spectrum in the LHS peaks near $\sim 100$\hbox{$\rm\thinspace keV$}\ and we often also see a weak thermal component (contributing $<20\%$ of the total 2-20\hbox{$\rm\thinspace keV$}\ flux) with a temperature below $\sim0.5$\hbox{$\rm\thinspace keV$}\ produced by the accretion disk \citep[see e.g.][and references therein]{Reynold2011swift, reislhs}. As the luminosity increases, the spectrum moves through the intermediate state (IS), where the 2-10\hbox{$\rm\thinspace keV$}\ flux is typically a factor of $\sim4$ times higher than that of the LHS. Here, the soft ($\Gamma = 2-3$) power-law tail coexists with a strong thermal component. Recently, the intermediate state has begun to be subdivided into an early Hard-Intermediate State (HIS) and a later Soft-Intermediate State (SIS) just prior to a transition into the canonical High-Soft or thermal state, where the X-ray flux is dominated ($>75\%$ of the total 2-20\hbox{$\rm\thinspace keV$}\ flux) by the thermal radiation from the inner accretion disk having an effective temperature of $\sim1$\hbox{$\rm\thinspace keV$}. In this final state of the outburst, the system usually emits with luminosities $>0.1 L_{\rm Edd}$ and the power-law component is both weak (less than 25\% of the total 2-20\hbox{$\rm\thinspace keV$}\ flux) and steep ($\Gamma = 2-3$). Following the HSS, the system often returns to the LHS and subsequently goes back to quiescence, where it can remain indefinitely or, in some cases, for a few years before this cycle restarts. The hard X-ray emission predominant in the LHS has long been linked to inverse Compton scattering of the soft thermal disk photons by a population of hot ($\sim10^9\hbox{\rm\thinspace K}$) electrons in a cloud of optically thin, ionised gas or ``corona'' surrounding the inner parts of the accretion disk \citep{Shapiro1976,SunyaevTitarchuk1980}. Under the common assumption that the radio emission observed from stellar-mass black holes is directly related to the presence of a jet, it is believed that all such systems, either in the LHS or in transition, launch a collimated outflow \citep[e.g.][]{Fender2001jets, fenderetal04, Fender091}.
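(As an aside, the state definitions above are quantitative enough to condense into a toy classifier; the thresholds below are the approximate values quoted in the text, and real state classifications also rely on timing properties not modelled here.)
\begin{verbatim}
# Toy spectral-state classifier from the criteria quoted above.
def spectral_state(gamma, disk_frac):
    """gamma: power-law photon index; disk_frac: thermal fraction
    of the total 2-20 keV flux (0..1)."""
    if disk_frac > 0.75:
        return "high-soft (thermal) state"
    if disk_frac < 0.2 and 1.4 <= gamma <= 2.0:
        return "low-hard state"
    return "intermediate state (HIS/SIS)"

print(spectral_state(1.6, 0.10))   # low-hard state
print(spectral_state(2.5, 0.80))   # high-soft (thermal) state
\end{verbatim}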
The fact that these persistent jets are observed only in the LHS suggests that the jet is linked to the corona, with claims that the corona in the LHS is indeed the launching point of persistent jets \citep[see e.g.][]{Markoff05}. The connection between the radio (jet) and X-ray flux for both stellar-mass and supermassive black holes \citep{fundamentalplane,GalloFenderPooley2003,fundamentalplane2}, often referred to as ``the fundamental plane'' of black hole accretion, suggests an intimate connection between the corona and radio-jets \citep[see e.g.][]{miller2012cyg}. Whether state transitions are driven by intrinsic changes in $\dot{m}$, physical changes in the disk, the disk--corona coupling, the radio jet, or a combination of all these factors is a matter of much debate. \subsection{Reprocessed X-rays: Reflection} The existence of a hard X-ray source -- the corona -- also adds further complexities to the various spectral states. The reprocessing of these hard X-rays by the relatively cold accretion disk in all active states results in a number of ``reflection features'' consisting of discrete atomic features together with a ``Compton-hump'' peaking at approximately 30\hbox{$\rm\thinspace keV$}. The high fluorescent yield -- and relatively high cosmic abundance -- of iron often results in a particularly strong feature at $\sim 6-7$\hbox{$\rm\thinspace keV$}\ \citep[see e.g.][for a recent review of ``reflection'' in black holes]{Fabianross2010}. The strong irradiation of the black hole accretion disk by the coronal photons likely causes the surface layers to be photoionised. \citet{rossfabian1993} investigated the effect of allowing the gas constituting the top layers of the accretion disk to ionise, and the authors went on to compute reflection spectra for different ionization levels. A number of similar studies of reflection from ionised matter have been conducted since \citep{MattFabianRoss1993, MattFabianRoss1996, rossfabianyoung99,NayakshinKazanasKallman2000, NayakshinKallman2001, cdid2001, GarciaKallman2010, GarciaKallman2011}. These studies demonstrate that the reflection spectrum expected from a black hole depends strongly on the level of ionization of the surface layers of the disk. This can be quantified for a constant density gas by the ionization parameter \begin{equation}\xi = \frac{L_{\rm x}}{nd^2}, \end{equation} where $L_{\rm x}$ is the ionising luminosity of the source, $d$ is the distance between the disk and the source, and $n$ is the density of the disk. Thus an increase in $\xi$, whether by increasing the illuminating flux or by decreasing the density or the distance between the X-ray source and the disk, will cause the gas in the disk to become more ionised. \citet{MattFabianRoss1993, MattFabianRoss1996} split the behaviour of the reflection spectrum into four main regimes depending on the value of $\xi$. For low ionization parameter ($\xi < 100~\hbox{$\rm\thinspace erg~cm~s^{-1}$}$), the material is only weakly ionised and the reflection spectrum resembles that arising from ``cold'' matter, with a prominent iron line at 6.4\hbox{$\rm\thinspace keV$}, and strong absorption below $\approx 10$\hbox{$\rm\thinspace keV$}. There is only a weak contribution from the backscattered continuum at $\approx 6$\hbox{$\rm\thinspace keV$}\ and a weak iron K absorption edge at 7.1\hbox{$\rm\thinspace keV$}.
As the disk becomes more ionised ($100 < \xi < 500~\hbox{$\rm\thinspace erg~cm~s^{-1}$}$) the system reaches an intermediate ionization range where Fe has lost all of its M-shell ($n=3$) electrons and thus exists in the form of FeXVII--FeXXIII with a vacancy in the L-shell of the ion. Due to this vacancy, the L-shell can absorb the $K\alpha$\ line photons and thus effectively trap the escaping photon. This resonance trapping is only terminated when an Auger electron is emitted. This second ionization regime is therefore characterised by a very weak iron line and a moderate iron absorption edge. As the gas becomes more ionised ($500 < \xi < 5000~\hbox{$\rm\thinspace erg~cm~s^{-1}$}$) all low-$Z$ metals are found in their hydrogenic form and the soft reflection spectrum has only weak spectral features. Iron is found mostly in its hydrogen- or helium-like forms (FeXXVI or FeXXV respectively) and, because the ion lacks at least two electrons in the L-shell (i.e. it does not have a full 2s sub-shell), Auger de-excitation cannot occur. The result is strong emission from ``hot'' $K\alpha$\ FeXXV and FeXXVI at 6.67 and 6.97\hbox{$\rm\thinspace keV$}\ respectively and the corresponding absorption edges at approximately 8.85 and 9.28\hbox{$\rm\thinspace keV$}\ respectively. Finally, when $\xi \gg 5000~\hbox{$\rm\thinspace erg~cm~s^{-1}$}$, the disk is highly ionised and there is a distinct absence of any atomic features. A further complication arises in the reflection spectra of stellar mass black holes because in these systems the gas in the accretion disk is inherently X-ray ``hot'', meaning that low-$Z$ metals can be fully ionised in the gas even before receiving any irradiation by the X-ray corona. To account for this extra ``thermal ionization'', \citet{refbhb} performed self-consistent calculations of the reflection resulting from the illumination of the accretion disk by both a hard, powerlaw corona and thermal disk blackbody radiation. The authors compared the results of having the disk both in hydrostatic equilibrium and under the assumption of a constant density atmosphere, and found reasonably good agreement between the two emergent spectra. \citet{refbhb} also confirmed for stellar mass black holes the result previously found for AGN: the spectrum from a constant density disk is slightly diluted (it has a lower flux) in comparison to that of a disk in hydrostatic equilibrium. Furthermore, the authors also noted a few small differences between the models; namely a lower effective ionization parameter in the constant density model, which resulted in a slightly stronger Fe$K\alpha$\ line and a deeper iron K-edge. Nonetheless, the overall spectrum from the constant density approximation was shown to be in good agreement with the result for an atmosphere in hydrostatic equilibrium. The two reflection grids resulting from the work of \citet{reflionx, refbhb} will be used frequently throughout this work.
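As a rough numerical companion to the ionization regimes above, the short Python sketch below (again an illustrative construction of ours, with the regime boundaries copied from the text and purely indicative input numbers) evaluates $\xi$ for a constant density gas and returns the qualitative regime:
\begin{verbatim}
def ionization_parameter(L_x, n, d):
    """xi = L_x / (n * d^2), for ionising luminosity L_x (erg/s),
    gas density n (cm^-3) and source--disk distance d (cm)."""
    return L_x / (n * d**2)

def ionization_regime(xi):
    """Map xi (erg cm/s) onto the four regimes discussed above."""
    if xi < 100:
        return "cold: 6.4 keV Fe line, strong absorption below ~10 keV"
    if xi < 500:
        return "intermediate: Auger trapping, weak line, moderate edge"
    if xi < 5000:
        return "hot: Fe XXV/XXVI lines at 6.67/6.97 keV"
    return "fully ionised: featureless reflection"

# Illustrative numbers only: 1e37 erg/s illuminating gas of density
# 1e21 cm^-3 from a distance of 1e7 cm gives xi = 100 erg cm/s.
xi = ionization_parameter(1e37, 1e21, 1e7)
print(xi, "->", ionization_regime(xi))
\end{verbatim}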
\subsection{General Relativistic Effects: Light bending} Naively, assuming isotropic coronal emission, one would expect variations in the reflection component to be directly correlated with variations in the observed power-law continuum. That is, as the observed flux of the X-ray corona increases, so should the amount of reprocessed emission. However, in a number of instances it has been found that this is not the case, with the reflection component at times behaving in an anticorrelated manner \citep[e.g.][]{Rossi2005j1650} or not varying at all despite large variations in the X-ray powerlaw continuum \citep[e.g.][]{Fabian02MCG, Fabianvaughan03, MiniuttiFabianGoyder2003,BallantyneVaughan2003,LarssonFabian2007}. By virtue of its proximity to the black hole, the emission from the corona is naturally affected by general relativistic (GR) effects. Some of the radiation from the corona which would otherwise escape is gravitationally focused -- ``bent'' -- towards the accretion disk, giving rise to enhanced reflection and selectively decreasing the X-ray continuum at infinity. A number of studies \citep[for instance][]{MartocchiaMatt1996, MartocchiaKaras2000LB, Miniu04, MiniuttiFabianMiller2004j1650, Niedzwiecki2008} have investigated the effect of GR on a compact, centrally concentrated X-ray corona close to a black hole\footnote{Observational evidence for such a compact X-ray corona has recently come from microlensing results where the size of the X-ray emitting region has been shown to be of the order of $\sim10$${\it r}_{\rm g}$\ \citep{ChartasKochanek2009quasar,DaiKochanek2010quasar, ChartasKochanek2012quasar, MorganHainline2012quasar}.}. The ``light-bending'' model put forward by \citet{Miniu04} predicts a number of semi-distinct regimes affecting the variability of the reflection component compared to the X-ray continuum: \begin{description} \item[Regime 1] When the corona is very close to the black hole (a few gravitational radii ${\it r}_{\rm g}$=$GM/c^2$), a large fraction of the radiation is bent onto the accretion disk, thus significantly reducing the amount of observed X-ray continuum and enhancing the reflection. A very steep emissivity profile is expected as the source is highly concentrated in the inner regions, and the reflection is expected to be a steep function of the continuum in a quasi-linear manner. \item[Regime 2] When the central corona is slightly further from the black hole (at heights of $\sim 10$${\it r}_{\rm g}$), light bending causes the reflection component to vary significantly less than the X-ray continuum. The amount of light bent towards the black hole decreases as the corona moves further from the black hole and the X-ray continuum increases. \end{description} Finally, at heights $\gg 20$${\it r}_{\rm g}$, light-bending becomes less important and the observed continuum increases\footnote{In the original paper of \citet{Miniu04} a further regime -- Regime 3 -- was defined at large radii where the reflection and powerlaw flux appeared to be anti-correlated with one another. This was an artefact of having a finite boundary for the disk extent of 100${\it r}_{\rm g}$; instead, the reflected flux should asymptotically become flat with respect to the continuum as a continuation of Regime~2 \citep[e.g.][]{Niedzwiecki2008}.}. In this manner, the presence of gravitational light-bending has been invoked to explain the fact that Seyferts \citep[e.g.][]{FabZog09, Fabian20121h0707} and XRBs \citep[e.g.][]{reismaxi} at times appear to be ``reflection-dominated''. Sources where the observed X-ray spectrum displays a distinct lack (or comparably small amount) of hard, powerlaw-like continuum despite a large contribution of reflection are fully consistent with Regime 1 detailed above.
Of course, the model presented above is idealised in that all characteristics of the observed variability are assumed to be a result of variations in the height of an isotropic and compact corona with a fixed luminosity. Although intrinsic variations in the luminosity of the corona may well be present, it is unlikely that they could solely explain the behavior of the reflected emission described above. Indeed, the clear presence of broad and skewed iron emission lines in a growing number of sources ranging from stellar mass black holes \citep{miller07review,miller09spin, reisspin, Steiner2011, hiemstra1652} to AGNs \citep{tanaka1995,Nandra97, Nandra07, FabZog09,3783p1} (and also, to a lesser extent, neutron stars \citep{cackett08, cackett10, DiSalvo2005170544, disalvo09, reisns}) strongly attests that general relativistic effects play an important role in producing the line profile, further supporting the notion that both the corona and the disk lie in the innermost regions around the black hole. \subsection{\hbox{\rm XTE~J1650--500}} One of the first systems to provide observational evidence for the aforementioned effect of gravitational light bending around a black hole -- and indeed the first around a stellar mass black hole -- was \hbox{\rm XTE~J1650--500}, which was discovered by {\it RXTE}\ on 2001 September 5 as it went into outburst \citep{Remillard2001}. Based on the spectrum obtained early in the decay of the outburst by {\it XMM-Newton}, and more importantly on the presence of a clearly broad iron emission line, \citet{miller02j1650} were able to not only infer that the object in \hbox{\rm XTE~J1650--500}\ was indeed a black hole, but also that it was close to maximally rotating with a dimensionless spin parameter $a$ $\approx0.998$. By decomposing the hard X-ray continuum from the reflection component in three {\it BeppoSAX}\ observations of \hbox{\rm XTE~J1650--500}, \citet{MiniuttiFabianMiller2004j1650} were able to show that the latter remained nearly constant despite a large change in the direct continuum, in a manner consistent with the predictions of light bending around a black hole. Optical observations obtained after the system had returned to near quiescence \citep{Orosz2004J1650} revealed a mass function $f(M) = 2.73\pm 0.56\hbox{$\rm\thinspace M_{\odot}$}$ with a most likely mass of $\sim4\hbox{$\rm\thinspace M_{\odot}$}$, and with it secured \hbox{\rm XTE~J1650--500}\ as a genuine black hole binary system. \citet{CorbelFender2004} reported on the radio and X-ray observations of \hbox{\rm XTE~J1650--500}\ during the outburst. The authors find a clear drop of nearly an order of magnitude in the radio flux at the transition from the hard intermediate state (referred to as the intermediate state in that work) to the soft intermediate state (referred to as the steep power-law state in that work), and, surprisingly, they find residual radio emission during the often radio-quiet disk-dominated soft state, which they attributed to possible emission from previously ejected material interacting with the interstellar medium, rather than originating in the central source. A follow-up study of {\it RXTE}\ data by \citet{Rossi2005j1650} used the iron line as a proxy for the total reflection component and confirmed the plausibility of the light-bending scenario for the evolution of \hbox{\rm XTE~J1650--500}.
Again using data obtained from {\it RXTE}, \citet{Homan2003} reported on the discovery of a $\sim250$Hz QPO together with a number of less coherent variability peaks at lower frequencies. By studying the spectral and timing evolution during the first $\sim 80$ days of the outburst, the authors were able to define six periods (I--VI; in this work referred to as P1--P6) having somewhat distinct spectral and timing characteristics (see their Fig~1 and Table~1). A recent study involving \hbox{\rm XTE~J1650--500}\ has discussed the similarities between X-ray binaries and AGNs \citep{Waltonreis2012} and argued that both \hbox{\rm XTE~J1650--500}\ and the active galaxy MCG--6-30-15 \citep{tanaka1995} must contain a rapidly rotating black hole, with the spin of \hbox{\rm XTE~J1650--500}\ having been formally constrained to $0.84\leq a \leq 0.98$. A further body of work based on {\it RXTE}\ observations and a variety of empirical models for the hard X-ray continuum \citep{YanWang2012} has concluded that the emission region, here referred to as the corona, decreases in size by a factor of $\sim 23$ during the transition from the hard to the soft state. \subsection{This Work} \begin{figure*}[!t] \centering { \rotatebox{0}{ {\includegraphics[height=5.5cm]{fig_lc_bkn1-eps-converted-to.pdf}}}} \hspace{-0.3cm} { \rotatebox{0}{ {\includegraphics[height=5.5cm]{fig_lc_bkn2-eps-converted-to.pdf}}}} \vspace{-0.2cm} \caption{\label{fig1} The 181 observations used were taken during the 2001/2002 outburst, which is clearly seen in the All Sky Monitor light curve (left). Since its discovery outburst, \hbox{\rm XTE~J1650--500}\ has remained in quiescence. (Right:) PCA-PCU2 (top) and HEXTE-A (bottom) count rate during the time encompassing the outburst. The different colors mark the distinct spectral states as defined by Homan et al.~(2003) based on the timing characteristics of the source. } \vspace*{0.2cm} \end{figure*} Since its discovery, \hbox{\rm XTE~J1650--500}\ has become one of the best studied black hole systems. However, the energy spectra of this system have either been studied in great detail using high-quality, single snapshot observations with {\it XMM-Newton}\ \citep[i.e.][]{miller02j1650, Waltonreis2012} or {\it BeppoSAX}\ \citep{MiniuttiFabianMiller2004j1650}, or using mostly phenomenological and simple models in the study of the long-term evolution with {\it RXTE}\ \citep[i.e.][]{Rossi2005j1650, Dunn2010, Dunn2010disk, YanWang2012}. In this paper, we use the full {\it RXTE}\ archival data of the outburst to \textit{investigate the evolution of the direct power-law continuum, reflection and thermal disk components using, for the first time, a fully self-consistent prescription for the reflection component}. In this manner, we combine the virtues of detailed analyses of single observations with the immense diagnostic power of multiple {\it RXTE}\ pointings. By being able to decouple the total reflection component (Fe-$K\alpha$\ emission line together with all other reflection signatures) from the illuminating continuum, we find that the transition from the hard-intermediate state to the soft-intermediate state is accompanied by a sharp increase in the strength of the reflected emission in comparison to the direct continuum. We interpret this increase in the reflection fraction as a sudden collapse of the corona as the system approaches the thermal state, although we note that this may not be a unique interpretation.
This paper is structured as follows: \S2 briefly introduces the observations and details our various selection criteria. \S3 describes the base model and assumptions used throughout this work. The various results are presented in \S4, and in \S5 we present a qualitative picture of a possible physical scenario that combines all our findings. \section{Observation and Data Reduction} \label{observation} We downloaded and analysed all 181 individual RXTE pointed observations of \hbox{\rm XTE~J1650--500}\ following its discovery. This gave a total of 307.4\hbox{$\rm\thinspace ks$}\ of PCA exposure, which was reprocessed with the latest \hbox{\rm{\small HEASOFT}~v6.12\/}\ tools. We followed the well-established, standard reduction guides\footnote{Found at \href{http://heasarc.nasa.gov/docs/xte/recipes/cook\_book.html}{http://heasarc.nasa.gov/docs/xte/recipes/cook\_book.html}}. Given that it is the only PCU that is always on, as well as being the best calibrated of all the units, we used only data from PCU-2. We chose to use the standard pointing offset criterion of $<0.02$\deg\ as well as an elevation above the Earth's limb of $>10$\deg. Background spectra were produced with \hbox{\rm{\small PCABACKEST}}, using filter files created with \hbox{\rm{\small XTEFILT}}. The latest PCA-history file was also used. Throughout this work we use PCA data between 3\hbox{$\rm\thinspace keV$}\ (ignored channels $\leq6$) and $25$\hbox{$\rm\thinspace keV$}\ without additional systematic errors, as we are mostly interested in relative changes; the impact of such systematics on the errors of the various parameters, as well as on the ${\chi^{2}}$ distribution shown in Fig.~6, is minimal, with the only change being a systematic shift to lower values. For our analyses, we require both PCA and High Energy X-ray Timing Experiment (HEXTE) data (but see below). The HEXTE data were also reduced in the standard manner following the procedures outlined in the {\it RXTE}\ guide. Background files were generated using \hbox{\rm{\small HXTBACK}}\ and the spectra were corrected for deadtime using \hbox{\rm{\small HXTDEAD}}. The appropriate response and ancillary files were downloaded\footnote{From \href{ftp://legacy.gsfc.nasa.gov/xte/calib_data/hexte_files/DEFAULT/}{ftp://legacy.gsfc.nasa.gov/xte/calib\_data/hexte\_files/DEFAULT/}}. HEXTE data were fit between 25-150\hbox{$\rm\thinspace keV$}. \begin{figure}[!h] \vspace{0.cm} \hspace{-0.9cm} { \rotatebox{0}{ {\includegraphics[width=9.5cm]{fig_turtle2-eps-converted-to.pdf}}}} \label{fig2} \vspace{-0.4cm} \caption{Hardness intensity diagram made using the 116 observations that remain after imposing a cut where both HEXTE Clusters A and B have background-subtracted counts greater than zero. The vertical dashed lines mark the approximate transitions between the LHS--HIS--SIS--HSS. The large diamond symbols mark the position in the HID of the representative spectra shown in Fig.~4. The color code is identical to Fig.~1 (right) and will remain the same throughout this work. We have scaled the size of the symbols for each observation to the value of the reflection fraction $R$, as determined in \S~4.1 and shown in Fig.~3. The size-legends from top-to-bottom are: $R=0.5, 0.75, 1, 2, 3, 4, 5$. The falling branch of the LHS and the HSS have sizes corresponding to $R=0.5$ as $R$ is poorly constrained in these states. } \end{figure} Figure~1 (left) shows the 1-day averaged long-term light curve as seen by the {\it RXTE}\ All Sky Monitor (ASM), where the 2001 outburst is clearly visible.
The panels on the right show the PCA-PCU2 count rate (top) and the HEXTE-A count rate (bottom) during the time roughly encompassing the outburst. The colours highlight the various states during the outburst and are directly mapped into the hardness intensity diagram (HID) shown in Fig.~2. In short, the first few observations caught the source in a rising LHS, which evolved to the HIS approximately 3 days later, where it remained for $\sim15$ days until the clear change to the SIS. We have further divided the HIS into an early Period 1 and a late Period 2, similar to the division made by Homan et al.~(2003) based on the timing characteristics displayed. The source then remains in the SIS (P3) for approximately 12 days before it makes the typical excursion to the disk-dominated HSS (P4), where we see a clear drop in the HEXTE count rate in Fig.~1. After $\sim40$ days, the hard flux in \hbox{\rm XTE~J1650--500}\ sharply increases (P5 and P6) as the system returns to the LHS and eventually goes back into quiescence, where it remains up to the present time. The work presented in this paper is fully interpreted within the framework of reflection models \citep[see e.g.][and \S1.1]{Fabianross2010}. As such, we are mainly interested in being able to determine the contribution of the reflected emission to the overall spectra. As mentioned in the introduction, reflection is not limited to the iron line profile, and in order to obtain the best handle on the full reflection component we restricted the analysis that follows by requiring that {\it both} the HEXTE-A and the HEXTE-B units have background-subtracted count rates that are greater than zero. This results in 116 PCA pointed observations totalling 217.9\hbox{$\rm\thinspace ks$}. In doing so, we are effectively reducing our sensitivity to observations in very steep or disk-dominated states, as well as to the faint LHS, and most of the results presented here concern the intermediate states, which are more luminous at high energies (see Fig.~1; right). In order to investigate the effect of the HEXTE data on the results presented in this paper, we have also repeated all the fits described below using only the PCA data and confirmed that, although the results obtained do not strictly depend on the inclusion of the HEXTE data owing to its relatively small statistical weight, the high energy data still provide a useful additional test of the continuum model (see e.g. Fig~4). In the following section we detail the spectral fits to the 116 observations. All our work makes use of the X-ray fitting package \hbox{\small XSPEC}\thinspace v12.7.0\thinspace\ \citep{xspec}. Where uncertainties on model parameters are quoted, this refers to the 90 per cent confidence limit for the parameter of interest. \label{Analyses} \section{The model} Previous attempts to characterise the evolution of \hbox{\rm XTE~J1650--500}\ as observed with {\it RXTE}\ have relied on a purely phenomenological interpretation of the reflection continuum and features\footnote{Contrast this phenomenological approach to the {\it reflection} component with the systematic testing of Comptonisation models for the {\it power-law} (corona) component by \citet{NowakWilmsDove2002gx} on {\it RXTE}\ data of GX~339-4.
Ideally one would strive to combine physical models for both the power-law and the reflection (and of course the accretion disk, which is known not to be a simple multicolour disk due to various relativistic effects); however, this quest for a fully relativistic disk and reflection treatment together with a physical prescription for the Comptonisation continuum is proving highly complicated even for single, dedicated efforts on snapshot observations, and is beyond the scope of this paper.} and often employed a single ``Gaussian''\ emission line with centroid energy fixed at 6.4\hbox{$\rm\thinspace keV$}\ as expected from neutral Fe-$K\alpha$\ \citep[i.e.][]{Dunn2010disk}, a combination of a similar ``Gaussian''\ together with a broad smeared edge \citep[\rm{\small SMEDGE\/},][]{smedge} component (i.e.~Yan \& Wang 2012), or, in the more physically appropriate application, a relativistic emission line \citep[\rm{\small LAOR},][]{laor} together with \rm{\small SMEDGE\/}\ (Rossi et al.~2005). However, even in the latter example, the model was unphysical as the combination of \rm{\small SMEDGE\/}\ with an emission line does not keep consistency between the depth of the edge -- the number of absorbed photons -- and the strength of the line. In the study of \citet{Dunn2010disk}, where the authors were primarily interested in the behavior of the thermal accretion disk, such a simplification is easily justified as small deviations at high energies are unlikely to have significant effects at low energies. Rossi et al. alluded to the importance of using reflection models but unfortunately were deterred by the highly time-consuming and computationally intensive task that this would have presented nearly 10 years ago. The importance of reflection from accretion disks can be gauged by the sheer number of theoretical works that have been devoted to fully characterising its behavior \citep[i.e.][]{LightmanWhite1988, George91, Matt1991, rossfabian1993, Zycki1994, NayakshinKazanasKallman2000, cdid2001, reflionx, refbhb, GarciaKallman2010, GarciaKallman2011}. A widely used reflection code is that of \citet[][\rm{\small REFLIONX}]{reflionx}, which provides a self-consistent treatment of the dominant atomic processes around a black hole and, given an input power-law continuum, outputs a reflection spectrum where the ``Compton-hump'' and the emission and absorption features are {\it all physically linked}. The reflection spectrum is calculated in the local frame; we therefore employ the convolution model \rm{\small KDBLURf}\ (\citealt{laor}; vastly optimised by Jeremy~S. Sanders and employed in \citealt{fabian2012cyg} and \citealt{reismaxi}) to model the relativistic effects in the spectra. Despite the existence of newer relativistic models that include the black hole spin (as opposed to the extent of the inner radius) as a variable parameter \citep[i.e.][]{BeckwithDone2004, kyrline,kerrconv, relconv}, the advantage of using \rm{\small KDBLURf}\ on such a vast data set is that it greatly expedites the fits; since we are not overly interested in absolute values, the small improvements of the newer models at the 10\% level \citep{BeckwithDone2004} do not justify their~use.
Each of the 116 remaining spectra were thus fit with a base model described in \hbox{\small XSPEC~\/}\ as\footnote{We have also repeated this experiment with the latest reflection model by \citet{refbhb} (\rm{\small REFBHB}; see also \citet{reisgx} for a description of its use), which incorporates a black body component, and found the results to be consistent in both cases. We choose to carry on with the $\rm{\small REFLIONX} + \rm{\small DISKBB}$ combination for a number of reasons: to start, this combination can be easily reproduced (\rm{\small REFBHB}\ is not yet public) and the output parameters are somewhat standardised; e.g. \rm{\small REFLIONX}\ has $\xi$ as a free parameter whereas \rm{\small REFBHB}\ has the more obscure combination of hydrogen number density $n_{\rm H}$ and the relative strength of the illuminating flux over the thermal emission at the surface of the disk.} \\ \noindent{$\rm{\small PHABS}\times(\rm{\small POWERLAW}+ \rm{\small DISKBB}+ \rm{\small KDBLURf}\otimes\rm{\small REFLIONX}$)}. \\ \noindent{To avoid unnecessary CPU cost, all spectra were fitted adopting the same initial values for the model parameters. A similar approach of using a base model with similar starting parameters was taken by \citet{WilmsNowak2006cyg} -- amongst others -- in the study of Cygnus~X-1.} Since the low-energy cutoff in the PCA of $\sim3$\hbox{$\rm\thinspace keV$}\ is not sufficient to constrain the neutral hydrogen column density, throughout this work we have frozen this parameter in the \rm{\small PHABS}\ model to $7\times 10^{21} ~{\rm atom}\hbox{$\rm\thinspace cm^{-2}$}$, as the neutral column density is likely best modelled as being constant with time \citep*{nhpaper}\footnote{We used the standard BCMC cross-sections \citep{balucinska} and ANGR abundances \citep{abundances}.}. The key parameters in the \rm{\small REFLIONX}\ model are the spectral shape of the illuminating continuum, which is set to be the same as that of the direct power-law, and the ionization parameter of the accretion disk (as in Equation~1). The iron abundance was set to the solar value and the redshift was set to zero. In order to make the work more tractable, the emissivity profile in the blurring component was initially restricted to the form of a single power-law such that $\varepsilon (r) \propto r^{-q}$, and, following the most recent work on the best {\it XMM-Newton}\ and {\it BeppoSAX}\ data of \hbox{\rm XTE~J1650--500}\ (Walton et al.~2012), we have frozen the inclination of the accretion disk to 65\deg. We note that this parameter is well constrained by the optical light curve to be greater than $50\pm 3$\deg\ and less than $80$\deg\ (Orosz et al.~2004). We also check whether the results presented below change if we freeze the inclination at this lower limit (50\deg), and confirm that they do not. As is standard with such fits, we have frozen the outer radius to the maximum value in the grid of 400${\it r}_{\rm g}$, while the inner radius of the accretion disk is free to vary.
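For reference, the base model can be set up along the following lines with PyXspec, the Python interface to \hbox{\small XSPEC~\/}. This is a minimal sketch rather than the actual fitting script used here: the spectrum file name is hypothetical, the public \rm{\small KDBLUR}\ kernel stands in for the optimised \rm{\small KDBLURf}, the \rm{\small REFLIONX}\ grid is assumed to be available locally as a table model file, and the parameter indices simply follow the component order in the expression.
\begin{verbatim}
# Minimal PyXspec sketch of the base model (illustrative, not the
# actual fitting script). Assumes HEASOFT/PyXspec and a local copy
# of the reflionx table model (reflionx.mod).
from xspec import AllData, Fit, Model, Spectrum

s = Spectrum("pcu2_obs.pha")        # hypothetical PCA-PCU2 spectrum
AllData.ignore("**-3.0 25.0-**")    # PCA band used in this work

m = Model("phabs*(powerlaw + diskbb + kdblur*atable{reflionx.mod})")

m.phabs.nH = 0.7                    # 7e21 cm^-2 (units of 1e22), frozen
m.phabs.nH.frozen = True

# kdblur parameters by index, assuming the expression above:
# 6 = Index (emissivity q), 7 = Rin(G), 8 = Rout(G), 9 = Incl
m(6).values = 3.0                   # single power-law emissivity
m(8).values = 400.0                 # outer radius at the grid maximum
m(8).frozen = True
m(9).values = 65.0                  # inclination (Walton et al. 2012)
m(9).frozen = True

Fit.perform()
print(Fit.statistic, Fit.dof)
\end{verbatim}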
A phenomenological combination of a \rm{\small LAOR}\ emission line on top of a reflection model such as \rm{\small PEXRIV}\ \citep{pexrav} -- which does not include the iron-$K\alpha$\ emission line -- often provides an equally good fit to X-ray spectra of black holes. However, \rm{\small PEXRIV}\ does not Compton broaden its absorption edge, nor does it provide a physical coupling between itself and the extra \rm{\small LAOR}\ component, which can result in parameters severely lacking in physical consistency. This is exemplified in \citet{reismaxi}, where we showed for the stellar-mass black hole candidate MAXI~J1836-194 that the reflection component could be equally well modelled with a combination of \rm{\small LAOR}~+~\rm{\small PEXRIV}; \rm{\small LAOR}~+~\rm{\small KDBLUR}$\times$\rm{\small PEXRIV}; or \rm{\small KDBLUR}$\times$\rm{\small REFLIONX}. The first combination unphysically required the iron line to be coming from far within 6${\it r}_{\rm g}$\ in the strong gravity regime, yet all other reflection features -- under the formalism of the model -- appeared exempt from such effects. After properly correcting for relativistic effects in \rm{\small PEXRIV}, the ionization parameter $\xi$ of the second model was nearly two orders of magnitude higher than in the previous combination, as the Fe absorption edge in \rm{\small PEXRIV}\ was no longer trying to fit the blue wing of the iron line. This change was also accompanied by an artificial hardening of the powerlaw index and a decrease in the equivalent width of the \rm{\small LAOR}\ component from $\sim 270$\hbox{$\rm\thinspace eV$}\ to $\sim180$\hbox{$\rm\thinspace eV$}. Correctly blurring the originally sharp absorption edge caused it to become smooth and more symmetric. Due to the decoupling between the \rm{\small LAOR}\ line and \rm{\small PEXRIV}, this smooth edge traded off with the \rm{\small LAOR}\ line component, thus decreasing the equivalent width of the latter. As such, we stress the importance of the imposed physical consistency in the emission line and corresponding absorption edge strength afforded by \rm{\small REFLIONX}, and strictly enforce this in the work that follows. \begin{figure*}[!t] \begin{center} {\vspace{.8cm}\hspace*{-0.8cm}\rotatebox{0}{{\includegraphics[height=20.5cm]{fig_evolution_jon3-eps-converted-to.pdf} }}} \caption{Top four panels: Extrapolated 0.1-1000\hbox{$\rm\thinspace keV$}\ flux evolution for the total, powerlaw, reflection and disk components. The errors in the fluxes were assessed using \rm{\small CFLUX\/}\ but are omitted here for clarity (but see Fig.~7 for an indication of the magnitude of the errors in each state). These errors were propagated in the derivation of the reflection fraction $R$ (eighth panel from the top). We also show in the eighth panel ($R$) the ratio of the 3--100\hbox{$\rm\thinspace keV$}\ reflection to powerlaw fluxes. A jump in this ratio is seen at the same position as that for the extrapolated fluxes. The bottom panel shows the reduced-$\chi^2$ for 114 D.o.F. The vertical dashed lines show the transition from the rising LHS, through P1--P6, and back into the falling LHS.
} \vspace*{0.5cm} \label{fig3} \end{center} \end{figure*} \vspace{0.cm} \section{Results and discussion} \subsection{Evolution of the Outburst} \label{Light bending} \begin{figure*}[] \centering \hspace*{0.cm} {\hspace{-0.0cm}\rotatebox{0}{{\includegraphics[width=5.5cm]{figures_spec1-eps-converted-to.pdf} }}} {\hspace{0.cm} \rotatebox{0}{{\includegraphics[width= 5.5cm]{figures_spec6-eps-converted-to.pdf} }}} {\hspace*{0.cm} \rotatebox{0}{{\includegraphics[width= 5.5cm]{figures_spec22-eps-converted-to.pdf} }}} {\hspace{-0.0cm}\rotatebox{0}{{\includegraphics[width=5.5cm]{figures_spec46-eps-converted-to.pdf} }}} {\hspace{0.cm} \rotatebox{0}{{\includegraphics[width= 5.5cm]{figures_spec64-eps-converted-to.pdf} }}} {\hspace{0.cm} \rotatebox{0}{{\includegraphics[width= 5.5cm]{figures_spec86-eps-converted-to.pdf} }}} {\hspace*{0.cm} \rotatebox{0}{{\includegraphics[width= 5.5cm]{figures_spec101-eps-converted-to.pdf} }}} {\hspace{-0.0cm}\rotatebox{0}{{\includegraphics[width=5.5cm]{figures_spec102-eps-converted-to.pdf} }}} \caption{Unfolded spectra (top) and residuals (bottom) for the eight representative states shown in Figs.~1,2. The reflection, power-law, and disk components are shown in red, blue and green respectively. The total model is shown in black. All vertical scales are the same except for that of the falling LHS (bottom right). The approximate MJD of each observation is shown in brackets. } \vspace*{0.3cm} \label{fig4} \end{figure*} Figure~3 shows the evolution of all the parameters of interest in this work\footnote{We refer the reader to the work of \citet{Dunn2010disk} for the evolution of the disk properties during the outburst. As mentioned above, despite the fact that the authors did not correctly account for reflection in their work, this plays a minor role at the energies of interest with regard to the disk properties, and as such that work is still a valid and important reference for the evolution of accretion disks.}, and Fig.~4 shows the best fits to eight representative spectra roughly covering the eight periods highlighted in Figs.~1,2 and described in detail in Homan et al.~(2003). The spectra used for illustration are shown in Fig.~2 with diamonds. The top four panels in Fig.~3 show the evolution of the extrapolated 0.1-1000\hbox{$\rm\thinspace keV$}\ fluxes for the total, \rm{\small POWERLAW}, \rm{\small REFLIONX}\ and \rm{\small DISKBB}\ components, from top to bottom respectively. All fluxes were obtained using \rm{\small CFLUX\/}\ in \hbox{\small XSPEC~\/}. The vertical dotted lines running through all the panels highlight the eight periods shown in Figs.~1,2. We see a clear increase in the disk flux during the first $\sim30$ days, followed by a flattening as the system moves into the HSS. It is also visually apparent that the reflection flux varies relatively less than the power-law continuum. This will be investigated further in what follows. The following two panels show the evolution of the disk temperature and ionization parameter, $\xi$. Early in the outburst, the disk was relatively cold ($\lesssim0.5$\hbox{$\rm\thinspace keV$}) and only moderately ionised with $\xi\approx200\hbox{$\rm\thinspace erg~cm~s^{-1}$}$. As the system evolves, the ionization increases smoothly until $\sim2$~days before the transition to the SIS, when $\xi$ sharply increases to $\xi\approx3000\hbox{$\rm\thinspace erg~cm~s^{-1}$}$ and remains at that level through the transition up to the end of the SIS.
The disk temperature, on the other hand, appears to reach a relatively stable value of $\sim0.6-0.65$\hbox{$\rm\thinspace keV$}\ early in P2, approximately half way through the HIS. As the system progresses into the HSS, the disk becomes fully ionised with $\xi$ reaching the maximum allowed value in the model (log~$\xi=4$), before coming back down to the low hundreds towards the end of the outburst. The reflection fraction $R$ shown in the third-from-bottom panel of Fig.~3 is here defined as a measure of the ratio of the reflection flux emitted by the disk to the (observed) power-law continuum flux. Since a fraction of the power-law illuminating the accretion disk is down-scattered as it is reprocessed in the disk, the reflection fraction in \rm{\small REFLIONX}\ is calculated by dividing the extrapolated (1\hbox{$\rm\thinspace eV$} - 1000\hbox{$\rm\thinspace keV$}) \rm{\small REFLIONX}\ flux by the 0.1-1000\hbox{$\rm\thinspace keV$}\ power-law flux. At the start of the outburst, through to the end of the HIS, $R$ increases smoothly from $\approx 0.6$ to $1$. At the transition from the HIS to the SIS, $R$ displays a sharp increase to $\approx 4$, where it remains until the beginning of the disk-dominated HSS, where the power-law effectively disappears. We also show in this panel the ratio of the 3-100\hbox{$\rm\thinspace keV$}\ reflection to powerlaw flux. The behavior described above is still qualitatively the same and we still see a clear jump in this ratio at the transition from the HIS to the SIS. However, when limited to the 3-100\hbox{$\rm\thinspace keV$}\ flux, this ratio is systematically a factor of $\approx1.5$ lower than the extrapolated ratio; a direct result of not accounting for the extra down-scattered flux at low energies. As a further test of both the qualitative (clear jump in \textit{R} between the HIS and SIS) as well as quantitative (change from $R\approx 0.6$ to $R\approx 4$ between P1 and P3) behaviors found here for the reflection fraction, we temporarily replace the \rm{\small REFLIONX}\ model with a combination of \rm{\small LAOR}\ plus \rm{\small PEXRIV}\ and apply this model to the observations highlighted in Figs. 2 and 4 for P1 and P3. In using this model, we have blurred the \rm{\small PEXRIV}\ component with the same parameters as the \rm{\small LAOR}\ line profile. Figure~5 shows the confidence range for the reflection fraction (a free parameter of \rm{\small PEXRIV}) for these two representative spectra. In agreement with our previous results, we see that in the HIS the reflection fraction is constrained to $R=0.58^{+0.08}_{-0.11}$ and in the SIS it is $R>2.7$ at the 90~per~cent level of confidence ($\Delta\chi^2 = 2.71$). \begin{figure}[] \centering \vspace{0cm} {\hspace*{-0.2cm}\rotatebox{0}{{\includegraphics[width=9cm]{figures_pexriv-eps-converted-to.pdf} }}} \caption{Goodness-of-fit versus reflection fraction for the two representative spectra describing the HIS-P1 (black) and SIS-P3 (red). The spectra used refer to those highlighted in Figs. 2 and 4, and the \rm{\small REFLIONX}\ component inherent in the base model has been replaced with a combination of \rm{\small PEXRIV}\ together with \rm{\small LAOR}. The solid blue horizontal line shows the 90~per~cent confidence range.
} \label{fig5} \end{figure} \begin{figure*}[] \vspace*{0.0cm} \centering {\hspace{-0.0cm} \rotatebox{0}{{\includegraphics[width=14cm]{figure_reducedchi_histogram4-eps-converted-to.pdf} }}} \vspace*{-0.1cm} \caption{Left: Distribution of reduced $\chi^2$ for the model with a Newtonian emissivity profile ($q=3$) and for one with steeper values ($q\geq 3$). In both cases we see a peak at reduced $\chi^2 =1$; however, this is much clearer after allowing the emissivity to vary beyond its Newtonian value. Right: Distribution of the emissivity parameter (blue) and its 90\% lower limit (red) for the four states highlighted in each panel. Bottom: In all cases, the emissivity index was constrained to $3\leq q \leq 10$. } \vspace*{0.2cm} \label{fig6} \end{figure*} The fact that $\xi$ is maximal in the HSS (P4), despite this being when the irradiation appears to be at its lowest level (second panel from the top), can be explained by a number of scenarios. We showed in \S~1.1 that in stellar mass black holes the intrinsically hot disk can result in significant thermal ionization \citep{refbhb}, which will be strongest in the disk-dominated HSS. Thus, in this scenario, the high ionization measured could also be in part due to thermal ionization. If this thermal component is the sole source of ionisation in the HSS, $R$ would indeed go to zero. A further possibility is that the disk is indeed highly photo-ionised as a result of strong focusing of the coronal photons onto the disk. This would significantly reduce the number of hard photons escaping the system (thus explaining the second panel from the top) and cause the disk to be highly ionised. Indeed, observations of disk winds originating in the HSS of various BHBs consistently show winds having $\xi \sim 10^4 \hbox{$\rm\thinspace erg~cm~s^{-1}$}$ \citep[e.g.][]{Miller2008j1655wind,NeilsenLee2009,KingMiller2012,Ponti2012diskjet}. Unfortunately, the lack of reliable constraints on the reflection fraction in the HSS prevents us from making any solid claims on the nature of the disk-corona interaction in this state. \subsection{Disk Emissivity} In Fig.~6 (left) we show the distribution of the reduced ${\chi^{2}}$ (for 115 degrees of freedom) from all observations assuming the simple Newtonian `lamp-post' like geometry in which the emissivity profile follows a $q=3$ power-law (light blue), as well as after relaxing this assumption (red). In both cases there is a clear peak at reduced-${\chi^{2}} = 1$; however, this peak is much more distinct upon relaxing the Newtonian approximation. The Newtonian approximation naturally does not take into account the effects of general relativity that will be experienced by the emission from the corona and accretion disk near the black hole. Relativistic effects (strong gravity as well as relativistic time dilation) act to steepen the emissivity in the inner regions of the disk. The right panels of Fig.~6 show either the distribution of the emissivity index (blue) or the 90~per~cent lower limit on its value (red) for the spectral states indicated in each panel. We refer the reader to the work of \citet{Miniu04, wilkins2011, wilkins2012, fabian2012cyg} and references therein for a detailed study examining non-Newtonian values for the emissivity index, but note here that steep emissivities similar to those found here for the HIS and SIS are a natural and unavoidable consequence of strong gravity.
The bottom panel of Fig.~6 is used here as a simple illustration of the evolution in $q$ as well as the count rate in both the PCA and HEXTE data. It is clear that $q$ can only be constrained when the PCA signal is at its highest level, as this constraint does not come from energies $>25$\hbox{$\rm\thinspace keV$}. At the end of the outburst, when the PCA signal-to-noise level drops significantly, the data cannot differentiate between a Newtonian $q=3$ and a steeper value. Following the recipe provided for Cygnus X-1 by \citet{fabian2012cyg} in dealing with sources where the spin is expected to be high (as is the case for \hbox{\rm XTE~J1650--500}), we have repeated our fits with a broken emissivity profile such that within a break radius (initially frozen at 4${\it r}_{\rm g}$\ but later allowed to vary) the emissivity index is $>3$, while beyond it the index is frozen at 3. The initial value of 4${\it r}_{\rm g}$\ for the break radius was chosen based upon the value for Cygnus X-1 \citep{fabian2012cyg}. We find that, as long as the emissivity is not fixed at $q=3$, the quality of the fits and the distribution of reduced ${\chi^{2}}$ are similar to those of a single, unbroken emissivity, and we proceed by using this single power-law emissivity profile as our standard, but emphasise that the results presented here do not change if we employ a broken emissivity profile. This is very likely to be due to the comparatively low spectral resolution afforded by RXTE, which cannot resolve the subtle changes in the reflection profile in the manner of {\it XMM-Newton}\ or {\it Suzaku}\ observations. \begin{figure*}[!t] \vspace*{-0cm} \centering {\hspace{-0.0cm} \rotatebox{0}{{\includegraphics[width=14.6cm]{fig_flux_flux3-eps-converted-to.pdf} }}} \vspace*{-0.0cm} \caption{ Left: Flux-Flux relation during the outburst. The dashed line shows the one-to-one relation. Top-right: Close up of the HIS (blue and cyan) and SIS (green). The solid black lines show the best linear fit for each state, having slopes of $2.7\pm0.1$ and $-0.24\pm0.02$ in the SIS and HIS respectively. The solid red curve shows the expected flux-flux relation under the light bending model of \citet[][see their Figure 2]{Miniu04} for a system with 60\deg\ inclination, somewhat similar to \hbox{\rm XTE~J1650--500}, with a corona varying in height from 1--20${\it r}_{\rm g}$. Note that this is not a fit and has been rescaled from the original (see \S~4.3). Bottom-right: Similar but for fluxes between 3--100\hbox{$\rm\thinspace keV$}. } \label{fig7} \end{figure*} \begin{table*} \vspace{-0.4cm} \caption{Summary of Spearman's rank correlation and partial correlation tests on combinations of various model parameters.
} \centering \begin{tabular}{cc|cccc|cccc} \hline \hline & & \multicolumn{4}{c|}{HIS (P1 + P2)} & \multicolumn{4}{c}{SIS (P3)}\\ \hline & & \multicolumn{2}{c|}{Spearman's} & \multicolumn{2}{c|}{Partial} & \multicolumn{2}{c|}{Spearman's} & \multicolumn{2}{c}{Partial} \\ & & \multicolumn{2}{c|}{rank-order} & \multicolumn{2}{c|}{correlation} & \multicolumn{2}{c|}{rank-order} & \multicolumn{2}{c}{correlation} \\ \hline Parameter 1 & Parameter 2 & $\rho$ & p-value & $\rho$ & p-value & $\rho$ & p-value & $\rho$ & p-value \\ \hline $\xi$ & $F_{\rm powerlaw}$ & -0.953 & $ 2.0\times 10^{-13}$ & -0.473 & 0.014 & 0.319 & 0.071 & 0.092& 0.621 \\ $\xi$ & $F_{\rm disk}$ & 0.956 & $ 9.3\times 10^{-14}$ & 0.515 &0.006 & 0.006 & 0.972 & 0.082& 0.656 \\ $F_{\rm disk}$ & $F_{\rm powerlaw}$ & -0.955 &$ 1.1\times 10^{-13}$ &-0.427 & 0.0031 & -0.310 & 0.079 & -0.334 & 0.056 \\ $\xi$ & $F_{\rm reflionX}$ & 0.742 & $ 2.2\times 10^{-5}$ & -0.140 & 0.516 & 0.382 & 0.028 & 0.218 & 0.230\\ $F_{\rm disk}$ & $F_{\rm reflionX}$ & 0.766 & $ 7.8\times 10^{-6}$ & 0.088 & 0.685 & -0.116 &0.520 &0.140 &0.446 \\ $F_{\rm powerlaw}$ & $F_{\rm reflionX}$ & -0.799 & $ 1.7\times 10^{-6}$ & -0.373 & 0.066 & 0.719 & $2.4\times 10^{-6}$ & 0.683 & $ 4.6\times 10^{-7}$ \\ \hline \end{tabular} \vspace{0.2cm} Notes: Spearman's rank correlation and partial correlation tests were made for combinations of the reflection, power-law and disk fluxes as well as the ionization parameter. The partial correlation test measures the degree of association between the two parameters listed in the first two columns whilst controlling for the remaining two parameters. The Spearman's coefficient $\rho$ is a measure of the degree of correlation, with +1 or -1 indicating a perfect monotone function and 0 a lack of correlation.\vspace*{0.2cm} \end{table*} \subsection{Light-Bending and General Relativity} Hints of the expected effects of light bending, as described in the introduction, can be seen in the top four panels of Fig.~3. Most important is the apparent constancy of the reflection flux in comparison with that of the direct power-law early in the outburst. We investigate this further in Fig.~7. The left panel shows the flux-flux relation through the whole outburst, with the various spectral states shown in different colors. The top-right panel is a close up of the period covering the HIS and SIS during the first $\sim$~30 days of the outburst\footnote{Excluding the first 3 days when \hbox{\rm XTE~J1650--500}\ was in the rising LHS (see Fig.~1).}. Figure~7 is remarkably similar to figure~3 of \citet{Rossi2005j1650}, where the authors used the flux in the iron line as a proxy for the total reflection in \hbox{\rm XTE~J1650--500}. We have superimposed in this figure the expectation from the light bending model for a compact corona varying in height from 1--20${\it r}_{\rm g}$\ with a disk having an inclination of 60\deg, from \citet[][see their Figure 2]{Miniu04}. In order to correctly describe the shape of the function shown graphically in \citet{Miniu04}, we used the Dexter Java application of \citet{dexter_ads} to obtain a fourth-order polynomial fit to their curve, and then applied linear normalisations of $1.5\times10^{-9}$ and $4.5\times10^{-9}$ to their Y-axis (arbitrary Fe line flux) and X-axis (arbitrary powerlaw flux) respectively. The model reproduces extremely well the broad shape of the relation.
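The rescaling procedure is simple enough to sketch in a few lines. Assuming digitised $(x, y)$ pairs from the published curve (the arrays below are placeholders; only the polynomial order and the two axis normalisations are taken from the text), the superimposed model curve can be rebuilt as follows:
\begin{verbatim}
import numpy as np

# Hypothetical digitised points from the published light-bending
# curve (arbitrary units); in practice these came from Dexter.
x_dig = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 1.5])
y_dig = np.array([0.40, 0.75, 0.95, 1.05, 1.10, 1.12, 1.13, 1.13])

coeffs = np.polyfit(x_dig, y_dig, deg=4)   # fourth-order polynomial
curve = np.poly1d(coeffs)

# Linear normalisations quoted in the text, mapping the arbitrary
# model axes onto observed fluxes (erg/s/cm^2).
x_flux = 4.5e-9 * x_dig                    # powerlaw-flux axis
y_flux = 1.5e-9 * curve(x_dig)             # reflection (Fe-line) axis
print(np.c_[x_flux, y_flux])
\end{verbatim}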
Finally, the bottom right panel shows this behavior when the non-extrapolated, 3-100\hbox{$\rm\thinspace keV$}\ fluxes are used instead\footnote{In this case the normalisations were $1.5\times10^{-9}$ and $4.5\times10^{-9}$ for the Fe-line and powerlaw flux, respectively.}. We again see that, qualitatively, the behavior is the same as above. As discussed in the introduction, the light bending model of Miniutti et al.~(2004) predicts the existence of semi-distinct regimes in this flux-flux relation. When the corona is located at a height of $\sim 10$${\it r}_{\rm g}$\ the model predicts a flattening in the relation similar to that observed for the HIS (both P1 and P2). The fluxes in this hard intermediate state are clearly correlated, and a Spearman's rank correlation test gives a coefficient of $\rho = -0.799$, corresponding to a $1.7\times10^{-6}$ chance of a false correlation (Table~1). The slope of this relation is $-0.24\pm0.02$ (standard error) and this linear fit is shown in the top-right panel of Fig.~7 as a solid black line. As the location of the corona reaches the more extreme environment within a few ${\it r}_{\rm g}$\ of the black hole, the model predicts a steep, positive linear relation between the reflection and power-law flux similar to that seen in the SIS, although there is large scatter dominated by poor statistics. A Spearman's rank correlation test here gives $\rho = 0.719$, with a false correlation probability of just $2.4\times10^{-6}$. The slope of this relationship is greater than unity, with a best-fit value of $2.7\pm0.1$ (shown as a further solid black line in Fig.~7). Note that this is highly inconsistent with the expectation for a static Newtonian corona with intrinsically varying luminosity, where the slope should be unity across the entire flux range. It is the combination of a slope~$\gg1$ at low powerlaw flux together with a near-flattening at higher fluxes that provides evidence for relativistic effects in this case. It has been suggested \citep[e.g.][]{BallantyneVaughan2003, Ballantyne2011} that changes in the ionization of the inner regions of the accretion disk can give rise to changes in the reflection spectrum that can somewhat mimic this flat behavior. In order to robustly assess the strength of the correlations seen in Fig.~7 in both the HIS and SIS, we have performed a number of correlation tests, which are summarised in Table~1. We performed, for both the HIS and SIS, Spearman rank-order tests for combinations of four parameters (the power-law, reflection and disk fluxes as well as the disk ionization), as well as partial correlation tests (PCT) for pairs of parameters while controlling for the remaining two variables. The PCT is of particular importance for our purpose as it removes any potential association of the ionization parameter (or any other potential source of unwanted correlation) in the flux-flux relations shown in Fig.~7. From the Spearman rank-order tests performed in the HIS (Table~1), it would initially appear that all four variables are strongly correlated with one another in some way, as all combinations display $|\rho|\gtrsim0.7$. However, after performing the partial correlation test for all combinations, we see that two of the previously strong correlations ($\xi-F_{\rm reflionX}$ and $F_{\rm disk}-F_{\rm reflionX}$) were in fact driven by the mutual dependence of these parameters on $F_{\rm powerlaw}$.
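Tests of this kind are straightforward to reproduce with standard tools. The sketch below (illustrative only; the flux arrays are random placeholders for the measured values) computes a Spearman rank-order coefficient with scipy and a partial Spearman correlation by rank-transforming all variables, regressing the two controls out of the pair of interest, and correlating the residuals:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr, rankdata

def partial_spearman(x, y, controls):
    """Spearman correlation of x and y controlling for `controls`:
    rank-transform everything, regress the controls out of x and y,
    then correlate the residuals."""
    rx, ry = rankdata(x), rankdata(y)
    C = np.column_stack([rankdata(c) for c in controls])
    C = np.column_stack([np.ones(len(rx)), C])    # add an intercept
    res_x = rx - C @ np.linalg.lstsq(C, rx, rcond=None)[0]
    res_y = ry - C @ np.linalg.lstsq(C, ry, rcond=None)[0]
    return spearmanr(res_x, res_y)

rng = np.random.default_rng(0)                    # placeholder data
f_pl, f_ref = rng.random(30), rng.random(30)
f_disk, xi = rng.random(30), rng.random(30)

print(spearmanr(f_pl, f_ref))                      # raw correlation
print(partial_spearman(f_pl, f_ref, [f_disk, xi])) # controlled
\end{verbatim}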
These tests clearly indicate that the reflection flux in both states is better correlated with the power-law flux than with ionization, and the slopes of the correlations (mildly negative in the HIS and strongly positive in the SIS) are highly indicative of gravitational light-bending in the general relativistic~regime. The behavior seen here is consistent with a drop in the height of the corona during the hard-intermediate phase (P1 and P2), followed by intrinsic variations in its luminosity by a factor of a few during the soft-intermediate state (P3). Following the disk-dominated soft state (P4), the height of the corona increases again (P5 and P6) and the outburst finishes with the intrinsic power dropping as the source goes back into the LHS. \subsection{$R-\Gamma$ relation} A strong correlation has been shown to exist between the amplitude of the reflected component ($R$) and the photon index ($\Gamma$) of the Comptonized spectrum in XRBs {\it in the hard state} \citep[e.g.][]{Ueda1994rgamma}. This $R-\Gamma$ relation has since been robustly tested by a number of authors \citep[e.g.][]{Gilfanov1999rgamma, Zdziarski1999, Zdziarsk2003, NowakWilmsDove2002gx, ibragimov05} and it is now thought to also apply to Seyferts and radio galaxies \citep[e.g.][]{Zdziarski1999}, further cementing the similarities in the coronal properties at all scales. If this relation indeed turns out to be real (and evidence attests to this; but see \citealt{Molina2009rgamma}), then it could be telling us about the feedback between the hot corona and cold gas in an accretion disk. We briefly investigate this relationship for \hbox{\rm XTE~J1650--500}\ in Fig.~8. For clarity, we only consider data with a fractional uncertainty of less than 50~per~cent. The black arrows approximately show the evolution of the system in time. Although as a whole the data do not strongly support the presence of a relationship between $R$ and $\Gamma$, when the states are roughly separated (different colors) it does appear that the relation holds from early in the outburst through to the last few days of the hard-intermediate state. It is clear, at least, that the rising-LHS and the HIS populate different regions of the figure from the SIS. The potential presence of this relation early in the outburst suggests a feedback process between the soft photons in the disk and the corona. There are a number of theoretical interpretations for the presence of this correlation \citep{PoutanenKrolik1997,Gilfanov1999rgamma,Gilfanov2000rgamma, beloborodov99, MalzacBeloborodov2001}, with the two leading contenders often described as the disk-truncation and dynamic corona models \citep[see][for a detailed study of these models]{done2002review,Beloborodov1999review}. To summarise, in the former, an increase in the reflection fraction is caused by the accretion disk penetrating deeper into a central hot corona, thus receiving more illuminating hard photons. The presence of the disk in return offers more soft photons, consequently cooling the plasma. For a purely thermal distribution of electrons, the greater the number of soft seed photons, the softer the power-law spectrum. The latter model by \cite{beloborodov99} invokes bulk motion of the corona above the accretion disk. If the corona is outflowing with mildly relativistic velocities, this would reduce the amount of hard photons hitting the disk, which in turn reduces $R$ and hardens $\Gamma$, as fewer reprocessed soft photons reach the outflow.
Recent evidence for the coronal plasma ejection model has come from a strong positive correlation between reflected X-ray flux and radio flux in the black hole binary Cygnus X-1 \citep{miller2012cyg}. We will show in \S~4.6 that all evidence points toward the disk radius remaining stable during the HIS to SIS transition (see Figs.~11,12). This constancy effectively rules out the ``disk-truncation'' explanation for the $R-\Gamma$ correlation. Instead, one may hypothesise whether the ``outflowing corona'' and light bending can be combined to explain the behavior so far detailed for \hbox{\rm XTE~J1650--500}. In the previous section we showed that the flux-flux behavior (Fig.~7) could be explained if early in the outburst (during the HIS) the corona was located relatively far ($\sim10$${\it r}_{\rm g}$) from the black hole and thus behaved according to Regime 2 of \citet[][see also \S1.2]{Miniu04}. As the system evolved, the corona collapsed to a few~${\it r}_{\rm g}$\ and began to experience a higher level of light bending towards the disk (Regime~1). In this scenario, a potential gradient in the outflow velocity of the corona as a function of height could also explain the behavior seen in Fig.~8: i.e., as the corona collapses from a large height (large outflow velocity; low $R$ and hard $\Gamma$), the outflow velocity decreases ($R$ increases and $\Gamma$ softens) until it becomes effectively static and the system transitions into the SIS. We will expand upon this possible scenario in what follows and summarise our ideas in \S~5. \begin{figure}[!t] \hspace*{-0.5cm} \centering {\rotatebox{0}{{\includegraphics[width=9.2cm]{fig_r_vs_gamma-eps-converted-to.pdf} }}} \vspace{-0.5cm} \caption{ The well-known $R-\Gamma$ relation appears to be present in the rising-LHS and during most of the HIS. However, just prior to the transition to the SIS and thereafter, this relation does not hold. We discuss a potential explanation for this behavior in \S~4.4. Only data with errors less than 50\% of their value are used in this figure. The black arrows show approximately the evolution of the system in time. } \vspace*{0.2cm} \label{fig8} \end{figure} \subsection{QPOs and Spectral States: A collapsing corona} \begin{figure*}[!t] \hspace*{-0.35cm} \centering {\rotatebox{0}{{\includegraphics[width=16cm]{fig_qpo_all_combined-eps-converted-to.pdf} }}} \vspace*{-0.cm} \caption{Top-left: HF QPO coherence as a function of reflection fraction. The relation is well described by a linear function similar to that shown in the figure. Bottom-left: QPO frequency as a function of reflection fraction. The dashed line shows a relation, $f ({\rm Hz}) = (102\pm2)\times {\rm ln} [R \times (3.07\pm 0.05)]$, that fits these data. Right: QPO frequency as a function of disk surface ionisation. These figures show a clear link between the coherence/frequency of the QPO and the reflection fraction or level of disk ionisation, which we subsequently interpret as being linked to the size/position of the corona (see \S~4.5). } \label{fig9} \end{figure*} Throughout this work, we followed the selection made by Homan et al.~(2003), which roughly separates the outburst into six periods coinciding with significant changes in both the hardness intensity diagram (Fig.~2) and the shape of their power spectra\footnote{In keeping with that work, the HIS is divided into two periods (P1 and P2).
Here, we also add two extra periods, which we have denoted as the rising and falling LHS.} As highlighted in the introduction (\S1.3), those authors demonstrate the presence of high-frequency (HF) variability in \hbox{\rm XTE~J1650--500}, together with an HF QPO which was shown to evolve in both frequency and coherence during the outburst. The highest frequency which was reliably measured was at $\sim250$Hz in the SIS, with the frequency being much lower ($\sim50$Hz) at the onset of the outburst. In Fig.~9 (top-left), we show the presence of a strong (Pearson's $r=0.997$) positive relation between the reflection fraction and the quality factor of the QPO ($Q-R$ relation). In order to create this figure, we have averaged the values of $R$ shown in Fig.~3 for each of the periods in question and used the values for the coherence provided by Homan et al.~(2003; Table~1). The bottom-left panel shows the frequency of the HF QPO as a function of $R$ ($f-R$ relation). We also show in Fig.~9 (right) the QPO frequency as a function of the disk (surface) ionisation parameter. QPOs are notoriously difficult to explain and it is not our purpose to provide a quantitative description of this phenomenon. However, it is worth stressing that most models \citep[e.g.][]{Nowak1997, CuiZhangChen1998, PsaltisBellonivanderKlis1999, StellaVietriMorsink1999, McKinneyqpo2012} strongly link the origin of QPOs with orbits and/or resonances in the inner accretion disk close to the black hole. Current models cannot fully explain, in a physical manner, the range in coherence observed in various systems nor the manner in which the frequencies change with states. To explain the range in coherence observed in accreting neutron stars, \citet{BarretOlive2006,BarretOlive2007} devised a toy model in which the changes in $Q$ are driven by changes in the scale height of the disk. A small scale height gives rise to high coherence and vice versa. Expanding on this idea, it appears that, at least for \hbox{\rm XTE~J1650--500}, it is physical changes in the radius/size of the corona that give rise to changes in both the quality factor $Q$ and the QPO frequency. To illustrate this hypothesis, consider Fig.~9 (bottom) together with our interpretation of the behavior displayed in Fig.~7 (\S~4.3). Early in the outburst the frequency of the HF QPO appeared at $\sim55$Hz. The Keplerian frequency at a given radius is $f ({\rm Hz}) \approx 3.2\times 10^4 M^{-1} r^{-3/2}$, where $M$ and $r$ are in units of solar mass and gravitational radius respectively. Hence, during the brief P1 period, the HF QPO frequency is close to the orbital frequency at $\sim28$${\it r}_{\rm g}$\ assuming a $4\hbox{$\rm\thinspace M_{\odot}$}$ black hole, and potentially moves to $\sim15$${\it r}_{\rm g}$\ in the second half of the HIS. As the outburst continues and the corona continues to collapse, it is plausible that the frequency continues to increase (corresponding to $\sim10$${\it r}_{\rm g}$\ in the SIS), eventually approaching a value that should be consistent with the Keplerian frequency at the ISCO. The continued decrease in the size of the corona gives rise to the increase in coherence. In this scenario, the frequency of the QPO should relate to the size of the corona and thus would naturally increase as the corona collapses.
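To make these numbers concrete, the short sketch below (ours, for illustration) inverts the Keplerian relation quoted above to recover the orbital radii implied by the observed QPO frequencies for a $4\hbox{$\rm\thinspace M_{\odot}$}$ black hole; the intermediate frequency is back-computed for illustration only.
\begin{verbatim}
def keplerian_radius(f_hz, mass_msun):
    """Invert f = 3.2e4 * M^-1 * r^(-3/2), with f in Hz and M in
    solar masses, to get the orbital radius r in gravitational
    radii (r_g)."""
    return (3.2e4 / (mass_msun * f_hz)) ** (2.0 / 3.0)

M = 4.0   # most likely mass (Orosz et al. 2004)
# 140 Hz is an assumed illustrative value for the late HIS; 55 and
# 250 Hz are the observed frequencies quoted in the text.
for label, f in [("P1 (HIS)", 55.0), ("late HIS", 140.0),
                 ("SIS", 250.0)]:
    print(f"{label}: {f:5.0f} Hz -> r ~ "
          f"{keplerian_radius(f, M):4.1f} r_g")
# -> ~28, ~15 and ~10 r_g respectively, matching the text
\end{verbatim}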
The relationship between the QPO frequency and the surface ionisation parameter could be suggestive of an intrinsic relationship between the irradiation of the disk and its magnetic field properties, the latter of which has recently been proposed as a possible means to produce (low-frequency) QPOs \citep[see e.g.][]{oneill2011qpo, Oishi2011qpo}. This possibility will be addressed in forthcoming work. Finally, the relation between the coherence of the HF QPOs and the reflection fraction also leads to the interesting prediction that HF QPOs should only be observed when $R\gtrsim 0.4$, the regime in which the coherence is $Q>0$. This result is consistent with observations, with no HF QPO ever having been found in the LHS where $R\lesssim1$. \subsection{Radio (jet) Emission and Reflection Fraction} \label{jet} \citet{CorbelFender2004} presented a comprehensive analysis of the radio emission observed during the outburst of \hbox{\rm XTE~J1650--500}. In that work, the authors suggest that the transition between the HIS and SIS\footnote{Referred to as the intermediate and steep powerlaw states respectively in \citet{CorbelFender2004}.} is associated with a massive radio ejection event. The observations in the LHS were found to be consistent with the presence of a steady, compact jet, as is often seen in the LHS of black hole binaries \citep[see e.g.][and references therein]{Fender2001jets,Fender091}. This potential ejection event during the HIS--SIS transition coincides with the time at which we see a sharp jump in the reflection fraction. In Fig.~10 we show the radio flux density\footnote{Obtained directly from Table~1 of \citet{CorbelFender2004}. We are using the flux densities at 4800~MHz for all observations, except for the rising-LHS where this was not available. In this case, we proceeded by averaging the values presented for 1384 and 2496~MHz.} vs the reflection fraction calculated herein. At the time of the steady compact jet (in the LHS \& HIS), the radio flux density increased by a factor of $\sim5$ with no statistically significant change in $R$. However, immediately following the radio ejection in the SIS, the reflection fraction increases dramatically, resulting in a bi-modality in the radio flux density--reflection fraction plane. \textit{This suggests an intimate link between the jet ejection site and the collapsing corona.} Indeed, \citet{beloborodov99} predicts a link between radio jets and reflected flux, which was also seen in Cygnus X-1 \citep{miller2012cyg}. At later stages in the outburst (HIS-P6, falling LHS), the measured radio flux density is likely dominated by emission from the zone where the ejected plasma interacts with the ambient ISM surrounding the binary system. Unfortunately, due to the low spatial resolution of the radio observations, this emission is not resolved from that due to any reformation of the steady jet close to the black hole. \begin{figure}[!t] \hspace{-0.5cm} {\rotatebox{0}{{\includegraphics[width=9.cm]{fig_jet1-eps-converted-to.pdf} }}} \vspace{-0.4cm} \caption{ Radio flux density as a function of reflection fraction. The arrows show the direction of the outburst. There appear to be two distinct branches where the reflection fraction is either constant at $R\sim0.5$ or at $R\sim4$. These two branches correspond to the LHS/HIS and SIS respectively. Radio data obtained from \citet{CorbelFender2004}.
}\vspace*{0.2cm} \label{fig10} \end{figure} \label{spin} \subsection{Disk (inner) Radius and State Transition} \begin{figure*}[!t] \vspace*{-0cm} \centering {\hspace{-0.0cm} \rotatebox{0}{{\includegraphics[ width=16.5cm]{fig_rin_hist3-eps-converted-to.pdf} }}} \vspace*{-0.cm} \caption{Left: The inner radii during the outburst are shown in blue. We only show measurements where the uncertainty in the radius is $< 50\%$ of its value. The dashed lines show the evolution of the PCA (grey) and HEXTE-A (green) count rate in a similar manner to Fig.~6 (bottom). Right: Distribution of inner radii having errors $\leq50\%$ of their value. In red we show a Gaussian distribution with a mean radius of 1.65${\it r}_{\rm g}$\ and a standard deviation of 0.08${\it r}_{\rm g}$. } \label{fig11} \end{figure*} \begin{figure*}[] \vspace*{-0.4cm} \centering {\hspace{-0.0cm} \rotatebox{0}{{\includegraphics[ width=8.cm]{fig_mcmc_link1-eps-converted-to.pdf} }}} {\hspace{-0cm} \rotatebox{0}{{\includegraphics[ width=8.cm]{fig_mcmc_chain2-eps-converted-to.pdf} }}} \vspace*{-0.cm} \caption{MCMC results for the inner radius (a proxy for spin) obtained from the simultaneous fit to the first ten spectra in the HIS. Left: Figure tracing 50 out of a total of 170 ``walkers" during their random walk. The figure shows that the various chains converge quickly, indicating efficient sampling. The inset shows a close-up of the first 20,000 steps. Right: Full MCMC chain containing all 170 walkers, after having the first 5,000 elements ``burnt-in". For clarity, we only show every 1000th element of the chain.} \vspace*{0.2cm} \label{fig12} \end{figure*} As has been discussed throughout this paper, a popular explanation for state transitions is a radial variation in the extent of the accretion disk. This model has been highly successful in part due to its flexibility and the ease with which it can explain the ``weak" reflection fraction ($R<1$) often found in the LHS. However, over the past few years it has consistently been shown that in the luminous phases of the LHS -- at least above $\sim1\times 10^{-3}~L_{\rm Edd}$\footnote{Contrast this with the broadband analyses of GX~339-4 presented by \citet{tomsick09gx} where the authors find clear evidence for the recession of the accretion disk beginning only at Eddington luminosities below $\sim1\times10^{-3}~L_{\rm Edd}$. } -- the disk does not appear to be truncated away from the radius of the ISCO \citep{Millergxlhs2006, MillerHomanMiniutti2006, miller08gx, reisj1118, reislhs, Reynold2011swift, Waltonreis2012}. Figure~11 (left) shows that the present work can statistically rule out a disk being truncated further than $\sim3$${\it r}_{\rm g}$\ even in the LHS. During the brighter, intermediate states, we constrain this radius to $\sim1.65$${\it r}_{\rm g}$. This adds support to the idea that the inner disk radius remains roughly constant throughout the LHS--HIS--SIS state transitions in black hole binaries. Where we have not been able to constrain the radius, this has largely been due to the data quality (the falling phase of the LHS is inherently less luminous) as well as the fact that reflection is intrinsically weaker in the LHS. The strongest reflection features are expected in the intermediate states where the disk receives a larger fraction of the hard X-ray emission \citep[see e.g.][]{hiemstra1652}.
Note also that despite the comparatively low spectral resolution afforded by the {\it RXTE}-PCA (18\% FWHM energy resolution at 5.9\hbox{$\rm\thinspace keV$}), \citet{WilmsNowak2006cyg} showed that this instrument can indeed resolve line-widths down to at least $\sigma \sim0.3$\hbox{$\rm\thinspace keV$}. Higher resolution observations of \hbox{\rm XTE~J1650--500}\ early in the outburst showed that, when modelled with a ``Gaussian", the Fe$K\alpha$\ emission line is consistent with having a width of~$\sigma \sim~1.1$\hbox{$\rm\thinspace keV$}~\citep[Table~4 in][]{Waltonreis2012}\footnote{The original analyses of this {\it XMM-Newton}\ dataset performed by \citet{miller02j1650} included an extra smeared edge component at $\sim6.8$\hbox{$\rm\thinspace keV$}\ which resulted in the ``Gaussian"\ having a width of only $\sim250$\hbox{$\rm\thinspace eV$}. }. Assuming the stable radius shown in Fig.~11 for the two intermediate states is indeed the radius of the ISCO, we find, using the relationship between ISCO and black hole spin of \citet{Bardeenetal1972}, a dimensionless spin parameter of $0.977^{+0.006}_{-0.007}$, consistent with the value found in detailed analyses of single, high quality datasets obtained with {\it XMM-Newton}\ ($0.84\leq a \leq 0.98$; \citealt{Waltonreis2012}) or {\it BeppoSAX}\ ($a \gtrsim 0.93$; \citealt{MiniuttiFabianMiller2004j1650}\footnote{Spin converted from the lower limit on the inner radius of $\sim2.1$${\it r}_{\rm g}$.}). As a test of the robustness of this result, we have performed a joint fit to the first ten observations in the HIS. We used the same base model as before, with each individual observation having its own set of parameters -- disk emissivity index, temperature and normalisation, ionisation parameter, power-law index, as well as the normalisations of the powerlaw and reflection components. However, this time we forced the inner radius in the various observations to be a global parameter, thus assuming a constant value. This simultaneous fit contains a total of 81 free parameters\footnote{The sheer number of free parameters and computational time required to do ${\chi^{2}}$ fitting as well as the MCMC analyses described in what follows drove the need to constrain this analysis to only 10 observations as opposed to all 116.} and with this comes a high chance of mistaking a local minimum in ${\chi^{2}}$ space for the global best fit. In order to address these limitations, we proceeded by minimising the fit using standard ${\chi^{2}}$ fitting techniques within \hbox{\small XSPEC~\/}\ until a reasonable fit was produced (${\chi^{2}}/\nu <2$), at which point we halted the minimisation\footnote{The actual quality of the fit at this time was ${\chi^{2}}/\nu = 2059.7/1149$.} and proceeded with Monte Carlo Markov Chain (MCMC) analysis. We employed the MCMC procedure described in \citet{mcmc2012} (code found at \href{http://danfm.ca/emcee/}{http://danfm.ca/emcee/}) and implemented in the \hbox{\small XSPEC~\/} spectral fitting package by Jeremy Sanders (\hbox{\small XSPEC~\/} implementation described in \href{https://github.com/jeremysanders/xspec_emcee}{https://github.com/jeremysanders/xspec\_emcee}). MCMC techniques have been successfully used to address similar problems in constraining the black hole spin of NGC~3783 \citep{Reynolds20123783} as well as in modelling the kinematics of the microquasar XTE~J1550--564 \citep{SteinerMcClintock2012jet1550}.
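For reference, the \citet{Bardeenetal1972} ISCO--spin relation quoted above can be inverted numerically; the following is a minimal sketch of this standard calculation (our illustration, not the fitting code used in this work):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def r_isco(a):
    """Prograde ISCO radius (gravitational radii) for dimensionless
    spin a, following Bardeen, Press & Teukolsky (1972)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def spin_from_radius(r):
    """Numerically invert r_isco(a) = r on 0 <= a < 0.998."""
    return brentq(lambda a: r_isco(a) - r, 0.0, 0.998)

print(spin_from_radius(1.65))  # ~0.977, as quoted in the text
\end{verbatim}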
We added a 5\% random perturbation to all the parameters in the fit described above, and increased the value of the inner radius from the starting value of $\sim1.6$${\it r}_{\rm g}$\ to 2.5${\it r}_{\rm g}$\ in order to guarantee that the chain could freely converge to the global minimum. We used a total of 170 ``walkers", each iterated (``walking") 10,000 times. Figure~12 (left) shows the evolution of the walk in inner radius for 50 randomly selected walkers. It is clear that the walkers converge to the same value efficiently. Nonetheless, in order to be conservative we have ignored (``burned-in") the first 5000 elements of each chain and show on the right the full MCMC chain for the radius, which is clearly well behaved with a peak in its distribution at $1.66\pm0.01$${\it r}_{\rm g}$\ (s.d.), in excellent agreement with the results in Fig.~11. \section{Summary} In this work, we have taken a broad, systematic approach not only to the data reduction but also to the manner in which the spectra are fit during all spectral states observed during the outburst. We have presented a number of empirical results inferred from our a priori assumption that the observed spectra and their evolution are a consequence of variation in three separate emission components -- the power-law continuum, thermal disk and reprocessed reflection. Although we have tried to convey a fully consistent, albeit qualitative, picture of the spectral evolution of \hbox{\rm XTE~J1650--500}, it is very difficult to obtain a unique interpretation of the results and nearly impossible to generalise this to all systems. In a forthcoming publication, we will apply similar techniques to a larger set of objects, at which point we hope to be able to make a stronger statement regarding the global population of stellar mass black hole binaries and possibly AGN. For now, we summarise the main results of the presented spectral analyses as follows: \begin{enumerate} \item The outburst is well characterised by a model consisting of a hard X-ray continuum, a thermal disk component and reprocessed emission (reflection). \item The emissivity profile of the disk is not well characterised by a simple Newtonian approximation and, where this value can be constrained early in the outburst, it is steeper than the Newtonian value $q=3$. \item The $F_{\rm reflection}-F_{\rm powerlaw}$ plane for the periods covering the hard-intermediate to the soft-intermediate spectral states displays two distinct behaviors. Early in the outburst, during the HIS, this plane is nearly flat (slope of $-0.24\pm0.02$), with the powerlaw flux varying by $\sim 5\times$ and the corresponding reflection flux by $\sim1.5\times$. During the SIS that followed, this behavior became distinctly different, with a $F_{\rm reflection}-F_{\rm powerlaw}$ slope of $2.7\pm0.1$. \item The nearly-flat behavior of the $F_{\rm reflection}-F_{\rm powerlaw}$ plane seen for the HIS cannot be explained away by variations in the ionization of the accretion disk and we propose that the most likely explanation is that of the light bending model of \cite{Miniu04}. \item The HIS--SIS transition is accompanied by a sharp increase in the reflection fraction, which we interpret as a sudden collapse of the corona as the system approaches the thermal state. The collapsed corona now experiences stronger effects of light-bending due to its proximity to the black hole. The radiation from the corona is focused towards the disk, thus systematically increasing the reflection and decreasing the continuum.
\item We confirm the $R-\Gamma$ correlation during the low-hard and early hard-intermediate states but find that this relation does not hold once the system has transitioned into the SIS. \item We find a strong linear correlation between the reflection fraction and the coherence of the high frequency QPOs. We also find a correlation between the frequency of the QPO and $R$ which is well explained with a simple log-linear relation. \item We find a strong correlation between the frequency of the QPO and the ionisation state of the accretion disk. This relationship is suggestive of an intrinsic relationship between the irradiation of the disk and its magnetic field properties. \item We have presented a scenario within the collapsing corona toy model where the increase in the coherence of the QPO with increasing reflection fraction is a consequence of the decreasing size of the emitting region as the corona collapses to regions closer to the black hole, where light bending significantly increases $R$. Similarly, as the corona collapses, the increase in the QPO frequency could be associated with a decrease in the radial extent of the corona. \item In the low-hard and hard-intermediate states, the radio flux density varies by a factor of $\sim5$ with no change in the reflection fraction. The ``ballistic" radio ejection associated with the HIS--SIS transition is accompanied by a sharp increase in the reflection fraction. \item We show that the HIS--SIS transition is not due to variations in the inner radius of the accretion disk, which is found to be stable at $1.65\pm0.08$${\it r}_{\rm g}$. Assuming this is the radius of the innermost stable circular orbit, we find a spin of $a\gtrsim0.96$, in excellent agreement with values measured in other works. \item Our work shows that the configuration of the corona -- and possibly that of the magnetic field -- is instrumental in defining the state of the system. \end{enumerate} \section*{Acknowledgements} RCR thanks the Michigan Society of Fellows and NASA. RCR is supported by NASA through the Einstein Fellowship Program, grant number PF1-120087 and is a member of the Michigan Society of Fellows. ACF thanks the Royal Society. \vspace*{0.5cm} \bibliographystyle{mnras}
\section{Introduction} \label{sec_intro} \IEEEPARstart{C}{onsensus} has permeated control and machine learning, e.g., in \textit{distributed} optimization \cite{xin2019distributed}, estimation \cite{tcns2020,ISJ_cyber}, and resource allocation \cite{gharesifard2013distributed}. \textit{Network resource allocation} is the problem of allocating a constant amount of resources among agents to minimize a cost, with applications in several fields, such as the distributed Economic Dispatch Problem (EDP) \cite{cherukuri2015tcns,yu2017distributed,molzahn2017survey,yang2013consensus,chen2018fixed,chen2016distributed,li2017distributed,yi2016initialization}, distributed coverage control \cite{higuera2012distributing,MSC09}, congestion control \cite{srikant2004mathematics}, and distributed load balancing \cite{mach2017mobile}. Such problems are subject to inherent \textit{physical constraints} on the agents, leading to nonlinear dynamics with respect to actuation and affecting the stability. This work formulates a general solution considering such nonlinear agents to solve distributed allocation. \ab{Another example is the Automatic Generation Control (AGC) in electric power systems \cite{kundur, hiskens_agc}, which regulates the generators' output power to compensate for any generation-load mismatch in the system. The AGC generators' deviations are subject to limits based on the available power reserves and also on Ramp Rate Limits (RRLs) (or rate saturation), i.e., the speed at which their produced power can increase or decrease is constrained. Under such nonlinear constraints, a linear (ideal) model for generators as given by \cite{cherukuri2015tcns,gharesifard2013distributed} may not remain feasible or may result in a sub-optimal solution.} \textit{Related literature:} The literature spans from preliminary linear \cite{gharesifard2013distributed,cherukuri2015tcns,boyd2006optimal} and accelerated linear \cite{shames2011accelerated} solutions to more recent sign-based consensus \cite{wang2020distributed}, Newton-based \cite{martinez2021distributed}, derivative-free swarm-based \cite{hui2014optimal}, predictive online saddle-point methods \cite{chen2017online}, $2$nd-order autonomous dynamics \cite{deng2019distributed,yu2017distributed,deng2017distributed,wang2018second}, distributed mechanisms over local message-passing networks \cite{heydaribeni2019distributed}, multi-objective \cite{li2020distributed}, \ab{primal-dual \cite{turan2020resilient,nesterov2018dual,feijer2010stability}, Lagrangian-based \cite{xu2017distributed,dominguez2012decentralized,kar2012distributed,li2017distributed}, and projected proximal sub-gradient algorithms \cite{iiduka2018distributed}, among others. These works cannot address the different inherent physical nonlinearities in the agents' models, such as the RRL for distributed AGC, or other designed nonlinear models intended to improve computational load and convergence rate, e.g., to reach fast convergence. In general, model nonlinearities such as limited computational capacities, constrained actuation, and model imperfections may significantly affect the convergence or degrade the resource allocation performance. For example, none of the mentioned references can address quantization, saturation, and sign-based actuation altogether, while ensuring feasibility at all times.
In reality, under such model nonlinearities there is no guarantee that the existing solutions accurately follow the ideally-designed dynamics and preserve feasibility, optimality, or a specified convergence rate. Some existing Lagrangian-based methods \cite{dominguez2012decentralized,kar2012distributed} are not \textit{anytime} feasible, but reach feasibility upon convergence \cite{cherukuri2015tcns}. In a different line of research, \textit{inequality-constrained} problems are solved via primal-dual methods and Lagrangian relaxation \cite{turan2020resilient,nesterov2018dual,feijer2010stability}. This differs from equality-constrained problems, which are typically solved via Laplacian gradient methods. The latter is used for optimal resource allocation in the EDP \cite{cherukuri2015tcns,yu2017distributed,molzahn2017survey}, but without addressing the RRL nonlinearity on the power \textit{rate}. } \textit{Main contributions:} We propose a general $1$st-order \textit{Laplacian-gradient dynamics} for distributed resource allocation. The proposed localized solution generalizes many nonlinear constraints on the agents including, but not limited to, (i) \textit{saturation} and (ii) \textit{quantization}. Further, some specific constraints (e.g., on the convergence or robustness) impose nonlinearities on the agents' dynamics. For example, it is practical in applications to design (iii) \textit{fixed-time} and \textit{finite-time} convergent solutions, and/or (iv) \textit{robust} protocols to impulsive noise and uncertainties. Our proposed dynamics generalizes many \textit{symmetric sign-preserving} model nonlinearities. \ab{We prove uniqueness, anytime feasibility, and convergence over generally sparse, time-varying, undirected (and not necessarily connected) networks, referred to as \textit{uniform-connectivity}. The proofs are based on nonsmooth Lyapunov theory \cite{cortes2008discontinuous}, graph theory, and convex analysis, \textit{irrespective of the type of nonlinearity}. This generalized 1st-order solution is more practical as it considers all possible sign-preserving physical constraints on the agents' dynamics and, further, can be extended to consider nonlinearities on the agents' communications \cite{taes2020finite,mrd_2020fast}.} \vspace{-0.2cm} \section{Problem Statement} \label{sec_prob} The network resource allocation problem is of the form\footnote{Note the subtle abuse of notation where the overall state $\mathbf{X}$ is represented in matrix form to simplify the notation in the proof analysis throughout the paper. }, \begin{align} \label{eq_dra} \min_\mathbf{X} F(\mathbf{X},t) = \sum_{i=1}^{n} f_i(\mathbf{x}_i,t), ~ \text{s.t.} ~ \mathbf{X}\mathbf{a} = \mathbf{b} \end{align} with $\mathbf{x}_i \in \mathbb{R}^d$, $\mathbf{X} = [\mathbf{x}_1,\dots,\mathbf{x}_n] \in \mathbb{R}^{d \times n}$, vectors $\mathbf{a}=[{a}_1;\dots;{a}_n] \in \mathbb{R}^n$, and $\mathbf{b}=[{b}_1;\dots;{b}_d] \in \mathbb{R}^d$. \ab{The entries of $\mathbf{a}$ are assumed not to be very close to zero to avoid unbounded solutions. If ${a}_j =0$ for agent $j$, its state $\mathbf{x}_j$ is decoupled from the other agents, and problem~\eqref{eq_dra} can be restated for $n-1$ coupled agents plus an unconstrained optimization on $f_j(\mathbf{x}_j,t)$.
} $f_i(\mathbf{x}_i,t):\mathbb{R}^{d+1} \rightarrow \mathbb{R}$ in \eqref{eq_dra} denotes the local time-varying cost at agent $i$ as $f_i(\mathbf{x}_i,t) = \widetilde{f}_i(\mathbf{x}_i) + \widehat{f}_i(t)$, with $\widehat{f}_i(t) \neq 0$ representing the time-varying part of the cost. In some applications, the states are subject to the \textit{box constraints}, $\underline{\mathbf{m}} \leq \mathbf{x}_i \leq \overline{\mathbf{m}}$, denoting element-wise comparison. Using exact penalty functions, these constraints are added into the local objectives as $f_i^\epsilon (\mathbf{x}_i,t) = f_i(\mathbf{x}_i,t) + \epsilon h^\epsilon(\mathbf{x}_i - \overline{m}) + \epsilon h^\epsilon(\underline{m} - \mathbf{x}_i)$ with $h^\epsilon(u)= \max \{u,\mathbf{0}\}$. The smooth equivalent substitutes are $\frac{1}{\mu} \log (1+\exp (\mu u))$ \cite{csl2021} and the \textit{quadratic} penalty $(\max \{u,\mathbf{0}\})^2$ (or the $\theta$-logarithmic barrier \cite{li2017distributed}), with the gap inversely scaling with $\epsilon$. \begin{ass} \label{ass_strict} The (time-independent parts of the) local functions, $\widetilde{f}_i(\mathbf{x}_i):\mathbb{R}^{d} \rightarrow \mathbb{R}$, are strictly convex and differentiable. \end{ass} This assumption ensures a unique optimizer (see Lemma~\ref{lem_unique_feasible}) and the existence of the function gradients. This paper aims to design \textit{localized} general nonlinear dynamics to solve \eqref{eq_dra} based on partial information at agents over a network. \vspace{-0.4cm} \section{Definitions and Auxiliary Results } \label{sec_def} \subsection{ Graph Theory and Nonsmooth Analysis} The multi-agent network is modeled as a time-varying undirected graph $\mathcal{G}(t)=\{\mathcal{V},\mathcal{E}(t)\}$ with links $\mathcal{E}(t)$ and nodes $\mathcal{V}=\{1,\dots,n\}$. $(i,j) \in \mathcal{E}(t)$ denotes a link from agent $i$ to $j$, and the set $\mathcal{N}_i(t)=\{j|(j,i)\in \mathcal{E}(t)\}$ represents the direct neighbors of agent $i$ over $\mathcal{G}(t)$. Every link $(i,j) \in \mathcal{E}(t)$ is assigned a positive weight $W_{ij}>0$ in the associated weight matrix $W(t)=[W_{ij}(t)] \in \mathbb{R}^{n \times n}_{\geq0}$ of $\mathcal{G}(t)$. In $\mathcal{G}(t)$, define a \textit{spanning tree} as a subset of links in which there exists exactly one path between every two nodes (for all $n$ nodes). \begin{ass} \label{ass_G} The following assumptions hold on $\mathcal{G}(t)$: \begin{itemize} \item The network $\mathcal{G}(t)$ is undirected. This implies a symmetric associated weight matrix $W(t)$, i.e., $W_{ij}(t)=W_{ji}(t)\geq0$ for $i,j \in \{1,\dots,n\}$ at all times $t\geq0$, which is not necessarily row, column, or doubly stochastic. \item There exists a sequence of non-overlapping finite time-intervals $[t_k,t_k+l_k]$ in which $\bigcup_{t=t_k}^{t_k+l_k}\mathcal{G}(t)$ includes an undirected spanning tree (uniform-connectivity). \end{itemize} \end{ass} \ab{Next, we restate some nonsmooth set-valued analysis from \cite{cortes2008discontinuous}. For a nonsmooth function $h:\mathbb{R}^m \rightarrow \mathbb{R}$, define its \textit{generalized gradient} as \cite{cortes2008discontinuous},} \ab{\begin{align} \partial h(\mathbf{x})= \mathrm{co}\{\lim \nabla h(\mathbf{x}_i): \mathbf{x}_i \rightarrow \mathbf{x}, \mathbf{x}_i \notin \Omega_h \cup S \} \end{align} where $\mathrm{co}$ denotes the convex hull, $S \subset \mathbb{R}^m$ is any set of zero Lebesgue measure, and $\Omega_h \subset \mathbb{R}^m$ is the set of points at which $h$ is non-differentiable.
If $h$ is \textit{locally Lipschitz} at $\mathbf{x}$, then $\partial h(\mathbf{x})$ is nonempty, compact, and convex, and the set-valued map $\partial h:\mathbb{R}^m\rightarrow \mathcal{B}\{\mathbb{R}\}$ (with $\mathcal{B}\{\mathbb{R}\}$ denoting the collection of all subsets of $\mathbb{R}$), $\mathbf{x} \mapsto \partial h(\mathbf{x})$, is upper semi-continuous and locally bounded. Then, its \textit{set-valued Lie-derivative} $\mathcal{L}_\mathcal{H} h : \mathbb{R}^m \rightarrow \mathbb{R}$ with respect to the system dynamics $\dot{\mathbf{x}} \in \partial \mathcal{H}(\mathbf{x})$ (with a unique solution) at $\mathbf{x}$ is, \begin{align} \mathcal{L}_\mathcal{H} h = \{\eta \in \mathbb{R}| \exists \nu \in \mathcal{H}(\mathbf{x})~s.t.~ \zeta^\top \nu = \eta,~\forall \zeta \in \partial h(\mathbf{x})\} \end{align} These notions are used for the nonsmooth Lyapunov analysis in Section~\ref{sec_conv}. } \vspace{-0.8cm} \subsection{Preliminary Results on Convex Optimization} Following the Karush-Kuhn-Tucker (KKT) condition and the Lagrange multipliers method, the optimal solution to problem \eqref{eq_dra} satisfies the \textit{feasibility condition} described below. \begin{defn} (\textbf{Feasibility Condition}) Define $\mathcal{S}_\mathbf{b} = \{\mathbf{X} \in \mathbb{R}^{d \times n}|\mathbf{X}\mathbf{a} = \mathbf{b}\}$ and $\mathbf{X} \in \mathcal{S}_\mathbf{b}$ as the feasible set and a feasible value, respectively. \label{def_feas} \end{defn} \ab{Note that problem \eqref{eq_dra} differs from \textit{unconstrained} distributed optimization \cite{xin2019distributed,garg2021} due to the \textit{feasibility} constraint $\mathbf{X}\mathbf{a} = \mathbf{b}$, which is of dimension $n-1$. Some works consider \textit{inequality constraints} $\mathbf{X}\mathbf{a} \leq \mathbf{b}$ \cite{turan2020resilient,nesterov2018dual,feijer2010stability}, which represent a half-space of dimension $n$, with an example application in network utility maximization, where the weighted sum of utilities $\mathbf{X}\mathbf{a}$ should not \textit{exceed} a certain value $\mathbf{b}$. These problems may involve many such relaxed inequality constraints. In contrast, here we have one \textit{equality} constraint; e.g., in the EDP, the weighted sum of generated power $\mathbf{X}\mathbf{a}$ should \textit{exactly} meet the load demand constraint $\mathbf{b}$ at all times, i.e., $\mathbf{X}\mathbf{a}=\mathbf{b}$ \cite{cherukuri2015tcns,yu2017distributed,molzahn2017survey,dominguez2012decentralized,kar2012distributed}. In the case of $m>1$ equality constraints, the problem can be algebraically reduced to a cost optimization of $n-m+1$ states subject to one feasibility constraint of dimension $n-m$, where the other $m-1$ states are dependent variables. For a comparison of different constraints and solutions see \cite[Table~I]{mrd_2020fast}. \begin{defn} Given a convex function $h(\mathbf{X}): \mathbb{R}^{d \times n} \rightarrow \mathbb{R}$, the level set $L_\gamma(h)$ for a given $\gamma \in \mathbb{R}$ is the set $L_\gamma(h)= \{\mathbf{X} \in \mathbb{R}^{d \times n}|h(\mathbf{X}) \leq \gamma\}$. It is known that for a \textit{strictly} convex $h(\mathbf{X})$, all its level sets $L_\gamma(h)$ are strictly convex, closed, and compact for all scalars $\gamma$.
\end{defn}} \begin{lem}\label{lem_optimal_solution} Problem~\eqref{eq_dra} under Assumption~\ref{ass_strict} has a unique optimal feasible solution $\mathbf{X}^* \in \mathcal{S}_\mathbf{b}$ with $\nabla \widetilde{F}(\mathbf{X}^*) = \pmb{\varphi}^* \otimes \mathbf{a}^\top$, where $\pmb{\varphi}^* \in \mathbb{R}^d$, $\widetilde{F}(\mathbf{X}) = \sum_{i=1}^{n}\widetilde{f}_i(\mathbf{x}_i)$, $\nabla \widetilde{F}(\mathbf{X}^*) = [\nabla \widetilde{f}_1(\mathbf{x}^*_1),\dots,\nabla \widetilde{f}_n(\mathbf{x}^*_n)]$ is the gradient (with respect to $\mathbf{X}$) of the function $\widetilde{F}$ at $\mathbf{X}^*$, and $\otimes$ is the Kronecker product. \end{lem} \begin{proof} The proof follows \cite{boyd2006optimal} by using the KKT method. \end{proof} In the following, we analyze the feasible solution set using the notion of \textit{level sets}. \ab{For two distinct $\mathbf{X}$ and $\mathbf{Y}$ with $h(\mathbf{X}) > h(\mathbf{Y})$ on two level sets $L_{h(\mathbf{X})}$ and $L_{h(\mathbf{Y})}$, strict convexity gives $\mathbf{e}_p^\top(h(\mathbf{Y}) - h(\mathbf{X}))\mathbf{e}_p> \mathbf{e}_p^\top \nabla h(\mathbf{X})(\mathbf{Y}-\mathbf{X})^\top \mathbf{e}_p$ and $\mathbf{e}_p^\top (h(\mathbf{X}) - h(\mathbf{Y})) \mathbf{e}_p> \mathbf{e}_p^\top \nabla h(\mathbf{Y})(\mathbf{X}-\mathbf{Y})^\top \mathbf{e}_p$; adding the two, \begin{equation} \label{eq_level} \mathbf{e}_p^\top (\nabla h(\mathbf{Y})-\nabla h(\mathbf{X}))(\mathbf{Y}-\mathbf{X})^\top \mathbf{e}_p > \mathbf{0},~p\in\{1,\dots,d\} \end{equation}} with $\mathbf{e}_p$ as the unit vector of the $p$-th coordinate. \begin{lem}\label{lem_unique_feasible} For every feasible set $\mathcal{S}_\mathbf{b}$ there exists one unique point $\mathbf{X}^* \in \mathcal{S}_\mathbf{b}$ (under Assumption~\ref{ass_strict}) such that $\nabla\widetilde{F}(\mathbf{X}^*) = \Lambda \otimes \mathbf{a}^\top$ with $\Lambda \in \mathbb{R}^d$. \end{lem} \begin{proof} From the strict convexity of $\widetilde{F}(\mathbf{X})$ (Assumption~\ref{ass_strict}), only one of its strictly convex level sets, say $L_\gamma(\widetilde{F})$, touches the constraint facet $\mathcal{S}_\mathbf{b}$, and it does so at a single point, say $\mathbf{X}^*$. Clearly, the gradient $\nabla \widetilde{F}(\mathbf{X}^*)$ is orthogonal to $\mathcal{S}_\mathbf{b}$, and $\frac{\nabla \widetilde{f}_i(\mathbf{x}^*_i)}{a_i}=\frac{\nabla \widetilde{f}_j(\mathbf{x}^*_j)}{a_j} = \Lambda$ for all $i,j$. By contradiction, consider two points $\mathbf{X}^{*1},\mathbf{X}^{*2} \in \mathcal{S}_\mathbf{b}$ for which $\nabla \widetilde{F}({\mathbf{X}^{*1}}) = \Lambda_1 \otimes \mathbf{a}^\top$, $\nabla \widetilde{F}(\mathbf{X}^{*2}) = \Lambda_2 \otimes \mathbf{a}^\top$ (two possible optima), implying that either (i) one level set $L_\gamma(\widetilde{F})$, $\gamma = \widetilde{F}(\mathbf{X}^{*1}) = \widetilde{F}(\mathbf{X}^{*2})$, is adjacent to the affine constraint $\mathcal{S}_\mathbf{b}$ at both $\mathbf{X}^{*1},\mathbf{X}^{*2}$, or (ii) there are two level sets $L_{\widetilde{F}(\mathbf{X}^{*1})}$ and $L_{\widetilde{F}(\mathbf{X}^{*2})}$, touching the affine set $\mathcal{S}_\mathbf{b}$ at $\mathbf{X}^{*1}$ and $\mathbf{X}^{*2}$ respectively, \ab{and thus, at both points $\nabla \widetilde{F}({\mathbf{X}^{*1}})$ and $\nabla \widetilde{F}({\mathbf{X}^{*2}})$ need to be orthogonal to $(\mathbf{X}^{*1}-\mathbf{X}^{*2})$ in $\mathcal{S}_\mathbf{b}$. Since $\mathcal{S}_\mathbf{b}$ forms a linear facet, the former case contradicts the strict convexity of the level sets.
In the latter case, \begin{align} \label{eq_proof1} \mathbf{e}_p^\top (\nabla \widetilde{F}({\mathbf{X}^{*1}})-\nabla \widetilde{F}({\mathbf{X}^{*2}}))(\mathbf{X}^{*1}-\mathbf{X}^{*2})^\top \mathbf{e}_p = 0,~\forall p \end{align} which contradicts \eqref{eq_level}. This proves the lemma. } \end{proof} This proof analysis is recalled again in the next sections. \section{The Proposed $1$st-Order Nonlinear Dynamics} \label{sec_dynamic} We propose a $1$st-order protocol $\mathcal{F}: \mathbb{R}^{d\times n} \rightarrow \mathbb{R}^d$ coupling the agents' dynamics to solve problem \eqref{eq_dra}, while addressing model nonlinearities and satisfying \textit{feasibility at all times}, \begin{align} \dot{\mathbf{x}}_i = -\frac{1}{a_i}\sum_{j \in \mathcal{N}_i} W_{ij} g\Big(\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} - \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}\Big) =: \mathcal{F}_i(\mathbf{x}_i), \label{eq_sol} \end{align} with $W_{ij}$ as the weight of the link between agents $i$ and $j$, $\nabla \widetilde{f}_i(\mathbf{x}_i)$ as the gradient of the (time-invariant part of the) local objective $\widetilde{f}_i$ with respect to $\mathbf{x}_i$, and $g$ as the nonlinearity, to be specified later. \ab{ Following Assumption~\ref{ass_strict}, given a state point $\mathbf{X}_0$, the level set $L_{\widetilde{F}(\mathbf{X}_0)}$ is closed, convex, and compact. Then, the solution set $L_{\widetilde{F}(\mathbf{X}_0)} \cap \mathcal{S}_\mathbf{b}$ under \eqref{eq_sol} is closed and bounded. Indeed, \eqref{eq_sol} represents a \textit{differential inclusion} due to the discontinuity of the RHS of \eqref{eq_sol} \cite{cortes2008discontinuous}, where for the sake of notational simplicity ``$=$'' is used instead of ``$\in$''. From \cite{cortes2008discontinuous}, it is straightforward to see that the trajectory $\mathcal{F}$ is locally bounded, upper semi-continuous, with non-empty, compact, and convex values, and thus, from \cite[Proposition~S2]{cortes2008discontinuous} and similar to \cite{garg2021,parsegov2013fixed}, the solution of \eqref{eq_sol} for an initial condition $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$ exists and is unique. } Recall that the time-varying and time-invariant parts of the local objectives are decoupled. Dynamics \eqref{eq_sol} represents a \textit{$1$st-order weighted gradient tracking}, with no use of the Hessian matrix; thus, the function $\widetilde{f}_i(\cdot)$ need not be twice-differentiable (in contrast to $2$nd-order dynamics, e.g., in \cite{deng2019distributed}). This allows us to incorporate smooth \textit{penalty functions} to address the \textit{box constraints}. In the case of a communication network among agents, \textit{periodic} communication with a sufficiently small period $\tau$ is considered; see \cite{KIA2015254} for details. The state of every agent $i$ evolves under the influence of its direct neighbors $j \in \mathcal{N}_i$ weighted by $W_{ji}$, e.g., via information-sharing networks \cite{KIA2015254} where every agent $i$ shares its local gradient $\nabla \widetilde{f}_i(\mathbf{x}_i)$ along with the weight $W_{ji}$. Therefore, the proposed resource allocation dynamics \eqref{eq_sol} is only based on local information-update, and is \textit{distributed} over the multi-agent network.
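To make the protocol concrete, the following minimal sketch (our own illustration, not code from the paper) integrates \eqref{eq_sol} by forward Euler for $d=1$, arbitrary quadratic local costs, a fixed undirected ring, and a saturation nonlinearity of the type formalised below (see Assumption~\ref{ass_gW} and Application~4):
\begin{verbatim}
import numpy as np

n, dt, steps = 10, 1e-3, 20000
rng = np.random.default_rng(0)
gamma = rng.uniform(0.5, 2.0, n)   # local costs f_i(x) = gamma_i * x^2
a, b = np.ones(n), 8.0             # constraint: sum_i a_i x_i = b
x = np.full(n, b / n)              # feasible initialisation X_0

W = np.zeros((n, n))               # undirected unit-weight ring
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

g = lambda z, kappa=1.0: np.clip(z, -kappa, kappa)   # saturation

for _ in range(steps):
    q = 2 * gamma * x / a          # marginal costs grad f_i / a_i
    # Eq. (eq_sol): xdot_i = -(1/a_i) sum_j W_ij g(q_i - q_j)
    x -= dt / a * np.sum(W * g(q[:, None] - q[None, :]), axis=1)

print(a @ x)             # weighted sum stays equal to b
print(2 * gamma * x / a)  # marginal costs (approximately) equalise
\end{verbatim}
By the symmetry of $W$ and the oddness of $g$, the printed weighted sum remains $b$ up to floating-point error, illustrating the anytime-feasibility property proved in Section~\ref{sec_conv}.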
\begin{ass} \label{ass_gW} (\textbf{Strongly sign-preserving nonlinearity}) In dynamics \eqref{eq_sol}, $g: \mathbb{R}^d \rightarrow \mathbb{R}^d$ is a nonlinear odd mapping such that $g(\mathbf{x}) = - g(-\mathbf{x})$, $g(\mathbf{x})\succ \mathbf{0}$ for $\mathbf{x}\succ \mathbf{0}$, $g(\mathbf{0}) = \mathbf{0}$, and $g(\mathbf{x})\prec\mathbf{0}$ for $\mathbf{x}\prec \mathbf{0}$. Further, $\nabla g(\mathbf{0})\neq \mathbf{0}$. \end{ass} Some practical instances of such nonlinearities as the function $g(\cdot)$ in \eqref{eq_sol}, e.g., physics-based nonlinearities, are given next. \textbf{Application 1:} The function $g(\cdot)$ can be adopted from the finite-time and fixed-time literature \cite{taes2020finite,parsegov2013fixed,mrd_2020fast} as $\mbox{sgn}^\mu(\mathbf{x})=\mathbf{x}\|\mathbf{x}\|^{\mu-1}$, where $\|\cdot\|$ denotes the Euclidean norm and $\mu\geq 0$. In general, system dynamics of the form $\dot{\mathbf{x}}_i = -\sum_{j =1}^n W_{ij}(\mbox{sgn}^{\mu_1}(\mathbf{x}_i-\mathbf{x}_j)+\mbox{sgn}^{\mu_2}(\mathbf{x}_i-\mathbf{x}_j)) $ converge in finite/fixed time \cite{parsegov2013fixed}, motivating the fast-convergent allocation dynamics \cite{mrd_2020fast}, \begin{align} \dot{\mathbf{x}}_i = &-\sum_{j \in \mathcal{N}_i} W_{ij}( \mbox{sgn}^{\mu_1}(\mathbf{z})+ \mbox{sgn}^{\mu_2}(\mathbf{z})), \label{eq_sol_fixed} \end{align} with $\mathbf{z}=\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} - \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}$, $0<\mu_1<1$, and $0<\mu_2<1$ (finite-time case) or $1<\mu_2$ (fixed-time case). \textbf{Application 2:} Quantized allocation, by choosing $g(\cdot)$ as, \begin{align} g_{l}(\mathbf{z}) = \mbox{sgn}(\mathbf{z}) \exp(g_{u}(\log(|\mathbf{z}|))), \label{eq_quan_log} \end{align} where $g_{u}(\mathbf{z}) = \delta \left[ \frac{\mathbf{z}}{\delta}\right]$ represents the \textit{uniform} quantizer with $\left[ \cdot\right]$ as the rounding operation to the nearest integer \cite{wei2018nonlinear,frasca_quntized,guo2013consensus}, $\mbox{sgn}(\cdot)$ follows $\mbox{sgn}^\mu(\cdot)$ with $\mu =0$, $\delta$ is the quantization level, and the function $g_l$ denotes the \textit{logarithmic} quantizer. \textbf{Application 3:} Sign-preserving nonlinear dynamics \cite{wei2017consensus,stankovic2020nonlinear} robust to \textit{impulsive noise} can be achieved via $g_{p}(\mathbf{z}) = -\frac{d(\log p(\mathbf{z}))}{d \mathbf{z}}$ with $p$ as the noise density. For example, for $p$ following the \textit{approximately uniform} class $\mathcal{P}_1$ or the \textit{Laplace} class $\mathcal{P}_2$ \cite{stankovic2020nonlinear}, \begin{align} \label{eq_robust_uni} p \in \mathcal{P}_1 &: ~ g_p(\mathbf{z}) = \begin{cases} \frac{1 -\epsilon}{\epsilon d}\mbox{sgn}(\mathbf{z}) & |\mathbf{z}| > d\\ 0 & |\mathbf{z}| \leq d \end{cases} \\ \label{eq_robust_sgn} p \in \mathcal{P}_2 &: ~ g_p(\mathbf{z}) = 2 \epsilon \mbox{sgn}(\mathbf{z}), \end{align} with $0<\epsilon<1 $, $ d>0$. \textbf{Application 4:} Saturation nonlinearities \cite{liu2019global,yi2019distributed} (or \textit{clipping}) arise from a limited actuation range, for which the saturation level may affect the stability, convergence, and general behavior of the system. For a given saturation level $\kappa>0$, \begin{align} \label{eq_sat} g_\kappa(\mathbf{z}) = \begin{cases} \kappa\mbox{sgn}(\mathbf{z}) & |\mathbf{z}| > \kappa\\ \mathbf{z} & |\mathbf{z}| \leq \kappa \end{cases} \end{align} \ab{ \begin{rem} Recall that Eq.
\eqref{eq_sol} represents \textit{Laplacian-gradient-type dynamics} (see \cite{cherukuri2015tcns} for details) which can ensure \textit{feasibility at all times under various nonlinearities of $g(\cdot)$}, in contrast to the Lagrangian-type methods \cite{li2017distributed,dominguez2012decentralized,kar2012distributed,turan2020resilient,nesterov2018dual,feijer2010stability}. If the actuator is not subject to nonlinearities, one may select a linear function for $g(\cdot)$, i.e., $g(\mathbf{z}) = \mathbf{z}$, and utilize linear methods \cite{gharesifard2013distributed,cherukuri2015tcns,shames2011accelerated}. However, our focus is to provide a more general solution method that is applicable also to agents with nonlinearities (inherent or by design). For example, generators are known to be physically constrained by RRLs, which are a determining factor for the stability of the grid \cite{hiskens_agc}. Linear methods cannot consider RRLs and may result in solutions with a high rate of change in power generation $\dot{\mathbf{x}}_i$, which cannot be followed in reality and may result in infeasibility or sub-optimality. However, such limits can be satisfied by considering $g(\cdot)$ as in \eqref{eq_sat}, where the limits can be tuned by $\kappa$. \end{rem} } \section{Analysis of Convergence} \label{sec_conv} In this section, combining the convex analysis of Lemmas~\ref{lem_optimal_solution}-\ref{lem_unique_feasible} with Lyapunov theory, we prove the convergence of the general protocol~\eqref{eq_sol} to the optimal value of problem \eqref{eq_dra} subject to the constraint on the weighted sum of resources. The proof is, in general, irrespective of the nonlinearity type, i.e., it holds for any nonlinearity satisfying Assumption~\ref{ass_gW}, including \eqref{eq_sol_fixed}-\eqref{eq_sat}. \begin{lem} \label{lem_feasible_intime} (\textbf{Anytime Feasibility}) Suppose Assumption~\ref{ass_gW} holds. The states of the agents under dynamics~\eqref{eq_sol} remain feasible, i.e., if $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$, then $\mathbf{X}(t) \in \mathcal{S}_\mathbf{b}$ for $t>0$. \end{lem} \begin{proof} Having $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$ implies that $\mathbf{X}_0\mathbf{a}=\mathbf{b}$. For the general state dynamics \eqref{eq_sol}, \small \begin{align} \label{eq_proof_feas} \frac{d}{dt}(\mathbf{X}\mathbf{a})=\sum_{i=1}^n \dot{\mathbf{x}}_ia_i = -\sum_{i=1}^n\sum_{j \in \mathcal{N}_i} W_{ij} g\Big(\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} - \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}\Big). \end{align} \normalsize From Assumptions~\ref{ass_G} and~\ref{ass_gW}, $W_{ij}=W_{ji}$ and $g(-\mathbf{x})=-g(\mathbf{x})$. Therefore, the summation in \eqref{eq_proof_feas} is equal to zero, ${\frac{d}{dt}(\mathbf{X}\mathbf{a})=\mathbf{0}}$, and $\mathbf{X}\mathbf{a}$ is time-invariant under dynamics \eqref{eq_sol}. Thus, for feasible initial states $\mathbf{X}_0\mathbf{a}=\mathbf{b}$, the states ${\mathbf{X}(t)\mathbf{a}=\mathbf{b}}$ remain feasible over time, i.e., $\mathbf{X}(t) \in \mathcal{S}_{\mathbf{b}}$ for all $t>0$. \end{proof} \ab{The above proves \textit{anytime feasibility}, i.e., the nonlinear dynamics \eqref{eq_sol} remain feasible \textit{at all times}, an advantage over consensus-based solutions \cite{dominguez2012decentralized,kar2012distributed,li2017distributed}. For AGC subject to RRLs, $\dot{\mathbf{x}}_i$ (and thus $g(\cdot)$) further needs to be of limited range.
Further, Lemma~\ref{lem_feasible_intime} shows that $\mathcal{S}_\mathbf{b}$ is \textit{positively invariant} under the nonlinear dynamics \eqref{eq_sol}.} \begin{thm} \label{thm_tree} (\textbf{Equilibrium-Uniqueness}) Under Assumptions~\ref{ass_G} and \ref{ass_gW}, the equilibrium point $\mathbf{X}^*$ of the solution dynamics \eqref{eq_sol} is only of the form $\nabla \widetilde{F}(\mathbf{X}^*) = \Lambda \otimes \mathbf{a}^\top$ with $\Lambda \in \mathbb{R}^d$, and coincides with the unique optimal point of \eqref{eq_dra}. \end{thm} \begin{proof} From dynamics~\eqref{eq_sol}, $\dot{\mathbf{x}}^*_i = \mathbf{0},\forall i$ for $\mathbf{X}^*$ satisfying $\nabla \widetilde{F}(\mathbf{X}^*) = \Lambda \otimes \mathbf{a}^\top$, and such a point $\mathbf{X}^*$ is clearly an equilibrium of~\eqref{eq_sol}. We prove by contradiction that there is no other equilibrium with $\nabla \widetilde{F}(\mathbf{X}^*) \neq \Lambda \otimes \mathbf{a}^\top$. Assume $\widehat{\mathbf{X}}$ is an equilibrium of~\eqref{eq_sol} such that $ \frac{\nabla \widetilde{f}_i(\widehat{\mathbf{x}}_i)}{a_i}\neq\frac{\nabla \widetilde{f}_j(\widehat{\mathbf{x}}_j)}{a_j}$ for at least two agents $i,j$. Let $\nabla \widetilde{F}({\widehat{\mathbf{X}}}) = (\widehat{\Lambda}_1,\dots,\widehat{\Lambda}_n)$. Consider two agents $\alpha = \argmax_{q\in \{1,\dots,n\}} \widehat{\Lambda}_{q,p}$ and $\beta = \argmin_{q \in \{1,\dots,n\}} \widehat{\Lambda}_{q,p}$ for any entry $p \in \{1,\dots,d\}$. Following Assumption~\ref{ass_G}, the existence of an (undirected) spanning tree in the union network $\bigcup_{t=t_k}^{t_k+l_k}\mathcal{G}(t)$ implies that there is a path between nodes (agents) $\alpha$ and $\beta$. In this path, there exist at least two agents $\overline{\alpha}$ and $\overline{\beta}$ for which $\widehat{\Lambda}_{\overline{\alpha},p} \geq \widehat{\Lambda}_{\mathcal{N}_{\overline{\alpha}},p},~ \widehat{\Lambda}_{\overline{\beta},p}\leq \widehat{\Lambda}_{\mathcal{N}_{\overline{\beta}},p}$ with $\mathcal{N}_{\overline{\alpha}}$ and $\mathcal{N}_{\overline{\beta}}$ as the neighbors of $\overline{\alpha}$ and $\overline{\beta}$, respectively. The strict inequality holds for at least one neighboring node in $\mathcal{N}_{\overline{\alpha}}$ and $\mathcal{N}_{\overline{\beta}}$. From Assumptions~\ref{ass_G} and \ref{ass_gW}, in a sub-domain of $[t_k,t_k+l_k]$, we have $\dot{\widehat{\mathbf{x}}}_{\overline{\alpha},p} <0$ and $\dot{\widehat{\mathbf{x}}}_{\overline{\beta},p} > 0$. Therefore, $\dot{\widehat{\mathbf{X}}} \neq \mathbf{0}$, which contradicts the assumption that $\widehat{\mathbf{X}}$ is an equilibrium of~\eqref{eq_sol}. Recall that, from Lemma~\ref{lem_unique_feasible}, this point coincides with the optimal solution of \eqref{eq_dra}, as for every feasible initialization in $\mathcal{S}_\mathbf{b}$ there is only one such point $\mathbf{X}^*$ satisfying $\nabla\widetilde{F}(\mathbf{X}^*) = \Lambda \otimes \mathbf{a}^\top$. This completes the proof. \end{proof} The above theorem paves the way for convergence analysis via the Lyapunov stability theorem, as it shows that the dynamics \eqref{eq_sol} \textit{has a unique equilibrium for any feasible initial condition.} \begin{lem} \cite[Lemma~3]{mrd_2020fast} \label{lem_sum} Let the nonlinearity $g(\cdot) $ and matrix $W$ satisfy Assumptions~\ref{ass_G} and \ref{ass_gW}.
Then, for $\pmb{\psi}_1,\dots,\pmb{\psi}_n \in \mathbb{R}^d$ we have, \vspace{-0.5cm} \small \begin{align} \nonumber \sum_{i =1}^n \pmb{\psi}_i^\top\sum_{j =1}^nW_{ij}g(\pmb{\psi}_j-\pmb{\psi}_i)= \sum_{i,j =1}^n \frac{W_{ij}}{2} (\pmb{\psi}_j-\pmb{\psi}_i)^\top g(\pmb{\psi}_j-\pmb{\psi}_i). \end{align} \normalsize \end{lem} Following the convex analysis in Lemmas~\ref{lem_unique_feasible}-\ref{lem_feasible_intime} and Theorem~\ref{thm_tree}, along with Lemma~\ref{lem_sum}, we provide our main theorem next. \begin{thm} \label{thm_converg} (\textbf{Convergence}) Suppose Assumptions~\ref{ass_strict}-\ref{ass_gW} hold. Then, initializing with $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$, the proposed dynamics \eqref{eq_sol} solves the network resource allocation problem \eqref{eq_dra}. \end{thm} \begin{proof} Following Lemmas~\ref{lem_unique_feasible},~\ref{lem_feasible_intime}, and Theorem~\ref{thm_tree}, and initializing from $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$ for any $\mathbf{b} \in \mathbb{R}^d$, there is a unique feasible equilibrium $\mathbf{X}^*$ of the solution dynamics~\eqref{eq_sol} of the form $\nabla \widetilde{F}(\mathbf{X}^*) = \pmb{\varphi}^* \otimes \mathbf{a}^\top$. Define the nonsmooth residual function $\overline{F}(\mathbf{X})=F(\mathbf{X},t)-F(\mathbf{X}^*,t)$. Clearly, $\overline{F}(\mathbf{X})=\sum_{i=1}^n (\widetilde{f}_i(\mathbf{x}_i)-\widetilde{f}_i(\mathbf{x}^*_i))\geq 0$ (with equality only at $\mathbf{X}^*$) is \textit{purely a function of $\mathbf{X}$}, with $\mathbf{X}^*$ as its unique minimizer. \ab{For this continuous (but nonsmooth) regular and locally Lipschitz Lyapunov function $\overline{F}(\mathbf{X})$, the generalized derivative of $t \mapsto \overline{F}(\mathbf{X}(t))$, for $\mathbf{X}$ as the solution to \eqref{eq_sol}, satisfies $\partial_t \overline{F}(\mathbf{X}(t)) \in \mathcal{L}_\mathcal{F} \overline{F}(\mathbf{X}(t))$; see \cite[Proposition~10]{cortes2008discontinuous}. Then (dropping $t$ for notational simplicity),} \ab{ \footnotesize \begin{align} \partial_t \overline{F} = \nabla F^\top \dot{\mathbf{X}} = \sum_{i =1}^n -\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)^\top}{a_i}\sum_{j \in \mathcal{N}_i} W_{ij} g\Big(\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} - \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}\Big). \nonumber \end{align} \normalsize Following Lemma~\ref{lem_sum}, \footnotesize \begin{align} \label{eq_proof_decay} \partial_t \overline{F} = -\sum_{i,j =1}^n \frac{W_{ij}}{2}\Big(\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} - \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}\Big)^\top g\Big(\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} - \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}\Big). \end{align} \normalsize From Assumption~\ref{ass_gW}, $g(\mathbf{x})$ is odd and strongly sign-preserving, i.e., $\mathbf{x}^\top g(\mathbf{x})\geq0$. Therefore, $\partial_t \overline{F} \leq 0$ with the largest invariant set $\mathcal{I}$ contained in $ \{\mathbf{X} \in L_{\widetilde{F}(\mathbf{X}_0)} \cap \mathcal{S}_\mathbf{b}| \mathbf{0} \in \mathcal{L}_\mathcal{F} \overline{F}(\mathbf{X})\}$, i.e., $\mathcal{I}$ includes the unique point $\mathbf{X}^*\in \mathcal{S}_\mathbf{b}$ for which $\nabla \widetilde{F}(\mathbf{X}) \in \mbox{span}\{\mathbf{a}\}$ (or $\frac{\nabla \widetilde{f}_i(\mathbf{x}^*_i)}{a_i} = \frac{\nabla \widetilde{f}_j(\mathbf{x}^*_j)}{a_j}=\pmb{\varphi}^*,~\forall i,j$) from Lemmas~\ref{lem_optimal_solution} and \ref{lem_unique_feasible}.
Using the LaSalle invariance principle for differential inclusions \cite[Theorem~2.1]{cherukuri2015tcns}, initializing with $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$, the trajectory set $\{L_{\widetilde{F}(\mathbf{X}_0)} \cap \mathcal{S}_\mathbf{b}\}$ remains feasible and positively invariant under \eqref{eq_sol} (Lemma~\ref{lem_feasible_intime}) and converges to the largest invariant set $\mathcal{I} = \{\mathbf{X}^*\}$ including the unique equilibrium of \eqref{eq_sol} (as shown in Theorem~\ref{thm_tree}); $\overline{F}$ is monotonically non-increasing along trajectories and radially unbounded, $\max \mathcal{L}_\mathcal{F} \overline{F}(\mathbf{X}(t)) <0$ for all $\mathbf{X \in \mathcal{S}_b \setminus \mathcal{I}}$, and thus, from \cite[Theorem~1]{cortes2008discontinuous}, $\mathbf{X}^*$ is globally strongly asymptotically stable}. This proves that the agents' states under dynamics \eqref{eq_sol} converge to $\mathbf{X}^*$. \end{proof} \ab{The above proof holds for any value of $\mathbf{b}$ and any initialization state $\mathbf{X}_0 \in \mathcal{S}_\mathbf{b}$, and the solution converges to $\mathbf{X}^*$ in Lemma~\ref{lem_optimal_solution}. } \begin{rem} \ab{Following a similar analysis as in \cite{cherukuri2015tcns}, assuming $\exists u_{\min},K_{\min}$ such that $u_{\min} \leq \nabla^2 f_i(\mathbf{x}_i)$ (strongly convex cost with smooth gradient) and $ K_{\min} \leq \frac{g(\mathbf{z})}{\mathbf{z}}$, Eq.~\eqref{eq_proof_decay} over a connected network $\mathcal{G}$ with algebraic connectivity (Fiedler value) $\lambda_2$ and $\mathbf{a}=\mathbf{1}_n$ gives the decay rate of $\overline{F}$ as, \begin{align} \partial_t \overline{F} \leq -2 u_{\min} K_{\min} \lambda_2 \overline{F} \end{align}} \ab{For a disconnected network with at least one link $(i,j)$, the summation in \eqref{eq_proof_decay} is positive and $\partial_t \overline{F}$ is negative if $\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} \neq \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}$. From Assumption~\ref{ass_G}, $\partial_t \overline{F}$ is negative over sub-intervals of every time-interval $[t_k,t_k+l_k]$ (infinitely often) having $\frac{\nabla \widetilde{f}_i(\mathbf{x}_i)}{a_i} \neq \frac{\nabla \widetilde{f}_j(\mathbf{x}_j)}{a_j}$ for (at least) two neighbors $i,j$, until reaching the optimizer $\mathbf{X}^*$ (for which $\frac{\nabla \widetilde{f}_i(\mathbf{x}^*_i)}{a_i} = \frac{\nabla \widetilde{f}_j(\mathbf{x}^*_j)}{a_j}~\forall i,j$). One may also consider a discrete Lyapunov analysis and simply prove that $\overline{F}(\mathbf{X}(t_k+l_k))<\overline{F}(\mathbf{X}(t_k))$ for all $\mathbf{X}(t_k) \in \mathbf{\mathcal{S}_b \setminus \mathcal{I}}$.} \end{rem} \section{Simulation over Sparse Networks} \label{sec_sim} \begin{figure}[t] \centering \includegraphics[width=1.7in]{fig_quant1.eps} \includegraphics[width=1.7in]{fig_sat1.eps} \includegraphics[width=1.7in]{fig_quant2.eps} \includegraphics[width=1.7in]{fig_sat2.eps} \caption{ The time-evolution of (top) the cost function versus the time-varying optimal value and (bottom) the associated Lyapunov function, over example switching networks, for quantized (left) and saturated (right) actuation dynamics for resource allocation.} \label{fig_cost} \end{figure} We simulate protocol~\eqref{eq_sol} for (i) quantized and (ii) saturated resource allocation over $4$ weakly-connected Erdos-R{\'e}nyi networks of $n=100$ agents, changing every $0.1$ second with the switching command $s:\lceil 10t-4\lfloor 2.5t \rfloor\rceil$ satisfying Assumption~\ref{ass_G}.
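For reference, the two actuation nonlinearities used in this experiment can be transcribed directly; the following is our minimal sketch of \eqref{eq_quan_log} and \eqref{eq_sat} (an illustration, not the simulation code itself):
\begin{verbatim}
import numpy as np

def g_uniform(z, delta=1.0):
    # uniform quantizer: delta * round(z / delta)
    return delta * np.round(z / delta)

def g_log(z, delta=1.0):
    # logarithmic quantizer: sgn(z) * exp(g_u(log|z|)), with g_log(0) = 0
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    nz = z != 0
    out[nz] = np.sign(z[nz]) * np.exp(
        g_uniform(np.log(np.abs(z[nz])), delta))
    return out

def g_sat(z, kappa=1.0):
    # saturation (clipping) at level kappa
    return np.clip(z, -kappa, kappa)
\end{verbatim}
Both mappings are odd and sign-preserving, consistent with Assumption~\ref{ass_gW}.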
Consider strictly convex costs as in \cite{boyd2006optimal}, \small\begin{align} \label{eq_F2} \begin{cases} \widetilde{f}_i(\mathbf{x}_i) &= \sum_{j=1}^4 \bar{a}_{i,j}(\mathbf{x}_{i,j}-\bar{c}_{i,j})^2 \\ & ~~+ \log(1+\exp(\bar{b}_{i,j}(\mathbf{x}_{i,j}-\bar{d}_{i,j}))), \\ \widehat{f}_i(t) &= \sum_{j=1}^4 \bar{e}_{i,j}\sin(\alpha_{i,j}t+\phi_{i,j}) \end{cases} \end{align} \normalsize with random parameters. Assume $ \mathbf{b}= 10\mathbf{1}_4$ and $a_i$ in $[0.1,1]$. To solve~\eqref{eq_dra}, we adapt \eqref{eq_sol} to two cases: (i) quantized actuation via the logarithmic quantizer \eqref{eq_quan_log} with $\delta=1$, and (ii) saturated actuation \eqref{eq_sat} with $\kappa=1$. The time-evolution of the cost \eqref{eq_F2} and the Lyapunov function $\overline{F}(\mathbf{X})=F(\mathbf{X},t)-F^*(t)$ are shown in Fig.~\ref{fig_cost}. As is clear, the cost functions converge to the optimal (time-varying) values, with the Lyapunov functions (residuals) decreasing in time. \section{Application: Automatic Generation Control} \label{sec_edp} \ab{The AGC adjusts the power generation based on predetermined reserve limits to compensate for any generation-load mismatch on a time scale of minutes. We assume that the generation-load mismatch is known (e.g., a generator outage) and we aim to allocate that mismatch to the generators by minimizing their power deviation cost. Let $\mathbf{x}_i$ represent the power deviation for generator $i$. The optimization problem finds the optimal mismatch allocation to $n$ generators while satisfying the reserve limits, and is given by:} \ab{ \begin{align} \label{eq_f_quad} \min_\mathbf{X} &\sum_{i=1}^n \gamma_i \mathbf{x}_i^2+ \beta_i \mathbf{x}_i + \alpha_i,\\ ~\mbox{s.t.}~&\sum_{i=1}^n \mathbf{x}_i = P_{mis}, ~~ -\underline{R}_i \leq \mathbf{x_i} \leq \overline{R}_i,~ i=1,\dots,n. \nonumber \end{align} The generation-load mismatch is $P_{mis}$, and the reserve limits for decreasing and increasing the power generation are $\underline{R}$ and $ \overline{R}$, respectively. Mapping the problem to formulation \eqref{eq_dra}, $d=1$, $a_i=1$, ${\mathbf{b}=P_{mis}}$, $\underline{m}=-\underline{R}_i$, $\overline{m}=\overline{R}_i$. The example of Fig.~\ref{fig_edp1} is derived using ${n=10}$, $P_{mis}=800~MW$, $\underline{R}_i=50$, $\overline{R}_i=150$, and a set of realistic generator cost parameters. The initial allocated power is $\frac{P_{mis}}{n}=80~MW$ per generator. We apply (i) the dynamics~\eqref{eq_sol_fixed} with $\mu_1=0.7$, $\mu_2=1.4$ and (ii) the robustified dynamics~\eqref{eq_sol} via $g_p(\cdot)$ in \eqref{eq_robust_sgn} with $\epsilon = 0.5$ to optimally allocate power over a cyclic communication network with random link weights. We compare the results with the linear \cite{cherukuri2015tcns,boyd2006optimal}, accelerated linear \cite{shames2011accelerated}, finite-time \cite{chen2016distributed}, and initialization-free \cite{yi2016initialization} protocols in Fig.~\ref{fig_edp1}. \begin{figure}[t] \centering \includegraphics[width=3.1in]{fig_compare.eps} \includegraphics[width=3.1in]{fig_edp_dot.eps} \caption{(Top) This figure compares the residual of the dynamics~\eqref{eq_sol_fixed} (solid black) and the robustified~\eqref{eq_sol} via \eqref{eq_robust_sgn} (dashed blue) with some recent literature. (Bottom) The power rates are compared at one generator. The horizontal dashed lines represent $\pm 1\frac{MW}{min}$ as the RRL. Only the robustified saturated dynamics (Proposed 2) meets these limits.
} \vspace{-1.3cm} \label{fig_edp1} \end{figure} From Fig.~\ref{fig_edp1}, considering the RRL in the context of AGC, the robustified dynamics clearly converges at a fixed rate over time, keeping the generation power within the ramp limits (dashed blue), while the other solutions impose a high rate of change in power generation that is impossible for the generators to follow. Such \textit{rate constraints} cannot be easily addressed via primal-dual methods \cite{turan2020resilient,nesterov2018dual,feijer2010stability}. Note that in the case of no RRL requirements, the proposed fixed-time protocol (solid black) converges faster than the linear and other solutions. } \vspace{-0.2cm} \section{Conclusion} \label{sec_conclusion} This paper proposes general nonlinear-constrained solutions for resource allocation over uniformly-connected networks. The proposed solution can solve the allocation problem subject to the nonlinearities in Applications~1--4 of Section~\ref{sec_dynamic}, their composition mapping (as it is also odd and strongly sign-preserving), or any other nonlinearity satisfying Assumption~\ref{ass_gW}. \bibliographystyle{IEEEbib}
\section{Introduction} \bold{Main result.} The first non-zero Laplace eigenvalue $\lambda_1$ of a hyperbolic surface controls the speed of mixing of the geodesic flow, the error term in the Geometric Prime Number Theorem, and measures the extent to which the surface is an expander. In high genus, the best one can hope for is that $\lambda_1$ is close to $\frac14$. Indeed, if $\Lambda=\limsup_{g\to \infty} \Lambda_g$, where $\Lambda_g$ denotes the maximum value of $\lambda_1$ over $\cM_g$, then we have $\Lambda \leq \frac14$ \cite{Huber, Cheng}. It is natural to conjecture that $\lambda_1$ is typically close to this optimal value of $\frac14$ in large genus, especially since the corresponding statement for regular graphs is true \cite{Friedman, Bor}. The difficulty of this conjecture is highlighted by the fact that it is not even known if $\Lambda=\frac14$; despite extensive study of Selberg's related eigenvalue $\frac14$ conjecture, it has not been proven that there is a sequence of surfaces with $g\to \infty$ and $\lambda_1\to \frac14$. In this paper, we study this conjecture by averaging the Selberg trace formula over $\cM_g$ and using ideas originating in Mirzakhani's thesis. We establish the following: \begin{thm}\label{T:main} For all $\varepsilon>0$, the Weil-Petersson probability that a surface in $\cM_g$ has $\lambda_1<\frac3{16}-\varepsilon$ goes to zero as $g\to\infty$. \end{thm} The same result was obtained independently by Wu and Xue \cite{WuXue}. Previously, Mirzakhani showed the same result with $\frac3{16}$ replaced with $\frac14\left(\frac{\log(2)}{2\pi+\log(2)}\right)^2 \approx 0.002$ \cite{Mirzakhani:Growth}. Related results for random covers of a fixed surface, again with $\frac3{16}$ appearing, were obtained for closed surfaces by Magee, Naud, and Puder in \cite{MNP} and for convex cocompact surfaces by Magee and Naud \cite{MN}. \bold{Idea of the proof.} Our work is inspired by and builds on recent work of Mirzakhani and Petri \cite{MirzakhaniPetri:Lengths}. They fix a constant length $L$, and consider geodesics of length at most $L$. As the genus goes to infinity, they show in particular that, averaged over $\cM_g$, \begin{enumerate} \item most geodesics of length at most $L$ are simple and non-separating, and \item the number of simple non-separating geodesics of length at most $L$ can be estimated using Mirzakhani's integration formula. \end{enumerate} This paper extends these observations to length scales $L$ that grow slowly with genus. As the error term in the Geometric Prime Number Theorem suggests, bounds on the number of geodesics translate into bounds on $\lambda_1$. The starting point for the first observation above is a set of computations showing that, on average, there aren't too many subsurfaces which a non-simple geodesic of length at most $L$ can fill. Given this, one must show that most such subsurfaces don't have too many closed geodesics. This is more difficult when $L$ grows with genus, and requires that we establish new bounds in Section \ref{S:Bounds}. Even though our analysis shows that the contribution of the non-simple geodesics is a lower order term at the length scales we consider, it may be necessary to better understand this term to move beyond the $\frac3{16}$ barrier.
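To make the link between geodesic counts and $\lambda_1$ explicit, we recall, loosely and with error terms suppressed (see e.g. \cite{Huber}), the shape of the Geometric Prime Number Theorem with exceptional terms: writing the eigenvalues below $\frac14$ as $\lambda_j=s_j(1-s_j)$ with $\frac12<s_j\leq 1$, the number of primitive closed geodesics of length at most $L$ on a closed hyperbolic surface is $$\#\{\gamma \,:\, \ell(\gamma)\leq L\} \;=\; \sum_{\frac12<s_j\leq 1}\mathrm{li}\left(e^{s_jL}\right)+\text{lower order terms},$$ the $j=0$ term $\mathrm{li}(e^L)\sim e^L/L$ being the main term. An eigenvalue $\lambda_1=s_1(1-s_1)<\frac14$ thus forces an excess of roughly $e^{s_1L}/L$ geodesics over the main term, so showing that a typical surface has no such excess at length scales comparable to $\log g$ bounds $\lambda_1$ from below; this is the mechanism implemented via the trace formula in Section \ref{S:SpectralMain}.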
\bold{Broader significance.} Our results are broadly applicable to any problem that relates to counting geodesics or that might be studied by averaging the Selberg Trace Formula over $\cM_g$, and provide tools towards improved error terms in limit multiplicity laws \cite{Monk, Seven}, calculations with error terms for the average number of geodesics with lengths in intervals whose sizes grow or shrink with the genus, and first steps towards understanding eigenvalue spacing \cite{Sarnak}. Additionally, we believe our results concerning the nature of geodesics at different length scales are just the tip of the iceberg. We suggest the following as an accessible starting point for further investigation. \begin{conj} As $g$ goes to infinity, on most surfaces in $\cM_g$ most geodesics of length significantly less than $\sqrt{g}$ are simple and non-separating, and most geodesics of length significantly greater than $\sqrt{g}$ are not simple. \end{conj} Because error terms are central in our analysis, a proof of this conjecture will not necessarily yield any improvements to Theorem \ref{T:main}. However, it would improve our understanding of high genus surfaces. \bold{Structure of the proof.} The proof of Theorem \ref{T:main} is divided into a geometric bound on geodesics and an argument using the Selberg Trace Formula. We state these results now. Given a compactly supported real function $F$, define $F_{\mathrm{all}}:\cM_g \to \bR$ by setting $F_{\mathrm{all}}(X)$ to be the sum of $F$ over the lengths of all oriented closed primitive geodesics on $X$. When not otherwise specified, we refer to the Weil-Petersson measure on $\cM_g$. \begin{thm}\label{T:GeometricMain} For any constants $D>0$ and $1>\kappa>0$, there are subsets $\cM_g'$ of $\cM_g$ such that \begin{enumerate} \item ${\Vol(\cM_g')}/\Vol(\cM_g) \to 1,$ \item every surface in $\cM'_g$ has systole at least $1/\log(g)$, and \item for any non-negative function $F$ with support in $[0, D\log(g)]$, the average of $F_{\mathrm{all}}$ over $\cM_g'$ is at most $$(1+O(g^{-1+\kappa})) I_F,$$ where $$I_F = \int_0^\infty F(\ell) \ell \frac{\sinh(\ell/2)^2}{(\ell/2)^2} d\ell.$$ \end{enumerate} \end{thm} \begin{thm}\label{T:SpectralMain} Fix $D > 4$ and $1 > \kappa > 0.$ Let $\mu$ be a Borel probability measure on $\cM_g$ such that \begin{enumerate} \item $\mu$ is supported on the $e^{- g^{o(1)}}$-thick part of $\cM_g$, and \item for any non-negative function $F$ with support in $[0, D\log(g)]$, the $\mu$-average of $F_{\mathrm{all}}$ is at most $$(1+O(g^{-1+\kappa})) I_F.$$ \end{enumerate} Then, for any $b \in (0, \frac12)$, the $\mu$-probability that $\lambda_1(X) \leq \frac{1}{4} - b^2$ is at most $$O \left( g^{ 1 - 4b \left(1 - \frac{\kappa}{2} \right) + o(1)} \right),$$ where the implicit constant in the big O notation depends only on $D$, $\kappa$, and the implicit constant in $O(g^{-1+\kappa})$. \end{thm} \begin{proof}[Proof of Theorem \ref{T:main} given Theorems \ref{T:GeometricMain} and \ref{T:SpectralMain}] Let $\mu_g$ be the restriction of the Weil-Petersson measure to $\cM_g'$, normalized to be a probability measure. Theorem \ref{T:SpectralMain} proves that the $\mu_g$-probability that $\lambda_1<\frac3{16}-\varepsilon$ goes to zero as $g\to \infty$. Since the complement of $\cM_g'$ has probability measure going to zero as $g\to \infty$, this gives the result. \end{proof} \bold{Additional context.} Mirzakhani pioneered the study of Weil-Petersson random surfaces \cite{Mirzakhani:Growth}, and devoted her ICM address to this topic \cite{Mirzakhani:ICM}.
Building on her previous study of Weil-Petersson volume polynomials, she proved in particular that the probability that the Cheeger constant is smaller than $0.099$ goes to zero as the genus goes to infinity. More recent works motivated by the problem of understanding $\lambda_1$ of typical high genus surfaces include results on Weil-Petersson volume polynomials \cite{AM} and the geometry of typical surfaces \cite{MT}. For some additional open problems related to random surfaces, see \cite[Section 10.4]{Tour}. In analogy with regular graphs \cite{Alon}, we expect that Riemann surfaces of high genus have Cheeger constant bounded away from 1, and that Theorem \ref{T:main} cannot be obtained using the Cheeger inequality. See \cite{NWX, WuXueParlier} for a recent study of separating geodesics. Additional results on small eigenvalues can be found in \cite{WuXue2, Dub}. \bold{Prerequisites.} In Sections \ref{S:thin} and \ref{S:GeometricMain}, we assume the reader is familiar with the formula Mirzakhani gave in her thesis for integrating certain functions over $\cM_g$ \cite{Mirzakhani:Invent}. See \cite[Section 4]{Tour} for a short introduction sufficient for our purposes. Let $V_{g,n}$ denote the volume of $\cM_{g,n}$, and $V_{g,n}(L_1, \ldots, L_n)$ denote the volume of the moduli space of genus $g$ hyperbolic surfaces with boundary geodesics of lengths $L_1, \ldots, L_n$. Given a compactly supported real function $F$, define $F_{\mathrm{sns}}:\cM_g \to \bR$ by letting $F_{\mathrm{sns}}(X)$ be the sum of $F$ over the lengths of all oriented simple non-separating geodesics on $X$. A special case of Mirzakhani's integration formula is $$\int_{\cM_g} F_{\mathrm{sns}} = \int_0^\infty \ell F(\ell) V_{g-1,2}(\ell, \ell) d\ell.$$ It is also necessary to know the statement of asymptotics of $V_{g,n}$ in \cite[Theorem 1.8]{MirzakhaniZograf:LargeGenus}. \bold{Organization.} In Section \ref{S:Bounds} we give bounds on the number of closed geodesics on surfaces with boundary, showing that often there are vastly fewer geodesics than one expects on closed surfaces. (In particular, the critical exponent is often close to $0$.) These bounds, however, require a lower bound on the systole. For this reason, we require estimates for a version of Mirzakhani's Integration Formula over the thin part of $\cM_g$, which we obtain in Section \ref{S:thin} via an inclusion-exclusion argument. This section also gives more precise estimates for the volume of the thin part than were previously known. Sections \ref{S:GeometricMain} and \ref{S:SpectralMain} prove Theorems \ref{T:GeometricMain} and \ref{T:SpectralMain} respectively. Our arguments in Sections \ref{S:thin} and \ref{S:GeometricMain} crucially rely on estimates of Mirzakhani \cite{Mirzakhani:Growth}, Mirzakhani-Zograf \cite{MirzakhaniZograf:LargeGenus}, and Mirzakhani-Petri \cite{MirzakhaniPetri:Lengths}. For the convenience of the reader, we revisit the proofs of these estimates in Appendices \ref{A:VolPoly} and \ref{A:Vols} to verify some results on uniformity that were not explicitly included in the original statements. Similarly, in Appendix \ref{localweyllawappendix} we review a standard local Weyl law argument used in Section \ref{S:SpectralMain}. \bold{Acknowledgements.} We thank Farrell Brumley, Andrew Granville, Rafe Mazzeo, Peter Sarnak, and Scott Wolpert for helpful conversations. We also especially thank Paul Apisa for detailed comments on an earlier draft.
During the preparation of this paper, the first author was partially supported by an NSERC Discovery Grant, and the second author was partially supported by a Clay Research Fellowship, NSF Grant DMS 1856155, and a Sloan Research Fellowship. \section{Surfaces with few geodesics}\label{S:Bounds} Throughout this paper, all hyperbolic surfaces and subsurfaces are assumed to be compact, and are either closed or have geodesic boundary. The purpose of this section is to show the following theorem. \begin{thm}\label{T:GraphBound} For any $A>0$, there exists a $C>0$ such that if $X$ is a hyperbolic surface of area $A$ and $L_0>1$ and $\frac12>\varepsilon>0$ are such that \begin{enumerate} \item $X$ does not have any pants or one-holed tori of total boundary length less than $L_0$, and \item $X$ has systole at least $\varepsilon$, \end{enumerate} then for all $\ell>0$, the number of closed geodesics on $X$ of length at most $\ell$ is at most $$\left(\frac{C L_0 \log(1/\varepsilon)}{\varepsilon}\right)^{C \ell/L_0+3}.$$ \end{thm} \begin{rem} When getting upper bounds of the form $O(e^{\delta \ell})$ on the number of closed geodesics on a general hyperbolic surface, it isn't possible to do better than $\delta=1$. But, for fixed $A$ and $\varepsilon$, Theorem \ref{T:GraphBound} gives such a bound with $\delta$ a multiple of $\log(L_0)/L_0$. \end{rem} It is easy to see that $L_0$ can only be large if $X$ has long boundary, assuming a bound on $\varepsilon$ and $A$. We think of Theorem \ref{T:GraphBound} as saying that many surfaces with long boundary have very few geodesics. Surfaces satisfying the first condition in Theorem \ref{T:GraphBound} are studied in \cite{MT}, where they are called $L_0/2$-tangle-free. \subsection{The thin part} If $X$ is a hyperbolic surface with boundary, then we may form the double $X_d$ of $X$ by gluing together two copies of $X$ along their boundary to obtain a closed surface. The double is equipped with an involution whose quotient is $X$. For any $\delta$, the $\delta$-thin part of $X_d$ is defined to be the subset where the injectivity radius is less than $\delta$, and the $\delta$-thin part of $X$ is the image of this set in $X$. For $\delta$ small enough, the $\delta$-thin part of $X_d$ is a disjoint union of collars around short geodesics. Each such collar is either fixed by the involution or exchanged with another collar. In the fixed case, if the two boundary circles of the collar are exchanged we call the quotient of the collar a half-collar, and otherwise we call the quotient of the collar a thin rectangle. Below we will treat half-collars as a special case of collars. (In fact, it is harmless to assume there are no half-collars, since half-collars contain short boundary geodesics, and Theorem \ref{T:GraphBound} can only be applied when all boundary geodesics are long.) We thus see that the $\delta$-thin part of $X$ is a union of collars and thin rectangles. Thin rectangles correspond to regions of $X$ where two segments of the boundary of $X$ are very close to each other. We define the width of a rectangle to be the minimal distance between the components of its boundary in the interior of $X$, so very thin rectangles have very large width. From now on, we fix $\delta$ and refer to the $\delta$-thin part simply as the thin part. Its complement is called the thick part. Recall the following standard fact.
\begin{lem} For all $A>0$ there exists a constant $C$ such that if $X$ is a hyperbolic surface of area at most $A$ with boundary, then there is a set of at most $C$ points on $X$ such that every point in the thick part is within distance $1$ of one of these points. \end{lem} From this we immediately get the following, keeping in mind that a curve of length $\varepsilon$ has a collar of size $O(\log(1/\varepsilon))$. \begin{cor}\label{C:Net} For all $A>0$ there exists a constant $C$ such that if $X$ is a hyperbolic surface of area at most $A$ with boundary, $L_0>1$ is arbitrary, and the systole of $X$ is at least $\varepsilon>0$, then there is a set of at most $$C+C \log(1/\varepsilon)/L_0$$ points on $X$ such that every point not in a thin rectangle of width at least $L_0$ is within distance $L_0/{48}$ of one of these points. \end{cor} For the remainder of this section, given a surface $X$ as in Theorem \ref{T:GraphBound}, and a choice of $L_0$, we fix a set $\mathrm{Net}(X)$ of points as in Corollary \ref{C:Net}. Each thin rectangle has two boundary components in the interior of $X$, which we think of as the two ends of the rectangle. For each end of each rectangle of width at least $L_0$, add a point on this end to $\mathrm{Net}(X)$. These points will be distinguished in that we will remember that each such point is associated to the end it lies on. Since the number of thin rectangles is bounded linearly in terms of $A,$ we have added at most a constant (linear in $A$) number of points to $\mathrm{Net}(X)$. \subsection{Good segments} Define a \emph{good segment} to be a geodesic segment joining two points in $\mathrm{Net}(X)$ that either \begin{itemize} \item has length at most $L_0/12$, or \item is contained in a thin rectangle of width at least $L_0$ and starts and ends at the chosen points on each end. \end{itemize} The zero length geodesic joining a point in $\mathrm{Net}(X)$ to itself will be considered a good segment. The purpose of this subsection is to show the following. \begin{prop}\label{P:SegmentCount} Let $X$ be as in Theorem \ref{T:GraphBound}. For any two points $p_1, p_2\in \mathrm{Net}(X)$, there are at most $$3+\frac{L_0}{6\varepsilon}$$ good segments joining $p_1$ and $p_2$. \end{prop} \begin{lem}\label{L:AnnulusCount} Let $\gamma \in \Isom^+(\bH)$ have translation length $T>0$. Then, for any two points in $\bH/\langle \gamma \rangle$, there are at most $$2+\frac{2\ell}{T}$$ geodesic segments of length at most $\ell$ joining these two points. \end{lem} \begin{proof} The projection from $\bH/\langle \gamma \rangle$ onto its unique closed geodesic is distance non-increasing, so it suffices to assume the two points lie on this geodesic. In each of the two possible directions along this geodesic, the segment must first go from one point to the other, and then can make at most $\ell/T$ complete loops around the geodesic. \end{proof} The following elementary observation applies both when $X$ is closed and when it has geodesic boundary and cusps. \begin{lem}\label{L:BallGood} Suppose that a hyperbolic surface $X$ doesn't have any pants or one-holed tori with total boundary length less than $L_0$. Then the ball $B$ of radius $R=L_0/12$ centered at any point $p\in X$ is isometric to either a subset of $\bH$ or to a subset of $\bH/\langle \gamma \rangle$ for some $\gamma \in \Isom^+(\bH)$. \end{lem} A version of this lemma appears in \cite[Proposition B]{MT}. \begin{proof} It suffices to consider the case when $X$ doesn't have boundary.
In order to find a contradiction, assume that $B$ is not homeomorphic to a ball or an annulus. \begin{sublem} $B$ contains two simple loops $\alpha_1$ and $\alpha_2$, each of length at most $2R$, that intersect at most once. \end{sublem} \begin{proof} Slowly grow an open ball centered at $p$, starting with small radius and then increasing the radius to $R=L_0/12$. Let $R_1<R$ be the maximum radius where this ball is embedded, so the closure of the ball of radius $R_1$ contains a point $q_1$ that appears with multiplicity at least two in the boundary circle of the ball. Define $\alpha_1$ to be the simple loop that travels from $q_1$ to $p$ and then back out to the other appearance of $q_1$ on the boundary of the ball. See Figure \ref{F:alpha12} (left). \begin{figure}[ht!] \includegraphics[width=\linewidth]{alpha12.pdf} \caption{Finding $\alpha_1$ and $\alpha_2$.} \label{F:alpha12} \end{figure} If $q_1$ is not unique, then we can similarly define $\alpha_2$. If $q_1$ appears with multiplicity greater than 2, we define $\alpha_2$ as in Figure \ref{F:alpha12} (right). Otherwise, the ball of radius slightly larger than $R_1$ is topologically an annulus. Let $R_2<R$ be the maximum radius where this remains true. So there is a point $q_2$ which appears at least twice on the boundary of the ball of radius $R_2$ and is isolated among such points. We now define $\alpha_2$ analogously to $\alpha_1$. \end{proof} If $\alpha_1$ and $\alpha_2$ intersect once, then the boundary of a neighborhood of their union can be tightened to a geodesic of length at most $$2\ell(\alpha_1)+2\ell(\alpha_2)\leq 8R,$$ and this geodesic bounds a one-holed torus. If they don't intersect, we can form a simple curve $\alpha_3$ by going around $\alpha_1$, then taking a minimal length path $\beta$ to $\alpha_2$, going around $\alpha_2$ once, and then going back to $\alpha_1$ along $\beta$. This $\alpha_3$ has length at most $$\ell(\alpha_1) + \ell(\alpha_2) + 2\ell(\beta) \leq 8R.$$ The geodesic representatives of $\alpha_1, \alpha_2, \alpha_3$ bound a pants with total boundary at most $12R$. \end{proof} \begin{proof}[Proof of Proposition \ref{P:SegmentCount}.] Call the two points $p_1$ and $p_2$. There is at most 1 good segment joining $p_1$ and $p_2$ of length greater than $L_0/12$; this segment can only exist if $p_1$ and $p_2$ are the points associated to opposite ends of a thin rectangle. So we now count the number of geodesics joining $p_1$ and $p_2$ of length at most $L_0/12$. Any such geodesic is of course contained in the ball of radius $L_0/12$ about $p_1$. By Lemma \ref{L:BallGood}, this ball is isometric to a subset of $\bH$ or to a subset of $\bH/\langle \gamma \rangle.$ In the first case, there is only one geodesic from $p_1$ to $p_2$, and in the second case Lemma \ref{L:AnnulusCount} (with $T\geq\varepsilon$ and $\ell=L_0/12$) gives the necessary bound. \end{proof} \subsection{Loops of good segments} We now relate closed geodesics to good segments. \begin{lem}\label{L:Homotope} Every closed geodesic $\gamma$ of length at most $\ell$ is homotopic to a loop of at most $$2+\frac{24\ell}{L_0}$$ good segments. \end{lem} \begin{proof} There is some point $p_1 \in \mathrm{Net}(X)$ of distance at most $L_0/48$ from a point $p_1'$ of $\gamma$. Pick an orientation along $\gamma$.
Having defined $p_i\in \mathrm{Net}(X)$ and $p_i'\in \gamma$, define $p_{i+1}'$ and $p_{i+1}$ as follows: \begin{enumerate} \item If $p_i'$ is within distance $L_0/24$ of $p_1'$ in the forward direction along $\gamma$, set $p_{i+1}=p_1$ and $p_{i+1}'=p_1'$, and conclude the construction. \item If the point at distance $L_0/24$ along $\gamma$ is not in a thin rectangle of width at least $L_0$, then define $p_{i+1}'$ to be this point. Pick $p_{i+1}$ to be any point of $\mathrm{Net}(X)$ within distance $L_0/48$ of $p_{i+1}'$. \item In the remaining case, if $p_i$ is not the point of $\mathrm{Net}(X)$ associated to the entry end of this rectangle, set $p_{i+1}\in \mathrm{Net}(X)$ to be this point on the entry end; otherwise let $p_{i+1}\in \mathrm{Net}(X)$ be the point associated to the exit end of the rectangle. In either case, let $p_{i+1}'$ be a point in $\gamma$ of distance at most 1 away from $p_{i+1}$. \end{enumerate} For each $i$, fix a path from $p_i$ to $p_i'$ of minimal length; this length must be at most $L_0/48$ in all cases. Define $\gamma_i$ to be the geodesic representative of the path which goes from $p_i$ to $p_i'$, then goes along $\gamma$ to $p_{i+1}'$, then goes to $p_{i+1}$. So, by definition $\gamma$ is homotopic to the concatenation of the $\gamma_i$. \begin{sublem} Each $\gamma_i$ is a good segment. \end{sublem} \begin{proof} Since $1/48+1/24+1/48 = 1/12$, $\gamma_i$ can only have length greater than $L_0/12$ in the final case above, when it crosses a thin rectangle, and in this case $\gamma_i$ is good by definition. \end{proof} It now suffices to bound the number of segments $\gamma_i$, or equivalently the number of points $p_i' \in \gamma$. Suppose the number of such points is $n$. The fact that the distance from $p_{n}'$ to $p_1'$ may be arbitrarily small slightly complicates the bound, since it means the last segment of $\gamma$ is unusual. If $i\leq n-2$, then either the distance along $\gamma$ from $p_i'$ to $p_{i+1}'$ is exactly $L_0/24$, or the distance from $p_i'$ to $p_{i+2}'$ is at least $L_0.$ In this way we see that the average length of either the first $n-2$ or the first $n-1$ of these distances is at least $L_0/24$, and hence we get $(n-2) L_0/24 \leq \ell$. \end{proof} \begin{proof}[Proof of Theorem \ref{T:GraphBound}] By Lemma \ref{L:Homotope}, it suffices to bound the number of loops consisting of at most $n=\lfloor 2+\frac{24\ell}{L_0}\rfloor$ good segments. We will actually bound the number of loops with a choice of basepoint in $\mathrm{Net}(X)$, which is larger. Since we allow zero length good segments, we can assume there are exactly $n$ good segments in the loop. The number of paths in $\mathrm{Net}(X)$ that can be traced out by such a loop is bounded by $|\mathrm{Net}(X)|^n$. Hence, Proposition \ref{P:SegmentCount} gives that the total number of such loops is at most $$|\mathrm{Net}(X)|^n \left(3+\frac{L_0}{6\varepsilon}\right)^n.$$ It now suffices to note that, since $\varepsilon<1$ and $L_0>1$, $$|\mathrm{Net}(X)| \left(3+\frac{L_0}{6\varepsilon}\right) = \left(C+\frac{C \log(1/\varepsilon)}{L_0}\right) \left(3+\frac{L_0}{6\varepsilon}\right)$$ can be bounded by $$\frac{C' L_0 \log(1/\varepsilon)}{\varepsilon}$$ for some different constant $C'$. \end{proof} \section{Integrating over the thin part}\label{S:thin} Let $\cM_g^{<\varepsilon}$ denote the subset of $\cM_g$ where the surface has a closed geodesic of length less than $\varepsilon$. The purpose of this section is to prove the following result.
\begin{thm}\label{T:ThinExpectedValue} There is a constant $\varepsilon_0>0$ such that for $\varepsilon<\varepsilon_0$ and $F$ a non-negative function with support in $[0, g^{o(1)}]$, the average of $F_{\mathrm{sns}}$ over $\cM_g^{<\varepsilon}$ is at least $$(1-g^{-1+o(1)}) I_F.$$ \end{thm} More formally, this means that for any function $s(g) \in o(1)$, there exists a function $p(g) \in o(1)$ such that if the support is in $[0,g^{s(g)}]$ then the average is at least $(1-g^{-1+p(g)}) I_F$. The function $p(g)$ does not depend on $\varepsilon$. \begin{cor}\label{C:ThickAverage} With the same assumptions, the average of $F_{\mathrm{sns}}$ over $\cM_g^{>\varepsilon}$, the complement of $\cM_g^{<\varepsilon}$, is at most $$(1+g^{-1+o(1)}) I_F.$$ \end{cor} \begin{proof} Let $\delta$ denote the measure of $\cM_g^{<\varepsilon}$ divided by the measure of $\cM_g$, so the desired average is $$\frac1{1-\delta} \left(\frac1{V_g} \int_{\cM_g} F_{\mathrm{sns}} - \frac{\delta V_g} {V_g} \frac1{\delta V_g} \int_{\cM_g^{<\varepsilon}} F_{\mathrm{sns}} \right).$$ Mirzakhani's integration formula gives $$\frac1{V_g} \int_{\cM_g} F_{\mathrm{sns}} = \frac{1}{V_g} \int_0^\infty F(\ell) \ell V_{g-1, 2}(\ell, \ell) d\ell.$$ The volume asymptotics with error term in \cite[Theorem 1.8]{MirzakhaniZograf:LargeGenus} imply that \begin{equation}\label{E:aratio} \frac{V_{g-k,2k}}{V_g} = 1 + O\left( \frac{k^2}g\right). \end{equation} This statement with $k=1$ together with the sinh upper bound (Lemma \ref{L:sinh}) gives that $$\frac1{V_g} \int_{\cM_g} F_{\mathrm{sns}} \leq I_F(1+g^{-1+o(1)}).$$ Theorem \ref{T:ThinExpectedValue} gives that $$\frac1{\delta V_g} \int_{\cM_g^{<\varepsilon}} F_{\mathrm{sns}} \geq (1-g^{-1+o(1)}) I_F.$$ Thus the desired average is at most $$\frac{I_F}{1-\delta} \left(1 - \delta(1-g^{-1+o(1)}) \right) = \frac{I_F}{1-\delta} \left(1-\delta +\delta g^{-1+o(1)} \right),$$ giving the result. (Since $\varepsilon$ is small we can assume $\delta < 1/2$.) \end{proof} The average in Theorem \ref{T:ThinExpectedValue} is the integral over $\cM_g^{<\varepsilon}$ divided by the measure of $\cM_g^{<\varepsilon}$, and we estimate the numerator and denominator separately. We will always assume $\varepsilon$ is small enough so that geodesics of length at most $\varepsilon$ are simple and pairwise disjoint. \begin{prop}\label{P:ThinMeasure} There is a constant $\varepsilon_0>0$ such that for $\varepsilon<\varepsilon_0$, the volume of $\cM_g^{<\varepsilon}$ is $$\left(1+O \left(g^{-1+o(1)}\right) \right) \cdot (1-\exp(-I_\varepsilon)) V_g,$$ where $$I_\varepsilon = \frac12 \int_0^\varepsilon \delta \frac{\sinh(\delta/2)^2}{(\delta/2)^2} d\delta.$$ \end{prop} The implicit $o(1)$ function does not depend on $\varepsilon$. When $\varepsilon$ is small, $I_\varepsilon$ is about $\varepsilon^2/4$. For fixed $\varepsilon$, this agrees with the asymptotic for the volume of $\cM_g^{<\varepsilon}$ obtained in \cite{MirzakhaniPetri:Lengths}. Both $g^{-1+o(1)}$ and $O(g^{-1+o(1)})$ denote terms within a sub-polynomial factor of $g^{-1}$, but the latter does not specify the sign of the term. \begin{proof} Let $A_k$ be the integral over $\cM_g$ of the function which counts the number of sets $S$ of $k$ disjoint unoriented geodesics, each of length at most $\varepsilon$, on a surface $X\in \cM_g$. \begin{lem}\label{L:IE} If $n(g)=3g-3$, then the volume of $\cM_g^{<\varepsilon}$ is exactly $$\sum_{k=1}^{n(g)} (-1)^{k+1} A_k.$$ The same sum with $n(g)$ any odd integer gives an upper bound, and with $n(g)$ even it gives a lower bound.
\end{lem} \begin{proof} This follows directly from the inclusion-exclusion principle, or equivalently the identity $$\sum_{k=1}^{n} (-1)^{k+1} {r \choose k} = 1- (-1)^n {r-1 \choose n},$$ where ${r-1 \choose n}$ is defined to be zero if $n \geq r$. Every surface in $\cM_g^{<\varepsilon}$ has some number $r$ of geodesics of length at most $\varepsilon$, where $1\leq r \leq 3g-3$. \end{proof} Let $n(g)$ be the floor of $\log g / \log \log \log g$. For each $k$, write $A_k = G_k + B_k$, where the good contribution $G_k$ is the integral of the number of sets $S$ of $k$ disjoint unoriented geodesics, each of length at most $\varepsilon$, where the complement of $S$ is connected, and the bad contribution $B_k$ is defined similarly for sets where $S$ is separating. \begin{lem}\label{L:ThinMeasureGood} There is a constant $\varepsilon_0>0$ such that for $\varepsilon<\varepsilon_0$, $$\sum_{k=1}^{n(g)} (-1)^{k+1} G_k = V_g (1+O( g^{-1+o(1)})) (1-\exp(-I_\varepsilon)).$$ Moreover $G_{n(g)+1}=O(V_g g^{-1+o(1)}(1-\exp(-I_\varepsilon))).$ \end{lem} \begin{proof} Mirzakhani's integration formula gives $$G_k=\frac1{2^k k!} {\int_0^\varepsilon \cdots \int_0^\varepsilon} \delta_1 \cdots \delta_k V_{g-k, 2k}(\delta_1, \delta_1, \ldots, \delta_k, \delta_k) d \delta_1 \ldots d \delta_k. $$ Since $k\leq n(g)$, the sinh approximation (Lemma \ref{L:sinh}) gives $$G_k = (1+O(g^{-1+o(1)})) \frac{V_{g-k,2k}} { k!} I_\varepsilon^k .$$ As in equation \eqref{E:aratio}, the volume asymptotics with error term in \cite[Theorem 1.8]{MirzakhaniZograf:LargeGenus} give that $V_{g-k,2k}$ is very close to $V_g$, so this implies the same statement with $V_{g-k,2k}$ replaced by $V_g$. Hence $$ \sum_{k=1}^{n(g)} (-1)^{k+1} G_k = V_g \sum_{k=1}^{n(g)} \frac{(-1)^{k+1}}{k!} I_\varepsilon^k + O\left(V_g g^{-1+o(1)}\sum_{k=1}^{n(g)} \frac{1}{k!} I_\varepsilon^k\right).$$ Taylor's Theorem implies that $$\sum_{k=1}^{n(g)} \frac{(-1)^{k+1}}{k!} I_\varepsilon^k = 1-\exp(-I_\varepsilon) + O\left( \frac{(I_\varepsilon)^{n(g)+1}}{(n(g)+1)!} \right).$$ We think of $1-\exp(-I_\varepsilon)$ as being the main term, and note that it is approximately $I_\varepsilon$ when $\varepsilon$ is small. We need to compare the error here and above to this main term. We start by noting that $$\sum_{k=1}^{n(g)} \frac{1}{k!} I_\varepsilon^k \leq \exp(I_\varepsilon)-1.$$ Since this is also about $I_\varepsilon$ when $\varepsilon$ is small, we have $$V_g g^{-1+o(1)}\sum_{k=1}^{n(g)} \frac{1}{k!} I_\varepsilon^k= O\left( V_g g^{-1+o(1)} (1-\exp(-I_\varepsilon)) \right),$$ bounding the first source of error above. Next we consider the error in the Taylor approximation, namely $$ \frac{(I_\varepsilon)^{n(g)+1}}{(n(g)+1)!} \leq \frac{I_\varepsilon}{(n(g)+1)!} = \frac{O(1-\exp(-I_\varepsilon))}{(n(g)+1)!} , $$ where we assume in particular that $\varepsilon$ is small enough to get $I_\varepsilon<1$. This error term is small enough when $(n(g)+1)!>g$. Using Stirling's formula, this is certainly true when $$ (n(g)/e)^{n(g)} = e^{(\log(n(g))-1) n(g)} > g,$$ which is guaranteed by our choice of $n(g)$. The final statement follows from the arguments above, and can also be obtained by summing up to $n(g)+1$ instead of $n(g)$ and then subtracting the two sums to isolate the $k=n(g)+1$ term.
\end{proof} \begin{lem}\label{L:ThinMeasureBad}There is a constant $\varepsilon_0>0$ such that for $\varepsilon<\varepsilon_0$, $$\sum_{k=1}^{n(g)+1} B_k \leq V_g \cdot g^{-1+o(1)} (1-\exp(-I_\varepsilon)).$$ \end{lem} \begin{proof} By Mirzakhani's Integration Formula and the sinh upper bound in Lemma \ref{L:sinh}, $B_k$ is at most $I_\varepsilon^k$ times a sum of products of $V_{g_i, n_i}$, summed over all ways of pinching $k$ curves to get a surface with at least $2$ components, where the components have genus $g_i$ and $n_i$ nodes. Lemma \ref{L:CountStrata} states that there are at most $2^{k+q^2} g^{q'-1}$ ways to pinch a collection of $k$ curves and get a nodal surface with $q$ components, $q'$ of which aren't spheres with three marked points or tori with one marked point. Lemma \ref{L:ProductBound} states that for each such configuration, $$\prod_{i=1}^q V_{g_i, n_i} \leq V_g \left( \frac{C_1}{g}\right)^{q+q'-2}.$$ Note that $$2^{k+q^2} g^{q'-1} \left( \frac{C_1}{g}\right)^{q+q'-2} = \frac{2^{k+q^2} C_1^{q+q'-2}}{g^{q-1}} \leq (2C_1^2)^{k-1} \left(\frac{ g^{o(1)} }{g}\right)^{q-1},$$ where the final inequality uses that $2^q \leq 2^{n(g)+1} = g^{o(1)}$. For each $q$, there are at most $q$ values of $q'.$ Also, $q$ is at most $k+1.$ Thus, $$ \sum_{k=1}^{n(g)+1} B_k \leq V_g \sum_{k=1}^{n(g)+1} I_\varepsilon^k (2C_1^2)^{k-1} \sum_{q=2}^{k+1} \left(\frac{ g^{o(1)} }{g}\right)^{q-1}. $$ If $\varepsilon_0$ is small enough, then $I_\varepsilon \cdot (2 C_1^2) < 1/2$, so this is bounded by $$V_g I_\varepsilon \sum_{k=1}^{n(g)+1} \frac{1}{2^{k-1}} \sum_{q=2}^{n(g)+1} \left(\frac{ g^{o(1)} }{g}\right)^{q-1} = V_g I_\varepsilon g^{o(1)-1}.$$ Since $1-\exp(-I_\varepsilon)$ is comparable to $I_\varepsilon$, this gives the result. \end{proof} To conclude the proof of the proposition, first note that the alternating over/underestimate of the truncated inclusion exclusion bounds from Lemma \ref{L:IE} shows that the error in the truncated inclusion exclusion is at most $A_{n(g)+1}=G_{n(g)+1}+B_{n(g)+1}$. Lemma \ref{L:ThinMeasureGood} bounds $G_{n(g)+1}$, and Lemma \ref{L:ThinMeasureBad} overestimates $B_{n(g)+1}$, since the sum goes up to $n(g)+1$. Hence Lemmas \ref{L:ThinMeasureGood} and \ref{L:ThinMeasureBad} give the proposition. \end{proof} \begin{prop}\label{P:ThinIntegral} There is a constant $\varepsilon_0>0$ such that for $\varepsilon<\varepsilon_0$ and $F$ a non-negative function with support in $[0, g^{o(1)}]$, the integral of $F_{\mathrm{sns}}$ over $\cM_g^{<\varepsilon}$ is at least $$V_g(1-g^{-1+o(1)}) \cdot (1-\exp(-I_\varepsilon)) \cdot I_F .$$ \end{prop} \begin{proof} The proof is almost identical to the previous proposition, so we only give a sketch. Let $n(g)$ be the even integer closest to $\log g / \log \log \log g.$ Let $A_k'$ be the integral over $\cM_g$ of the sum over simple non-separating geodesics $\gamma$ on $X\in \cM_g$ of $F(\ell_X(\gamma))$ times the number of sets $S$ of $k$ disjoint geodesics of length at most $\varepsilon$, all of which are disjoint from $\gamma$. As in Lemma \ref{L:IE}, since $n(g)$ is even, the desired integral is bounded below by $$\sum_{k=1}^{n(g)} (-1)^{k+1} A_k'.$$ Indeed, if $\gamma$ is a simple non-separating geodesic, let $r$ denote the number of geodesics of length at most $\varepsilon$ disjoint from $\gamma$. If $r=0$, $\gamma$ does not contribute to this sum, and if $r>0$ then, since $n(g)$ is even, the proof of Lemma \ref{L:IE} shows that $\gamma$ contributes at most once.
For each $k$, decompose $A_k' = G_k' + B_k'$, where $G_k'$ is the contribution from sets with $S\cup \gamma$ non-separating, and $B_k'$ is the contribution from sets with $S\cup \gamma$ separating. \begin{lem}\label{L:ThinIntegralGood} There is a constant $\varepsilon_0>0$ such that for $\varepsilon<\varepsilon_0$, $$\sum_{k=1}^{n(g)} (-1)^{k+1} G_k' = V_g (1+g^{-1+o(1)})\cdot (1-\exp(-I_\varepsilon)) \cdot I_F. $$ Moreover $G_{n(g)+1}'=O(V_g g^{-1+o(1)}\cdot (1-\exp(-I_\varepsilon)) \cdot I_F).$ \end{lem} \begin{proof} In this case, because of our assumption on the support of $F$, the sinh approximation gives $$G_k' = (1-g^{-1+o(1)}) \frac{V_g} { k!} I_\varepsilon^k I_F,$$ and otherwise the proof proceeds as in Lemma \ref{L:ThinMeasureGood}, since this expression for $G_k'$ is $I_F$ times the expression for $G_k$ that appeared in Lemma \ref{L:ThinMeasureGood}. \end{proof} \begin{lem}\label{L:ThinIntegralBad} $$\sum_{k=1}^{n(g)+1} B_k' \leq V_g (g^{-1+o(1)}) \cdot(1-\exp(-I_\varepsilon))\cdot I_F.$$ \end{lem} \begin{proof} The proof is similar to Lemma \ref{L:ThinMeasureBad}. \end{proof} The proposition follows from Lemmas \ref{L:ThinIntegralGood} and \ref{L:ThinIntegralBad}. \end{proof} \section{Proof of Theorem \ref{T:GeometricMain}}\label{S:GeometricMain} Fix $\kappa$ and $D$, and consider the locus $\cN_g\subset \cM_g$ where \begin{enumerate} \item there are no separating multi-curves of total length less than $(\kappa/2) \log(g)$ whose complement has two components, and \item there are no separating multi-curves of total length less than $2D\log(g)$ whose complement has two components, each of area at least $2\pi (4D+1)$. \end{enumerate} \begin{lem}\label{L:NgMeasure} The measure of the complement of $\cN_g$ is $$O( g^{\kappa-1} V_g ). $$ \end{lem} \begin{proof} Corollary \ref{C:SepVol} states that, for integer $a\geq 0$, the probability that a surface in $\cM_g$ has a multi-geodesic of length at most $L$ cutting the surface into two components, each of area at least $2\pi a$, is $$O(e^{2L} \cdot g^{-a}) .$$ Thus, the probability of having a separating multi-curve of total length less than $(\kappa/2) \log(g)$ is $$O( g^\kappa \cdot g^{-1})$$ and the probability of having a separating multi-curve of total length less than $2D\log(g)$ cutting the surface into two components, each of area at least $2\pi (4D+1)$, is $$O( g^{4D} \cdot g^{-(4D+1)}).$$ This proves the lemma. \end{proof} Define $\cN_g^{>\varepsilon}= \cN_g \cap \cM_g^{>\varepsilon}$. For now we require only that $\varepsilon$ is smaller than some universal constant, but ultimately we will take $\varepsilon$ to zero as $g\to \infty$. Throughout this section, we assume $F$ is supported on $[0, D\log(g)]$. \begin{lem}\label{L:snsMain} The average over $\cN_g^{>\varepsilon}$ of $F_{\mathrm{sns}}$ is at most $$\left(1+ O(g^{\kappa-1}) \right) I_F.$$ \end{lem} \begin{proof} Compute \begin{eqnarray*} \frac{1}{\Vol(\cN_g^{>\varepsilon})}\int_{\cN_g^{>\varepsilon}} F_{\mathrm{sns}} &=& \frac{\Vol(\cM_g^{>\varepsilon})}{\Vol(\cN_g^{>\varepsilon})} \cdot \frac{1}{\Vol(\cM_g^{>\varepsilon})} \int_{\cN_g^{>\varepsilon}} F_{\mathrm{sns}} \\&\leq& \frac{\Vol(\cM_g^{>\varepsilon})}{\Vol(\cN_g^{>\varepsilon})} \cdot \frac{1}{\Vol(\cM_g^{>\varepsilon})} \int_{\cM_g^{>\varepsilon}} F_{\mathrm{sns}}. \end{eqnarray*} We bound the first and second factors of this expression separately. Corollary \ref{C:ThickAverage} states that the second factor is at most $(1+g^{-1+o(1)}) I_F$.
Note that \begin{eqnarray*} \frac{\Vol(\cM_g^{>\varepsilon})}{\Vol(\cN_g^{>\varepsilon})} &=& 1 + \frac{\Vol(\cM_g^{>\varepsilon}\setminus \cN_g^{>\varepsilon} )}{\Vol(\cN_g^{>\varepsilon})} \\ & \leq & 1 + \frac{\Vol(\cM_g\setminus \cN_g )}{\Vol(\cN_g^{>\varepsilon})} \\ & \leq & 1 + \frac{2\Vol(\cM_g\setminus \cN_g )}{V_g}, \end{eqnarray*} where in the last line we used the extremely weak bound $\Vol(\cN_g^{>\varepsilon}) \geq V_g/2$. So Lemma \ref{L:NgMeasure} gives that the first factor is at most $1+ O( g^{\kappa-1})$. \end{proof} \begin{prop}\label{P:nsnsMain} The average over $\cN_g^{>\varepsilon}$ of $F_{\mathrm{all}}-F_{\mathrm{sns}}$ is at most $$O( g^{o(1)-1} I_F).$$ \end{prop} A union of closed geodesics is said to fill a hyperbolic surface if every component of the complement is either a contractible polygon or an annular region around a boundary geodesic. Recall the following. \begin{lem}\label{L:fill} Suppose a union $\gamma$ of closed geodesics of total length $\ell$ fills a hyperbolic surface $X$ of area $A$ with boundary of length $B\geq 0$. Then $B< 2\ell$ and $\ell > A/4$. \end{lem} \begin{proof} Each boundary geodesic can be obtained by tightening a path of segments of $\gamma$, and each segment can contribute at most twice in this way. So $B<2\ell$. A version of the isoperimetric inequality gives that, for each component of the complement of $\gamma$, the length of the boundary of this component is greater than the area \cite[page 211]{Buser}. So $2\ell+B >A.$ \end{proof} \begin{cor}\label{C:subsurface} If $g$ is larger than a constant depending on $D$, then any non-simple geodesic of length at most $D\log(g)$ on a surface in $\cN_g$ is contained in a subsurface with boundary of length at most $2D\log(g)$ and area at most $2\pi (4D+1)$ and with connected complement. \end{cor} \begin{proof} By Lemma \ref{L:fill}, any non-simple geodesic of length at most $D\log(g)$ fills a subsurface $S$ with boundary of length at most $2D\log(g)$ and area at most $4D\log(g)$. A surface of that area has at most $2D\log(g)/\pi$ boundary circles, so the complement of $S$ can have at most that many components. Let $C$ be the component of the complement of $S$ with largest area, so $C$ has area at least $$ \frac{2\pi(2g-2)- 4D\log(g)}{2D\log(g)/\pi}.$$ Assume $g$ is large enough so that this quantity is greater than $2\pi (4D+1)$. Let $S'$ be the complement of $C$. Note that $S'$ is connected, because it contains $S$, which is adjacent to every component of the complement of $S$. By the second condition in the definition of $\cN_g$, we see that $S'$ must have area at most $2\pi (4D+1)$, since its complement is connected and has area greater than $2\pi (4D+1)$. Since the geodesic is contained in $S'$, this gives the result. \end{proof} \begin{proof}[Proof of Proposition \ref{P:nsnsMain}] We will use that $F$ has support in $[0, D\log(g)]$. Since $D$ is fixed, there are only a finite number of possible topological types for a subsurface of area at most $2\pi (4D+1)$. Thus Corollary \ref{C:subsurface} motivates the following. \begin{lem}\label{L:g1k} For fixed $g_1$ and $k$, if $\varepsilon$ is such that $1/\varepsilon$ is $g^{o(1)}$, then the average over $\cN_g^{>\varepsilon}$ of the sum of $F$ over geodesics contained in a subsurface of genus $g_1$ with $k$ boundary components and connected complement is at most $$ O( g^{-1+o(1)} I_F).$$ \end{lem} \begin{proof} We estimate the average number of such geodesics of length at most $L$, assuming $L\leq D\log(g)$.
Each such geodesic is contained in a subsurface with boundary of length at most $2L$. Corollary \ref{C:SepVol} (with $a=2g_1-2+k$ fixed) gives that the average number of such subsurfaces is at most $$O(e^{(2L)/2} (2L)^{p} g^{-1})$$ for some $p$. Note that, since the volume of $\cN_g^{>\varepsilon}$ is certainly at least half that of $\cM_g$, the average over $\cN_g^{>\varepsilon}$ is at most twice the average over $\cM_g$. Theorem \ref{T:GraphBound} gives that the number of geodesics of length at most $L$ in each such subsurface is at most $$\left(\frac{C L_0 \log(1/\varepsilon)}{\varepsilon}\right)^{C L/L_0+3}$$ where $L_0=(\kappa/2)\log(g)$. Given $L\leq D\log(g)$, the exponent $C L/L_0+3$ is $O(1).$ Given the restriction on $\varepsilon$, this whole expression is $g^{o(1)}$, with the little-$o$ function depending on $D$ and $\kappa$. So the average over $\cN_g^{>\varepsilon}$ of the number of geodesics of length at most $L\leq D\log(g)$ contained in a subsurface of genus $g_1$ with $k$ boundary components is $$O(e^{L} L^{p} g^{-1+o(1)}).$$ Given the bound on $L$ this is $$O(g^{-1+o(1)} L \sinh^2(L/2) / (L/2)^2).$$ Integrating this against $F(L)$ gives the result, since the probability density function for the number of geodesics in question of length exactly $L$ is certainly bounded by the probability density function for geodesics of length at most $L$. \end{proof} \begin{lem}\label{L:SepAvg} The average over $\cN_g^{>\varepsilon}$ of the sum of $F$ over simple separating geodesics is at most $$O( g^{o(1)-1} I_F).$$ \end{lem} \begin{proof} First consider the average number of separating geodesics of length at most $L$, averaged over $\cN_g^{>\varepsilon}$. Since the volume of $\cN_g^{>\varepsilon}$ is certainly at least half that of $\cM_g$, this is at most twice the average over $\cM_g$. Corollary \ref{C:SepVol} (with $k=1$) gives that the average over $\cM_g$ is $O(e^{L} L^{2} g^{-1})$. Assuming $L\leq D\log(g)$, this is bounded by a constant times $$g^{-1+o(1)} L \sinh^2(L/2) / (L/2)^2.$$ Integrating this against $F(L)$ gives the result. \end{proof} The two lemmas prove the proposition, since every geodesic contributing to $F_{\mathrm{all}}-F_{\mathrm{sns}}$ is either simple and separating, and hence controlled by Lemma \ref{L:SepAvg}, or contained in a subsurface of one of finitely many topological types by Corollary \ref{C:subsurface}, and hence controlled by Lemma \ref{L:g1k}. \end{proof} \begin{proof}[Proof of Theorem \ref{T:GeometricMain}] Set $\varepsilon=1/\log(g)$ and define $\cM_g' = \cN_g^{>\varepsilon}$. Since $\varepsilon\to 0$ as $g\to \infty$, the probability measure of $\cM_g^{<\varepsilon}$ goes to zero as $g\to\infty$. Lemma \ref{L:NgMeasure} gives that the probability measure of the complement of $\cN_g$ goes to zero as $g\to \infty$. So ${\Vol(\cM_g')}/{V_g} \to 1.$ Lemma \ref{L:snsMain} and Proposition \ref{P:nsnsMain} give the estimate on the integral of $F_{\mathrm{all}}$. \end{proof} \section{Proof of Theorem \ref{T:SpectralMain}}\label{S:SpectralMain} In this section we prove Theorem \ref{T:SpectralMain} by averaging Selberg's trace formula \cite{Selberg}. \subsection{Statement of the trace formula} For smooth, even, compactly supported functions $f$ on $\mathbb{R},$ define $$F_f(x) = x \cdot \sum_{k = 1}^\infty \frac{f(kx)}{2 \sinh(kx/2)}.$$ We continue to use $F_{f,\mathrm{all}}(X)$ to denote the sum of $F_f$ over the lengths of primitive oriented closed geodesics on $X$.
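(As a numerical aside, not part of the argument: since $f$ is compactly supported, only finitely many terms of the $k$-sum defining $F_f$ are non-zero for each $x>0$, and the $\sinh$ in the denominator makes the tail decay rapidly, so $F_f$ is easy to evaluate directly. A minimal sketch, with an assumed bump function $f$ supported on $[-1,1]$:
\begin{verbatim}
import numpy as np

def f(x):
    # assumed smooth even bump supported on [-1, 1]
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def F_f(x, kmax=60):
    # F_f(x) = x * sum_{k>=1} f(kx) / (2 sinh(kx/2)), for x > 0;
    # the truncation is exact for x >= 1/kmax since f(kx) = 0 once kx > 1
    k = np.arange(1, kmax + 1)[:, None]
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return x * np.sum(f(k * x) / (2.0 * np.sinh(k * x / 2.0)), axis=0)

print(F_f([0.1, 0.5, 0.9]))   # supported in (0, 1], like f
\end{verbatim}
Such direct evaluation is convenient for sanity-checking the estimates on $F_f$ used below.)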
\begin{thm}\label{T:TF} Let $f$ be a smooth, even, compactly supported function on $\mathbb{R}.$ Let $X$ be a closed hyperbolic surface of genus $g$ with Laplace eigenvalues $\lambda_n = \frac{1}{4} + r_n^2$. Then $$\sum_{r_n } \widehat{f}(r_n) = (g-1) \int_{-\infty}^{\infty} \widehat{f}(r) \cdot r \cdot \tanh(\pi r) \; dr + F_{f,\mathrm{all}}(X).$$ \end{thm} The left hand side is called the spectral side, and the right hand side the geometric side. The first summand on the geometric side is called the identity contribution. The imaginary parameters $r_n,$ corresponding to eigenvalues strictly less than $\frac{1}{4},$ are called exceptional. \medskip Since $f$ is even, its Fourier transform equals $$\widehat{f}(r) = \int_{\mathbb{R}} f(x) e^{-ir \cdot x} dx = 2\int_{0}^\infty f(x) \cosh(-ir x) dx.$$ \subsection{A preliminary observation.} We start by noting that the integral $I_{F_f}$ is close to the $\lambda_0 = 0$ contribution to the trace formula. \begin{lem}\label{L:IFf} $|I_{F_f}- \widehat{f}( i/2 )| \leq 4 \|f\|_{1}.$ \end{lem} \begin{proof} Directly from the definitions, we get \begin{eqnarray*} I_{F_f} &=& 2 \sum_{k = 1}^\infty \int_0^\infty \frac{f(k\ell) \sinh(\ell/2)^2}{\sinh(k\ell/2)} d \ell \\&=& 2\int_0^\infty f(\ell) \sinh(\ell/2) d \ell + 2 \sum_{k = 2}^\infty \int_0^\infty \frac{f(k\ell) \sinh(\ell/2)^2}{\sinh(k\ell/2)} d \ell \\&=& \widehat{f} \left( i/2 \right) - 2\int_0^\infty f(\ell) e^{-\ell/2} d\ell+ 2 \sum_{k = 2}^\infty \int_0^\infty \frac{f(k\ell) \sinh(\ell/2)^2}{\sinh(k\ell/2)} d \ell. \end{eqnarray*} Since $e^{-\ell/2}\leq 1$, the middle term is at most $2\|f\|_1$. When $k\geq 2$, convexity of $\sinh$ gives $$\frac{\sinh(\ell/2)^2}{\sinh(k\ell/2)} \leq \frac{\sinh(\ell/2)^2}{\frac{k}2 \sinh(\ell)} \leq \frac1{k} . $$ Applying the change of variables $u=k\ell$ and noting that $2\sum_{k=2}^\infty \frac1{k^2}< 2$, this gives that the third term is at most $2\|f\|_1$. \end{proof} \begin{cor}\label{C:Cancel} Under the assumptions of Theorem \ref{T:SpectralMain}, if $f$ is even and has support in $[-D\log(g) ,D\log(g)]$, we have the one-sided bound $$ \int_{\cM_g}\left(F_{f,\mathrm{all}}(X)- \widehat{f}\left( i/2 \right)\right) d \mu(X) \leq 5\|f\|_{1}+ O(g^{\kappa-1}) \widehat{f}\left( i/2 \right).$$ \end{cor} This cancellation on average is the essential point in our arguments below. \subsection{Picking test functions} Fix a smooth, compactly supported, even test function $f$ on $\mathbb{R}$ satisfying \begin{itemize} \item $f$ is non-negative and supported on $[-1,1]$, and \item $\widehat{f} \geq 0$ on $\mathbb{R} \cup i \mathbb{R}$ with $\widehat{f} > 0$ on $i \mathbb{R}.$ \end{itemize} For example, $f$ could be the convolution square of a smooth, even, non-negative function $\varphi$ supported on $[-1/2,1/2]$ with $\varphi(0) > 0.$ Define $$f_L(x) = \frac{1}{2}\left( f(x + L) + f(x - L) \right).$$ The Fourier transform intertwines translation and multiplication by characters, so $\widehat{f_L}(r) = \widehat{f}(r) \cdot \cos(Lr)$ and $\widehat{f_L}(it) = \widehat{f}(it) \cdot \cosh(Lt).$ We will assume that $L\leq D\log(g)-1$. Our goal is to give an upper bound for $$p = \mu \left( \left\{ X \in \cM_g: \lambda_1(X) \leq \frac{1}{4} - b^2 \right\} \right).$$ We start by relating this to the contribution of exceptional eigenvalues. \begin{lem}\label{L:AVGlower} The $\mu$-average of $$\sum_{r_n \in (0 \cdot i, \frac{1}{2} \cdot i) } \widehat{f_L}(r_n(X)) $$ is at least $p \cdot \cosh(Lb) \cdot m,$ where $m = \min_{t \in [0,1/2]} \widehat{f}(it)$.
\end{lem} \begin{proof} This follows immediately from monotonicity of $\cosh$ and the non-negativity property of $\widehat{f}$. \end{proof} In the remainder of the proof, we use the trace formula to give an upper bound for this average, which will translate into an upper bound for $p$. \begin{lem}\label{L:AVGupper} The $\mu$-average of $$\sum_{r_n \in (0 \cdot i, \frac{1}{2} \cdot i) } \widehat{f_L}(r_n(X)) $$ is less than or equal to $O \left( e^{L/2} \cdot g^{\kappa - 1} + g^{1 + o(1)} \right).$ \end{lem} \begin{proof} The trace formula allows us to write $\sum_{r_n \in (0 \cdot i, \frac{1}{2} \cdot i) } \widehat{f_L}(r_n)$ as $$ (g-1) \int_{-\infty}^{\infty} \widehat{f_L}(r) \cdot r \cdot \tanh(\pi r) \; dr - \sum_{r_n \text{ real}} \widehat{f_L}(r_n) + \left( F_{f_L,\mathrm{all}}(X) - \widehat{f_L}(i/2) \right). $$ We will show that the first two terms are small, and that the $\mu$-average of the third term is small. To start, note that since $\widehat{f_L}(r) = \widehat{f}(r) \cos(Lr),$ the first term is bounded by $$(g-1) \int_{-\infty}^{\infty} |\widehat{f}(r)| \, |r| \; dr = O( g ).$$ In Corollary \ref{C:AppendixC}, for fixed $h$, we show using a standard local Weyl law argument that, for all $X\in \cM_g$, \begin{align*} \sum_{r_n \text{ real}}|h(r_n)| &= O \left(g \cdot \logp\left(\frac{1}{\mathrm{sys}(X)}\right) \right), \end{align*} where $\logp(x) = \max \{0, \log(x) \}+1$. For $X$ in the support of $\mu$, keeping in mind that $|\widehat{f_L}| \leq |\widehat{f}|$, this gives the bound $$ \sum_{r_n \text{ real}} | \widehat{f_L}(r_n(X)) | = O\left(g^{1 + o(1)} \right)$$ for the second term above. Finally, Corollary \ref{C:Cancel} shows that the $\mu$-average of the third term is at most $$5\|f_L\|_{1}+ O(g^{\kappa-1}) \widehat{f_L}(i/2) = O(1+g^{\kappa-1} e^{\frac{L}2}). $$ Combining the bounds for the three terms gives the lemma. \end{proof} We can now conclude the proof. \begin{proof}[Proof of Theorem \ref{T:SpectralMain}] Combining the upper bound from Lemma \ref{L:AVGupper} with the lower bound from Lemma \ref{L:AVGlower} yields \begin{equation*} p = O \left( e^{(\frac{1}{2} - b) L} \cdot g^{\kappa - 1} + g^{1 + o(1)} \cdot e^{-Lb} \right). \end{equation*} The two summands here are equal when $L$ equals $L_0 = (4 - 2\kappa + o(1)) \log g.$ For this particular choice of $L_0$ we get \begin{equation*} p = O \left( g^{ 1 - 4b \left(1 - \frac{\kappa}{2} \right) + o(1)} \right), \end{equation*} proving the theorem. \end{proof}
\section{Frenkel-Kontorova model} From a theoretical point of view, the most attractive feature of crowdions is the analytical tractability afforded by their one-dimensional nature. Below we introduce the Frenkel-Kontorova model \cite{braun2004}, a versatile one-dimensional model for the treatment of crowdions and also dislocation lines. The starting point is the Lagrangian \begin{eqnarray} {\mathcal L} & = & \sum_{n=-\infty}^{\infty} \left\{ \frac{m\dot z_n^2}{2} - \frac{\beta}{2} \left(z_{n+1}-z_n-a\right)^2 - V(z_n)\right\}, \nonumber\\ & \to & \int_{-\infty}^{\infty} \left\{\frac{ m}{2}\left(\frac{\partial u}{\partial t}\right)^2 -\frac{\beta a^2}{2}\left(\frac{\partial u}{\partial z}\right)^2 - V\left(u(z,t)\right)\right\} {\rm d} z \label{eqn:FK}, \end{eqnarray} where the sum runs over the atoms of the close-packed string containing one additional atom; the atoms have mass $m$, positions $z_n$, and are connected by harmonic springs of stiffness $\beta$. The interaction with the surrounding ``perfect'' lattice is encoded in the periodic potential $V(z_n)$. Assuming the atomic displacement $u_n \equiv z_n - na$ varies slowly with the atomic index $n$, it can be described by a continuous function $u(z,t)$, with boundary conditions $u(-\infty)=a,u(\infty)=0$, corresponding to the single additional atom in the string. $a$ is the equilibrium spacing, and is given by $r_0\sqrt 3/2 $ for the $\langle 111\rangle$ direction in a bcc crystal with lattice constant $r_0$. The simplest choice for the lattice potential is $V_0\sin^2\left(\pi z/a\right)$, and if we seek a static solution to the Euler-Lagrange equation corresponding to Eq.(\ref{eqn:FK}), we find \begin{equation} u(z;z_0) = \frac{2a}{\pi}\tan^{-1}{\rm e}^{-\mu(z-z_0)}, \label{eqn:profile} \end{equation} where $\mu^2 = 2\pi^2 V_0/(\beta a^4)$. This displacement profile smoothly varies from $a$ to $0$ as $z$ goes from $-\infty$ to $\infty$, with the variation taking place over a lengthscale $1/\mu$. Thus $\mu$ encodes the width of the crowdion, reflecting the relative strengths of the intra-string ($\beta$) and surrounding lattice ($V_0$) interactions. $z_0$ is the crowdion centre-of-mass coordinate, i.e. its position in the $\langle 111\rangle$ string. In the continuum limit, the energy of a static crowdion can be calculated by inserting the displacement profile Eq.~(\ref{eqn:profile}) into the (static) Hamiltonian \cite{kosevich2006} \begin{equation} E_0 = \int_{-\infty}^{\infty} \left\{ \frac{\beta a^2}{2}\left(\frac{\partial u}{\partial z}\right)^2 + V\left(u(z,t)\right)\right\}{\rm d} z = \left( \frac{\beta a^4\mu^2}{2\pi^2} + V_0\right)\frac{2}{a\mu} = \frac{2a}{\pi}\sqrt{2V_0\beta}.\label{eqn:E0} \end{equation} Note how the two terms in the energy, corresponding to the intra-string ($\beta$) and surrounding lattice ($V_0$) interactions, are equal at every point. Also, $E_0$ is independent of $z_0$, and so is independent of position. This is an artefact of the continuum limit we have taken, and discreteness can be approximately reintroduced by assuming the crowdion's profile remains fixed as it moves through the crystal, and exploiting the equipartition of the energy between string and lattice to write \begin{equation} E_{\rm discrete} = \sum_{n=-\infty}^{\infty} \left( \frac{\beta}{2}\left(z_{n+1} - z_n - a\right)^2 + V(z_n)\right) \to 2\sum_{n=-\infty}^{\infty} V(u_n); \; u_n = u(na), \end{equation} {\it i.e.} the continuum solution is evaluated at each discrete atom.
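As a quick numerical consistency check (our own sketch; the parameter magnitudes are assumptions of the order quoted in the next section, not fitted values), one can evaluate the discrete Frenkel-Kontorova energy on the continuum profile of Eq.~(\ref{eqn:profile}) and compare it with the closed form of Eq.~(\ref{eqn:E0}):
\begin{verbatim}
import numpy as np

# Assumed parameter magnitudes: V0 ~ 1 eV, beta*a^2 ~ 50-100 eV
a, V0 = 1.0, 1.0
beta = 75.0 / a**2
mu = np.pi * np.sqrt(2.0 * V0 / beta) / a**2   # mu^2 = 2 pi^2 V0/(beta a^4)

n = np.arange(-200, 201)
u = (2.0 * a / np.pi) * np.arctan(np.exp(-mu * n * a))  # Eq. (profile), z0=0

E_spring = 0.5 * beta * np.sum(np.diff(u) ** 2)      # intra-string term
E_onsite = V0 * np.sum(np.sin(np.pi * u / a) ** 2)   # lattice term
E0 = (2.0 * a / np.pi) * np.sqrt(2.0 * V0 * beta)    # Eq. (E0)

print(E_spring + E_onsite, E0)   # agree closely when mu*a < 1
\end{verbatim}
The spring and on-site contributions come out nearly equal, illustrating the equipartition between string and lattice noted above.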
The Poisson summation formula then leads to a Fourier series for the {\it Peierls potential} for the defect: \begin{equation} E = E_0 + \frac{2V_0\pi^2}{\mu^2 a^2}\sum_{n=1}^{\infty}\, n\, \cos\frac{2\pi n z_0}{a}\,\, {\rm cosech}\frac{\pi^2 n}{\mu a}. \end{equation} This is the potential in which the defect moves, and the ${\rm cosech} ({\pi^2 n}/{\mu a})$ factor strongly suppresses its magnitude when $\mu a < 1$, which is the case for crowdions. This is delocalization: the intra-string interaction is greater than the lattice interaction, meaning the displacement is spread over many atoms. Moving the defect centre-of-mass one lattice parameter corresponds to tiny motions of many atoms, leading to a suppressed migration barrier. The first term of the series is an adequate approximation, giving \begin{equation} E_{\rm mig}\approx\frac{8V_0\pi^2}{\mu^2 a^2}{\rm cosech} \frac{\pi^2}{\mu a}, \end{equation} which is in the $\mu$eV range for reasonable values of the parameters (see \cite{fitzgerald2008b}; $V_0\sim$ 1~eV, $\beta a^2\sim$ 50--100~eV). \section{Double sine-Gordon model} Atomistic simulations \cite{derlet2007} suggest that, whilst very low, the crowdion migration barrier is in the meV rather than $\mu$eV range, indicating that the model described above is not the whole story. In fact, the assumption that the lattice potential is sinusoidal is not always accurate, as density functional calculations show. Particularly for the group VI metals Cr, Mo and W, the potential shows a local minimum midway between the main $a$-period minima, as can be seen in Fig.~\ref{fig:potdisp} \cite{fitzgerald2008b}. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{posns.eps} \caption{Atomic positions for crowdions in the single- (top) and double-sine (bottom) models. Parameters are for vanadium and tungsten respectively.} \label{fig:posns} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{pots.eps} \includegraphics[width=0.45\textwidth]{disp.eps} \caption{Lattice potential (left) and atomic displacement gradients (right) for crowdions in the bcc transition metals (DFT; data from \cite{fitzgerald2008b}). Solid lines: double sine fits; dashed line: single sine fit.} \label{fig:potdisp} \end{figure} These curves can be well-fitted by a double-sine potential \begin{equation} V(z)= V_0\left(\sin^2\left(\vphantom{\frac{2\pi z}{a}} \frac{\pi z}{a} \vphantom{\frac{2\pi z}{a}} \right) + \frac{\alpha^2 - 1}{4}\sin^2\left( \frac{2\pi z}{a}\right) \right),\label{eqn:V} \end{equation} and the analysis carries over, leading to a displacement solution \begin{equation} u(z;z_0) = \frac{a}{\pi}\arctan\left[ \frac{\alpha}{\sinh\left(\mu \alpha(z-z_0)\right)}\right], \label{eqn:disp} \end{equation} and the width of the crowdion is now encoded by the combination $\mu\alpha$. A similar, yet more involved, calculation yields the Peierls potential \begin{equation} E(z_0) = E_0 + \sum_{j=1}^{\infty}I_j\cos\left(\frac{2\pi j z_0}{a}\right), \label{eqn:exp} \end{equation} where \begin{equation} I_j = \frac{2V_0\alpha\pi}{\mu a}{\rm cosech}\left(\frac{\xi\pi}{2}\right) \times \left\{\xi \cos\left(\frac{\xi}{4}\ln\frac{q_+}{q_-}\right) \right. \left. - \frac{1}{\alpha \sqrt{\alpha^2-1}}\sin\left(\frac{\xi}{4}\ln\frac{q_+}{q_-}\right)\right\}, \label{eqn:I} \end{equation} and $\xi = 2\pi j/\alpha\mu a$ and $q_{+,-} = 1-2\alpha^2 \pm 2\alpha\sqrt{\alpha^2 - 1}$.
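The Peierls series (\ref{eqn:exp})--(\ref{eqn:I}) is straightforward to evaluate numerically. The sketch below uses illustrative parameters (assumed values, not the DFT-fitted ones for any particular metal) and computes the migration barrier as the peak-to-trough variation of $E(z_0)$; for crowdion-like widths, $\mu a\lesssim 1$, it lands in the meV range, consistent with the group VI values quoted below.
\begin{verbatim}
import numpy as np

V0, alpha, a = 1.0, 1.2, 1.0   # illustrative values; alpha > 1
mu = 0.8 / a                    # mu*a = 0.8

def I(j):
    # Eq. (I): Fourier coefficients of the double-sine Peierls potential
    xi = 2.0 * np.pi * j / (alpha * mu * a)
    root = np.sqrt(alpha**2 - 1.0)
    qp = 1.0 - 2.0 * alpha**2 + 2.0 * alpha * root
    qm = 1.0 - 2.0 * alpha**2 - 2.0 * alpha * root
    phase = 0.25 * xi * np.log(qp / qm)   # qp/qm > 0, so the log is real
    pref = 2.0 * V0 * alpha * np.pi / (mu * a) / np.sinh(0.5 * np.pi * xi)
    return pref * (xi * np.cos(phase) - np.sin(phase) / (alpha * root))

z0 = np.linspace(0.0, a, 1000)
E = sum(I(j) * np.cos(2.0 * np.pi * j * z0 / a) for j in range(1, 11))
print("barrier ~", E.max() - E.min(), "eV")   # a few meV for these values
\end{verbatim}
The ${\rm cosech}(\xi\pi/2)$ factor makes the $j=1$ term completely dominant, so the barrier is essentially twice $|I_1|$.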
The input parameters can be determined from density functional calculations, and the resulting migration barrier heights for V, Nb and Ta are $6.8\times 10^{-4}$, $0.25\times 10^{-4}$ and $0.087\times 10^{-4}$\,eV respectively, whilst those for Cr, Mo and W are $12\times 10^{-3}$, $2.4\times 10^{-3}$ and $2.6\times 10^{-3}$\,eV respectively. A clear group-specific trend emerges, with the group VI metals having a deeper local minimum, and hence a larger migration barrier, than their group V counterparts. Still, all these barriers are remarkably low. \section{Multi-crowdion solutions} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{peierls.eps} \caption{Effect of number of undefected neighbour strings on crowdion Peierls potential. An isolated crowdion has 6, whereas most boundary crowdions in a cluster have 2.} \label{fig:peierls} \end{figure} Crowdions cluster together to form $\bm{b} = \frac1{2}\langle 111\rangle$ prismatic dislocation loops, and the Frenkel-Kontorova model can be extended to treat these clusters \cite{dudarev2003}. Using the single-sine form for simplicity, the interaction potential between two crowdions in neighbouring parallel $\langle 111\rangle$ strings with displacement fields $u_0 = u(z;0)$, $u_1=u(z;x)$ can be written \begin{eqnarray} E_{\rm int}(x) & = & \int_{-\infty}^{\infty} \frac{V_0}{6}\sin^2\left(\frac{\pi}{a}\left(u_0 - u_1\right)\right){\rm d}z\nonumber \\ & = & \frac{2V_0}{3\mu}\tanh \frac{\mu x}{2}\left( \frac{\mu x}{2} {\rm sech}^2 \frac{\mu x}{2} + \tanh \frac{\mu x}{2}\right), \end{eqnarray} where $x$ is the separation between the crowdions' centres of mass \cite{fitzgerald2015crowdion}. The factor of $1/6$ arises because $V_0$ was defined as the lattice potential for an isolated crowdion, surrounded by 6 neighbours. Each member of a crowdion pair has 5 undefected neighbour strings, so its $\mu \to \sqrt{5/6}\,\mu$ compared to an isolated crowdion. This small correction has important effects due to the extreme nonlinearity of the Peierls potential. For tungsten, the 2-crowdion interaction potential above is a slight (at most 0.3\,eV) repulsion at large separations, and an attractive well when the separation is less than about 12 atomic spacings. The well depth is $\sim$3\,eV (DFT gives somewhat less than this \cite{marinica2013}, but the agreement for the single-sine model is reasonable), so crowdions bind strongly together. The consequence for their displacement profile is that their $\mu$ is reduced, and hence they are more spread out down the $\langle 111\rangle$ string. For large clusters, only crowdions near the edge experience strong interactions with the undefected lattice. Crowdions in the interior are delocalized to such an extent that they are indistinguishable from the perfect lattice, and the cluster becomes a prismatic dislocation loop, with strain localized to the perimeter. At the perimeter, each boundary crowdion has 2 or 3 undefected neighbour strings (depending on the geometry of the loop -- small $\bm{b} = \frac1{2}\langle 111\rangle$ loops are typically hexagonal, so ``corner'' crowdions have 3 perfect neighbours, whilst ``edge'' crowdions have 2). Fig.~\ref{fig:peierls} shows the effect this has on the Peierls potential for crowdions in tungsten. The enhanced delocalization reduces the Peierls potential by at least 4 orders of magnitude, rendering it zero to all intents and purposes. This suppression comes again from the ${\rm cosech}(\pi^2 n/\mu a)$ term, which is an extremely nonlinear function of $\mu$.
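The size of this suppression is easy to reproduce numerically. For a crowdion with $k$ undefected neighbour strings, both $V_0$ and $\mu^2$ scale as $k/6$, so the prefactor of the single-sine barrier keeps its isolated-crowdion value while the argument of the cosech grows as $\sqrt{6/k}$. A sketch (reusing the assumed single-sine parameters from the previous snippet, not the double-sine tungsten fit used for the figure):
\begin{verbatim}
import math

V0, beta_a2 = 1.0, 75.0                               # assumed values, eV
mu_a6 = math.sqrt(2.0 * math.pi ** 2 * V0 / beta_a2)  # isolated: 6 neighbours

def e_mig(k):
    # V0 and mu^2 both scale as k/6, so the prefactor keeps its
    # isolated-crowdion value while the cosech argument grows
    mu_a = math.sqrt(k / 6.0) * mu_a6
    return (8.0 * V0 * math.pi ** 2 / mu_a6 ** 2) / math.sinh(math.pi ** 2 / mu_a)

for k in (6, 3, 2):   # isolated, "corner" and "edge" boundary crowdions
    print(f"k={k}: {e_mig(k):.1e} eV")
# k=6: ~2.6e-06 eV; k=3: ~9.2e-10 eV; k=2: ~2.0e-12 eV
\end{verbatim}
a suppression of roughly six orders of magnitude for edge crowdions, comfortably consistent with the ``at least 4 orders of magnitude'' above.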
This completely outweighs the increased number of boundary crowdions experiencing the Peierls potential\footnote{Loops would need to contain several million defects to have the $>$10,000 boundary crowdions required.}; prismatic dislocation loops can therefore move through the crystal effectively unimpeded, more easily even than isolated crowdions. \section{3D diffusion of single crowdions} Most defects migrate stochastically through the crystal with a diffusivity $D$ that takes the form $D = D_0\exp(-E_{\rm mig}/k_{\rm B}T)$, corresponding to hops through the lattice that occur with an Arrhenius rate proportional to $\exp(-E_{\rm mig}/k_{\rm B}T)$ ($T$ is the temperature, $E_{\rm mig}$ is the migration barrier and $k_{\rm B}$ is Boltzmann's constant). This expression relies on the implicit assumption that $E_{\rm mig}\gg k_{\rm B}T$, i.e. the hops are rare events. This clearly does not apply to crowdions at all but cryogenic temperatures. Indeed, for $E_{\rm mig}\ll k_{\rm B}T$, the diffusion is effectively free. For 1D motion in a sinusoidal potential, an exact solution for the hop rate exists for all temperatures, see e.g. \cite{swinburne2013}. Molecular dynamics simulations \cite{derlet2007} confirm the fast 1D nature of crowdion migration, but also show the defects changing from one $\langle 111\rangle$ direction to another. This occurs at a slower rate, comparable to the ``rare event'' hops of other crystal defects. This allows the crowdion to explore the entirety of the crystal; in this section we calculate the effect on 3D diffusion, and outline a Monte Carlo algorithm for its simulation. First, assume that the direction-changing transition is a Poisson process with rate $\Gamma$. Then the time intervals between changes of direction will be exponentially distributed, with pdf $\psi(t) = \Gamma\exp(-\Gamma t)$. If we further assume that, during the time interval $t$ spent between direction changes, the crowdion diffuses normally with diffusivity $D$, then the hop lengths $x$, conditioned on the time interval $t$, will have the normal distribution $\Lambda(x|t) = \exp(-x^2/2Dt)/\sqrt{2\pi Dt}$. Since the hops are independent, we can reorder the series of hops, treat each $\langle 111\rangle$ direction independently in 1D, and project onto 3D space at the end. The fact that the directions along which the crowdion can diffuse are linearly dependent is immaterial, as shown below. In the bcc lattice, there are four (unsigned) $\langle 111\rangle$ directions along which crowdions can move, with unit vectors $\bm{\hat e_{1,2,3,4}}$. The final position of the crowdion is $\bm{x_f} = \sum_{i=1}^4s_i\bm{\hat e_i}$, where $s_i$ is the sum of the signed hop lengths in the $i$ direction. Each of these hop lengths is normally distributed with zero mean and variance $D\Delta\! t$ (and the $\Delta\! t$s are exponentially distributed, though that is not required). The total time is $t= \sum_{i=1}^4t_i$, where $t_i$ is the time spent hopping in each direction, i.e. the sum of the $\Delta\! t$s for that direction. The expected value for $|\bm{x_f}|^2$ is given by \begin{eqnarray} {\mathbb E}\left(|\bm{x_f}|^2\right) = {\mathbb E}\left(\sum_{i=1}^4s_i\bm{\hat e_i}\right)^2 & = & |\bm{\hat e_1}|^2 {\mathbb E}\left(s_1^2\right) + ... + 2\bm{\hat e_1\cdot\hat e_2} {\mathbb E} (s_1s_2) + ...
\nonumber\\ & = & Dt_1 + Dt_2 + Dt_3 + Dt_4 + 0\nonumber\\ & = & Dt, \end{eqnarray} where in the second line we used the fact that variances add in sums of normally distributed random variables, and that ${\mathbb E}(s_1s_2) = {\mathbb E}(s_1){\mathbb E}(s_2) = 0$ by independence. The $\bm{\hat e_i}$s need not be orthogonal. \begin{figure} \centering \includegraphics[width=0.24\textwidth]{t1.png} \includegraphics[width=0.24\textwidth]{t2.png} \includegraphics[width=0.24\textwidth]{t3.png} \includegraphics[width=0.24\textwidth]{t4.png} \caption{Left to right: increasing magnification views of an example trajectory from crowdion Monte Carlo. Only at the smallest scales is the anisotropy of the diffusion evident. The time $t$ spent on a particular $\langle 111\rangle$ direction is drawn from an exponential distribution, then the distance diffused along that direction prior to the change is drawn from a normal distribution with variance $Dt$.} \label{fig:sim} \end{figure} With the above assumptions, the pdf $W$ for the crowdion position $x$ at time $t$ satisfies the Chapman-Kolmogorov equation \cite{montroll} \begin{equation} W(x,t) = \int_0^t\int_{-\infty}^{\infty}\psi(t-t')\Lambda(x-x'|t-t')W(x',t'){\rm d} x'{\rm d} t' + \left(1-\int_0^t\psi(t'){\rm d} t'\right)W(x,0). \end{equation} This reflects the sum over all possible hop lengths and times; the second term is the probability density for the particle remaining at its starting point until time $t$, with initial condition $W(x,0) = \delta(x)$. Inserting the above forms for $\psi$ and $\Lambda$ then taking Fourier transforms in $x$ and Laplace transforms in $t$, $W(x,t)\to W(k,s)$, leads to \begin{equation} W(k,s) = \frac{2(s + \Gamma) + k^2D}{(s+\Gamma)(2s+k^2D)}. \end{equation} Now, since \begin{equation} \frac{\partial^2 W(k,t)}{\partial k^2}\equiv \int_{-\infty}^{\infty} {\rm e}^{ikx}(-x^2) W(x,t){\rm d} x, \end{equation} we can differentiate $W(k,s)$ twice with respect to $k$ and set $k=0$ to get (minus) the Laplace-transformed expected value for $x^2$. Inverting the transform gives \begin{equation} \langle x^2\rangle = D\left(t - \frac{1-\exp(-\Gamma t)}{\Gamma}\right)\sim Dt \;{\rm when}\; t\gg\frac1{\Gamma}. \end{equation} So for sufficiently large times, the effective diffusivity is that of the 1D fast motion, but how long until this approximation is reasonable is controlled by the rate of direction changes, $\Gamma$. Indeed, for $t\ll 1/\Gamma$, $\langle x^2\rangle\sim D\Gamma t^2/2$. The MD simulations of \cite{derlet2007} give a rate \begin{equation} \Gamma = 6.59\times 10^{12} \exp(-0.385 \,{\rm eV}/k_{\rm B}T)\,{\rm s}^{-1} \end{equation} for crowdions in tungsten, whereas the migration energy for vacancies is found to be 1.78\,eV. This suggests that, on the timescale of vacancy diffusion, crowdion diffusion is effectively isotropic, and the 1D nature of hops can be neglected. Crowdion clusters/prismatic loops, on the other hand, stick to a single $\langle 111\rangle$ direction for much longer: whilst rotations for very small loops are not impossible \cite{arakawa2006changes}, the activation energy is much higher. Stochastic computer simulations are most efficient when the events being sampled have rates as similar as possible. A kinetic Monte Carlo simulation of, say, crowdion and vacancy hopping would spend the vast majority of its time moving crowdions, since their barriers are so low compared to those for vacancies (this is known generically as the low barrier problem).
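The scheme described in the next paragraph avoids this problem, and is simple enough to sketch immediately. The following minimal Python sketch (the four unsigned $\langle 111\rangle$ unit vectors are those of the bcc lattice; $D$ and $\Gamma$ are illustrative values rather than fitted ones) generates trajectories of the type shown in Fig.~\ref{fig:sim}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# the four unsigned <111> directions of the bcc lattice, normalized
E111 = np.array([[1, 1, 1], [1, 1, -1], [1, -1, 1], [-1, 1, 1]]) / np.sqrt(3.0)

D = 1.0       # 1D diffusivity (illustrative units)
Gamma = 1e-3  # direction-change rate (illustrative)

def crowdion_trajectory(n_events):
    # Event-driven crowdion Monte Carlo: waiting times between direction
    # changes are exponential with rate Gamma; the 1D displacement
    # accumulated during an interval t is normal with variance D*t.
    pos = np.zeros(3)
    traj = [pos.copy()]
    for _ in range(n_events):
        t = rng.exponential(1.0 / Gamma)       # time until next rotation
        s = rng.normal(0.0, np.sqrt(D * t))    # signed hop length along string
        pos = pos + s * E111[rng.integers(4)]  # pick a <111> direction
        traj.append(pos.copy())
    return np.array(traj)

traj = crowdion_trajectory(1_000_000)  # a million events on a laptop
\end{verbatim}
Only the rare direction changes are sampled as events; the fast 1D motion is integrated out analytically into the normal draw.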
A more efficient approach, and the one sketched above, is thus to sample only the direction-changing events, and then draw the crowdion's 1D motion from a normal distribution with the appropriate time-dependent variance, as in the analytical approach above. Fig.~\ref{fig:sim} shows an example trajectory from a million-step simulation of this type, which can be performed in under a minute on an ordinary laptop. The 1D $\langle 111\rangle$ hops are only apparent when `zoomed in'; at larger scales the trajectory is indistinguishable from standard diffusion. Indeed, given that crowdions' diffusion rate is typically many orders of magnitude higher than any other species', it may be advantageous to treat the crowdions using a density functional, in analogy with the DFT approach to electrons. \section{Conclusions} In this paper, we have derived the surprising result that clusters of crowdions (aka prismatic dislocation loops) can move through a bcc crystal lattice virtually unimpeded (aside from dissipation). The periodic (Peierls) potential in which they move is fractions of a $\mu$eV: several orders of magnitude lower than even that for an isolated crowdion. The reason for this is {\it delocalization} -- the lattice displacement induced by the additional atoms is spread over many atoms, meaning the translation of the centre of mass corresponds to the tiny motions of many more atoms. This is analogous to how the existence of dislocations allows the plastic deformation of crystals at far lower applied stresses than their ``theoretical strength'' would suggest. We then showed that the highly anisotropic diffusion of crowdions, which atomistic simulations have demonstrated, can be safely neglected at timescales sufficiently far above the timescale for direction changes. This will aid the development of hybrid mesoscale Monte Carlo simulations of defect structure evolution, by avoiding the low barrier problem associated with the suppression of the Peierls potential. \section*{Acknowledgments} SPF thanks Dr D Nguyen Manh and Dr M-C Marinica for many helpful discussions. This work was supported in part by the UK EPSRC, Grant number EP/R005974/1.
\section{Introduction} \label{sec:introduction} Symmetry plays a crucial role in many areas of mathematics and physics. Conventionally, group actions are used to model symmetries; with the advent of the more general mathematical structures called quantum groups~\cites{Drinfeld:Quantum_groups, Jimbo:Yang-baxter_eq, Woronowicz:CQG}, it is natural to consider actions (defined in a suitable sense depending on the algebraic or analytic framework chosen) of quantum groups on classical and noncommutative spaces. In this context, a very interesting programme is to study quantum symmetries of classical spaces. One may hope that there are many more quantum symmetries of a given classical space than classical group symmetries, which will help one understand the space better. Indeed, it has been a remarkable discovery of S. Wang that for \(n\geq 4\), a finite set of cardinality \(n\) has an infinite dimensional compact quantum group (the `quantum permutation group') of symmetries. For the relevance of quantum group symmetries in a wider and more geometric context, we refer the reader to the discussion on `hidden symmetry in algebraic geometry' in \cite{Manin:Qnt_grp_NCG}*{Chapter 13}, where Manin made a remark about possible genuine Hopf algebra symmetries of classical smooth algebraic varieties. Recently, several examples of faithful continuous actions by genuine (i.e. not of the form \(\Cont(G)\) for a compact group \(G\)) compact quantum groups on \(\Cont(X)\) for a connected compact space \(X\) were constructed by H. Huang~\cite{Huang:Faithful_act_CQG}. In \cite{Etingof-Walton:Semisimple_Hopf_act} an example was given of a faithful action by a finite dimensional genuine compact quantum group on the algebra of regular functions of a non\nobreakdash-smooth variety. However, it turns out to be rather formidable to construct such actions when the space is smooth (and connected) and the action is also smooth in some natural sense. In \cite{Goswami-Joardar:Rigidity_CQG_act}, it is conjectured that no smooth faithful action of a genuine compact quantum group on a compact connected smooth manifold can exist. The conjecture has been proved in that paper in two important cases: (i) when the action is isometric and (ii) when the compact quantum group is finite dimensional. We also mention the work of Etingof and Walton \cite{Etingof-Walton:Semisimple_Hopf_act}, which gives a similar no\nobreakdash-go result in the algebraic framework. It is, however, expected that genuine non\nobreakdash-compact quantum groups may have faithful smooth actions on smooth connected manifolds. Indeed, there are examples of such actions in the algebraic set-up (see~\cite{Goswami-Joardar:Rigidity_CQG_act}*{Example 14.2}). This motivates one to construct examples of faithful $\Cst$\nobreakdash-actions of non\nobreakdash-compact locally compact quantum groups on \(\Contvin(X)\), where \(X\) is a smooth connected manifold. It is also desirable to see if the actions are smooth in any suitable sense. We give a method to construct such actions using the theory of bicrossed products due to Baaj, Skandalis and Vaes~\cite{Baaj-Skandalis-Vaes:Non-semi-regular} and Vaes and Vainerman \cite{Vaes-Vainerman:Extension_of_lcqg}. There is one remarkable observation: there are several non\nobreakdash-Kac quantum groups with faithful \(\Cst\)\nobreakdash-actions (even ergodic) on \(\Contvin(X)\).
This would not be possible in the realm of compact quantum groups, as any compact quantum group acting faithfully on a commutative \(\Cst\)\nobreakdash-algebra must be of Kac type (see~\cite{Huang:Inv_subset_CQG_act}). Thus, there seems to be more freedom for obtaining quantum symmetries of classical spaces in the realm of locally compact quantum groups than in the compact case. On the other hand, it should be noted that if we are interested in actions which are isometric in a natural sense (as in~\cite{Goswami:Qnt_isometry}), we cannot possibly get any genuine locally compact quantum group actions on classical (connected) Riemannian manifolds. Indeed, such a no-go result is obtained within the class of locally compact quantum groups considered by us in this paper (see Theorem~\ref{the:rig_iso_faithful}). \medskip The plan of the paper is as follows. We gather some basic definitions and facts about locally compact quantum groups and their actions in the von Neumann as well as \(\Cst\)\nobreakdash-algebraic set-up in Subsection~\ref{subsec:LCQG}, followed by a brief account of the bicrossed product construction for locally compact groups in Subsection~\ref{subsec:Bicross}. In Section~\ref{sec:Podles_cond}, we specialise to the locally compact quantum groups arising from the bicrossed product construction for two groups, say \(G_{1}\) and \(G_{2}\), forming a matched pair, with \(G_1\) an abelian Lie group. We describe a natural action of this bicrossed product quantum group on \(\Contvin(\widehat{G_1})\) and verify that it satisfies Podle\'s\nobreakdash-type density conditions. Section~\ref{sec:properties} is devoted to investigating necessary and sufficient conditions for this action to be faithful or isometric. Using this, we observe in Section~\ref{sec:Example} that there is a large class of genuine locally compact quantum groups having faithful actions on the commutative \(\Cst\)\nobreakdash-algebra of \(\Contvin\)\nobreakdash-functions on a locally compact manifold. However, it is shown in Subsection~\ref{sec:Isometry} that no such genuine quantum group action can be isometric. \section{Preliminaries} \label{sec:preliminaries} All Hilbert spaces and \(\Cst\)\nobreakdash-algebras are assumed to be separable. For two norm\nobreakdash-closed subsets~\(X\) and~\(Y\) of a~\(\Cst\)\nobreakdash-algebra, let \[ X\cdot Y\mathrel{\vcentcolon=}\{xy : x\in X, y\in Y\}^{\textup{CLS}}, \] where CLS stands for the~\emph{closed linear span}. For a~\(\Cst\)\nobreakdash-algebra~\(A\), let~\(\Mult(A)\) be its multiplier algebra and \(\U(A)\) be the group of unitary multipliers of~\(A\). The unit of \(\Mult(A)\) is denoted by~\(1_{A}\). Next we recall some standard facts about multipliers and morphisms of \(\Cst\)\nobreakdash-algebras from~\cite{Masuda-Nakagami-Woronowicz:C_star_alg_qgrp}*{Appendix A}. Let~\(A\) and~\(B\) be~\(\Cst\)\nobreakdash-algebras. A \(^*\)\nb-{}homomorphism \(\varphi\colon A\to\Mult(B)\) is called \emph{nondegenerate} if \(\varphi(A)\cdot B=B\). Each nondegenerate \(^*\)\nb-{}homomorphism \(\varphi\colon A\to\Mult(B)\) extends uniquely to a unital \(^*\)\nb-{}homomorphism \(\widetilde{\varphi}\) from~\(\Mult(A)\) to \(\Mult(B)\). Let \(\Cstcat\) be the category of \(\Cst\)\nobreakdash-algebras with nondegenerate \(^*\)\nb-{}homomorphisms \(A\to\Mult(B)\) as morphisms \(A\to B\); let \(\Mor(A,B)\) denote this set of morphisms. We use the same symbol for an element of~\(\Mor(A,B)\) and its unique extension from~\(\Mult(A)\) to~\(\Mult(B)\).
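As a guiding example for this category (a standard fact, recorded here for the reader's convenience): for locally compact Hausdorff spaces \(X\) and \(Y\), every continuous map \(f\colon Y\to X\) induces a nondegenerate \(^*\)\nb-{}homomorphism \[ f^{*}\colon\Contvin(X)\to\Contb(Y)=\Mult(\Contvin(Y)), \qquad f^{*}(g)\mathrel{\vcentcolon=} g\circ f , \] that is, an element of \(\Mor(\Contvin(X),\Contvin(Y))\), and every morphism \(\Contvin(X)\to\Contvin(Y)\) arises in this way. Thus morphisms in \(\Cstcat\) between commutative \(\Cst\)\nobreakdash-algebras correspond to continuous maps in the reverse direction, which is why \(\Cstcat\) serves as a noncommutative analogue of the category of locally compact spaces.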
A \emph{representation} of a \(\Cst\)\nobreakdash-algebra~\(A\) on a Hilbert space~\(\Hils\) is a nondegenerate \(^*\)\nb-{}homomorphism \(\pi\colon A\to\Bound(\Hils)\). Since \(\Bound(\Hils)=\Mult(\Comp(\Hils))\), the nondegeneracy condition \(\pi(A)\cdot\Comp(\Hils)=\Comp(\Hils)\) is equivalent to \(\pi(A)\Hils\) being norm dense in~\(\Hils\); hence a representation is the same as a morphism from~\(A\) to~\(\Comp(\Hils)\). The identity representation of~\(\Comp(\Hils)\) on \(\Hils\) is denoted by~\(\Id_{\Hils}\). The group of unitary operators on a Hilbert space~\(\Hils\) is denoted by \(\U(\Hils)\). The identity element in \(\U(\Hils)\) is denoted by~\(1_{\Hils}\). We use~\(\otimes\) for the tensor product of Hilbert spaces, the minimal tensor product of \(\Cst\)\nobreakdash-algebras, and the tensor product of von Neumann algebras; the intended meaning will be clear from the context. We write~\(\Flip\) for the tensor flip \(\Hils\otimes\Hils[K]\to \Hils[K]\otimes\Hils\), \(x\otimes y\mapsto y\otimes x\), for two Hilbert spaces \(\Hils\) and~\(\Hils[K]\). We write~\(\flip\) for the tensor flip isomorphism \(A\otimes B\to B\otimes A\) for two \(\Cst\)\nobreakdash-algebras or von Neumann algebras \(A\) and~\(B\). Let~\(A_{1}\), \(A_{2}\), \(A_{3}\) be \(\Cst\)\nobreakdash-algebras. For any~\(t\in\Mult(A_{1}\otimes A_{2})\) we denote the leg numberings on the level of~\(\Cst\)\nobreakdash-algebras as \(t_{12}\mathrel{\vcentcolon=} t\otimes 1_{A_{3}} \in\Mult(A_{1}\otimes A_{2}\otimes A_{3})\), \(t_{23}\mathrel{\vcentcolon=} 1_{A_{3}}\otimes t\in\Mult(A_{3}\otimes A_{1}\otimes A_{2})\) and~\(t_{13}\mathrel{\vcentcolon=}\flip_{12}(t_{23})=\flip_{23}(t_{12})\in\Mult(A_{1}\otimes A_{3}\otimes A_{2})\). In particular, let \(A_{i}=\Bound(\Hils_{i})\) for some Hilbert spaces~\(\Hils_{i}\), where \(i=1,2,3\). Then for any \(t\in\Bound(\Hils_{1}\otimes\Hils_{2})\) the leg numberings are obtained by replacing~\(\flip\) with conjugation by the~\(\Flip\) operator. \subsection{Locally compact quantum groups and their actions} \label{subsec:LCQG} For a general theory of \(\Cst\)\nobreakdash-algebraic locally compact quantum groups we refer to~\cites{Masuda-Nakagami-Woronowicz:C_star_alg_qgrp, Kustermans-Vaes:LCQGvN}. A~\(\Cst\)\nobreakdash-\emph{bialgebra}~\(\Bialg{A}\) is a~\(\Cst\)\nobreakdash-algebra \(A\) with a comultiplication~\(\Comult[A]\in\Mor(A,A\otimes A)\) that is coassociative: \((\Id_{A}\otimes\Comult[A])\circ\Comult[A]=(\Comult[A]\otimes\Id_{A})\circ\Comult[A]\). Moreover, \(\Bialg{A}\) is a~\emph{bisimplifiable} \(\Cst\)\nobreakdash-bialgebra if~\(\Comult[A]\) satisfies the cancellation property \begin{equation} \label{eq:cancellation} \Comult[A](A)\cdot (1_{A}\otimes A)=\Comult[A](A)\cdot (A\otimes 1_{A})=A\otimes A . \end{equation} Let~\(\varphi\) be a faithful (approximate) KMS weight on~\(A\) (see~\cite{Kustermans-Vaes:LCQG}*{Section 1}). The sets of all positive \(\varphi\)\nobreakdash-integrable and~\(\varphi\)\nobreakdash-square integrable elements are defined by~\(\mathcal{M}_{\varphi}^{+} \mathrel{\vcentcolon=}\{a\in A^{+}\text{ \(\mid\) \(\varphi(a)<\infty\)}\}\) and \(\mathcal{N}_{\varphi}\mathrel{\vcentcolon=}\{a\in A\text{ \(\mid\) \(\varphi(a^{*}a)<\infty\)}\}\), respectively.
Moreover, \(\varphi\) is called \begin{enumerate} \item \emph{left invariant} if \(\omega((\Id_{A}\otimes\varphi)\Comult[A](a))=\omega(1)\varphi(a)\) for all~\(\omega\in A_{*}^{+}\), \(a\in\mathcal{M}^{+}_{\varphi}\); \item \emph{right invariant} if \(\omega((\varphi\otimes\Id_{A})\Comult[A](a))=\omega(1)\varphi(a)\) for all~\(\omega\in A_{*}^{+}\), \(a\in\mathcal{M}^{+}_{\varphi}\). \end{enumerate} \begin{definition}[\cite{Kustermans-Vaes:LCQG}*{Definition 4.1}] \label{def:Qnt_grp} A \emph{locally compact quantum group} (\emph{quantum group} from now onwards) is a bisimplifiable~\(\Cst\)\nobreakdash-bialgebra~\(\Qgrp{G}{A}\) with left and right invariant approximate KMS weights~\(\varphi\) and~\(\psi\), respectively. \end{definition} By~\cite{Kustermans-Vaes:LCQG}*{Theorem 7.14 \& 7.15}, the invariant weights~\(\varphi\) and~\(\psi\) are unique up to positive scalar factors; hence they are called the left and right \emph{Haar weights} for \(\G\). Moreover, there is a unique (up to isomorphism) Pontrjagin dual~\(\DuQgrp{G}{A}\) of~\(\G\), which is again a quantum group. Next we consider the GNS triple~\((\textup{L}^{2}(\G),\pi,\Lambda)\) for~\(\varphi\). There is an element \(\Multunit\in\U(\textup{L}^{2}(\G)\otimes\textup{L}^{2}(\G))\) satisfying the pentagon equation: \begin{equation} \label{eq:pentagon} \Multunit_{23}\Multunit_{12} = \Multunit_{12}\Multunit_{13}\Multunit_{23} \qquad \text{in \(\U(\textup{L}^{2}(\G)\otimes\textup{L}^{2}(\G)\otimes\textup{L}^{2}(\G)).\)} \end{equation} \(\Multunit\) is called a \emph{multiplicative unitary}. Furthermore,~\cite{Kustermans-Vaes:LCQG}*{Proposition 6.10} shows that \(\Multunit\) is manageable (in the sense of~\cite{Woronowicz:Multiplicative_Unitaries_to_Quantum_grp}*{Definition 1.2}). Also~\(\Multunit\) \emph{generates}~\(\G\) (or~\(\G\) is \emph{generated} by~\(\Multunit\)) in the following sense: \begin{enumerate} \item the dual multiplicative unitary \(\DuMultunit\mathrel{\vcentcolon=}\Flip\Multunit[*]\Flip\in\U(\textup{L}^{2}(\G)\otimes\textup{L}^{2}(\G))\) is also manageable. \item the slices of~\(\Multunit\) defined by \begin{alignat}{2} \label{eq:slice_first} \hat{A} &\mathrel{\vcentcolon=} \{(\omega\otimes\Id_{\textup{L}^{2}(\G)})\Multunit : \omega\in\Bound(\textup{L}^{2}(\G))_*\}^\CLS ,\\ \label{eq:slice_second} A &\mathrel{\vcentcolon=}\{(\Id_{\textup{L}^{2}(\G)}\otimes\omega)\Multunit : \omega\in\Bound(\textup{L}^{2}(\G))_*\}^\CLS , \end{alignat} are nondegenerate \(\Cst\)\nobreakdash-subalgebras of \(\Bound(\textup{L}^{2}(\G))\). \item \(\Multunit\in\U(A\otimes\hat{A})\subseteq\U(\textup{L}^{2}(\G)\otimes \textup{L}^{2}(\G))\). \item the comultiplications~\(\Comult[A]\) and \(\DuComult[A]\) are defined by \begin{equation} \label{eq:comults} \Comult[A](a)\mathrel{\vcentcolon=}\Multunit[*](1\otimes a)\Multunit , \qquad \DuComult[A](\hat{a})\mathrel{\vcentcolon=}\flip\big(\Multunit(\hat{a}\otimes 1)\Multunit[*]\big), \end{equation} for all~\(a\in A\), \(\hat{a}\in\hat{A}\). \end{enumerate} A general theory of locally compact quantum groups in the von\nobreakdash-Neumann algebraic framework has been developed by Kustermans and Vaes in~\cite{Kustermans-Vaes:LCQGvN}. Moreover, there is a nice interplay between \(\Cst\)\nobreakdash-algebraic and von\nobreakdash-Neumann algebraic locally compact quantum groups via multiplicative unitaries. We briefly recall this in the group case. Let~\(G\) be a locally compact group and~\(\mu\) be its left Haar measure.
Define \(\Multunit\in\U(L^{2}(G\times G,\mu\times \mu))\) by \(\Multunit\xi(x,y)\mathrel{\vcentcolon=}\xi(x,x^{-1}y)\) for all \(\xi\in L^{2}(G\times G,\mu\times \mu)\), \(x,y\in G\). A simple computation shows that~\(\Multunit\) is a multiplicative unitary. Furthermore, \begin{alignat*}{2} A &=\Contvin(G)=\{(\Id_{\textup{L}^{2}(G,\mu)}\otimes\omega)\Multunit : \omega\in\Bound(\textup{L}^{2}(G))_*\}^\CLS ,\\ M &=L^\infty(G)=\{(\Id_{\textup{L}^{2}(G,\mu)}\otimes\omega)\Multunit : \omega\in\Bound(\textup{L}^{2}(G))_*\}^{\textup{weak closure}}. \end{alignat*} Using~\eqref{eq:comults} we get~\(\Comult[M]\colon L^\infty(G)\to L^\infty(G\times G)\), given by \(\Comult[M](f)(x,y)=f(xy)\) for all~\(f\in L^\infty(G)\). The pair~\((M,\Comult[M])\) is a von\nobreakdash-Neumann bialgebra. The left Haar weight \(\varphi\) on~\(L^{\infty}(G)\) is given by integration with respect to the left Haar measure~\(\mu\) on~\(G\): \(\varphi(f)\mathrel{\vcentcolon=}\int f(h) \diff\mu (h)\) for~\(f\in L^\infty(G)^{+}\). Similarly, the right Haar weight~\(\psi\) on~\((M,\Comult[M])\) is obtained from the right Haar measure on \(G\). The pair \(\Qgrp{G}{M}\) is the von Neumann algebraic locally compact quantum group associated to~\(G\). Similarly,~\(\Comult[A]\colon \Contvin(G)\to\Contb(G\times G)\) is obtained by restricting~\(\Comult[M]\) to~\(\Contvin(G)\). The restrictions of the left and right Haar weights of~\((M,\Comult[M])\) to~\(\Contvin(G)\) define the left and right Haar weights on~\((A,\Comult[A])\), respectively. Thus~\((A,\Comult[A])\) is the \(\Cst\)\nobreakdash-algebraic version of~\(\G\). Let~\(\hat{A}=\Cred(G)\) and~\(\hat{M}=L(G)\). Let~\(\lambda\) be the left regular representation of \(G\) on~\(L^{2}(G,\mu)\). Define \(\DuComult[M]\colon L(G)\to L(G\times G)\) by~\(\DuComult[M](\lambda_g)=\lambda_g \otimes\lambda_g\), and \(\DuComult[A]\colon \Cred(G)\to\Mult(\Cred(G\times G))\) as the restriction of~\(\DuComult[M]\). The left and right invariant Haar weights on~\(\hat{M}\) are given by~\(\hat{\varphi}(\lambda(f))=f({1_{G}})\) for all~\(f\in\CompSupp(G)\) such that~\(\lambda(f)\in L(G)^{+}\). Then \(\DuQgrp{G}{A}\) and~\(\DuQgrp{G}{M}\) are the duals of~\(\G\) in the \(\Cst\)\nobreakdash-algebraic and von Neumann algebraic settings, respectively. \begin{definition} \label{def:cont_action} A \emph{\textup(right\textup) \(\Cst\)\nobreakdash-action} of~\(\G\) on a \(\Cst\)\nobreakdash-algebra~\(C\) is a morphism \(\gamma\colon C\to C\otimes A\) with the following properties: \begin{enumerate} \item \(\gamma\) is a comodule structure, that is, \begin{equation} \label{eq:right_action} (\Id_{C}\otimes\Comult[A])\gamma=(\gamma\otimes\Id_{A})\gamma ; \end{equation} \item \(\gamma\) satisfies the \emph{Podleś condition}: \begin{equation} \label{eq:Podles_cond} \gamma(C)\cdot(1_C\otimes A)=C\otimes A . \end{equation} \end{enumerate} \end{definition} Some authors demand that~\(\gamma\) be injective. Similarly, the von Neumann algebraic version~\(\Bialg{M}\) of \(\G\) acts on von Neumann algebras as well. \begin{definition} \label{def:vN_coact} A \emph{\textup(right\textup) von Neumann algebraic action} of~\(\G\) on a von Neumann algebra~\(N\) is a faithful, normal, unital \(^*\)\nb-{}homomorphism \(\gamma\colon N\to N\otimes M\) satisfying~\((\Id_{N}\otimes\Comult[M])\gamma=(\gamma\otimes\Id_{M})\gamma\).
\end{definition} \subsection{Bicrossed product of groups} \label{subsec:Bicross} The bicrossed product construction for matched pairs of locally compact (quantum) groups goes back to the work of Baaj and Vaes \cite{Baaj-Vaes:Double_cros_prod} and Vaes and Vainerman~\cite{Vaes-Vainerman:Extension_of_lcqg}. In this article, we restrict our attention to the bicrossed product construction for locally compact groups. \begin{definition}[\cite{Vaes-Vainerman:Extension_of_lcqg}*{Definition 4.7}] \label{def:mathced_pair} Let~\(G_{1}\), \(G_{2}\) and~\(G\) be locally compact groups with fixed left Haar measures. The pair \((G_{1},G_{2})\) is called a \emph{matched pair} if \begin{enumerate} \item there exist a homomorphism \(i\colon G_{1}\hookrightarrow G\) and an anti\nobreakdash-homomorphism \(j\colon G_{2}\hookrightarrow G\) with closed images, which are homeomorphisms onto these images; \item the map \(\theta\colon G_{1}\times G_{2}\to G\) defined by \(\theta((g,h))\mathrel{\vcentcolon=} i(g)j(h)\) is a homeomorphism onto an open subset~\(\Omega\) of~\(G\) having a complement of measure zero. \end{enumerate} \end{definition} This allows one to define an almost everywhere defined, measurable left action \((\alpha_{g})_{g\in G_{1}}\) of~\(G_{1}\) on~\(G_{2}\) and a right action \((\beta_{h})_{h\in G_{2}}\) of~\(G_{2}\) on \(G_{1}\), satisfying \(j(\alpha_{g}(h))i(\beta_{h}(g))=i(g)j(h)\) for almost all~\(g\in G_{1}\) and~\(h\in G_{2}\). By~\cite{Vaes-Vainerman:Extension_of_lcqg}*{Lemma 4.9}, the maps \(G_{1}\times G_{2}\to G_{2}\colon (g,h)\mapsto\alpha_{g}(h)\) and \(G_{1}\times G_{2}\to G_{1}\colon (g,h)\mapsto\beta_{h}(g)\) are measurable, defined almost everywhere, and satisfy the following relations: \begin{alignat}{2} \label{eq:alpha_comp} \alpha_{gs}(h) &=\alpha_{g}(\alpha_{s}(h)), &\qquad \beta_{h}(gs) &=\beta_{\alpha_{s}(h)}(g)\beta_{h}(s),\\ \label{eq:beta_comp} \beta_{ht}(g) &=\beta_{h}(\beta_{t}(g)), &\qquad \alpha_{g}(ht) &=\alpha_{\beta_{t}(g)}(h)\alpha_{g}(t), \end{alignat} for almost all~\(g, s\in G_{1}\), \(h, t\in G_{2}\). Also, \(\alpha_{g}(1_{G_{2}})=1_{G_{2}}\) and \(\beta_{h}(1_{G_{1}})=1_{G_{1}}\) for all~\(g\in G_{1}\), \(h\in G_{2}\). Then \(\alpha\colon L^{\infty}(G_{2})\to L^{\infty}(G_{1}\times G_{2})\) defined by \(\alpha(f)(g,s)\mathrel{\vcentcolon=} f(\alpha_{g}(s))\) is a (left) von Neumann algebraic action of the locally compact quantum group \(\G_{1}=\Bialg{L^{\infty}(G_{1})}\) on~\(L^{\infty}(G_{2})\). The von Neumann algebraic version of the bicrossed product~\(\Qgrp{G}{M}\) is given by \begin{equation} \label{eq:bicros_qnt_grp} M\mathrel{\vcentcolon=} \bigl(\alpha(L^{\infty}(G_{2}))(L(G_{1})\otimes 1)\bigr)'' , \qquad \Comult[M](z)\mathrel{\vcentcolon=}\Multunit[*](1\otimes z)\Multunit . \end{equation} Let~\(\lambda\) be the left regular representation of~\(G_{1}\), and let~\(\hat{\alpha}\) be the dual action of~\(L(G_{1})\) on the crossed product~\(M\). Then~\(\hat{\alpha}\colon M\to L(G_{1})\otimes M\) is defined by~\(\hat{\alpha}(\alpha(y))\mathrel{\vcentcolon=} 1_{L(G_{1})}\otimes\alpha(y)\) and \(\hat{\alpha}(x\otimes 1_{L^\infty(G_2)})\mathrel{\vcentcolon=}\DuComult[L(G_{1})](x)\otimes 1_{L^\infty(G_2)}\) for all~\(x\in L(G_{1})\) and~\(y\in L^{\infty}(G_2)\). Let~\(\hat{\varphi}_{1}\) and~\(\varphi_{2}\) be the left Haar weights on \(L(G_{1})\) and~\(L^{\infty}(G_{2})\), respectively. By~\cite{Vaes-Vainerman:Extension_of_lcqg}*{Definition 1.13}, the left Haar weight~\(\varphi\) on~\(\G\) is given by \( \varphi\mathrel{\vcentcolon=} \varphi_{2}\alpha^{-1}(\hat{\varphi}_{1}\otimes\Id\otimes\Id)\hat{\alpha} \).
A simple computation shows, for any~\(f\in\mathcal{N}_{\hat{\varphi}_{1}}\) and \(\eta\in\mathcal{N}_{\varphi_{2}}\), \begin{equation} \label{eq:Haar_wt_prod} \varphi\bigl(\alpha(\eta)(\lambda(f)\otimes 1)\bigr)=\hat{\varphi}_{1}(\lambda(f))\varphi_{2}(\eta) =\varphi\bigl((\lambda(f)\otimes 1)\alpha(\eta)\bigr) . \end{equation} Let~\(\Hils=L^{2}(G_{1}\times G_{2})\) be the Hilbert space of square integrable functions with respect to the product of the left Haar measures of~\(G_{1}\) and~\(G_{2}\). Using \cite{Baaj-Skandalis-Vaes:Non-semi-regular}*{Definition 3.3} for the left Haar measure, we obtain a multiplicative unitary~\(\Multunit\in\U(\Hils\otimes\Hils)\) for~\(\G\) defined by \begin{equation} \label{eq:Multunit_bicros} \Multunit\xi(g,s,h,t)\mathrel{\vcentcolon=}\xi(\beta_{\alpha_{g}(s)^{-1}t}(h)g,s,h,\alpha_{g}(s)^{-1}t), \end{equation} for~\(\xi\in L^{2}(G_{1}\times G_{2}\times G_{1}\times G_{2})\), and for almost all~\(g, h \in G_{1}\), \(s, t\in G_{2}\). Finally, we recall the \(\Cst\)\nobreakdash-algebraic version of~\(\G\) from \cite{Baaj-Skandalis-Vaes:Non-semi-regular}*{Section 3}. Equip the quotient space~\(G_{1}\backslash G\) with its canonical invariant measure class. Then the embedding \(G_{2}\to G_{1}\backslash G\) identifies \(G_{2}\) with a Borel subset of \(G_{1}\backslash G\) whose complement has measure zero. Then~\cite{Baaj-Skandalis-Vaes:Non-semi-regular}*{Proposition 3.2} gives an isomorphism between \(L^{\infty}(G_{2})\) and \(L^{\infty}(G_{1}\backslash G)\); hence we can restrict~\(\alpha\) to \(\Contvin(G_{1}\backslash G)\). The \(\Cst\)\nobreakdash-algebraic version~\(\Bialg{A}\) of~\(\G\) is given by~\cite{Baaj-Skandalis-Vaes:Non-semi-regular}*{Proposition 3.6}: \begin{equation} \label{eq:Cst_bicros} A\mathrel{\vcentcolon=} \alpha(\Contvin(G_{1}\backslash G))\cdot (\lambda(\Cred(G_{1}))\otimes 1), \qquad \alpha\in\Mor(\Contvin(G_{2}), A) . \end{equation} The dual~\(\DuQgrp{G}{A}\) is obtained by exchanging the roles of~\(G_{1}\) and~\(\alpha\) with those of \(G_{2}\) and \(\beta\), respectively. \section{Existence of C*-actions of bicrossed products on spaces} \label{sec:Podles_cond} Let~\(G_{1}\) and \(G_{2}\) be locally compact groups such that \((G_{1},G_{2})\) forms a matched pair, and let~\(\Qgrp{G}{A}\) be the associated (\(\Cst\)\nobreakdash-algebraic) bicrossed product quantum group. From now onwards we assume that~\(G_1\) is abelian. \begin{theorem} \label{the:Cst_coact} There is a \(\Cst\)\nobreakdash-action of~\(\G\) on~\(\Contvin(\widehat{G_{1}})\). \end{theorem} Throughout,~\(\lambda\) denotes the (right) regular representation of~\(G_{1}\): \(\lambda(f)\xi'(h)\mathrel{\vcentcolon=}\int f(g)\xi'(hg) \diff\mu_{1}(g)\), for~\(f\in\CompSupp(G_{1})\) and \(\xi'\in L^{2}(G_{1})\). Clearly, we have an element~\(i\in\Mor(\Cred(G_1),A)\); its extension, denoted by~\(i\) again, to~\(\Mult(\Cred(G_1))\) is given by \(i(x)\mathrel{\vcentcolon=} x\otimes 1\) for all~\(x\in\Mult(\Cred(G_{1}))\). Hence, \(\gamma\mathrel{\vcentcolon=}\Comult[A]\circ i\) is an element of \(\Mor(\Cred(G_{1}),A\otimes A)\). In order to interpret \(\gamma\) as the desired \(\Cst\)\nobreakdash-action in Theorem~\ref{the:Cst_coact}, the following proposition will be crucial.
\begin{proposition} \label{prop:def_gamma} \(\gamma\) is a \(^*\)\nb-{}homomorphism from~\(\Cred(G_{1})\) to~\(\Mult(\Cred(G_{1})\otimes A)\), given by \begin{equation} \label{eq:def_gamma} \gamma(\lambda(f))\xi(g,h,t)\mathrel{\vcentcolon=} \int f(z)\xi(g\beta_{\alpha_{h}(t)}(z),hz,t) \diff\mu_{1}(z) \end{equation} for all~\(f\in\CompSupp(G_{1})\) and~\(\xi\in L^{2}(G_{1}\times G_{1}\times G_{2})\). \end{proposition} \begin{proof} The adjoint~\(\Multunit[*]\in\U(L^{2}(G_{1}\times G_{2}\times G_{1}\times G_{2}))\) of the multiplicative unitary~\(\Multunit\) of~\(\G\) in~\eqref{eq:Multunit_bicros} is given by \[ \Multunit[*]\xi'(g,s,h,t)\mathrel{\vcentcolon=}\xi'(g',s,h,\alpha_{g'}(s)t) \qquad\text{for~\(\xi'\in L^{2}(G_{1}\times G_{2}\times G_{1}\times G_{2})\),} \] where~\(g'\mathrel{\vcentcolon=}\beta_{t}(h)^{-1}g\), for all~\(g,h\in G_{1}\) and~\(s,t\in G_{2}\). Using the definition~\eqref{eq:comults} of the comultiplication and the property~\eqref{eq:beta_comp} of \(\beta\) we obtain \begin{align} \label{eq:gamma} \gamma(\lambda(f)) \xi'(g,s,h,t) \nonumber &= \bigl(\Comult[A](\lambda(f)\otimes 1_{L^{2}(G_{2})})\bigr)\xi'(g,s,h,t)\\ \nonumber &=\bigl(\Multunit[*](1_{L^{2}(G_{1}\times G_{2})}\otimes\lambda(f)\otimes 1_{L^{2}(G_{2})})\Multunit\bigr)\xi'(g,s,h,t)\\ \nonumber &= \bigl((1_{L^{2}(G_{1}\times G_{2})}\otimes\lambda(f)\otimes 1_{L^{2}(G_{2})})\Multunit\xi'\bigr)(g',s,h,\alpha_{g'}(s)t)\\ \nonumber &= \int f(z)\bigl(\Multunit\xi'\bigr)(g',s,hz,\alpha_{g'}(s)t)\diff\mu_{1}(z)\\ &=\int f(z)\xi'(g\beta_{\alpha_{h}(t)}(z),s,hz,t)\diff\mu_{1}(z). \end{align} Since \(G_{1}\) is abelian, we identify~\(\Mult(\Cred(G_{1})\otimes A)\) with \(\Contb(\widehat{G_{1}},\Mult(A))\subset\Bound(L^{2}(\widehat{G_{1}}\times G_{1}\times G_{2}))\) using the Fourier transform. Let~\(f\colon \widehat{G_1}\to\Mult(A)\) be a strictly continuous function. Define an operator~\(M_{f}\) acting on \(L^{2}(\widehat{G_1}\times G_{1}\times G_{2})\) by \[ M_{f}(\xi_{1}\otimes\xi_{2})(\hat{g},h,t) \mathrel{\vcentcolon=} \xi_{1}(\hat{g}) (f(\hat{g})\xi_{2})(h,t) \quad\text{for all~\(\xi_{1}\in L^{2}(\widehat{G_1})\), \(\xi_{2}\in L^{2}(G_{1}\times G_{2})\);} \] this defines an element \(M_{f}\in\Contb(\widehat{G_1},\Mult(A))\cong\Mult(\Cred(G_{1})\otimes A)\). Recall the dual pairing~\(\langle\cdot,\cdot\rangle\colon G_{1}\times\widehat{G_1}\to \mathbb{T}\) defined by~\(\langle h,\hat{g}\rangle\mathrel{\vcentcolon=} \hat{g}(h)\) for all \(h\in G_{1}\) and~\(\hat{g}\in\widehat{G_1}\). Let~\(D\) be a~\(\Cst\)\nobreakdash-algebra. For an element \(F\in L^{1}(G_{1},D)\), the Fourier transform~\(\widehat{F}\) is defined by \(\widehat{F}(\hat{g})\mathrel{\vcentcolon=}\int F(h)\langle h, \hat{g}\rangle \diff\mu_{1}(h)\) for \(\hat{g}\in\widehat{G_{1}}\), and \(\widehat{F}\) is an element of~\(\Contvin(\widehat{G_{1}},D)\). Then, for a fixed~\(z\in G_{1}\), consider~\(T_{z}\colon\widehat{G_{1}}\to\Mult(A)\) defined by \[ T_{z}(\hat{g})\xi_{2}(h,t)\mathrel{\vcentcolon=} \langle \beta_{\alpha_{h}(t)}(z),\hat{g}\rangle\xi_{2}(hz,t) \qquad \text{for all~\(\xi_{2}\in L^{2}(G_{1}\times G_{2})\).} \] Clearly, \(\hat{g}\mapsto T_{z}(\hat{g})\) is continuous in the strict topology. Hence, for any~\(f\in \CompSupp(G_{1})\), \(\int f(z)T_{z} \diff\mu_{1}(z)\) defines an element of~\(\Contb(\widehat{G_1},\Mult(A)) \cong\Mult(\Cred(G_{1})\otimes A)\).
Using the Fourier transform in the first leg of~\eqref{eq:gamma} we observe that \[ \gamma(\lambda(f))=\Flip_{12} \Bigl(\int f(z) \bigl(\Id_{L^{2}(G_{2})}\otimes T_{z}\bigr) \diff\mu_{1}(z)\Bigr) \Flip_{12} \] in~\(\Bound(L^{2}(\widehat{G_1}\times G_{2}\times G_{1}\times G_{2}))\). Finally, putting~\(s=1_{G_{2}}\) in~\eqref{eq:gamma} we obtain~\eqref{eq:def_gamma}. \end{proof} We gather some standard facts related to the Fourier transform in the next lemma. \begin{lemma} \label{lemm:var_prop} Let \(K\) be a compact subset of~\(G_{1}\). Define \begin{align*} S(K,D) &\mathrel{\vcentcolon=} \{F\in L^{1}(G_{1},D) \text{ \(\mid\)~\(\textup{supp}(F)\subset K\)}\}\subset L^{1}(G_{1},D);\\ \widehat{S(K,D)} &\mathrel{\vcentcolon=}\{\widehat{F} \text{ \(\mid\)~\(F\in S(K,D)\)}\}\subset \Contvin(\widehat{G_{1}},D). \end{align*} We have the following: \begin{enumerate} \item\label{eq:can_der} Suppose~\(\widehat{G_{1}}\) is a Lie group, and let \(\delta=(\delta_{1},\cdots,\delta_{n})\) be the canonical derivations on \(\widehat{G_{1}}\). Then \(\widehat{S(K,D)}\subset\dom(\tilde{\delta})\) and \(\tilde{\delta}\bigl(\widehat{S(K,D)}\bigr)\subset \widehat{S(K,D)}\), where~\(\tilde{\delta}_{i}\mathrel{\vcentcolon=}\delta_{i}\otimes \Id_{D}\) for~\(i=1,\cdots , n\), and~\(\tilde{\delta}\mathrel{\vcentcolon=}(\tilde{\delta}_{1},\cdots,\tilde{\delta}_{n})\). \item\label{eq:isometry} \(S(K,D)\subset L^{2}(G_{1},D)\) and \(\widehat{S(K,D)}\subset L^{2}(\widehat{G_{1}},D)\). \item\label{eq:morph} Let~\(D'\) be a~\(\Cst\)\nobreakdash-algebra and \(\rho\colon D\to D'\) be a completely bounded map. Then \((\Id\otimes\rho)S(K,D)\subset S(K,D')\) and \((\Id\otimes\rho)\widehat{S(K,D)}\subset\widehat{S(K,D')}\). \end{enumerate} \end{lemma} \begin{proof} By definition,~\(\delta_{i}(\lambda_{g})\mathrel{\vcentcolon=}\rho_{i}(g)\lambda_{g}\), where \(\lambda_{g}\in\Mult(\Cred(G_{1}))\cong\Contb(\widehat{G_{1}})\) and~\(\rho_{i}\in\widehat{G_{1}}\) for~\(i=1,\cdots, n\); this gives~\ref{eq:can_der}. The second fact follows because the Fourier transform is an \(L^{2}\)\nobreakdash-isometry. The last fact is trivial. \end{proof} For any~\(f\in\CompSupp(G_{1})\) and~\(\eta\in\CompSupp(G_{2})\) define~\(\hat{\pi}_{1}(f)\mathrel{\vcentcolon=} \lambda(f)\otimes 1\) and~\(\pi_{2}(\eta) \mathrel{\vcentcolon=}\alpha(\eta)\). Therefore, \(\hat{\pi}_{1}(f)\pi_{2}(\eta)\in A\). \begin{proposition} \label{prop:str_podles} Define \(w\mathrel{\vcentcolon=}\gamma(\hat{\pi}_{1}(f))(1\otimes\pi_{2}(\eta)) \in S(K,\Mult(A))\), where \(K=\textup{supp}(f)\). Then the function \(p\mapsto (\Id\otimes\varphi)(\hat{w}(p)^{*}\hat{w}(p))\) belongs to \(\Contvin(\widehat{G_{1}})\).
\end{proposition} \begin{proof} By definition, \[ w^{*}w =(1\otimes\pi_{2}(\eta^{*}))\gamma\bigl(\hat{\pi}_{1}(\abs{f}^{2})\bigr)(1\otimes\pi_{2}(\eta)) . \] Using~\eqref{eq:Haar_wt_prod} we get \begin{align*} (\Id\otimes\varphi)(w^{*}w) &=(\Id\otimes\varphi) \bigl((1\otimes\pi_{2}(\eta^{*}))\gamma(\hat{\pi}_{1}(\abs{f}^{2}))(1\otimes\pi_{2}(\eta))\bigr)\\ &=(\Id\otimes\varphi) \bigl((1\otimes\pi_{2}(\abs{\eta}^{2}))\gamma(\hat{\pi}_{1}(\abs{f}^{2}))\bigr) . \end{align*} By virtue of Proposition~\ref{prop:def_gamma} and~\eqref{eq:Haar_wt_prod}, for~\(p\in\widehat{G_{1}}\), we get \begin{align*} (\Id\otimes\varphi)(\hat{w}(p)^{*}\hat{w}(p)) &= \iint \abs{f}^{2}(g)\abs{\eta}^{2}(s)\langle \beta_{s}(g), p\rangle \diff\mu_{1}(g)\diff\mu_{2}(s)\\ &= \iint \abs{f}^{2}(\beta_{s^{-1}}(g'))\abs{\eta}^{2}(s)\theta(g',s)\langle g',p\rangle \diff\mu_{1}(g')\diff\mu_{2}(s), \end{align*} where~\(g'=\beta_{s}(g)\) and~\(\theta(g,s)\mathrel{\vcentcolon=}\abs{\frac{\diff}{\diff\mu_{1}}\beta_{s^{-1}}(g)}\). Define~\(G_{s}(g)\mathrel{\vcentcolon=}\abs{f}^{2}(\beta_{s^{-1}}(g))\abs{\eta}^{2}(s)\theta(g,s)\). Then \[ (\Id\otimes\varphi)(\hat{w}(p)^{*}\hat{w}(p))=\int\widehat{G_{s}}(p)\diff\mu_{2}(s). \] A simple computation gives \begin{align*} \norm{G_{s}}_{1} = \int\norm{G_{s}(g)} \diff\mu_{1}(g) &=\int\abs{f}^{2}(\beta_{s^{-1}}(g))\abs{\eta}^{2}(s)\theta(g,s) \diff\mu_{1}(g)\\ &=\int\abs{f}^{2}(g)\abs{\eta}^{2}(s) \diff\mu_{1}(g)= (\norm{f}_{2})^{2}\abs{\eta}^{2}(s). \end{align*} Therefore,~\(\widehat{G_{s}}\in\Contvin(\widehat{G_{1}})\) for almost all~\(s\in G_{2}\). Also,~\(\int\abs{\eta}^{2}(s) \diff\mu_{2}(s)<\infty\). By the dominated convergence theorem, for any sequence~\(\{p_{n}\}\subset\widehat{G_{1}}\) with \(p_{n}\to\infty\) as~\(n\to\infty\), we have \[ \lim_{n\to\infty}\int\widehat{G_{s}}(p_{n})\diff\mu_{2}(s) =\int\bigl(\lim_{n\to\infty}\widehat{G_{s}}(p_{n})\bigr) \diff\mu_{2}(s) =0. \qedhere \] \end{proof} \begin{proof}[Proof of Theorem~\textup{\ref{the:Cst_coact}}] Proposition~\ref{prop:def_gamma} and Proposition~\ref{prop:str_podles} show that \(\gamma\) is an element of \(\Mor(\Contvin(\widehat{G_{1}}),\Contvin(\widehat{G_{1}})\otimes A)\). The coassociativity of~\(\Comult[A]\) gives~\eqref{eq:right_action} for~\(\gamma\). Let~\((L^{2}(\G),\pi,\Lambda)\) be the GNS triple for the left Haar weight \(\varphi\) in~\eqref{eq:Haar_wt_prod}. For any~\(v\in L^{2}(\G)\), define the operator~\(\Theta_{v}\colon\mathbb C\to L^{2}(\G)\), \(\Theta_{v}(c)\mathrel{\vcentcolon=} cv\). Let~\((e_{i})_{i\in\mathbb N}\) be an orthonormal basis of~\(L^{2}(\G)\). For~\(w\in S(K,\Mult(A))\) as in Proposition~\ref{prop:str_podles}, define \[ x_{i}^{*}(p)\mathrel{\vcentcolon=} (\Id\otimes\Theta_{e_{i}}^{*})(\Id\otimes\Lambda)\Comult[A](\hat{w}(p)) \in\Mult(A) \qquad\text{for~\(p\in\widehat{G_{1}}\).} \] Also, for~\(q\in\mathcal{N}_{\varphi}\) define~\(q_{i}\mathrel{\vcentcolon=} (\Id\otimes\Theta_{e_{i}}^{*})(\Id\otimes\Lambda)\Comult[A](q)\in\Mult(A)\). We compute \begin{align*} \sum_{i=1}^{\infty}x_{i}(p)q_{i} &= \sum_{i=1}^{\infty}\bigl((\Id\otimes\Lambda)\Comult[A](\hat{w}(p))\bigr)^{*}(1\otimes\Theta_{e_{i}}\Theta_{e_{i}}^{*}) (\Id\otimes\Lambda)\Comult[A](q)\\ &= \bigl((\Id\otimes\Lambda)\Comult[A](\hat{w}(p))\bigr)^{*}(\Id\otimes\Lambda)\Comult[A](q)\\ &=\varphi(\hat{w}(p)^{*}q)1_{\Mult(A)} . \end{align*} By virtue of Proposition~\ref{prop:str_podles}, \(F_{N}(p)\mathrel{\vcentcolon=}\sum_{i=1}^{N}x_{i}(p)q_{i}\) is strictly convergent.
Hence, for any given~\(q'\in A\), the sequence~\(\{F_{N}(1\otimes q')\}\) converges uniformly over every compact subset of~\(\widehat{G_{1}}\). In order to establish the Podle\'s condition~\eqref{eq:Podles_cond} for~\(\gamma\), we need to show that \(\{F_{N}(1\otimes q')\}\) converges uniformly over all of~\(\widehat{G_{1}}\) for every~\(q'\in A\). By an argument similar to the one used in~\cite{Vaes-VanDaele:Hopf_Cstalg}*{Proposition 5.11}, Proposition~\ref{prop:str_podles} gives that \(\sum_{i=1}^{n}x_{i}(p)x_{i}^{*}(p)\) converges strictly to~\(\varphi(\hat{w}(p)^{*}\hat{w}(p))1_{\Mult(A)}\). Similarly, \(\sum_{i=1}^{n}q_{i}^{*}q_{i}\) is bounded and strictly convergent; say \(\norm{\sum_{i=1}^{n}q_{i}^{*}q_{i}}<C^{2}\) for all~\(n\). Given~\(\epsilon> 0\), we can choose a compact subset \(K'\) of~\(\widehat{G_{1}}\) such that~\((\Id\otimes\varphi)(\hat{w}(p)^{*}\hat{w}(p)) \leq(\frac{\epsilon}{C})^{2}\) for all~\(p\notin K'\). Hence~\(\norm{\sum_{i=m}^{n}x_{i}(p)x_{i}^{*}(p)}\leq(\frac{\epsilon}{C})^{2}\) for all \(p\notin K'\), and for all~\(m, n\). Now choose~\(N_{0}\) such that for all~\(m,n\geq N_{0}\), \(\norm{(F_{m}-F_{n})(p)q'}<\epsilon\norm{q'}\) for~\(p\in K'\). Finally, for all~\(m,n\geq N_{0}\) and \(p\notin K'\), \[ \left\Vert(F_{m}-F_{n})(p)q'\right\Vert \leq\left\Vert\sum_{i=m}^{n}x_{i}(p)x_{i}^{*}(p)\right\Vert^{\frac{1}{2}} \left\Vert\sum_{i=m}^{n} q_{i}^{*}q_{i}\right\Vert^{\frac{1}{2}}\norm{q'} <\epsilon\norm{q'} . \] Hence~\(\{F_{N}(1\otimes q')\}\) is a Cauchy sequence in norm for every~\(q'\in A\). \end{proof} \section{Properties of bicrossed product C*-actions} \label{sec:properties} Let~\(G_{1}\), \(G_{2}\), and \(\Qgrp{G}{A}\) be as before, so that \(G_{1}\) is abelian. In this section we shall discuss various properties of the \(\Cst\)\nobreakdash-action \(\gamma\) of~\(\G\) on \(\Contvin(\widehat{G_1})\) constructed in Theorem~\ref{the:Cst_coact}. Recall that a right action~\(\gamma\) of a von Neumann algebraic quantum group~\(\Qgrp{G}{M}\) on a von Neumann algebra \(N\) is called \emph{ergodic} if \(N^{\gamma}\mathrel{\vcentcolon=}\{x\in N :\text{ \(\gamma(x)=x\otimes 1_{M}\)}\}\) is equal to~\(\mathbb C\cdot 1_{N}\). \begin{proposition} \label{prop:Ergodic} The von Neumann algebraic action \(\gamma\) of~\(\G\) on~\(L(G_1)\) is ergodic. \end{proposition} \begin{proof} Let~\(\Bialg{M}\) be the von Neumann algebraic version of~\(\G\). By construction, \(\gamma\) is obtained from the comultiplication \(\Comult[A]\) of the \(\Cst\)\nobreakdash-algebraic version of \(\G\). Since \(\Comult[A]\) extends uniquely to the comultiplication~\(\Comult[M]\) on~\(M\) by~\eqref{eq:bicros_qnt_grp}, \(\gamma\) also extends, in a similar way, to a von Neumann algebraic action (denoted again by \(\gamma\)) of \(\G\) on~\(L(G_1)\). Now the ergodicity of~\(\Comult[M]\) (see~\cite{Kustermans-Vaes:LCQG}*{Result 5.13} or \cite{Meyer-Roy-Woronowicz:Homomorphisms}*{Theorem 2.1}) implies the same for~\(\gamma\). \end{proof} \subsection{Faithfulness} Motivated by~\cite{Goswami-Joardar:Rigidity_CQG_act}*{Definition 4.5}, we propose the following definition as a possible generalisation of faithfulness of actions to locally compact quantum groups acting on \(\Cst\)\nobreakdash-algebras. \begin{definition} \label{def:coact_faithful} A \(\Cst\)\nobreakdash-action \(\gamma\colon C\to C\otimes A\) of~\(\Qgrp{G}{A}\) on a~\(\Cst\)\nobreakdash-algebra \(C\) is called \emph{faithful} if the \(^*\)\nb-{}algebra generated by~\(\{(\omega\otimes\Id_{A})\gamma(c) : \text{\(\omega\in C'\), \(c\in C\)}\}\) is strictly dense in~\(\Mult(A)\).
\end{definition} \begin{example} \label{ex:comult_faithful} In particular, the comultiplication map~\(\Comult[A]\) is a \(\Cst\)\nobreakdash-action of~\(\G\) on~\(A\). Given any~\(\omega\in A'\) and~\(a\in A\), define \(\omega\cdot a\in A'\) by~\(\omega\cdot a(b)\mathrel{\vcentcolon=}\omega(ab)\) for~\(b\in A\). The space~\(\{\omega\cdot a :\text{\(\omega\in A'\), \(a\in A\)}\}\) is weak~\(^*\)\nb-{}dense in~\(A'\). The cancellation property~\eqref{eq:cancellation} of \(\Comult[A]\) shows that \(\{(\omega\otimes\Id_{A})\Comult[A](a) : \text{\(\omega\in A'\), \(a\in A\)}\}\) is norm dense in~\(A\); hence \(\Comult[A]\) is a faithful \(\Cst\)\nobreakdash-action. \end{example} \begin{theorem} \label{the:faithful_Cst_coact} The \(\Cst\)\nobreakdash-action~\(\gamma\) of~\(\G\) on~\(\Contvin(\widehat{G_{1}})\) is faithful if and only if~\(\beta\) is non\nobreakdash-trivial. \end{theorem} \begin{proof} Assume first that \(\beta\) is trivial. Then Proposition~\ref{prop:def_gamma} gives \[ \gamma(\lambda(f))=\Comult[\Cred(G_{1})](\lambda (f))\otimes 1 \quad\text{for all \(f\in\CompSupp(G_{1})\),} \] where \(\Comult[\Cred(G_{1})]\) denotes the comultiplication on~\(\Cred(G_{1})\). By Example~\ref{ex:comult_faithful} we observe that the \(^*\)\nb-{}algebra generated by \(\{(\omega\otimes\Id_{A})\gamma(\lambda(f)) :\text{ \(\omega\in \Cred(G_{1})'\), \(f\in\CompSupp(G_{1})\)}\}\) is strictly dense in~\(i(\Mult(\Cred(G_{1})))\), which is a proper subset of~\(\Mult(A)\); hence \(\gamma\) is not faithful. Therefore, by contraposition, the faithfulness of \(\gamma\) implies the nontriviality of \(\beta\). Conversely, assume that~\(\beta\) is non\nobreakdash-trivial. By virtue of \cite{Baaj-Skandalis-Vaes:Non-semi-regular}*{Proposition 3.6}, it is enough to show that the set \(\{(\omega\otimes\Id_{A})\gamma(\lambda(f)) : \text{ \(\omega\in \Cred(G_{1})'\), \(f\in\CompSupp(G_{1})\)}\}\) is norm dense in \(\alpha(\Contvin(G_{2}))\cdot (\Cred(G_{1})\otimes 1)\). Let~\(K\) and~\(K'\) be compact subsets of~\(G_{1}\) such that \(\mu_{1}(K)\neq 0\) and~\(\mu_{1}(K')\neq 0\), and let~\(\chi\) and \(\chi'\) be the characteristic functions of~\(K\) and~\(K'\), respectively. For a given~\(f\in\CompSupp(G_{1})\) define \[ A_{\chi,\chi'}\mathrel{\vcentcolon=}(\omega_{\chi,\chi'}\otimes\Id_{A})\gamma(\lambda(f)), \] where~\(\omega_{\chi,\chi'}\) denotes the vector functional associated to \(\chi\) and~\(\chi'\). For all~\(\xi\in L^{2}(G_{1}\times G_{2})\) we get \begin{align*} A_{\chi,\chi'}\xi(h,t) &= \int\conj{\chi(g)}\bigl(\gamma(\lambda(f))(\chi'\otimes\xi)\bigr)(g,h,t)\diff\mu_{1}(g) \\ &= \iint \conj{\chi(g)}f(z)\chi'(g\beta_{\alpha_{h}(t)}(z))\xi(hz,t)\diff\mu_{1}(g)\diff\mu_{1}(z)\\ &= \int \mu_{1}\bigl(K\cap\beta_{\alpha_{h}(t)}(z)^{-1}K'\bigr) f(z)\xi(hz,t)\diff\mu_{1}(z) . \end{align*} Let~\(\{K_{n}\}_{n\in\mathbb{N}}\) be an increasing sequence of compact subsets of~\(G_{1}\) growing up to the whole group~\(G_{1}\), and let~\(\chi_{n}\) denote the characteristic function of \(K_{n}\) for~\(n\in\mathbb{N}\). By the dominated convergence theorem, \[ \lim_{n\to\infty}A_{\chi_{n},\chi'}\xi(h,t) = \int\mu_{1}\bigl(\beta_{\alpha_{h}(t)}(z)^{-1}K'\bigr) f(z)\xi(hz,t)\diff\mu_{1}(z). \] The invariance of the Haar measure~\(\mu_{1}\) gives~\(\mu_{1}(\beta_{\alpha_{h}(t)}(z)^{-1}K') =\mu_{1}(K')\) for all~\(h,z\in G_{1}\), \(t\in G_{2}\). Therefore, \[ \lim_{n\to\infty}A_{\chi_{n},\chi'}\xi(h,t) = \mu_{1}(K')\int f(z)\xi(hz,t)\diff\mu_{1}(z) =\mu_{1}(K')(\lambda(f)\otimes 1)\xi(h,t). \] Thus \(\lambda(f)\otimes 1\) lies in the strict closure of the linear span of the slices of \(\gamma\) for every \(f\in\CompSupp(G_{1})\); letting \(f\) run through an approximate identity concentrated at a point \(g_{0}\in G_{1}\), so does \(\lambda_{g_{0}}\otimes 1\).
Therefore, for any~\(\eta,\eta'\in L^{2}(G_{1})\), the operator \(B_{\eta,\eta',g_{0}}\mathrel{\vcentcolon=}(\omega_{\eta,\eta'}\otimes\Id_{A})\gamma(\lambda_{g_{0}})(\lambda_{g_{0}^{-1}}\otimes 1)\) lies in the strict closure of the \(^*\)\nb-{}algebra generated by the slices of \(\gamma\). Next we compute, for \(\xi\in L^{2}(G_{1}\times G_{2})\), \begin{align*} B_{\eta,\eta',g_{0}}\xi(h,t) &= \bigl((\omega_{\eta,\eta'}\otimes\Id_{A})\gamma(\lambda_{g_{0}})\bigr)\bigl((\lambda_{g_{0}^{-1}}\otimes 1)\xi\bigr)(h,t)\\ &= \int \conj{\eta(g)}\eta'(g\beta_{\alpha_{h}(t)}(g_{0}))\bigl((\lambda_{g_{0}^{-1}}\otimes 1)\xi\bigr)(hg_{0},t)\diff\mu_{1}(g)\\ &= \int \conj{\eta(g)}\eta'(g\beta_{\alpha_{h}(t)}(g_{0}))\diff\mu_{1}(g)\, \xi(h,t). \end{align*} Since \(\beta\) is non\nobreakdash-trivial, varying~\(\eta\), \(\eta'\) and \(g_{0}\) and using the Stone-Weierstrass theorem, we see that the closed linear span of the functions \(x\mapsto \int\conj{\eta(g)}\eta'(g\beta_{x}(g_{0}))\diff\mu_{1}(g)\), \(x\in G_{2}\), is norm dense in~\(\Contvin(G_{2})\). Therefore, the closed linear span of the operators~\(B_{\eta,\eta',g_{0}}\) contains~\(\alpha(\Contvin(G_{2}))\), which completes the proof. \end{proof} \subsection{Isometry} \label{sec:Isometry} We recall the definition of an isometric action from \cite{Goswami:Qnt_isometry} for compact quantum groups. In analogy with this, it is natural to make the following definition of an isometric action in the locally compact set-up: \begin{definition} Let \(\gamma\) be a \(\Cst\)\nobreakdash-action of a locally compact quantum group \(\Qgrp{G}{A}\) on \(\Contvin(X)\), where \(X\) is a smooth Riemannian (possibly non\nobreakdash-compact) manifold with the Hodge Laplacian \(\mathcal{L}=-d^*d\), \(d\) being the de\nobreakdash-Rham differential operator. We say that \(\gamma\) is \emph{isometric} if, for every bounded linear functional \(\omega\) on \(\Mult(A)\), the map \(({\rm id} \otimes \omega) \circ \gamma\) maps \(\Contb^\infty(X)\) to itself and commutes with \(\mathcal{L}\) on that subspace. \end{definition} Just as in \cite{Goswami-Joardar:Rigidity_CQG_act}, it is easy to prove that for any isometric action \(\gamma\), any smooth vector field \(\chi\) on \(X\) and \(f, \phi \in \Contb^{\infty}(X)\), the operators \((\chi \otimes {\rm id})(\gamma(f))\) and \(\gamma(\phi)\) commute. \begin{proposition} \label{Prop:Isometry} Assume~\(\widehat{G_1}\) is a Lie group. Then the~\(\Cst\)\nobreakdash-action \(\gamma\) of~\(\G\) on~\(\Contvin(\widehat{G_1})\) can be isometric only when either~\(\alpha\) or \(\beta\) is trivial. \end{proposition} \begin{proof} As noted above, isometry of~\(\gamma\) implies that the operators \((\delta_{i}\otimes\Id_{A})\gamma(\lambda_{g_{1}})\) and \(\gamma(\lambda_{g_{2}})\) commute for all~\(g_{1},g_{2}\in G_{1}\) and all canonical derivations~\(\delta_{i}\) on~\(\widehat{G_{1}}\). Let~\(\xi\in L^{2}(G_{1}\times G_{1}\times G_{2})\). Using Proposition~\ref{prop:def_gamma}, we compute \begin{align*} L &=(\delta_{i}\otimes\Id_{A})\gamma(\lambda_{g_{1}})\gamma(\lambda_{g_{2}})\xi(g,h,t)\\ &= \rho_{i}(\beta_{\alpha_{h}(t)}(g_{1}))\gamma(\lambda_{g_{2}})\xi\big(g\beta_{\alpha_{h}(t)}(g_{1}),hg_{1},t\big)\\ &= \rho_{i}(\beta_{\alpha_{h}(t)}(g_{1}))\xi\big(g\beta_{\alpha_{h}(t)}(g_{1})\beta_{\alpha_{hg_{1}}(t)}(g_{2}), hg_{1}g_{2},t\big) . \end{align*} Using~\eqref{eq:beta_comp} and the commutativity of~\(G_{1}\) we get \[ L=\rho_{i}(\beta_{\alpha_{h}(t)}(g_{1}))\xi\big(g\beta_{\alpha_{h}(t)}(g_{1}g_{2}), hg_{1}g_{2},t\big).
\] A similar computation gives \[ R =\gamma(\lambda_{g_{2}})(\delta_{i}\otimes\Id_{A})\gamma(\lambda_{g_{1}})\xi(g,h,t) =\rho_{i}(\beta_{\alpha_{hg_{2}}(t)}(g_{1}))\xi\big(g\beta_{\alpha_{h}(t)}(g_{1}g_{2}), hg_{1}g_{2},t\big). \] Now~\(L=R\) for all~\(\xi\in L^{2}(G_{1}\times G_{1}\times G_{2})\), \(g_{1},g_{2}\in G_{1}\), and \(\rho_{i}\in\widehat{G_{1}}\) implies \[ \beta_{\alpha_{h}(t)}(g_{1})=\beta_{\alpha_{hg_{2}}(t)}(g_{1}) \qquad\text{ for all~\(g_{1},g_{2}, h\in G_{1},\ t\in G_{2}\),} \] i.e. \(\beta_{\alpha_{h}(t)}\) cannot depend on~\(h\); this is possible only when either of the actions~\(\alpha\) or \(\beta\) is trivial. \end{proof} We now prove the main result of this section: \begin{theorem} \label{the:rig_iso_faithful} Assume that \(\widehat{G_{1}}\) is a Lie group and that the \(\Cst\)\nobreakdash-action \(\gamma\) of \(\G\) on~\(\Contvin(\widehat{G_1})\) is faithful and isometric. Then~\(\G\) is a classical group. \end{theorem} \begin{proof} By Theorem~\ref{the:faithful_Cst_coact}, the faithfulness of \(\gamma\) implies that \(\beta\) is non\nobreakdash-trivial. On the other hand, \(\gamma\) is isometric; hence Proposition~\ref{Prop:Isometry} forces \(\alpha\) to be trivial. From~\eqref{eq:bicros_qnt_grp} and the fact that~\(G_{1}\) is abelian, we then get~\(M=L^\infty(\widehat{G_1}\times G_2)\), which is commutative; hence \(\G\) is a classical group. \end{proof} \section{Examples} \label{sec:Example} `\(ax+b\)' is the group of affine transformations of the real line~\(\mathbb R\). The natural action of the~\(ax+b\) group on~\(\mathbb R\), given by~\(x\mapsto ax+b\) for~\(a\in\mathbb R\setminus\{0\}\) and~\(b,x\in\mathbb R\), is faithful. We apply our results to two versions of the quantum~\(ax+b\) group discussed in~\cite{Vaes-Vainerman:Extension_of_lcqg}*{Section 5}. Both of them are genuine non\nobreakdash-compact, non\nobreakdash-discrete, non\nobreakdash-Kac quantum groups. We show that they act ergodically and faithfully on non\nobreakdash-compact Riemannian manifolds; however, none of these actions is isometric. \subsection{Baaj-Skandalis' \texorpdfstring{$\textup{ax+b}$}{ax+b} group} \label{subsec:ax+b} Assume~\(G_{1}=G_{2}=\mathbb R\setminus\{0\}\), with the usual multiplication as group operation. Let~\(G=\{(a,b)\text{ \(\mid\) \(a\in\mathbb R\setminus \{0\}\), \(b\in\mathbb R\)}\}\) with \((a,b)(c,d)=(ac,d+cb)\). Define~\(i\colon G_{1}\to G\) and \(j\colon G_{2}\to G\) by \[ i(g)\mathrel{\vcentcolon=} (g,g-1), \qquad j(s)\mathrel{\vcentcolon=} (s,0), \] for all~\(g\in G_{1}\) and~\(s\in G_{2}\). In this way \((G_{1},G_{2})\) becomes a matched pair in the sense of Definition~\ref{def:mathced_pair}. The associated actions~\(\alpha\) and~\(\beta\) are given by \[ \alpha_{g}(s)=\frac{gs}{s(g-1)+1}, \qquad \beta_{s}(g)\mathrel{\vcentcolon=} s(g-1)+1, \] for all~\(g,s\in \mathbb R\setminus\{0\}\) such that~\(g-1\neq -s^{-1}\). The associated bicrossed product~\(\G\) is Baaj and Skandalis' quantum \(ax+b\) group (see~\cite{Vaes-Vainerman:Extension_of_lcqg}*{Section 5.3}). By~\cite{Vaes-Vainerman:Extension_of_lcqg}*{Propositions 5.2 \& 5.3}, \(\G\) is a self-dual, non\nobreakdash-Kac, non\nobreakdash-compact, non\nobreakdash-discrete quantum group. \begin{proposition} \label{prop:ax+b} There is an ergodic, faithful and non\nobreakdash-isometric \(\Cst\)\nobreakdash-action of~\(\G\) on~\(\Contvin(\mathbb R\setminus\{0\})\). \end{proposition} \begin{proof} Clearly, \(G_{1}\) is abelian and~\(\widehat{G_1}\) is a Lie group with two connected components.
Since~\(\beta\) is non\nobreakdash-trivial and~\(\G\) is a genuine quantum group, by Proposition~\ref{prop:Ergodic}, Theorem~\ref{the:faithful_Cst_coact}, and Theorem~\ref{the:rig_iso_faithful}, the \(\Cst\)\nobreakdash-action \(\gamma\) of~\(\G\) on~\(\Contvin(\mathbb R\setminus\{0\})\) in Theorem~\ref{the:Cst_coact} is ergodic, faithful and not isometric. \end{proof} \subsection{Split--Extension} \label{subsec:Split_ext} Assume~\(G_{1}=\{(a,b)\text{ \(\mid\) \(a>0\), \(b\in\mathbb R\)}\}\) with \((a,b)(c,d)\mathrel{\vcentcolon=} (ac,ad+\frac{b}{c})\) and \(G_{2}=(\mathbb R,+)\). Let~\(K\) be the multiplicative group with two elements. Define~\(G=SL_{2}(\mathbb R)/K\), and \(i\colon G_{1}\to G\), \(j\colon G_{2}\to G\) by \[ i(a,b)\mathrel{\vcentcolon=} \left( \begin{array}{cc} a & b\\ 0 & \frac{1}{a}\end{array}\right) \text{ mod }K, \qquad j(x)\mathrel{\vcentcolon=} \left( \begin{array}{cc} 1 & 0\\ x & 1\end{array}\right)\text{ mod }K . \] This way \((G_{1},G_{2})\) is a matched pair in the sense of Definition~\ref{def:mathced_pair}. The associated actions~\(\alpha\) and~\(\beta\) are defined by \[ \alpha_{(a,b)}(x)\mathrel{\vcentcolon=} \frac{x}{a(a+bx)}, \qquad \beta_{x}(a,b)\mathrel{\vcentcolon=}\left\{ \begin{array}{ll} (a+bx,b) & \quad\text{if~\(a+bx>0\),}\\ (-a-bx,-b) & \quad\text{if~\(a+bx<0\),} \end{array}\right . \] whenever~\(a+bx\neq 0\). By \cite{Vaes-Vainerman:Extension_of_lcqg}*{Proposition 5.5}, the associated bicrossed product \(\G\) and its dual \(\DuG\) are non\nobreakdash-Kac, non\nobreakdash-compact, and non\nobreakdash-discrete quantum groups. Also, \(\DuG\) is not unimodular. Moreover, \cite{Vaes-Vainerman:Extension_of_lcqg}*{Remark 5.6} shows that~\(\DuG\) is a deformation of a generalised~\(ax+b\) group. \begin{proposition} \label{prop:ex_split_ext} There is an ergodic, faithful and non\nobreakdash-isometric \(\Cst\)\nobreakdash-action of~\(\DuG\) on~\(\Contvin(\mathbb R)\). \end{proposition} \begin{proof} Clearly, \(G_{2}\) is abelian and~\(\widehat{G_2}\) is a connected Lie group. Recall that \(\DuG\) is obtained by exchanging \(G_1\) and~\(\alpha\) with \(G_2\) and~\(\beta\). Since~\(\alpha\) is non\nobreakdash-trivial, Proposition~\ref{prop:Ergodic}, Theorem~\ref{the:faithful_Cst_coact}, and Theorem~\ref{the:rig_iso_faithful} show that the \(\Cst\)\nobreakdash-action \(\gamma\) of~\(\DuG\) on~\(\Contvin(\mathbb R)\) in Theorem~\ref{the:Cst_coact} is ergodic, faithful and not isometric. \end{proof} \begin{bibdiv} \begin{biblist} \bib{Baaj-Skandalis-Vaes:Non-semi-regular}{article}{ author={Baaj, Saad}, author={Skandalis, Georges}, author={Vaes, Stefaan}, title={Non-semi-regular quantum groups coming from number theory}, journal={Comm. Math. Phys.}, volume={235}, date={2003}, number={1}, pages={139--167}, issn={0010-3616}, review={\MRref {1969723}{2004g:46083}}, doi={10.1007/s00220-002-0780-6}, } \bib{Baaj-Vaes:Double_cros_prod}{article}{ author={Baaj, Saad}, author={Vaes, Stefaan}, title={Double crossed products of locally compact quantum groups}, journal={J. Inst. Math. Jussieu}, volume={4}, date={2005}, number={1}, pages={135--173}, issn={1474-7480}, review={\MRref {2115071}{2006h:46071}}, doi={10.1017/S1474748005000034}, } \bib{Drinfeld:Quantum_groups}{article}{ author={Drinfel{$^\prime $}d, Vladimir Gershonovich}, title={Quantum groups}, booktitle={Proceedings of the {I}nternational {C}ongress of {M}athematicians, {V}ol. 1, 2 ({B}erkeley, {C}alif., 1986)}, pages={798--820}, publisher={Amer. Math.
Soc.}, address={Providence, RI}, date={1987}, review={\MRref {934283}{89f:17017}}, } \bib{Etingof-Walton:Semisimple_Hopf_act}{article}{ author={Etingof, Pavel}, author={Walton, Chelsea}, title={Semisimple Hopf actions on commutative domains}, journal={Adv. Math.}, volume={251}, date={2014}, pages={47--61}, issn={0001-8708}, review={\MRref{3130334}{}}, doi={10.1016/j.aim.2013.10.008}, } \bib{Goswami:Qnt_isometry}{article}{ author={Goswami, Debashish}, title={Quantum group of isometries in classical and noncommutative geometry}, journal={Comm. Math. Phys.}, volume={285}, date={2009}, number={1}, pages={141--160}, issn={0010-3616}, review={\MRref{2453592}{2009j:58036}}, doi={10.1007/s00220-008-0461-1}, } \bib{Goswami-Joardar:Rigidity_CQG_act}{article}{ author={Goswami, Debashish}, author={Joardar, Soumalya}, title={Rigidity of action of compact quantum groups on compact, connected manifolds}, note={\arxiv{1309.1294}}, status={eprint}, } \bib{Huang:Faithful_act_CQG}{article}{ author={Huang, Huichi}, title={Faithful compact quantum group actions on connected compact metrizable spaces}, journal={J. Geom. Phys.}, volume={70}, date={2013}, pages={232--236}, issn={0393-0440}, review={\MRref{3054297}{}}, doi={10.1016/j.geomphys.2013.03.027}, } \bib{Huang:Inv_subset_CQG_act}{article}{ author={Huang, Huichi}, title={Invariant subsets under compact quantum group actions}, note={\arxiv{1210.5782v2}}, status={eprint}, date={2013}, } \bib{Jimbo:Yang-baxter_eq}{article}{ author={Jimbo, Michio}, title={A {$q$}-difference analogue of {$U(\mathfrak {g})$} and the {Y}ang-{B}axter equation}, journal={Lett. Math. Phys.}, volume={10}, date={1985}, number={1}, pages={63--69}, issn={0377-9017}, review={\MRref {797001}{86k:17008}}, doi={10.1007/BF00704588}, } \bib{Kustermans-Vaes:LCQG}{article}{ author={Kustermans, Johan}, author={Vaes, Stefaan}, title={Locally compact quantum groups}, journal={Ann. Sci. \'Ecole Norm. Sup. (4)}, volume={33}, date={2000}, number={6}, pages={837--934}, issn={0012-9593}, review={\MRref {1832993}{2002f:46108}}, doi={10.1016/S0012-9593(00)01055-7}, } \bib{Kustermans-Vaes:LCQGvN}{article}{ author={Kustermans, Johan}, author={Vaes, Stefaan}, title={Locally compact quantum groups in the von Neumann algebraic setting}, journal={Math. Scand.}, volume={92}, date={2003}, number={1}, pages={68--92}, issn={0025-5521}, review={\MRref{1951446}{2003k:46081}}, eprint={http://www.mscand.dk/article.php?id=198}, } \bib{Manin:Qnt_grp_NCG}{book}{ author={Manin, Yuri Ivanovich}, title={Quantum groups and noncommutative geometry}, publisher={Universit\'e de Montr\'eal, Centre de Recherches Math\'ematiques, Montreal, QC}, date={1988}, pages={vi+91}, isbn={2-921120-00-3}, review={\MRref{1016381}{91e:17001}}, } \bib{Masuda-Nakagami-Woronowicz:C_star_alg_qgrp}{article}{ author={Masuda, Tetsuya}, author={Nakagami, Yoshiomi}, author={Woronowicz, Stanis\l aw Lech}, title={A $C^*$\nobreakdash -algebraic framework for quantum groups}, journal={Internat. J. Math.}, volume={14}, date={2003}, number={9}, pages={903--1001}, issn={0129-167X}, review={\MRref {2020804}{2004j:46100}}, doi={10.1142/S0129167X03002071}, } \bib{Meyer-Roy-Woronowicz:Homomorphisms}{article}{ author={Meyer, Ralf}, author={Roy, Sutanu}, author={Woronowicz, Stanis\l aw Lech}, title={Homomorphisms of quantum groups}, journal={M\"unster J.
Math.}, volume={5}, date={2012}, pages={1--24}, issn={1867-5778}, review={\MRref{3047623}{}}, eprint={http://nbn-resolving.de/urn:nbn:de:hbz:6-88399662599}, } \bib{Vaes-Vainerman:Extension_of_lcqg}{article}{ author={Vaes, Stefaan}, author={Vainerman, Leonid}, title={Extensions of locally compact quantum groups and the bicrossed product construction}, journal={Adv. Math.}, volume={175}, date={2003}, number={1}, pages={1--101}, issn={0001-8708}, review={\MRref {1970242}{2004i:46103}}, doi={10.1016/S0001-8708(02)00040-3}, } \bib{Vaes-VanDaele:Hopf_Cstalg}{article}{ author={Vaes, Stefaan}, author={Van Daele, Alfons}, title={Hopf {$C^*$}-algebras}, journal={Proc. London Math. Soc. (3)}, volume={82}, date={2001}, number={2}, pages={337--384}, issn={0024-6115}, review={\MRref{1806875}{2002f:46139}}, doi={10.1112/S002461150101276X}, } \bib{Woronowicz:CQG}{article}{ author={Woronowicz, Stanis\l aw Lech}, title={Compact quantum groups}, conference={ title={Sym\'etries quantiques}, address={Les Houches}, date={1995}, }, book={ publisher={North-Holland}, place={Amsterdam}, }, date={1998}, pages={845--884}, review={\MRref {1616348}{99m:46164}}, } \bib{Woronowicz:Multiplicative_Unitaries_to_Quantum_grp}{article}{ author={Woronowicz, Stanis\l aw Lech}, title={From multiplicative unitaries to quantum groups}, journal={Internat. J. Math.}, volume={7}, date={1996}, number={1}, pages={127--149}, issn={0129-167X}, review={\MRref {1369908}{96k:46136}}, doi={10.1142/S0129167X96000086}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction}\label{sec-intro} Almost twenty years after supermassive black holes were discovered in a few nearby galactic nuclei, it is now thought that most galaxies host a SMBH with a mass between $10^5-10^{10}\,M_\odot$~\citep{kr95,FF05,gul09}. The growth and evolution of the SMBH is intertwined with the growth and evolution of its galaxy host, and this is borne out through tight correlations between the SMBH mass and fundamental properties of the galaxy such as bulge luminosity~\citep{MM13}, dark matter halo mass~\citep{FF05}, and stellar velocity dispersion~\citep{FM00,geb00}. Recent observations probing both the high and low mass end of the galaxy distribution, however, have uncovered examples of embedded SMBHs that are orders of magnitude more massive than the correlations predict. Henize 2-10, a dwarf starburst galaxy with a $10^6\,M_\odot$ SMBH \citep{rei11}, hosts a black hole nearly 100 times more massive than expected from its dynamical mass. NGC 4486B, a dwarf elliptical, has a $5\times10^{8}\,M_\odot$ SMBH comprising 10\% of the total galaxy mass~\citep{mag98}. NGC1277, a compact lenticular galaxy, may host the most massive SMBH known to date, at $1.7\times10^{10}\,M_\odot$ (\citet{van12}, though see \citet{ems13}, which advocates for a more modest mass of $5\times10^9\,M_\odot$). Current efforts are underway to determine if these represent an extreme class of SMBH or if they are statistical outliers. When the SMBH makes such a significant contribution to the mass of the galactic nucleus, it should exact a profound change on the shape, structure, and dynamics of its host. SMBHs are known to alter the shape of a triaxial galaxy \citep{VM98,KHB02}, and can stabilize a galaxy against bar formation \citep{SS04}. It is widely accepted that SMBH feedback can quench star formation (e.g., \citealt{pag12,DS05,bun08}), thereby altering the global star formation rate, gas content, and host baryon fraction. It is not clear, however, how SMBH coalescence proceeds in this extreme case. Typically, a galaxy merger is expected to be followed eventually by a binary black hole merger deep within the core of the merger remnant. In this framework, the SMBH separation is governed on kiloparsec scales by dynamical friction, until the pair joins and forms a hard binary. The separation on the parsec scale is dictated by 3-body scattering of stars within the binary's loss cone. Once the ejected stars have extracted enough energy from the binary orbit to shrink the separation to roughly milli-parsec scales, gravitational radiation dominates, and the SMBHs coalesce~\citep{BBR80}. Naively, the orbit of an ultramassive binary SMBH should stall at the parsec scale, since it would need to eject roughly its own mass in stars to reach the gravitational wave stage, and in some cases that could be over half the bulge mass. Indeed, this {\it final parsec problem} is known to be especially problematic for massive black holes ($>10^{7.5}\,M_{\odot}$) and high mass ratios~\citep{MM05}; this may imply that these ultramassive black holes are unlikely to coalesce. In this paper, we simulate the dynamics of an ultramassive black hole during a galaxy merger using direct $N$-body simulations. We find that the merger remnant is quite axisymmetric well within the SMBH influence radius, but it remains triaxial farther out. Contrary to expectations, the black holes reach the gravitational wave regime very efficiently.
Surprisingly, the separation between the binary SMBH is mainly driven by dynamical friction; the 3-body scattering phase is very brief, if it exists at all. By the time the binary is hard, gravitational radiation is already copious, and the black holes coalesce quickly after. We describe our experimental setup in Section~\ref{gal-set}, present the results, including post-Newtonian effects, in Sections \ref{str} \& \ref{results}, and discuss the implications in Section \ref{sum}. \section{Simulation Technique}\label{gal-set} We launch two equal-mass, SMBH-embedded, gas-free, equilibrium, spherical and isotropic galaxy models on a merger orbit to study how ultramassive black holes affect SMBH binary formation and evolution within a realistic live merger remnant. Here, each model represents an elliptical galaxy nucleus, either embedded with an especially massive black hole or hosting a SMBH on the $M_\bullet-\sigma$ relation. To ensure that the binary black hole evolution is well-resolved, we zoom in on the late stages of an equal mass galaxy merger; the simulation starts when the nuclei are separated by only 750 parsec. The nuclei are set on an initial orbit with eccentricity 0.76, consistent with typical merger eccentricities~\citep{wet11}. At first pericenter, the SMBH separation is a mere 100 parsecs. We represent the stellar density distribution in each nucleus by a~\citet{deh93} profile: \begin{equation} \rho(r)=\frac{(3-\gamma)M_\mathrm{gal}}{4\pi}\frac{r_{0}}{r^{\gamma}(r+r_{0})^{4-\gamma}}, \label{denr} \end{equation} \noindent where $M_\mathrm{gal}$ is the total stellar mass in the galaxy, $r_0$ is its scale radius, and $\gamma$ is the logarithmic slope of the central density profile. Increasing $\gamma$ concentrates more stellar mass in the center. We construct the model with a central point mass to represent the SMBH, and our stellar velocity distribution is chosen from the equilibrium distribution function of this black hole-embedded model \citep{tre94}. We create three models to represent the primary galaxy, each of which hosts an extremely massive black hole. Two of our primary models host a SMBH with mass $0.6\,M_{\rm bulge}$ -- we identify this model as the ultramassive case. The key difference between the two ultramassive models is $\gamma$, which we vary between 1.0 and 1.5. Our final primary galaxy model contains a central SMBH with mass $0.2\,M_{\rm bulge}$, which we denote as the overmassive model; this mass is chosen to be consistent with conservative estimates of the SMBH mass in brightest cluster galaxies \citep{pri09}. Two additional models represent the secondary; each has a $\gamma=1$ slope, but different SMBHs that bracket the range of possible masses on the $M_\bullet$-$M_{\rm bulge}$ relation~\citep{mer01,har04,sct13}. The galaxy parameters are described in Table \ref{TableA}. The total stellar mass of each model, $G$ and $r_0$ are all set to 1.0. \begin{table} \caption{Initial Galaxy Models} \centering \begin{tabular}{c c c c c c c c c} \hline Galaxy & Role & $N$ & $\gamma$ & ${{M_\bullet}\over{M_{\rm bulge}}}$ & $e_{\rm init}$ & $r_{\rm half}$ & $R_e$ & $r_{\rm infl}$ \\ \hline UM & Primary & 256k & $1.0$& $0.6$ & 0.76 & 0.71 & 0.53 & 11\\ UM-cusp & Primary &256k & $1.5$& $0.6$ & 0.76& 0.63 & 0.47& 11\\ OV & Primary &256k & $1.0$& $0.2$ & 0.76& 0.71 & 0.53 & 1.7\\ SM & Secondary & 256k & $1.0$& $0.001$ & 0.76 & 0.71 & 0.53 & 0.045\\ LG & Secondary & 256k & $1.0$& $0.005$ & 0.76 & 0.71 & 0.53 & 0.12\\ \hline \end{tabular}\label{TableA} \tablecomments{Column 1: Galaxy.
Column 2: Role in the merger. Column 3: Particle number. Column 4: Density cusp. Column 5: Central SMBH mass in units of the bulge mass. Column 6: Initial eccentricity. Column 7: Half mass radius. Column 8: Effective radius. Column 9: SMBH influence radius.} \end{table} \subsection{Physical Scaling} We use NGC1277 as a reference to scale our primary galaxy models. We set our mass to the bulge mass of NGC1277, $\sim3\times10^{10}\,M_{\odot}$ \citep{van12}. To obtain the length scaling, we compare the SMBH influence radius in NGC1277 (565 pc) to that of our UM model ($\sim 11$ model units); this sets one model unit to $\sim50$ pc. Note that since the SMBH is so prominent, the radius of influence encloses 95 percent of the stellar mass. Figure \ref{pro2} shows the stellar mass distribution and density in our galaxy models. One time step is 0.031 Myr. We integrate at least until the SMBHs form a hard binary; as we shall see, this often marks the transition to the gravitational wave regime. The longest $N$-body integration time is $\sim$1570 model units (50 Myr), for RUN1, and the shortest is $\sim$305 model units (10 Myr), for RUN3. The speed of light is 192.15 in our model units. \begin{table} \caption{Galaxy Merger Runs} \centering \begin{tabular}{c c c c c c c} \hline Run & Primary & Secondary & Measured & Estimated & Estimated \\ & & &$t_{\rm 1 pc}$ & $t_{\rm df}$ & $t_{\rm coal}$ \\ \hline RUN1 & UM & SM & 35 & 49 & 80 \\ RUN2 & UM & LG & 8 & 12 &23 \\ RUN3 & UM-cusp & LG & 5 & 4 & 12\\ RUN4 & OV & LG & 3 & 4 & 55\\ \hline \end{tabular}\label{TableB} \tablecomments{Column 1: Merger simulation. Columns 2, 3: Merging galaxy names. Column 4: Time for SMBH separation to reach 1 parsec. Column 5: Estimated dynamical friction timescale, based on \citet{aj11}. Column 6: Estimated SMBBH coalescence time, based on extrapolating the hardening rate and gravitational radiation emission from equations 2-4. All times are quoted in Myr.} \end{table} \subsection{Numerical code} \label{num-code} We carry out $N$-body integrations using $\phi$-GRAPE+GPU, an updated version of $\phi$-GRAPE\footnote{\tt ftp://ftp.ari.uni-heidelberg.de/staff/berczik/phi-GRAPE/} \citep{harfst}, described in section 2.2 of \citet{kh13}. For the current study, we employ a softening of $10^{-4}$ for star-star interactions and 0 for SMBH-SMBH interactions. The simulations were run on 8 nodes (each containing 4 Graphics Processing Unit (GPU) cards) on \textit{ACCRE}, a high performance GPU cluster at Vanderbilt University. \section{Structure and Morphology of the Merger Remnant} \label{str} Figure~\ref{evol} shows the cumulative mass profile of the merger remnants. RUN1 and RUN2 have similar mass profiles; only the secondary SMBH differs. For RUN3, the steep density cusp is conserved in the remnant, consistent with the interpretation that the three-body scattering phase has not played an active role in sculpting the remnant, despite the fact that, as we shall see below, the SMBHs coalesce. With so few 3-body scattering events to scour a core, the most massive SMBH class could reside in cuspy galactic nuclei. \begin{figure} \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{profile2.ps}} } \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{triax.ps}} } \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{triax500pc.ps}} } \caption[]{ Top: Cumulative stellar mass scaled to NGC 1277.
The SMBH dominates the nucleus -- $r_{\rm infl}\sim10\,R_e$. Middle: Evolution of the intermediate-to-major (b/a) and minor-to-major (c/a) axis ratios for RUN3 at a radius of 100 pc. Bottom: Same at 500 pc, or $\sim\,r_{\rm infl}$ of the primary SMBH. } \label{evol} \end{figure} We analyzed the shape evolution of the merger remnant by calculating axis ratios both well inside $r_{\rm infl}$ and at $\sim\,r_{\rm infl}$. Figure \ref{evol} shows the morphological evolution in RUN3; near the center, the system is initially triaxial, but as it evolves, the triaxiality decreases, and we are left with a moderately flattened axisymmetric central region. However, the situation is different at $r_{\rm infl}$, where the system remains stably triaxial. This behavior is consistent with equilibrium studies of SMBH-embedded triaxial galaxies \citep{KHB02,VM98}, where the SMBH induces chaos in the centrophilic orbit population inside $r_{\rm infl}$, a situation that may well be amplified by the 3-body scattering of the SMBH binary. \section{Dynamics of Ultramassive SMBH Binaries }\label{results} Here we discuss the formation and evolution of the SMBH binaries in each galaxy merger simulation. Figure~\ref{evol1} shows the binary parameter evolution. We display physical units by scaling the primary to NGC1277 to get a clearer picture of the physical domains and time scales involved. To focus on resolving the dynamics of the SMBHs within the remnant, the simulation begins assuming the merger is well underway, and the secondary galactic nucleus has already sunk to the inner kpc of the primary. As a result, the galaxy merger is complete in only a few Myr; at this time, the density cusp of each galaxy is indistinguishable in phase space. During this rapid merger phase, the two SMBHs form a binary within the center, separated by $\sim\,100$ parsec. \begin{figure} {\centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{sep.ps}} } \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{sem.ps}} } \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{ecc.ps}} }} \caption[]{ Evolution of SMBH binary parameters in each galaxy merger simulation: Top: SMBH separation as a function of time. Middle: Evolution of the inverse semi-major axis of the SMBH binary, or the hardening timescale. Bottom: Eccentricity evolution of the SMBH binary.} \label{evol1} \end{figure} \subsection{RUN1--UM+SM} The red line (Figure \ref{evol1}, top) shows the evolution of the SMBH separation in RUN1. Here, the primary has a $\gamma=1$ profile and the secondary has a central SMBH mass of 0.001. As expected, dynamical friction governs the early inspiral of the secondary SMBH. Initially the inspiral is slow because the background stellar density is low, but as the SMBH moves further in toward the scale radius at 50 pc, the plunge accelerates. The middle panel of this figure shows how the inverse semi-major axis of the binary evolves with time, or the hardening timescale. It is easy to see this initial slow inspiral on the diffuse outskirts of the galactic nucleus. Note that the noise in 1/a is caused by the primary stellar cusp, and it decreases as the SMBH decouples from the global stellar background and binds to the primary SMBH. By 35 Myr, the SMBHs are separated by only 2 pc and the hardening rate slows down, hinting at a transition from the dynamical friction phase to the three-body scattering phase.
The dynamical friction time we observe in the simulation is consistent with the predicted dynamical friction decay time for a SMBH in a SMBH-embedded Hernquist ($\gamma=1$) cusp~\citep{aj11}. In the dynamical friction phase, the orbit circularizes (bottom panel of the figure), reaching an eccentricity $<0.1$, but once the three-body scattering phase begins to dominate, the eccentricity increases very rapidly, reaching $\sim0.6$ at the end of the simulation at 50 Myr. \subsection{RUN2--UM+LG} For this run, the secondary SMBH is intended to represent the upper envelope of the $M_\bullet$-$M_{\rm bulge}$ relation and is therefore five times more massive than in RUN1. We expect the SMBH separation to shrink roughly five times faster in the dynamical friction phase, since the background remains the same. Figure \ref{evol1} bears this out; indeed this large SMBH sinks to 1 pc in about 10 Myr (magenta) -- approximately five times faster than in RUN1. The inverse semi-major axis evolves quickly until the separation is a parsec and then evolves more slowly, indicating a transition from dynamical friction to three-body scattering. Again we see a very rapid increase in eccentricity, from below 0.1 to nearly 0.8, when we halt the simulation at 20 Myr because gravitational wave emission dominates the evolution. \subsection{RUN3--UMcusp+LG} Here our primary galaxy mass model is more concentrated, with a $\gamma=1.5$ cusp. This more concentrated model is a better analog to compact ellipticals like NGC 1277~\citep{tri14}. The binary black hole evolution is shown by the blue line in Figure \ref{evol1}. The initial secondary SMBH inspiral is slower than that in RUN2 due to the lower stellar density in the galaxy outskirts. Once the second SMBH enters the primary scale radius ($\sim50$ pc), however, the inspiral rate outpaces RUN2, and the SMBH separation shrinks below 1 parsec in less than 10 Myr. This rapid evolution is also reflected in the short hardening timescale of the SMBH binary -- a mere 2 Myr three-body scattering phase ushers the SMBH binary into the gravitational wave regime. Like the previous two runs, the SMBH binary eccentricity dramatically increases during the 3-body scattering phase. \subsection{RUN4--OV+LG} Since there is some debate on the masses of this extreme SMBH class, we use RUN4 to explore a more conservative primary black hole mass; at $0.2\,M_{\rm bulge}$, this SMBH is still a distinct outlier on the black hole-bulge mass relation, but is a factor of 3 smaller than in the ultramassive models. Dynamical friction efficiently shrinks the separation between the two SMBHs down to a parsec. From Table \ref{TableB}, we can see that the estimated dynamical friction time and the observed decay time match very well. As in all previous cases, the eccentricity increases rapidly at the transition between the dynamical friction and three-body scattering phases. The closest comparison is with RUN2; however, the inspiral time is shorter in this case because the satellite suffers less mass loss, thereby behaving as a massive `particle' that experiences stronger dynamical friction. The binary forms and hardens at a larger separation. The three-body scattering phase is prolonged compared to RUN2, and the coalescence happens after 50 Myr, roughly twice that of RUN2. \subsection{Eccentricity Evolution} The SMBH binary eccentricity quickly rises in each run, marking when the inspiraling SMBH delves deep enough into the core that it encloses a stellar mass roughly equivalent to its own mass.
This trend was noticed by \citet{aj11}, and though it marked the transition between dynamical friction and hard binary evolution, the reason was not clear. \subsection{Modeling the Evolution in the Post-Newtonian Regime} We can calculate the subsequent SMBH binary evolution with reasonable confidence by following the scheme adopted in \citet{kh12a, kh12b} to model the binary in the post-Newtonian regime. This, plus the $N$-body evolution, allows us to estimate the total SMBH binary coalescence time. We determine the average hardening rate $s=d(1/a)/dt$ during the last few Myr of our simulations, and assume that this rate remains constant until the gravitational wave regime. We also take the final eccentricity $e$ to be constant. The SMBH binary evolution can then be estimated as: \begin{equation} \frac{da}{dt}=\left(\frac{da}{dt}\right)_\mathrm{NB}+\Big\langle\frac{da}{dt}\Big\rangle_\mathrm{GW}=-s a^{2}(t)+\Big\langle\frac{da}{dt}\Big\rangle_\mathrm{GW}~\label{ratea} \end{equation} For gravitational wave hardening, we use the orbit-averaged expressions from \citet{pet64} \begin{mathletters} \begin{eqnarray} \Big\langle\frac{da}{dt}\Big\rangle_\mathrm{GW}&=&-\frac{64}{5}\frac{G^{3}M_{\bullet1}M_{\bullet2}(M_{\bullet1}+M_{\bullet2})}{a^{3}c^{5}(1-e^{2})^{7/2}}\times\nonumber \\ &&\left(1+\frac{73}{24}e^{2}+\frac{37}{96}e^{4}\right),~\label{dadt}\\ \Big\langle\frac{de}{dt}\Big\rangle_\mathrm{GW}&=&-\frac{304}{15}e\frac{G^{3}M_{\bullet1}M_{\bullet2}(M_{\bullet1}+M_{\bullet2})}{a^{4}c^{5}(1-e^{2})^{5/2}} \times\nonumber\\ &&\left(1+\frac{121}{304}e^{2}\right). \label{dedt} \end{eqnarray} \end{mathletters} We solve these coupled equations numerically to calculate the SMBH binary evolution (a sketch of such an integration is given at the end of this section). Estimates of 1/a are shown in Figure \ref{sem2a}. \begin{figure} \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{sem2a.ps}} } \centerline{ \resizebox{0.85\hsize}{!}{\includegraphics[angle=270]{semiA4.ps}} } \caption[]{ Top: Estimates of SMBH binary hardening in the post-Newtonian regime for all runs. The green dots indicate when the estimate begins. Bottom: Comparison with run A4 from \citet{kh12a}. Blue: SMBH binary evolution from stellar dynamics. Brown: estimated transition between the stellar dynamical and gravitational wave-dominated regimes. Red: SMBH binary evolution including $\mathcal{PN}$ terms. } \label{sem2a} \end{figure} The SMBH binaries in RUN2 and RUN3 are already in the gravitational wave-dominated regime at the end of our direct $N$-body runs. The total coalescence times, starting from 750 pc when each black hole is embedded in a separate galactic nucleus, are both remarkably small -- 23 Myr and 12 Myr, respectively. For RUN4, which has a primary SMBH less massive by a factor of 3, the three-body scattering phase is prolonged compared to RUN2, and coalescence takes place after $\sim$55 Myr (Table \ref{TableB}), roughly twice the total coalescence time of RUN2. It appears that when an extreme SMBH is involved in an interaction, the coalescence proceeds quickly and is mediated predominantly by dynamical friction. For comparison, we include the results of our earlier merger study from \citet{kh12a}, where each SMBH is on the $M_\bullet$-$M_{\rm bulge}$ relation (run A4). This run was scaled to M87 with a $3.6\times10^9\,M_{\odot}$ SMBH. We see a long-term 3-body hardening phase, resulting in a coalescence time that is 20-30 times larger than what we witness in this study.
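To make the extrapolation procedure concrete, the following minimal sketch integrates equations (2)--(4) with \texttt{scipy}. The masses, the hardening rate \texttt{s}, and the initial conditions are hypothetical placeholders expressed in our model units ($G=1$, total stellar mass $=1$, $c=192.15$), not values measured from any particular run.

```python
from scipy.integrate import solve_ivp

# Model units as in the text: G = 1 and c = 192.15.
G, c = 1.0, 192.15
M1, M2 = 0.6, 0.005   # hypothetical primary/secondary SMBH masses
s = 50.0              # assumed constant stellar hardening rate d(1/a)/dt

def rhs(t, state):
    a, e = state
    pref = G**3 * M1 * M2 * (M1 + M2) / c**5
    dadt_gw = -(64.0 / 5.0) * pref / (a**3 * (1.0 - e**2)**3.5) * \
              (1.0 + (73.0 / 24.0) * e**2 + (37.0 / 96.0) * e**4)   # eq. (3)
    dedt_gw = -(304.0 / 15.0) * e * pref / (a**4 * (1.0 - e**2)**2.5) * \
              (1.0 + (121.0 / 304.0) * e**2)                        # eq. (4)
    return [-s * a**2 + dadt_gw, dedt_gw]                           # eq. (2)

def coalesced(t, state):      # stop once a has shrunk by several decades,
    return state[0] - 1e-4    # a proxy for coalescence
coalesced.terminal = True

# Hypothetical hard-binary initial conditions: a = 0.05, e = 0.6.
sol = solve_ivp(rhs, (0.0, 5000.0), [0.05, 0.6], events=coalesced, rtol=1e-9)
print("coalescence time (model units):", sol.t[-1])
```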
\section{Summary and Conclusion} \label{sum} We performed direct $N$-body simulations of the merger of two SMBH-embedded galactic nuclei, in which the primary hosts an extremely massive SMBH. The overall goal was to investigate how these SMBHs affect the structure of the merger remnant, as well as the formation and evolution of the SMBH binary. Though we choose the particular scaling of the primary SMBH to be analogous to the range of estimates for NGC1277, the results are generic for this most massive SMBH class. We followed the late stages of the merger, from kiloparsec separations, through the formation of a bound SMBH binary, to a hard SMBH binary with a separation of less than a parsec. We have two models for the NGC1277 bulge to represent both shallow and cuspy density profiles. We found that the two nuclei merge in a span of a few Myr, and a SMBH binary forms with a separation of $\sim100$ pc. Dynamical friction is efficient in driving the SMBH binary separation below a parsec. The sinking timescale is shorter when the secondary SMBH is more massive, or when the primary density profile is steeper, since dynamical friction scales with the infalling mass and the stellar background density. With such large binary separations, this class could represent an excellent prospect for the electromagnetic detection of SMBH binaries, though the short lifetime in this phase may preclude direct detection. While at these large separations, gravitational wave emission is already significant; this will likely boost the SMBH binary signal expected by pulsar timing \citep{ra14}. This dynamical friction phase ends when the inspiralling SMBH sinks deep enough into the center that the enclosed stellar mass is comparable to its own mass. In cases where the secondary SMBH is 0.5 percent of its host bulge mass, dynamical friction is highly efficient, ushering this SMBH so close to the overmassive SMBH that gravitational waves dominate. In these cases, the binary bypasses the 3-body scattering phase seen in typical SMBH binaries. We expect far fewer hypervelocity stars as a result, and far less significant scouring of the central density profile; indeed, flattened density cusps may not be indicative of this most massive SMBH class. When the inspiralling SMBH is 0.1\% of its host mass, there is a brief period of 3-body scattering before the binary enters the gravitational wave-dominated regime. We notice a sudden increase in binary eccentricity at the end of the dynamical friction phase. The merger of two galactic nuclei results in an initially triaxial structure throughout. Within a dynamical time, the binary SMBH induces chaos in the centrophilic orbits that define the long axis of the triaxial potential, and with these orbits scattered ergodically, a more axisymmetric figure remains. At the SMBH influence radius, the merger product is still triaxial, as expected from previous studies. Estimated coalescence times of $\sim$ tens of Myr are remarkably short compared to all other collisionless studies involving typical SMBH masses. There, the coalescence times are $\sim$0.5-3\,Gyr \citep{kh11,kh12a,kh12b,kh13,gm12,pre11}. Overall, these rapid coalescence times may aid in pinning a future detection of a gravitational wave coalescence to a particular galaxy by requiring that the host is a recent merger remnant. For this most massive SMBH class, we predict that the black hole merger rate would closely track the host galaxy merger rate, with no significant lag time, and no final parsec problem.
\acknowledgments The authors wish to thank the Kavli Institute for Theoretical Physics for hosting an excellent SMBH program, during which much of this paper was hashed out. Simulations were run on Vanderbilt's ACCRE GPU cluster, built and maintained through NSF MRI-0959454, and a Vanderbilt IDEAS grant. KHB acknowledges support from NSF CAREER Award AST-0847696.
\section{Introduction} The transit origin-destination (O-D) matrix is a major input for public transit agencies to conduct scheduling and operations planning, long-term planning and design, performance analysis, and market evaluations. However, the traditional data sources for transit O-D matrices are transit on-board surveys, a time-consuming and labor-intensive collection process that is prone to sampling errors. Fortunately, recent years have witnessed a growing interest in building transit O-D matrices from data sources that are automatically collected through intelligent transportation systems, such as Automatic Vehicle Location (AVL), Automatic Passenger Count (APC), and Automatic Fare Collection (AFC) systems \cite{Iliopoulou2019ITStransportplanning}. In particular, an automatic passenger counter is an electronic device, usually installed on transit vehicles, which records boarding and alighting data. Enabled by technologies such as infrared or light beams, digital cameras, thermal imaging and ultrasonic detection, the collected data is of high accuracy and can be easily validated \cite{Wiki2019APC,ITS2019APC}. APC technology is commonly deployed with AVL technology, which provides access to real-time transit vehicle dispatching and tracking data through information technology and Global Positioning Systems \cite{Wiki2019AVL}. Most transit agencies have installed APC systems on at least 10\% to 15\% of their bus fleet \cite{Furth2005APCmainstream}, and AVL is expected to be present in most fixed-route systems \cite{Wiki2019AVL}, as well as bus rapid transit systems \cite{Parker2008AVLUpdates}. The abundant AVL/APC data can jointly link passenger data to vehicle location \cite{Wiki2019APC} and thus offers a rich source of data in both spatial and temporal dimensions. Ever since the proliferation of AVL/APC technologies, researchers have worked on the conceptualization and development of methodologies to exploit the large-scale transit data they collect for use in transit performance analysis and travel demand modeling. Traditionally, such data contributed greatly to transit performance analysis and service management: they have mainly been used to address the problems of determining vehicle loads or run times \cite{Tetreault2010RunTime}, diagnosing or improving transit system performance \cite{Mandelzys2010AVLAPC, Furth2003AVLAPC}, and analyzing transit ridership \cite{Furth2003AVLAPC, Golani2007AVLAPC}. Recently, many efforts have been directed to O-D flow estimation based on AVL/APC data. The Iterative Proportional Fitting (IPF) procedure is one of the most widely accepted methodologies: It aims at estimating population O-D flows on each transit route based on sampled stop-level boarding and alighting counts. Deming and Stephan \cite{Deming1940IPF} first proposed a procedure to adjust sampled frequency tables with known marginal totals obtained from different sources. The method is a good fit for route-level O-D estimation, where a base matrix sampled from on-board surveys is adjusted by the boarding and alighting counts at each stop (a minimal sketch of the procedure is given below). Ben-Akiva \cite{BenAkiva1985firstIPF} then showed IPF to be cost-effective for route-level O-D table estimation when combined with on-board survey data. However, IPF has limitations for its dependence on a base matrix constructed from on-board surveys. The vast amount of AVL/APC data made available recently reduces this dependence in many ways, including by enabling more cost-effective choices of base matrices and by inspiring new methods that require little survey data.
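For reference, the core of the IPF procedure is only a few lines of code. The sketch below is our own illustration with made-up marginals: it alternately scales a base matrix so that its row and column sums match the observed boarding and alighting totals, and structural zeros in the base matrix (cells where alighting at or before the boarding stop is impossible) are preserved by the scaling.

```python
import numpy as np

def ipf(base, row_totals, col_totals, tol=1e-9, max_iter=1000):
    """Alternately scale rows and columns of `base` to match the marginals."""
    M = base.astype(float).copy()
    for _ in range(max_iter):
        r = M.sum(axis=1)
        M *= np.divide(row_totals, r, out=np.ones_like(r), where=r > 0)[:, None]
        c = M.sum(axis=0)
        M *= np.divide(col_totals, c, out=np.ones_like(c), where=c > 0)[None, :]
        if np.allclose(M.sum(axis=1), row_totals, atol=tol):
            break
    return M

# Toy route with 4 stops; the strictly upper-triangular base encodes that
# passengers cannot alight at or before their boarding stop.
base = np.triu(np.ones((4, 4)), k=1)
boardings  = np.array([20.0, 15.0, 10.0, 0.0])   # made-up marginals
alightings = np.array([0.0, 5.0, 15.0, 25.0])
print(ipf(base, boardings, alightings).round(2))
```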
Using the APC data available from the campus transit service at Ohio State University, McCord et al. \cite{McCord2010routeIPF} demonstrated that the IPF procedure, even with a non-informative, null base matrix, can achieve O-D estimates comparable to those from on-board surveys. Empirical results then showed that, assuming no \textit{a priori} estimate of O-D flows, an arbitrary base matrix that is adjusted iteratively with APC data can achieve a higher accuracy than the null base matrix \cite{Ji2014IterativeIPF}. Moreover, Ji et al. \cite{Ji2015Survey&APC} developed a heuristic expectation-maximization method using APC and on-board survey data, which was shown to outperform the IPF procedure when little survey data is present. In \cite{Ji2015StatsRouteCounts}, a Markov chain Monte Carlo simulation approach with sampling is proposed to infer route O-D flows with large amounts of APC data only, and it is validated through a numerical test. These advances in exploiting AVL/APC data allow for accurate extrapolation of O-D flows along a specific route even in the absence of costly, time-consuming and error-prone survey data. However, it is important to realize that the route-level O-D matrix, though helpful to transit planning in terms of designing route patterns and service frequencies \cite{McCord2010routeIPF}, does not reflect the true O-D flows of transit passengers. The route-level O-D matrix only represents the flow distribution along a single route, while the true passenger trajectories might include additional travel to and from stops outside this route \cite{Ji2015StatsRouteCounts}. Thus there is an identified need to infer transit network O-D flow matrices based on AVL/APC data. However, state-of-the-art analysis techniques targeting the transit network level are still lacking. As APC data only record the time-stamped number of stop-level boardings and alightings for each route, they cannot differentiate initial and transfer boardings \cite{chu2004ridership}. Also, most route-level O-D estimation methods cannot generalize to the transit network if only AVL/APC data is available \cite{McCord2010routeIPF,Ji2015Survey&APC,Ji2015StatsRouteCounts}. Thus a transfer identification algorithm is required to resolve this issue and to produce an overall O-D matrix on the transit network. The goal of this paper is thus to propose novel optimization models to identify transfer activities from AVL/APC data and observed proportions of transferring passengers at transit centers and other stops. As a result, the paper addresses a limitation of methodologies for APC data analysis, by extending state-of-the-art route-level O-D matrix estimation and producing a network-level O-D matrix. Observe that the O-D matrix is not intended to predict or analyze individual activities, but rather to assess travel behavior and demand at an aggregate level. This is reasonable given that the aggregate nature of APC data already makes it challenging to accurately recover the path choices of each rider \cite{Ji2015Survey&APC, Ji2015StatsRouteCounts}. Moreover, many applications in transit planning do not need individual-level activities, but need aggregate-level travel demand information, e.g., at the Traffic Analysis Zone (TAZ) level or other self-defined geographical boundaries. The optimization models proposed in this paper generate aggregate O-D matrices at different geographical resolutions and hence can inform future transit planning and investment decisions. The rest of the paper is organized as follows.
The next section defines the problem with a specification of the available input data and the desired output data. The following section proposes three optimization models to solve the transfer identification problem. These models are then evaluated on a case study. The paper concludes by summarizing the results and discussing their practical applications in transportation planning. \section{Problem Definition} The time-stamped route O-D flow information is made accessible to transportation planners and researchers through the wide adoption of APC technologies for raw data collection and well-developed methodologies for estimating population route O-D flows \cite{McCord2010routeIPF, Ji2014IterativeIPF, Ji2015Survey&APC, Ji2015StatsRouteCounts}. The obtained boarding-alighting pairs or route-level O-D flows do not equate to O-D transit trips \cite{McCord2010routeIPF}. Indeed, due to transfer activities, many actual trips may contain several boarding-alighting pairs, where each is called a trip segment or trip leg. A trip can be generally represented as one segment or a sequence of segment(s), where the former is referred to as a singleton trip in this study. Each trip segment in the latter is described by a corresponding ordinal number; for example, a trip with two segments has a first leg and a second leg. Accordingly, there is a need for transfer identification when integrating O-D flows on different routes to generate the transit network O-D matrix. This Transfer Identification Problem (TIP) is the focus of this paper, and the rest of this section specifies its input and output. \subsection{Input Specification} Since transfers are based on individual activities, identifying them first requires disaggregation of the time-stamped route O-D flows into individual records, each specifying the route, the boarding and alighting times and stops. Equivalently, it requires the time-stamped passenger counts describing the demand at a stop as trip origin or destination, and the route O-D matrices specifying the distribution of alighting stops for all boarding passengers at a stop. The model also needs access to the bus schedules for each route and the stop locations on all routes. In addition, the model assumes the availability of observed or estimated proportions of transfers\footnote{The terms transfer probabilities and transfer rates are used interchangeably for the proportions of transfers.}. Such data can be obtained directly from relatively long-term observations at stop terminals or estimated by transportation professionals based on years of experience. This data collection process outperforms on-board surveys in terms of ease of implementation and reliability: Enumerating the passengers leaving the terminals requires less effort compared to conducting detailed surveys of individual passengers, and it is more reliable than on-board surveys by reflecting the whole population rather than a proportion of passengers on selected routes. Moreover, the observed transfer probabilities can be obtained by averaging over a large amount of historical observations, and are thus less prone to inaccurate or misrecorded data entries on individual passengers. \subsection{Transfer Assumptions} The transfer activities modeled in this work are characterized by three behavioral assumptions as stated, justified and discussed below. \begin{assumption} Transfer activities that happen in transit centers should be differentiated from those occurring at non-transit centers.
\end{assumption} This follows from the observation that more transfer trips are expected at transit centers, which are designed to be served by multiple bus or rail routes synchronized to facilitate transfers. Consequently, the observed transfer probabilities at transit centers are expected to be more significant than at other stops. The model thus evaluates them separately. \begin{assumption} A transfer between two trip segments is only feasible when the following three conditions are satisfied. First, the two trip segments linked by a transfer must not belong to the same route. Second, passengers only transfer within some thresholds for walking distance and transfer time. Third, transfers are directional, such that the prior trip segment must have ended earlier than the boarding of the subsequent one. \end{assumption} These assumptions on travel behaviors are widely applied and tested for generating trip chains \cite{Alsger2015SmartCard, trepanier2007individual, barry2002origin, munizaga2012estimation}. To verify the feasibility of a transfer between any two trips in terms of walking distance and transfer time intervals, the stop locations and bus schedules are required, which are often made publicly available by local agencies. In particular, this information can be extracted from the General Transit Feed Specification (GTFS) data, which is a common format for public transportation schedules and the associated geographic information \cite{GoogleTransitAPIs2019}. Notably, the route and stop ID information can jointly map each bus stop to a geographical location; the alighting times can be calculated from boarding times and bus schedules. \begin{assumption} Passengers transfer at most once. \end{assumption} In general, most trips in a bus-based transit network involve no transfer or a single transfer. This study adopts this \textit{one-transfer assumption}. The model, however, can easily handle the more general case (i.e., two or more transfers) if needed. \subsection{Output Specification} The desired output for this study is a transit network level O-D flow count matrix with each origin or destination at the stop level. The optimization models to be presented identify each trip segment as either a singleton trip or a trip leg. The O-D matrix can be calculated elementwise, and its $(i,j)^{th}$ entry represents the flow estimation from transit stop $i$ to $j$: Its value is the sum of the number of singleton trips that start at $i$ and end at $j$, and the number of multi-legged trips whose first leg starts at $i$ and last leg ends at $j$. It is also possible to construct the network O-D flows between origins and destinations at varying geographical resolutions based on the stop-level matrices. \section{Methodology} This section presents optimization models to compute the aggregate O-D matrix. The optimization models do not compute the O-D matrix directly; instead, they solve the Transfer Identification Problem (TIP) that identifies whether each trip segment is followed by a transfer, in which case the next trip segment is also identified. It is then simple to use the TIP solution to compute the aggregate O-D matrix. The optimization models choose the values of these decision variables in order to minimize the distance between the observed and estimated transfer probabilities at transit stops, subject to the transfer assumptions discussed earlier.
This section presents three approaches for solving the TIP: A Quadratic Integer Program (QIP), a two-stage approach based on a continuous relaxation of the QIP and rounding, and an Integer Program (IP). \subsection{A QIP Model for the TIP} Figure \ref{fig:opt} presents the QIP formulation for the TIP: the model specifies the setup and parameters, the decision variables, the objective function and the constraints, which are now discussed in detail. \begin{figure*}[!t] \begin{tabbing} \tabrule\\ 123\=123\=123\=12312312123341234\=12345678901234567\=123\=123\=123\=\kill {\bf Data:} \\ \> $T$: set of recorded trips; \\ \> $C$: set of transit centers; \\ \> $p_{1,i}^*$: observed transfer probability at transit center $i$; \\ \> $p_{2}^*$: observed transfer probability at stops other than transit centers; \\ \> $t$: maximal transfer interval time in minutes; \\ \> $d$: maximal transfer walking distance in miles; \\ \> $\forall j \in T:$ \\ \>\> $l_j$: route of trip $j$; \\ \>\> $b_j$: boarding stop of trip $j$; \\ \>\> $a_j$: alighting stop of trip $j$; \\ \>\> $s_j$: boarding time of trip $j$; \\ \>\> $t_j$: alighting time of trip $j$; \\ \>\> $T_j := \{k \in T \mid l_j \neq l_k, ~ dist(a_j, b_k) < d, ~ 0 < s_k - t_j < t \} $, set of possible transfers from $j$. \\ {\bf Variables:} \\ \> $x_j \in \{0,1\}$ \>\>\> $(j \in T)$ \> --- $x_j = 1$ if $j$ is a first leg; \\ \> $y_{j,k} \in \{0,1\}$ \>\>\> $(j \in T, k \in T_j)$ \> --- $y_{j,k} = 1$ if trip segment $j$ transfers to $k$; \\ \> $p_{1,i} \in [0,1]$ \>\>\> $(i \in C)$ \> --- calculated transfer probability at $i$; \\ \> $p_{2} \in [0,1]$ \>\>\> --- calculated transfer probability at stops other than transit centers. \\ {\bf Objective:} \\ \> minimize ${\displaystyle \sum_{i \in C} \; (p_{1,i} - p_{1,i}^*)^2 + (p_{2} - p_{2}^*)^2}$ \\ {\bf Constraints:} \\ \> ${\displaystyle \sum_{k \in T_j} y_{j,k} = x_j}$ \>\>\> $(j \in T)$ \>\> (0.1) \\ \> ${\displaystyle \sum_{j \in T \mid k \in T_j}y_{j,k} \leq 1 - x_k}$ \>\>\> $(k \in T)$ \>\> (0.2) \\ \> ${\displaystyle p_{1,i} = \frac{\sum_{j \in T \mid a_j = i} \ x_j}{\mid \{j \in T \mid a_j = i \} \mid }}$ \>\>\> $(i \in C)$ \>\> (0.3) \\ \> ${\displaystyle p_{2} = \frac{\sum_{j \in T \mid a_j \notin C} \ x_j}{\mid \{j \in T \mid a_j \notin C \} \mid}}$ \>\>\>\>\> (0.4) \\ \tabrule \end{tabbing} \caption{The QIP Model for the TIP.} \label{fig:opt} \end{figure*} The model is defined over the set $T$ of trip segments and the set $C$ of transit centers. Each trip segment $j \in T$ is characterized by its boarding stop $b_j$, alighting stop $a_j$, boarding time $s_j$, alighting time $t_j$, and bus line $l_j$. Two sets of decision variables are associated with each segment: Binary variable $x_j$ is 1 if and only if trip segment $j$ has a transfer, and binary variable $y_{j,k}$ is 1 if and only if segment $j$ transfers to segment $k$. Recall that a transfer between two segments $j$ and $k$ has positive probability if and only if they are on different routes, the maximum walking distance and transfer time constraints are satisfied, and trip segment $k$ starts after $j$ ends. These constraints can be expressed as $$ l_j \neq l_k \ \& \ dist(a_j, b_k) < d \ \& \ 0 < s_k - t_j < t $$ \noindent where the function $dist(\cdot,\cdot)$ is a metric (e.g., geodesic, Euclidean, or Manhattan distance), and $d$ and $t$ are parameters chosen to denote the maximum allowed walking distance and transfer time. For each segment $j$, the set of feasible second legs from $j$ is denoted by $T_j$.
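As an illustration, the feasible-successor sets $T_j$ can be built directly from the disaggregated records with a naive quadratic scan. In the sketch below, the \texttt{Trip} fields mirror the data listed in Figure \ref{fig:opt}, and \texttt{dist} is an assumed helper returning the walking distance, in miles, between two stops; both are illustrative rather than part of our implementation.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    route: str       # l_j
    board: str       # b_j, boarding stop id
    alight: str      # a_j, alighting stop id
    t_board: float   # s_j, boarding time in minutes
    t_alight: float  # t_j, alighting time in minutes

def feasible_successors(trips, dist, d_max, t_max):
    """T_j = {k : different route, next boarding within walking distance of
    the alighting stop, and a positive transfer wait below t_max minutes}."""
    T = {}
    for j, tj in enumerate(trips):
        T[j] = [k for k, tk in enumerate(trips)
                if tj.route != tk.route
                and dist(tj.alight, tk.board) < d_max
                and 0.0 < tk.t_board - tj.t_alight < t_max]
    return T
```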
Constraint (0.1) specifies that there is a transfer after trip segment $j$ (i.e., $x_j = 1$) if and only if there is exactly one transfer from this trip to one of its feasible successors (i.e., exactly one of $y_{j,k} = 1$ for $k \in T_j$). Constraint (0.2) states the one-transfer assumption: If segment $j$ transfers to segment $k$ (i.e., $y_{j,k} = 1$), then $k$ cannot make any subsequent transfers (i.e., $x_k = 0$). Conversely, if segment $k$ has a transfer (i.e., $x_k = 1$), then any feasible prior leg $j$ (i.e., all $j ~\textrm{such that}~ k \in T_j$) cannot transfer to $k$ (i.e., $y_{j,k} = 0$). Constraints (0.3) and (0.4) respectively compute the estimated transfer probabilities at each transit center and at all other stops, where $\sum_{j \in T \mid a_j = i} \ x_j$ represents the number of first legs transferring at transit center $i$ and $\mid \{j \in T \mid a_j = i \} \mid$ counts the total number of segments ending at transit center $i$. Recall that the goal of the TIP is to select potential transfers for each trip segment so that the aggregate-level transfer probabilities given by Constraints (0.3) and (0.4) are as close as possible to the observed transfer probabilities. In the QIP, closeness is measured with the L2-norm. The sets of first legs, second legs, and singleton trips are denoted respectively by $T_1$, $T_2$ and $T_s$; they form a partition of the set $T$ and are defined explicitly as follows: \begin{align*} T_1 &:= \{j \in T \mid x_j = 1\} \\ T_2 &:=\{k \in T \mid y_{j,k} = 1 ~\textrm{for some}~ j \in T\} \\ T_s &:= T \setminus (T_1 \cup T_2) \end{align*} The O-D matrix can then be estimated elementwise from the QIP solution. Each entry $(i, i')$ of the O-D matrix records the expected number of trips from transit stop $i$ to $i'$ and is a sum of two components: the number of singleton trips starting at stop $i$ and ending at $i'$, and the number of two-legged trips whose first leg starts at stop $i$ and second leg ends at $i'$. Let $\textbf{1}_{ \{ \cdot \} }$ be the indicator function, which equals 1 when the statement is true and 0 otherwise. Then the O-D matrix can be computed as follows: \begin{align*} OD_{i,i'} = &\sum_{j \in T_s} \textbf{1}_{\{b_j=i,~a_j=i'\}} \\ + &\sum_{j \in T_1} \sum_{k \in (T_j \cap T_2)} y_{j,k} \textbf{1}_{\{b_j=i,~a_k=i'\}}. \end{align*} The proposed QIP formulation is applied to the case study introduced in Section IV and shown to have severe scalability issues: It cannot be solved by Gurobi \cite{gurobi}, a state-of-the-art commercial optimization solver, within 24 hours. In general, the QIP formulation does not have guaranteed tractability due to the (potentially) quadratic number of variables and the large size of the data sets, even for small-size cities. \subsection{Rounding the Continuous QIP Relaxation} This section explores a scalable two-stage approach which consists of (1) solving the continuous relaxation of the QIP and (2) rounding the solution of the continuous relaxation to derive a feasible binary substitute. The continuous relaxation relaxes the domain of the variables from the set $\{0,1\}$ to the interval $[0,1]$, producing a convex Quadratic Program (QCP), which can be solved efficiently. The QCP solution now assigns values in the range $[0,1]$ to the decision variables $x_j$ and $y_{j, k}$ $(j \in T, k \in T_j)$, which can thus be interpreted as the probability of having a transfer after segment $j$ and the probability that segment $k$ is the second leg of that transfer.
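The relaxation, shown in Figure \ref{fig:opt1} below, can be transcribed almost verbatim into a convex modeling language. The following sketch uses \texttt{cvxpy} and assumes the inputs from the previous sketch (the list \texttt{trips}, the feasible-successor sets \texttt{T\_succ}, the list of transit centers, and the observed rates \texttt{p1\_obs} and \texttt{p2\_obs}); all names are illustrative, and the sketch is only practical for modest instance sizes.

```python
import cvxpy as cp

def solve_qcp(trips, T_succ, centers, p1_obs, p2_obs):
    """Continuous relaxation of the TIP; trips use the Trip records above."""
    n = len(trips)
    x = [cp.Variable(nonneg=True) for _ in range(n)]
    y = {(j, k): cp.Variable(nonneg=True) for j in range(n) for k in T_succ[j]}
    cons = [xj <= 1 for xj in x]
    for j in range(n):                                   # constraint (1.1)
        if T_succ[j]:
            cons.append(sum(y[j, k] for k in T_succ[j]) == x[j])
        else:
            cons.append(x[j] == 0)   # no feasible successor => no transfer
    for k in range(n):                                   # constraint (1.2)
        preds = [j for j in range(n) if k in T_succ[j]]
        if preds:
            cons.append(sum(y[j, k] for j in preds) <= 1 - x[k])

    def rate(stops):   # estimated transfer probability over alighting stops
        idx = [j for j in range(n) if trips[j].alight in stops]
        return sum(x[j] for j in idx) / max(len(idx), 1)

    non_centers = {t.alight for t in trips} - set(centers)
    resid = [rate({c}) - p1_obs[c] for c in centers] + [rate(non_centers) - p2_obs]
    cp.Problem(cp.Minimize(cp.sum_squares(cp.hstack(resid))), cons).solve()
    return [xj.value for xj in x], {jk: v.value for jk, v in y.items()}
```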
The QCP relaxation is depicted in Figure \ref{fig:opt1} and mimics the QIP. \begin{figure*}[!ht] \begin{tabbing} \tabrule\\ 123\=123\=123\=12312312123341234\=12345678901234567\=123\=123\=123\=\kill {\bf Variables:} \\ \> $x_j \in [0,1]$ \>\>\> $(j \in T)$ \> --- probability of $j$ being a first leg; \\ \> $y_{j,k} \in [0,1]$ \>\>\> $(j \in T, k \in T_j)$ \> --- probability of $j$ transferring to $k$; \\ \> $p_{1,i} \in [0,1]$ \>\>\> $(i \in C)$ \> --- calculated transfer probability at $i$; \\ \> $p_{2} \in [0,1]$ \>\>\> --- calculated transfer probability at stops other than transit centers. \\ {\bf Objective:} \\ \> minimize ${\displaystyle \sum_{i \in C} \; (p_{1,i} - p_{1,i}^*)^2 + (p_{2} - p_{2}^*)^2}$ \\ {\bf Constraints:} \\ \> ${\displaystyle \sum_{k \in T_j} y_{j,k} = x_j}$ \>\>\> $(j \in T)$ \>\> (1.1) \\ \> ${\displaystyle \sum_{j \in T \mid k \in T_j}y_{j,k} \leq 1 - x_k}$ \>\>\> $(k \in T)$ \>\> (1.2) \\ \> ${\displaystyle p_{1,i} = \frac{\sum_{j \in T \mid a_j = i} \ x_j}{\mid \{j \in T \mid a_j = i \} \mid }}$ \>\>\> $(i \in C)$ \>\> (1.3) \\ \> ${\displaystyle p_{2} = \frac{\sum_{j \in T \mid a_j \notin C} \ x_j}{\mid \{j \in T \mid a_j \notin C \} \mid}}$ \>\>\>\>\> (1.4) \\ \tabrule \end{tabbing} \caption{The QCP Relaxation of the TIP.} \label{fig:opt1} \end{figure*} To obtain an aggregate O-D matrix, it is necessary to round the variables and assign them binary values. The second stage is based on a feasible rounding strategy that proceeds as follows. First, the segments likely to have a transfer are rounded to 1 by choosing a threshold $x^* \in [0,1]$ and selecting those variables whose value in the QCP relaxation exceeds the threshold, i.e., $$ \hat{x}_j = \begin{cases} 1, &\text{if}~x_j \geq x^* \\ 0, &\text{otherwise.} \end{cases} $$ \noindent The threshold is obtained by rounding up the $n$ variables $x_j~(j \in T)$ with the largest probabilities of having a transfer to 1 and rounding down the rest to 0. For notational simplicity, define the observed transfers $\delta_{1,i}$ for transit center $i$ and $\delta_2$ for other non-transit stops, and the total number of transfers $n$ implied by the observed transfer probabilities, as follows: $$ \begin{array}{ll} \delta_{1,i} &= p^*_{1,i} \times |\{j \in T \mid a_j = i \}| \\ \delta_2 &= p^*_2 \times |\{j \in T \mid a_j \notin C \}| \\ n &= \sum_{i \in C} \delta_{1,i} + \delta_2. \end{array} $$ The threshold $x^*$ can then be derived as $$ x^* = \min_{j \in T} x_j ~\textrm{s.t.}~ |\{j' \in T|x_{j'} \geq x_j \}| \leq n ,$$ and the set of first legs consists of all trip segments indexed by $j \in T$ with the corresponding variable $x_j \geq x^*$, that is, $$ T_1 := \{ j \in T | \hat{x}_j = 1 \} = \{ j \in T | x_j \geq x^* \}. $$ It remains to determine the set of second legs and the set of singleton trips. The likelihood of a segment being a second leg can be approximated by summing the transfer probabilities from all of its possible first legs, i.e., $\mathcal{L}_k := \sum_{j \in T_1} y_{j,k} ~(k \in T \setminus T_1)$. Note that, since the transfer probabilities from different first legs are computed in different probability spaces, their sums do not directly translate to a probabilistic interpretation. However, it is still a sensible measure for identifying the set of segments most likely to be second legs.
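Continuing the sketch, the first rounding stage (selecting $T_1$ by the top-$n$ rule) and the second-leg measure $\mathcal{L}_k$ can be computed from the relaxed solution returned by \texttt{solve\_qcp}; the selection of $T_2$ by the analogous top-$n$ rule is described next.

```python
import numpy as np

def pick_first_legs(x, trips, centers, p1_obs, p2_obs):
    """Round the n trips with the largest relaxed x_j up to 1 (the set T_1)."""
    delta1 = sum(p1_obs[c] * sum(1 for t in trips if t.alight == c)
                 for c in centers)
    delta2 = p2_obs * sum(1 for t in trips if t.alight not in centers)
    n = int(round(delta1 + delta2))        # total observed transfers
    order = np.argsort(-np.asarray(x))     # trip indices by decreasing x_j
    return set(order[:n].tolist())         # T_1

def second_leg_measure(y, T1, n_trips):
    """L_k: summed relaxed transfer probabilities into k from first legs."""
    L = np.zeros(n_trips)
    for (j, k), val in y.items():
        if j in T1:
            L[k] += val
    return L
```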
Recall that the one-transfer assumption requires that the number of second legs equals the number of first legs and that $n$ as defined earlier denotes the total observed transfers: The set of second legs can now be defined by those segments $k$ whose measure $\mathcal{L}_k$ is among the $n$ largest values , i.e., $$ T_2 := \{k \in T \setminus T_1 \mid ~~ \big \lvert \{k' \in T\setminus T_1 \mid \mathcal{L}_{k'} \geq \mathcal{L}_k\} \big \rvert \leq n\}. $$ Once the trips are grouped as first-leg, second-leg, or singleton-trip, the probabilities of transferring from a first-leg trip $j$ to any feasible second leg $k \in (T_j \cap T_2)$ are normalized to produce, for each first leg, a well-defined probability distribution over its feasible second legs, i.e., $\forall j \in T_1$, $\sum_{k \in T_j \cap T_2} p_{j,k} = 1$ and $p_{j,k} \geq 0 ~~(k \in T_j \cap T_2)$. The normalized probabilities are calculated as, $$ p_{j,k} = \frac{y_{j,k}}{\sum_{k' \in T_j \cap T_2} y_{j,k'}}~~ \forall k \in T_j \cap T_2, \forall j \in T_1. $$ The O-D matrix can then be estimated elementwise as a probability-weighted sum of trips. The expected number of trips from transit stop $i$ to $i'$ is the sum of two components: The count of singleton trips starting at stop $i$ and ending at $i'$, and the sum of (normalized) probabilities for all feasible two-legged trips whose first leg starts at stop $i$ and the second leg ends at $i'$, i.e., \begin{align*} OD_{i,i'} = &\sum_{j \in T_s} \textbf{1}_{\{b_j=i,~a_j=i'\}} \\ + &\sum_{j \in T_1} \sum_{k \in (T_j \cap T_2) } p_{j,k} \textbf{1}_{\{b_j=i,~a_k=i'\}}. \end{align*} \subsection{The Integer Programming (IP) Model} This section proposes a third approach based on Integer Programming (IP). The key idea behind the IP model is to replace the L2-norm by a L1-norm and to reason about the \textit{observed transfers} instead of the \textit{observed transfer probabilities}. Recall that the observed transfers are defined based on the observed transfer probabilities as $\delta_{1,i}$ for each transit center $i \in C$ and $\delta_2$ for other non-transit stops. Figure \ref{fig:opt2} describes the resulting IP model. The objective function minimizes the absolute differences between the observed and estimated numbers of transfers. The logical constraints are the same as in the QIP, but there is no need to reason about transfer probabilities. The aggregate O-D matrix can then be computed from the optimal solution as for the QIP. \begin{figure*}[!ht] \begin{tabbing} \tabrule\\ 123\=123\=123\=12312312123341234\=12345678901234567\=123\=123\=123\=\kill {\bf Data:} \\ \> $\delta_{1,i} \in \mathbb{N} $ \>\>\> $(i \in C)$ \> --- observed transfers at transit center $i$; \\ \> $\delta_{2} \in \mathbb{N}$ \>\>\>\> --- observed transfers at other stops. 
\\ {\bf Variables:} \\ \> $x_j \in \{0,1\}$ \>\>\> $(j \in T)$ \> --- $x_j = 1$ if trip $j$ transfers; \\ \> $y_{j,k} \in \{0,1\}$ \>\>\> $(j \in T, k \in T_j)$ \> --- $y_{j,k} = 1$ if trip $j$ transfers to $k$; \\ {\bf Objective:} \\ \> minimize ${\displaystyle \sum_{i \in C} \| \delta_{1,i} - \sum_{j \in T \mid a_j = i} x_j \|_1 + \| \delta_2 - \sum_{j \in T \mid a_j \notin C} x_j \|_1}$ \\ {\bf Constraints:} \\ \> ${\displaystyle \sum_{k \in T_j} y_{j,k} = x_j}$ \>\>\>\> $(j \in T)$ \>\>\> (2.1) \\ \> ${\displaystyle \sum_{j \in T \mid k \in T_j}y_{j,k} \leq 1 - x_k}$ \>\>\>\> $(k \in T)$ \>\>\> (2.2) \\ \tabrule \end{tabbing} \caption{The Integer Programming Model for the TIP.} \label{fig:opt2} \end{figure*} \section{Case Study} This section applies the proposed methodology to a case study for the broader Ann Arbor--Ypsilanti region in Michigan and validates the O-D flow matrices estimated from both the rounded QCP and the IP. \begin{figure*}[!t] \includegraphics[width=16cm]{network.png} \centering \caption{Transit Network Operated by AAATA} \label{fig:aaata} \end{figure*} \subsection{Data Description} The data in this study was provided by the Ann Arbor Area Transportation Authority (AAATA), consisting of boarding-only smart-card transactions. The data was collected from the transit network operated by AAATA as depicted in Figure \ref{fig:aaata}, where a total of 58 inbound or outbound routes (denoted by blue lines in Figure \ref{fig:aaata}) connect 1,232 stops including two transit centers, namely Blake Transit Center and Ypsilanti Transit Center. The bus schedule and stop location information were extracted from the GTFS data \cite{TheRide2019}. We conducted two experiments using the Go!pass data and the Period Pass data, respectively. Go!pass is purchased by local businesses at downtown Ann Arbor as a benefit for their employees who can get unlimited bus services provided by AAATA. Period Pass, on the other hand, allows the holders to take unlimited rides on any bus routes within the specified period and offers discounted prices for senior and students \cite{TheRide2019}. There are overall 32,840 transactions for Go!Pass and 43,660 for Period Pass from all weekdays in October, 2017. \subsection{Experimental Settings} In this study, we use the smart-card data to validate the proposed methodology. To be specific, we apply the trip-chaining method on the boarding-only smart-card data to infer alighting stops, identify transfers, and produce a transit O-D matrix to evaluate and validate our optimization models \cite{trepanier2007individual,munizaga2012estimation}. Note that we assume the ground truth is given by the trip-chaining method on the smart-card data. The trip-chaining method exploits unique IDs for passengers to link their consecutive transit trip segments. It relies on two major assumptions: 1) the alighting point of each trip is within walking distance of the consequent boarding location (usually assumed to be in the range of 400 m to 2,000 m); 2) the alighting point of the last boarding of the day for a passenger is adjacent to his/her first boarding stop of the same day. In addition, researchers typically assume a time threshold (e.g., 30 minutes) to identify transfer activities: A passenger is assumed to take a transfer if the interval between the alighting time and the subsequent boarding is less than the specified threshold \cite{devillaine2012detection}. 
For the trip-chaining benchmark, we assume the maximum walking distance is a quarter mile (402 meters), transfer time threshold is 30 min, and the last destination is assumed to be the closest stop to the first origin as suggested in \cite{alsger2016validating}. Recall that the optimization models require all route-level time-stamped O-D matrices collected from a transit network as its input, which can be readily calculated from APC data using the IPF techniques. Therefore, to ensure the validity of the comparison, we process the same smart-card data to generate the route-level O-D matrices by directly aggregating the inferred route-level boarding-alighting pairs for each transaction into time-stamped route-level O-D flows, which will serve as the input for our optimization models. To be consistent with the settings of the benchmark, the optimization models also assume the same maximum walking distance (402 meters) and transfer time thresholds (30 min). There are two transit centers in the region, i.e., one in downtown Ann Arbor, the other in downtown Ypsilanti. We assume that the Blake Transit Center (in Ann Arbor) takes $i = 1$ and the Ypsilanti Transit Center takes $i = 2$. The ground-truth transfer rates computed from the smart-card analysis are: For Go!Pass, $$P_{11}^* = 0.232, ~P_{12}^* = 0.591 ~\textrm{and}~ P_2^* = 0.062;$$ for Period Pass, $$P_{11}^* = 0.588, ~P_{12}^* = 0.554 ~\textrm{and}~ P_2^* = 0.143.$$ \subsection{Geographical Resolutions for Model Evaluation} The estimated O-D matrices by our models are evaluated at various geographical resolutions, i.e., the stop level, the TAZ level (which is widely adopted for transportation planning), and the Transit Analysis Clusters (TACs). TACs are self-defined zones, obtained by using Hierarchical Clustering Analysis (HCA) \cite{HCA2013}. In HCA, each stop is initially assigned to a cluster on its own, and at each step, two most similar clusters (based on distance) are joined until a single cluster is left. The end result is a giant cluster organized in a tree structure. To obtain TACs at different geographical resolutions, it suffices to cut the tree at a given height: The resultant clusters have pairwise distances approximately at that height which is chosen to capture the desired distance threshold. There are three major reasons behind this choice of TACs through HCA. First, riders may choose different origin and destination stops via different bus routes in a day, so aggregating the travel demand of spatially close stops is critical when constructing O-D matrices \cite{luo2017constructing}. Second, the stop-to-stop O-D matrix does not directly reveal the travel demand pattern like a zone-to-zone O-D matrix which shows the equilibrium between demand and supply in the transit system and measures the access or egress times for transit trips \cite{tamblay2016zonal}. Third, traditional TAZs have boundaries on the streets, where most bus stops are located. Hence the TAZs boundaries might create artificial divisions between closely related stops and influence the estimation. We will present the evaluation metrics at stop level, TAZ level and TAC level with varying radius to provide a more comprehensive examination of model performance. \subsection{Experimental Results} The R-squared metric is used to evaluate the accuracy of the O-D flow estimates as in previous studies \cite{Tavassoli2016HowCT} and \cite{EconometricsInTransportation}. 
Given a ground-truth square matrix $OD^{*}$ and its estimation $OD$, both of which with size of $n \times n$, the R-squared metric is defined as $$ \theta = 1 - \frac{\sum_{i,j=1}^n (OD^{*}_{ij} - OD_{ij})^2 }{\sum_{i,j=1}^n (OD^{*}_{ij} - \overline{OD^{*}})^2 }. $$ \noindent where $\overline{OD^{*}}$ denotes the average of all entries of $OD^{*}$. The R-squared metric can be interpreted as the percentage of total variability (in terms of sum of squared errors) that can be explained by the estimated matrix, compared to a mean model. A higher R-squared value generally indicates a better estimation. As discussed, the models are evaluated at different geographical resolutions: The stop level, the TAZ level, and the TAC level with radius ranging from 0.5 miles to 2 miles in increments of 0.5 miles. The stop level can also be seen as a TAC level with radius of 0 mile. The results at the TAZ level are now summarized. For the Go!Pass data, the R-squared evaluated against the ground-truths inferred from trip-chaining methods is equal to 88.71\% for the rounded QCP model and 95.57\% for the IP model. For the Period Pass data, the R-squared reaches 67.15\% for rounded QCP model and 85.06\% for the IP model. Figure 5 illustrates the R-Squared evaluated using both Go!Pass and Period Pass data for the radius of TAC ranging from 0 mile to 2 miles. \begin{figure*}[t!] \includegraphics[width=16cm]{rsquared.png} \centering \caption{R-Squared Values at Varying Radius of TACs.} \label{fig:rsq} \end{figure*} Both approaches can be solved efficiently on a personal computer of i7-7500U CPU with Gurobi 8.0 through Python 3.6. The rounded QCP approach takes 121.78 seconds on Go!Pass data (with 32,840 transactions) and 156.96 seconds on Period Pass data (with 43,660 transactions); the IP approach solves faster with a computational time of 2.34 seconds on Go!Pass data and 5.97 seconds on Period Pass data. There are three key general observations. First, the IP approach performs significantly better than rounded QCP approach on both data sets. Second, the predictive power of both approaches improves with increasing TAC radius but the rate of improvement typically decreases as the clusters expand in their radius. The Rounded QCP approach on Go!Pass increases monotonically from 84.70\% at 0-mile radius to 94.92\% at 2-mile radius. A similar trend can be observed from the IP approach on the Go!Pass data where $\theta$ increases monotonically from 92.39\% at 0-mile radius to 97.91\% at 2-mile radius. For both models, the same behavior is also observed on the Period Pass data. Furthermore, note that the accuracy of 1-mile TAC is an important turning point as shown in Figure \ref{fig:rsq}---the improvement rate below the 1-mile threshold is significantly larger than above it. Third, the model performance on the Period Pass data is worse than the Go!Pass data, which might result from the irregular travel behavior of Period Pass users. Recall that the Go!Pass is a special pass purchased by companies located in downtown areas for their daily commuting employers, who tend to have more consistent and predictable commuting patterns during the weekdays. Therefore, for the Go!Pass data, the daily variation in transfers at each stop may be less and can be better described by a single observed long-term transfer rate. In comparison, Period Pass targets a more general audience and offers special discounts for the seniors and students who tend to take more spontaneous and thus less predictable trip chains. 
It is also observed that the true transfer rates for some days in the Period Pass data deviate dramatically from the long-term observation used in the optimization models. Also, the transfer rates for Period Pass is much higher than Go!Pass, which might lead to more feasible transfers for each first-leg trips and add to the modeling complexity. These discussions might indicate an inherent limitation in modeling transfer identification with inadequate information. Without unique identifier of passengers, it is expected that when dealing with extremely high transfer probabilities (such as 50\% or above), the aggregate-level variability is difficult to capture in detail. \section{Conclusion} This paper presented optimization models to estimate the transit network O-D flow matrix based on time-stamped and location-based boarding and alighting counts, and observed or estimated proportions of transferring passengers at each transit center and other non-transit centers. It proposed a QIP approach, a two-stage approach based on a QCP relaxation of the QIP and a feasible rounding procedure, and an IP model that replaces the L2-norm of the QIP by a L1-norm. While the QIP is not tractable for real data sets, the QCP and IP approaches can be solved efficiently for the transit data provided by AAATA. Moreover, the IP model is superior to the QCP approach in terms of accuracy. When measuring against the ground-truth as calculated from trip-chaining methods using the R-squared metric, the IP model can achieve up to 95.59\% at the TAZ level and 96.99\% at the 1-mile self-defined TAC level for the Go!Pass data (which exhibits more consistent travel behavior) and 85.06\% at the TAZ level and 90.61\% at the 1-mile TAC level for the Period Pass data (which exhibits more irregular travel patterns). There is also a clear improvement in predictive accuracy with lower spatial resolution. The results suggest that the IP model can produce accurate estimation for applications requiring varying levels of spatial resolutions. In particular, the IP model can meet the needs for tasks ranging from predicting O-D flow among bus stops to constructing a zone-level transit-trip O-D matrix to inform future transit planning. The results indicate the IP model is especially promising for cases with moderate or relatively low transfer rates and for populations with consistent transfer patterns. It is because the observed transfer probabilities are used as benchmarks, against which the deviation of estimated transfer probabilities is minimized. Therefore, the capability of observed transfer probabilities to capture a consistent pattern for transfer activities is a critical factor for accurate modeling. It is recommended to apply the transfer probabilities observed or surveyed during a relatively long period (i.e., monthly or yearly), which reduces the potential inaccuracy brought by daily variations. If such information is absent, the expert judgment from transit operators could be used instead. This study can be further developed from the following perspectives. First, the IP model directly applies the parameters for behavioral assumptions as suggested in \cite{alsger2016validating} for their case study on South-East Queensland public transport network in Australia. However, the parameters for such behavioral assumptions might differ across case studies due to differences in transit systems, built environments and socio-demographics of the regions under analysis. 
More comprehensive studies to validate the proposed methodology would welcome experimental results on more case studies or a sensitivity analysis on the choice of maximally allowed distance and transfer time. Second, the current IP formulation can be easily extended to account for multiple transfers. Future work can verify the effectiveness of integer programming modeling multiple transfers. Also, the optimization models for transfer identification may have many symmetric solutions, leading to large deviations when transfer probabilities are high. As a result, it is important to clearly identify these equivalent solutions and conduct closer case-dependent analysis to obtain a more accurate prediction of the O-D pairs. This is a key direction for future research. The methodology proposed in this study mainly serves to extend the current analysis of AVL/APC data and produce a network-level O-D matrix to inform transportation planning. Our models will also be suitable for analyzing smart-card data or Automatic Farebox Collection (AFC) data with hidden unique identifiable information. Recently, as stated in \cite{pelletier2011smart}, the use of smart-card data has raised privacy concerns. One major problem is the vulnerability of the central database which stores smart-card transactions and user information, especially when the data is used for multiple purposes and accessible by multiple groups. Withholding unique ID information when releasing the data to third parties could significantly reduce the risk of private information disclosure. However, lacking unique ID information would prevent the use of the trip-chaining methods. Therefore, the transfer identification model provides a tangible tool for estimating travel demand for such data at an aggregate level. \section*{Acknowledgment} This research is funded by the Michigan Institute of Data Science (MIDAS) and by Grant 7F-30154 from the Department of Energy. The authors would like to thank Forest Yang from the AAATA for his assistance in providing the data. Findings presented in this paper do not necessarily represent the views of the funding agencies. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Introduction} The transit origin-destination (O-D) matrix is a major input for public transit agencies to conduct scheduling and operations planning, long-term planning and design, performance analysis, and market evaluations. However, the traditional data sources for transit O-D matrices are transit on-board surveys, a time-consuming and labor-intensive collection process that is prone to sampling errors. Fortunately, recent years have witnessed a growing interest in building transit O-D matrices from data sources that are automatically collected through intelligent transportation systems, such as Automatic Vehicle Location (AVL), Automatic Passenger Count (APC), and Automatic Fare Collection (AFC) systems \cite{Iliopoulou2019ITStransportplanning}. In particular, automatic passenger counter is an electronic device usually installed on transit vehicles which records boarding and alighting data. Enabled by technologies such as infrared or light beams, digital cameras, thermal imaging and ultrasonic detection, the collected data is of high accuracy and can be easily validated \cite{Wiki2019APC,ITS2019APC}. 
APC technology is commonly deployed with AVL technology, which provides access to real-time transit vehicle dispatching and tracking data through information technology and Global Positioning Systems \cite{Wiki2019AVL}. Most transit agencies have installed APC systems on at least 10\% to 15\% of their bus fleet \cite{Furth2005APCmainstream}, and AVL is expected to be present in most fixed-route systems \cite{Wiki2019AVL}, as well as bus rapid transit systems \cite{Parker2008AVLUpdates}. The abundant AVL/APC data can jointly link passenger data to vehicle location \cite{Wiki2019APC} and thus offer a rich source of information in both spatial and temporal dimensions. Ever since the proliferation of AVL/APC technologies, researchers have worked on the conceptualization and development of methodologies to exploit the large-scale transit data they collect for use in transit performance analysis and travel demand modeling. Traditionally, such data have contributed greatly to transit performance analysis and service management, mainly addressing the problems of determining vehicle loads or run times \cite{Tetreault2010RunTime}, diagnosing or improving transit system performance \cite{Mandelzys2010AVLAPC, Furth2003AVLAPC}, and analyzing transit ridership \cite{Furth2003AVLAPC, Golani2007AVLAPC}. Recently, many efforts have been directed to O-D flow estimation based on AVL/APC data. The Iterative Proportional Fitting (IPF) procedure is one of the most widely accepted methodologies: It aims at estimating the population O-D flow on each transit route based on sampled stop-level boarding and alighting counts. Deming and Stephan \cite{Deming1940IPF} first proposed a procedure to adjust sampled frequency tables with known marginal totals obtained from different sources. The method is a good fit for route-level O-D estimation, where a base matrix sampled from on-board surveys is adjusted by boarding and alighting counts at each stop. Ben-Akiva \cite{BenAkiva1985firstIPF} then showed IPF to be cost-effective for route-level O-D table estimation when combined with on-board survey data. However, IPF is limited by its dependence on a base matrix constructed from on-board surveys. The vast amount of AVL/APC data made available recently reduces this dependence in many ways, including by enabling more cost-effective choices of base matrices and by inspiring new methods that require little survey data. Using the APC data made available by the campus transit service at Ohio State University, McCord et al. \cite{McCord2010routeIPF} demonstrated that the IPF procedure, even with a non-informative, null base matrix, can achieve O-D estimates comparable to those of on-board surveys. Empirical results then showed that, assuming no \textit{a priori} estimate of O-D flows, an arbitrary base matrix adjusted iteratively with APC data can achieve higher accuracy than the null base matrix \cite{Ji2014IterativeIPF}. Moreover, Ji et al. \cite{Ji2015Survey&APC} developed a heuristic expectation-maximization method using APC and on-board survey data, which was shown to outperform the IPF procedure when little survey data is present. In \cite{Ji2015StatsRouteCounts}, a Markov chain Monte Carlo simulation approach is proposed to infer route-level O-D flows from large amounts of APC data alone and validated through a numerical test. These advances in exploiting AVL/APC data allow for accurate extrapolation of O-D flows along a specific route even in the absence of costly, time-consuming and error-prone survey data.
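To make the IPF procedure concrete, the following is a minimal sketch of the Deming-Stephan iteration for a single route. It is an illustration only, not the implementation used in the studies cited above; the toy base matrix and marginals are hypothetical.

\begin{verbatim}
import numpy as np

def ipf(base, boardings, alightings, iters=100):
    """Iterative proportional fitting of a route-level O-D base matrix.

    base[i, j]  -- prior flow from boarding stop i to alighting stop j
    boardings   -- observed boarding counts per stop (row marginals)
    alightings  -- observed alighting counts per stop (column marginals)
    """
    od = base.astype(float).copy()
    for _ in range(iters):
        with np.errstate(divide="ignore", invalid="ignore"):
            # Scale rows to match the boarding counts.
            r = np.where(od.sum(axis=1) > 0, boardings / od.sum(axis=1), 0.0)
            od *= r[:, None]
            # Scale columns to match the alighting counts.
            c = np.where(od.sum(axis=0) > 0, alightings / od.sum(axis=0), 0.0)
            od *= c[None, :]
    return od

# Toy route with 3 stops; the base matrix is upper triangular since
# passengers alight strictly after they board.
base = np.triu(np.ones((3, 3)), k=1)
print(ipf(base, np.array([5.0, 3.0, 0.0]), np.array([0.0, 2.0, 6.0])))
\end{verbatim}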
However, it is important to realize that the route-level O-D matrix, though helpful to transit planning in terms of designing route patterns and service frequencies \cite{McCord2010routeIPF}, does not reflect the true O-D flows of transit passengers. The route-level O-D matrix only represents the flow distribution along a single route, while the true passenger trajectories might include additional travel to and from stops outside this route \cite{Ji2015StatsRouteCounts}. There is thus a clear need to infer transit network O-D flow matrices from AVL/APC data. However, state-of-the-art analysis techniques targeting the transit network level are still lacking. As APC data only record the time-stamped number of stop-level boardings and alightings for each route, they cannot differentiate initial and transfer boardings \cite{chu2004ridership}. Also, most route-level O-D estimation methods cannot generalize to the network level if only AVL/APC data is available \cite{McCord2010routeIPF,Ji2015Survey&APC,Ji2015StatsRouteCounts}. Thus a transfer identification algorithm is required to resolve this issue and to produce an overall O-D matrix on the transit network. The goal of this paper is thus to propose novel optimization models to identify transfer activities from AVL/APC data and observed proportions of transferring passengers at transit centers and other stops. As a result, the paper addresses a limitation of methodologies for APC data analysis, extending state-of-the-art route-level O-D matrix estimation and producing a network-level O-D matrix. Observe that the O-D matrix is not intended to predict or analyze individual activities, but rather to assess travel behavior and demand at an aggregate level. This is reasonable given that the aggregate nature of APC data already makes it challenging to accurately recover the path choices of each rider \cite{Ji2015Survey&APC, Ji2015StatsRouteCounts}. Moreover, many applications in transit planning do not need individual-level activities, but need aggregate-level travel demand information, e.g., at the Traffic Analysis Zone (TAZ) level or other self-defined geographical boundaries. The optimization models proposed in this paper generate aggregate O-D matrices at different geographical resolutions and hence can inform future transit planning and investment decisions. The rest of the paper is organized as follows. The next section defines the problem with a specification of available input data and desired output data. The following section proposes three optimization models to solve the transfer identification problem. These models are then evaluated on a case study. The paper concludes by summarizing the results and discussing their practical applications in transportation planning. \section{Problem Definition} The time-stamped route O-D flow information is made accessible to transportation planners and researchers through the wide adoption of APC technologies for raw data collection and well-developed methodologies for estimating population route O-D flows \cite{McCord2010routeIPF, Ji2014IterativeIPF, Ji2015Survey&APC, Ji2015StatsRouteCounts}. The obtained boarding-alighting pairs or route-level O-D flows do not equate to O-D transit trips \cite{McCord2010routeIPF}. Indeed, due to transfer activities, many actual trips may contain several boarding-alighting pairs, each of which is called a trip segment or trip leg.
A trip can be represented as a single segment or a sequence of segments; the former is referred to as a singleton trip in this study. Each trip segment in the latter is described by a corresponding ordinal number; for example, a trip with two segments has a first leg and a second leg. Accordingly, there is a need for transfer identification when integrating O-D flows on different routes to generate the transit network O-D matrix. This Transfer Identification Problem (TIP) is the focus of this paper, and the rest of this section specifies its input and output. \subsection{Input Specification} Since transfers are based on individual activities, identifying them first requires disaggregation of time-stamped route O-D flows into individual records, each specifying the route, the boarding and alighting times and stops. Equivalently, it requires the time-stamped passenger counts describing the demand at a stop as trip origin or destination, and the route O-D matrices specifying the distribution of alighting stops for all boarding passengers at a stop. The model also needs access to the bus schedules for each route and the stop locations on all routes. In addition, the model assumes the availability of observed or estimated proportions of transfers\footnote{The terms transfer probabilities and transfer rates are used interchangeably for the proportions of transfers.}. Such data can be obtained directly from relatively long-term observations at stop terminals or estimated by transportation professionals based on years of experience. This data collection process outperforms on-board surveys in terms of ease of implementation and reliability: Enumerating the passengers leaving the terminals requires less effort than conducting detailed surveys of individual passengers, and it is more reliable than on-board surveys because it reflects the whole population rather than a sample of passengers on selected routes. Moreover, the observed transfer probabilities can be obtained by averaging over a large number of historical observations, and are thus less prone to inaccurate or misrecorded data entries for individual passengers. \subsection{Transfer Assumptions} The transfer activities modeled in this work are characterized by three behavioral assumptions, as stated, justified and discussed below. \begin{assumption} Transfer activities that happen at transit centers should be differentiated from those occurring at non-transit centers. \end{assumption} This follows from the observation that more transfer trips are expected at transit centers, which are designed to be served by multiple bus or rail routes synchronized to facilitate transfers. Consequently, the observed transfer probabilities at transit centers are expected to be more significant than at other stops. The model thus evaluates them separately. \begin{assumption} A transfer between two trip segments is only feasible when the following three conditions are satisfied. First, the two trip segments linked by a transfer must not belong to the same route. Second, passengers only transfer within given thresholds for walking distance and transfer time. Third, transfers are directional, so the prior trip segment must have ended before the boarding of the subsequent one. \end{assumption} These assumptions on travel behavior are widely applied and tested for generating trip chains \cite{Alsger2015SmartCard, trepanier2007individual, barry2002origin, munizaga2012estimation}.
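As an illustration of how Assumption 2 translates into code, the sketch below checks the three feasibility conditions for a candidate pair of segments; the data structure, field names and distance helper are hypothetical placeholders rather than the paper's actual implementation.

\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Segment:
    route: str       # route served by this segment
    board: tuple     # (lat, lon) of the boarding stop
    alight: tuple    # (lat, lon) of the alighting stop
    t_board: float   # boarding time, in minutes
    t_alight: float  # alighting time, in minutes

def feasible_transfer(j, k, dist, d_max=0.25, t_max=30.0):
    """True iff segment k is a feasible second leg after segment j."""
    return (j.route != k.route                       # different routes
            and dist(j.alight, k.board) < d_max      # within walking distance
            and 0 < k.t_board - j.t_alight < t_max)  # directional and timely
\end{verbatim}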
To verify the feasibility of a transfer between any two trips in terms of walking distance and transfer time, the stop locations and bus schedules are required, which are often made publicly available by local regulators. In particular, this information can be extracted from the General Transit Feed Specification (GTFS) data, a common format for public transportation schedules and the associated geographic information \cite{GoogleTransitAPIs2019}. Notably, the route and stop ID information can jointly map each bus stop to a geographical location, and the alighting times can be calculated from boarding times and bus schedules. \begin{assumption} Passengers transfer at most once. \end{assumption} In general, most trips in a bus-based transit network involve either no transfer or a single transfer; this study therefore adopts the \textit{one-transfer assumption}. The model, however, can easily handle the more general case (i.e., two or more transfers) if needed. \subsection{Output Specification} The desired output for this study is a transit-network-level O-D flow count matrix with each origin or destination at the stop level. The optimization models to be presented identify each trip segment as either a singleton trip or a trip leg. The O-D matrix can be calculated elementwise, and its $(i,j)^{th}$ entry represents the flow estimate from transit stop $i$ to $j$: Its value is the sum of the number of singleton trips that start at $i$ and end at $j$, and the number of multi-legged trips whose first leg starts at $i$ and last leg ends at $j$. It is also possible to construct the network O-D flows between origins and destinations at varying geographical resolutions based on the stop-level matrices. \section{Methodology} This section presents optimization models to compute the aggregate O-D matrix. The optimization models do not compute the O-D matrix directly; instead, they solve the Transfer Identification Problem (TIP), which identifies whether each trip segment is followed by a transfer, in which case the next trip segment is also identified. It is then simple to use the TIP solution to compute the aggregate O-D matrix. The optimization models choose the values of the corresponding decision variables in order to minimize the distance between the observed and estimated transfer probabilities at transit stops, subject to the transfer assumptions discussed earlier. This section presents three approaches for solving the TIP: A Quadratic Integer Program (QIP), a two-stage approach based on a continuous relaxation of the QIP and rounding, and an Integer Program (IP). \subsection{A QIP Model for the TIP} Figure \ref{fig:opt} presents the QIP formulation for the TIP: the model specifies the setup and parameters, the decision variables, the objective function and the constraints, which are now discussed in detail.
\begin{figure*}[!t] \begin{tabbing} \tabrule\\ 123\=123\=123\=12312312123341234\=12345678901234567\=123\=123\=123\=\kill {\bf Data:} \\ \> $T$: set of recorded trips; \\ \> $C$: set of transit centers; \\ \> $p_{1,i}^*$: observed transfer probability at transit center $i$; \\ \> $p_{2}^*$: observed transfer probability at stops other than transit centers; \\ \> $t$: maximal transfer interval time in minutes; \\ \> $d$: maximal transfer walking distance in miles; \\ \> $\forall j \in T:$ \\ \>\> $l_j$: route of trip $j$; \\ \>\> $b_j$: boarding stop of trip $j$; \\ \>\> $a_j$: alighting stop of trip $j$; \\ \>\> $s_j$: boarding time of trip $j$; \\ \>\> $t_j$: alighting time of trip $j$; \\ \>\> $T_j := \{k \in T \mid l_j \neq l_k, ~ dist(a_j, b_k) < d, ~ 0 < s_k - t_j < t \} $, set of possible transfers from $j$. \\ {\bf Variables:} \\ \> $x_j \in \{0,1\}$ \>\>\> $(j \in T)$ \> --- $x_j = 1$ if $j$ is a first leg; \\ \> $y_{j,k} \in \{0,1\}$ \>\>\> $(j \in T, k \in T_j)$ \> --- $y_{j,k} = 1$ if trip segment $j$ transfers to $k$; \\ \> $p_{1,i} \in [0,1]$ \>\>\> $(i \in C)$ \> --- calculated transfer probability at $i$; \\ \> $p_{2} \in [0,1]$ \>\>\> --- calculated transfer probability at stops other than transit centers. \\ {\bf Objective:} \\ \> minimize ${\displaystyle \sum_{i \in C} \; (p_{1,i} - p_{1,i}^*)^2 + (p_{2} - p_{2}^*)^2}$ \\ {\bf Constraints:} \\ \> ${\displaystyle \sum_{k \in T_j} y_{j,k} = x_j}$ \>\>\> $(j \in T)$ \>\> (0.1) \\ \> ${\displaystyle \sum_{j \in T \mid k \in T_j}y_{j,k} \leq 1 - x_k}$ \>\>\> $(k \in T)$ \>\> (0.2) \\ \> ${\displaystyle p_{1,i} = \frac{\sum_{j \in T \mid a_j = i} \ x_j}{\mid \{j \in T \mid a_j = i \} \mid }}$ \>\>\> $(i \in C)$ \>\> (0.3) \\ \> ${\displaystyle p_{2} = \frac{\sum_{j \in T \mid a_j \notin C} \ x_j}{\mid \{j \in T \mid a_j \notin C \} \mid}}$ \>\>\>\>\> (0.4) \\ \tabrule \end{tabbing} \caption{The QIP Model for the TIP.} \label{fig:opt} \end{figure*} The model is defined over the set $T$ of trip segments and the set $C$ of transit centers. Each trip segment $j \in T$ is characterized by its boarding stop $b_j$, alighting stop $a_j$, boarding time $s_j$, alighting time $t_j$, and route $l_j$. Two sets of decision variables are associated with each segment: Binary variable $x_j$ is 1 if and only if trip segment $j$ has a transfer, and binary variable $y_{j,k}$ is 1 if and only if segment $j$ transfers to segment $k$. Recall that a transfer between two segments $j$ and $k$ has positive probability if and only if they are on different routes, the maximum walking distance and transfer time constraints are satisfied, and trip segment $k$ starts after $j$ ends. These conditions can be expressed as $$ l_j \neq l_k \ \& \ dist(a_j, b_k) < d \ \& \ 0 < s_k - t_j < t, $$ \noindent where the function $dist(\cdot,\cdot)$ is a metric (e.g., the geodesic, Euclidean, or Manhattan distance), and $d$ and $t$ are parameters denoting the maximum allowed walking distance and transfer time. For each segment $j$, the set of feasible second legs from $j$ is denoted by $T_j$. Constraint (0.1) specifies that there is a transfer after trip segment $j$ (i.e., $x_j = 1$) if and only if there is exactly one transfer from this trip to one of its feasible successors (i.e., exactly one of $y_{j,k} = 1$ for $k \in T_j$). Constraint (0.2) states the one-transfer assumption: If segment $j$ transfers to segment $k$ (i.e., $y_{j,k} = 1$), then $k$ cannot make any subsequent transfers (i.e., $x_k = 0$).
Conversely, if segment $k$ has a transfer (i.e., $x_k = 1$), then any feasible prior leg $j$ (i.e., all $j ~\textrm{such that}~ k \in T_j$) cannot transfer to $k$ (i.e., $y_{j,k} = 0$). Constraints (0.3) and (0.4) respectively compute the estimated transfer probabilities at each transit center and at all other stops, where $\sum_{j \in T \mid a_j = i} \ x_j$ represents the number of first legs transferring at transit center $i$ and $\mid \{j \in T \mid a_j = i \} \mid$ counts the total number of segments ending at transit center $i$. Recall that the goal of the TIP is to select potential transfers for each trip segment so that the aggregate-level transfer probabilities given by Constraints (0.3) and (0.4) are as close as possible to the observed transfer probabilities. In the QIP, closeness is measured with an L2-norm. The sets of first legs, second legs, and singleton trips are denoted respectively by $T_1$, $T_2$ and $T_s$, and they form a partition of the set $T$. They are defined explicitly as follows: \begin{align*} T_1 &:= \{j \in T \mid x_j = 1\} \\ T_2 &:=\{k \in T \mid y_{j,k} = 1 ~\textrm{for some}~ j \in T\} \\ T_s &:= T \setminus (T_1 \cup T_2) \end{align*} The O-D matrix can then be estimated elementwise from the QIP solution. Each entry $(i, i')$ of the O-D matrix records the expected number of trips from transit stop $i$ to $i'$ and is a sum of two components: The number of singleton trips starting at stop $i$ and ending at $i'$, and the number of two-legged trips whose first leg starts at stop $i$ and second leg ends at $i'$. Let $\textbf{1}_{ \{ \cdot \} }$ be an indicator function which equals 1 when the statement is true and 0 otherwise. Then the O-D matrix can be computed as follows: \begin{align*} OD_{i,i'} = &\sum_{j \in T_s} \textbf{1}_{\{b_j=i,~a_j=i'\}} \\ + &\sum_{j \in T_1} \sum_{k \in (T_j \cap T_2)} y_{j,k} \textbf{1}_{\{b_j=i,~a_k=i'\}}. \end{align*} The proposed QIP formulation is applied to the case study to be introduced in Section IV and shown to have severe scalability issues: It cannot be solved by Gurobi \cite{gurobi}, a state-of-the-art commercial optimization solver, within 24 hours. In general, the QIP formulation does not have guaranteed tractability due to the (potentially) quadratic number of variables and the large size of the data sets, even for small-size cities. \subsection{Rounding the Continuous QIP Relaxation} This section explores a scalable two-stage approach which consists of (1) solving the continuous relaxation of the QIP and (2) rounding the solution of the continuous relaxation to derive a feasible binary substitute. The continuous relaxation relaxes the domain of the variables from the set $\{0,1\}$ to the interval $[0,1]$, producing a Convex Quadratic Program (QCP), which can be solved efficiently. The QCP solution for the variables $x_j$ and $y_{j, k}$ $(j \in T, k \in T_j)$ now assigns values in the range $[0,1]$ to the decision variables, which can thus be interpreted as the probability of having a transfer after segment $j$ and the probability that segment $k$ is the second leg of that transfer. The QCP relaxation is depicted in Figure \ref{fig:opt1} and mimics the QIP.
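For concreteness, the sketch below shows how the QIP of Figure \ref{fig:opt} could be assembled with gurobipy; the data containers are hypothetical, and the authors' actual implementation is not shown. Setting relax=True yields the continuous relaxation discussed next, and replacing the quadratic objective by absolute deviations from the observed transfer counts gives the IP model presented later.

\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def build_tip(T, Tj, ends_at, C, p1_star, p2_star, relax=False):
    """T: segment ids; Tj[j]: feasible successors of j; ends_at[i]:
    segments alighting at transit center i (assumed non-empty);
    p1_star[i], p2_star: observed transfer probabilities."""
    m = gp.Model("TIP")
    vt = GRB.CONTINUOUS if relax else GRB.BINARY
    x = m.addVars(T, vtype=vt, ub=1.0, name="x")
    y = m.addVars([(j, k) for j in T for k in Tj[j]],
                  vtype=vt, ub=1.0, name="y")
    # (0.1): j is a first leg iff it transfers to exactly one successor.
    m.addConstrs(y.sum(j, "*") == x[j] for j in T)
    # (0.2): a segment that receives a transfer cannot transfer again.
    for k in T:
        preds = [j for j in T if k in Tj[j]]
        if preds:
            m.addConstr(gp.quicksum(y[j, k] for j in preds) <= 1 - x[k])
    # (0.3)-(0.4) substituted into the L2 objective.
    at_center = set().union(*(set(ends_at[i]) for i in C))
    others = [j for j in T if j not in at_center]
    obj = gp.QuadExpr()
    for i in C:
        p1 = (1.0 / len(ends_at[i])) * gp.quicksum(x[j] for j in ends_at[i])
        obj += (p1 - p1_star[i]) * (p1 - p1_star[i])
    p2 = (1.0 / len(others)) * gp.quicksum(x[j] for j in others)
    obj += (p2 - p2_star) * (p2 - p2_star)
    m.setObjective(obj, GRB.MINIMIZE)
    return m, x, y
\end{verbatim}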
\begin{figure*}[!ht] \begin{tabbing} \tabrule\\ 123\=123\=123\=12312312123341234\=12345678901234567\=123\=123\=123\=\kill {\bf Variables:} \\ \> $x_j \in [0,1]$ \>\>\> $(j \in T)$ \> --- probability of $j$ being a first leg; \\ \> $y_{j,k} \in [0,1]$ \>\>\> $(j \in T, k \in T_j)$ \> --- probability of $j$ transferring to $k$; \\ \> $p_{1,i} \in [0,1]$ \>\>\> $(i \in C)$ \> --- calculated transfer probability at $i$; \\ \> $p_{2} \in [0,1]$ \>\>\> --- calculated transfer probability at stops other than transit centers. \\ {\bf Objective:} \\ \> minimize ${\displaystyle \sum_{i \in C} \; (p_{1,i} - p_{1,i}^*)^2 + (p_{2} - p_{2}^*)^2}$ \\ {\bf Constraints:} \\ \> ${\displaystyle \sum_{k \in T_j} y_{j,k} = x_j}$ \>\>\> $(j \in T)$ \>\> (1.1) \\ \> ${\displaystyle \sum_{j \in T \mid k \in T_j}y_{j,k} \leq 1 - x_k}$ \>\>\> $(k \in T)$ \>\> (1.2) \\ \> ${\displaystyle p_{1,i} = \frac{\sum_{j \in T \mid a_j = i} \ x_j}{\mid \{j \in T \mid a_j = i \} \mid }}$ \>\>\> $(i \in C)$ \>\> (1.3) \\ \> ${\displaystyle p_{2} = \frac{\sum_{j \in T \mid a_j \notin C} \ x_j}{\mid \{j \in T \mid a_j \notin C \} \mid}}$ \>\>\>\>\> (1.4) \\ \tabrule \end{tabbing} \caption{The QCP Relaxation of the TIP.} \label{fig:opt1} \end{figure*} To obtain an aggregate O-D matrix, it is necessary to round the variables and assign them binary values. The second stage is based on a feasible rounding strategy that proceeds as follows. First, the variables of the segments likely to have a transfer are rounded to 1 by choosing a threshold $x^* \in [0,1]$ and selecting those variables whose value in the QCP relaxation exceeds the threshold, i.e., $$ \hat{x}_j = \begin{cases} 1, & \textrm{if}~x_j \geq x^* \\ 0, & \textrm{otherwise.} \end{cases} $$ \noindent The threshold is chosen so that the $n$ variables $x_j~(j \in T)$ with the largest probabilities of having a transfer are rounded up to 1 and the rest are rounded down to 0. For notational simplicity, define the observed transfers $\delta_{1,i}$ at transit center $i$ and $\delta_2$ at the other stops, together with the total number of transfers $n$, from the observed transfer probabilities as follows: $$ \begin{array}{ll} \delta_{1,i} &= p^*_{1,i} \times |\{j \in T \mid a_j = i \}| \\ \delta_2 &= p^*_2 \times |\{j \in T \mid a_j \notin C \}| \\ n &= \sum_{i \in C} \delta_{1,i} + \delta_2. \end{array} $$ The threshold $x^*$ can then be derived as $$ x^* = \min_{j \in T} x_j ~\textrm{s.t.}~ |\{j' \in T \mid x_{j'} \geq x_j \}| \leq n, $$ and the set of first legs consists of all trip segments indexed by $j \in T$ with the corresponding variable $x_j \geq x^*$, that is, $$ T_1 := \{ j \in T \mid \hat{x}_j = 1 \} = \{ j \in T \mid x_j \geq x^* \}. $$ It remains to determine the set of second legs and the set of singleton trips. The likelihood that a segment is a second leg can be approximated by summing the transfer probabilities from all of its possible first legs, i.e., $\mathcal{L}_k := \sum_{j \in T_1} y_{j,k} ~(k \in T \setminus T_1)$. Note that, since the transfer probabilities from different first legs are computed in different probability spaces, their sums do not directly admit a probabilistic interpretation. However, $\mathcal{L}_k$ is still a sensible measure for identifying the set of segments most likely to be second legs.
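The first-stage selection can be read directly off the relaxed solution. Below is a small sketch of this step; the containers are hypothetical, and the observed total $n$, which need not be integral, is rounded to a count.

\begin{verbatim}
def select_first_legs(x_vals, delta1, delta2):
    """x_vals: {j: relaxed value of x_j}; delta1: {i: observed
    transfers at transit center i}; delta2: observed transfers at
    the other stops. Returns the set T1 and the threshold x*."""
    n = round(sum(delta1.values()) + delta2)  # total observed transfers
    # Keep the n segments with the largest relaxed probabilities.
    ranked = sorted(x_vals, key=x_vals.get, reverse=True)
    T1 = set(ranked[:n])
    x_star = x_vals[ranked[n - 1]] if n > 0 else 1.0  # threshold x*
    return T1, x_star
\end{verbatim}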
Recall that the one-transfer assumption requires that the number of second legs equal the number of first legs and that $n$, as defined earlier, denotes the total number of observed transfers: The set of second legs can now be defined by those segments $k$ whose measure $\mathcal{L}_k$ is among the $n$ largest values, i.e., $$ T_2 := \{k \in T \setminus T_1 \mid ~~ \big \lvert \{k' \in T\setminus T_1 \mid \mathcal{L}_{k'} \geq \mathcal{L}_k\} \big \rvert \leq n\}. $$ Once the trip segments are classified as first legs, second legs, or singleton trips, the probabilities of transferring from a first-leg trip $j$ to any feasible second leg $k \in (T_j \cap T_2)$ are normalized to produce, for each first leg, a well-defined probability distribution over its feasible second legs, i.e., $\forall j \in T_1$, $\sum_{k \in T_j \cap T_2} p_{j,k} = 1$ and $p_{j,k} \geq 0 ~~(k \in T_j \cap T_2)$. The normalized probabilities are calculated as $$ p_{j,k} = \frac{y_{j,k}}{\sum_{k' \in T_j \cap T_2} y_{j,k'}}~~ \forall k \in T_j \cap T_2, \forall j \in T_1. $$ The O-D matrix can then be estimated elementwise as a probability-weighted sum of trips. The expected number of trips from transit stop $i$ to $i'$ is the sum of two components: The count of singleton trips starting at stop $i$ and ending at $i'$, and the sum of the (normalized) probabilities of all feasible two-legged trips whose first leg starts at stop $i$ and second leg ends at $i'$, i.e., \begin{align*} OD_{i,i'} = &\sum_{j \in T_s} \textbf{1}_{\{b_j=i,~a_j=i'\}} \\ + &\sum_{j \in T_1} \sum_{k \in (T_j \cap T_2) } p_{j,k} \textbf{1}_{\{b_j=i,~a_k=i'\}}. \end{align*} \subsection{The Integer Programming (IP) Model} This section proposes a third approach based on Integer Programming (IP). The key idea behind the IP model is to replace the L2-norm by an L1-norm and to reason about the \textit{observed transfers} instead of the \textit{observed transfer probabilities}. Recall that the observed transfers are defined from the observed transfer probabilities as $\delta_{1,i}$ for each transit center $i \in C$ and $\delta_2$ for the other stops. Figure \ref{fig:opt2} describes the resulting IP model. The objective function minimizes the absolute differences between the observed and estimated numbers of transfers. The logical constraints are the same as in the QIP, but there is no need to reason about transfer probabilities. The aggregate O-D matrix can then be computed from the optimal solution as for the QIP. \begin{figure*}[!ht] \begin{tabbing} \tabrule\\ 123\=123\=123\=12312312123341234\=12345678901234567\=123\=123\=123\=\kill {\bf Data:} \\ \> $\delta_{1,i} \in \mathbb{N} $ \>\>\> $(i \in C)$ \> --- observed transfers at transit center $i$; \\ \> $\delta_{2} \in \mathbb{N}$ \>\>\>\> --- observed transfers at other stops.
\\ {\bf Variables:} \\ \> $x_j \in \{0,1\}$ \>\>\> $(j \in T)$ \> --- $x_j = 1$ if trip $j$ transfers; \\ \> $y_{j,k} \in \{0,1\}$ \>\>\> $(j \in T, k \in T_j)$ \> --- $y_{j,k} = 1$ if trip $j$ transfers to $k$; \\ {\bf Objective:} \\ \> minimize ${\displaystyle \sum_{i \in C} \| \delta_{1,i} - \sum_{j \in T \mid a_j = i} x_j \|_1 + \| \delta_2 - \sum_{j \in T \mid a_j \notin C} x_j \|_1}$ \\ {\bf Constraints:} \\ \> ${\displaystyle \sum_{k \in T_j} y_{j,k} = x_j}$ \>\>\>\> $(j \in T)$ \>\>\> (2.1) \\ \> ${\displaystyle \sum_{j \in T \mid k \in T_j}y_{j,k} \leq 1 - x_k}$ \>\>\>\> $(k \in T)$ \>\>\> (2.2) \\ \tabrule \end{tabbing} \caption{The Integer Programming Model for the TIP.} \label{fig:opt2} \end{figure*} \section{Case Study} This section applies the proposed methodology to a case study for the broader Ann Arbor--Ypsilanti region in Michigan and validates the O-D flow matrices estimated by both the rounded QCP and the IP. \begin{figure*}[!t] \includegraphics[width=16cm]{network.png} \centering \caption{Transit Network Operated by AAATA.} \label{fig:aaata} \end{figure*} \subsection{Data Description} The data in this study was provided by the Ann Arbor Area Transportation Authority (AAATA) and consists of boarding-only smart-card transactions. The data was collected from the transit network operated by AAATA, depicted in Figure \ref{fig:aaata}, where a total of 58 inbound or outbound routes (denoted by blue lines in Figure \ref{fig:aaata}) connect 1,232 stops, including two transit centers, namely the Blake Transit Center and the Ypsilanti Transit Center. The bus schedule and stop location information were extracted from the GTFS data \cite{TheRide2019}. We conducted two experiments using the Go!Pass data and the Period Pass data, respectively. Go!Pass is purchased by local businesses in downtown Ann Arbor as a benefit for their employees, who get unlimited bus service provided by AAATA. Period Pass, on the other hand, allows its holders to take unlimited rides on any bus route within the specified period and offers discounted prices for seniors and students \cite{TheRide2019}. In total, there are 32,840 transactions for Go!Pass and 43,660 for Period Pass, collected on all weekdays in October 2017. \subsection{Experimental Settings} In this study, we use the smart-card data to validate the proposed methodology. To be specific, we apply the trip-chaining method to the boarding-only smart-card data to infer alighting stops, identify transfers, and produce a transit O-D matrix to evaluate and validate our optimization models \cite{trepanier2007individual,munizaga2012estimation}. Note that we assume the ground truth is given by the trip-chaining method on the smart-card data. The trip-chaining method exploits unique IDs for passengers to link their consecutive transit trip segments. It relies on two major assumptions: 1) the alighting point of each trip is within walking distance of the subsequent boarding location (usually assumed to be in the range of 400 m to 2,000 m); 2) the alighting point of the last boarding of the day for a passenger is adjacent to his/her first boarding stop of the same day. In addition, researchers typically assume a time threshold (e.g., 30 minutes) to identify transfer activities: A passenger is assumed to take a transfer if the interval between the alighting time and the subsequent boarding is less than the specified threshold \cite{devillaine2012detection}.
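For reference, the core of the trip-chaining inference can be sketched as follows for a single passenger-day; this is a simplification with hypothetical data structures, whereas production implementations such as those in \cite{trepanier2007individual, munizaga2012estimation} handle many more corner cases.

\begin{verbatim}
def infer_alightings(boardings, route_stops, dist, d_max=0.25):
    """boardings: one passenger's boardings for the day, time-ordered,
    each with .route and .stop; route_stops[r]: stops served by route r.
    The alighting stop of each boarding is taken to be the stop on its
    route closest to the next boarding, wrapping around to the first
    boarding of the day for the last trip."""
    inferred = []
    for idx, trip in enumerate(boardings):
        target = boardings[(idx + 1) % len(boardings)].stop
        nearest = min(route_stops[trip.route], key=lambda s: dist(s, target))
        # Accept the inference only within the walking-distance threshold.
        inferred.append(nearest if dist(nearest, target) <= d_max else None)
    return inferred
\end{verbatim}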
For the trip-chaining benchmark, we assume that the maximum walking distance is a quarter mile (402 meters), that the transfer time threshold is 30 minutes, and that the last destination is the closest stop to the first origin, as suggested in \cite{alsger2016validating}. Recall that the optimization models require all route-level time-stamped O-D matrices collected from a transit network as their input, which can be readily calculated from APC data using IPF techniques. Therefore, to ensure the validity of the comparison, we process the same smart-card data to generate the route-level O-D matrices by directly aggregating the inferred route-level boarding-alighting pairs for each transaction into time-stamped route-level O-D flows, which serve as the input for our optimization models. To be consistent with the settings of the benchmark, the optimization models also assume the same maximum walking distance (402 meters) and transfer time threshold (30 minutes). There are two transit centers in the region, one in downtown Ann Arbor and the other in downtown Ypsilanti. We assume that the Blake Transit Center (in Ann Arbor) is indexed by $i = 1$ and the Ypsilanti Transit Center by $i = 2$. The ground-truth transfer rates computed from the smart-card analysis are: For Go!Pass, $$p_{1,1}^* = 0.232, ~p_{1,2}^* = 0.591 ~\textrm{and}~ p_2^* = 0.062;$$ for Period Pass, $$p_{1,1}^* = 0.588, ~p_{1,2}^* = 0.554 ~\textrm{and}~ p_2^* = 0.143.$$ \subsection{Geographical Resolutions for Model Evaluation} The O-D matrices estimated by our models are evaluated at various geographical resolutions, i.e., the stop level, the TAZ level (which is widely adopted for transportation planning), and the level of Transit Analysis Clusters (TACs). TACs are self-defined zones obtained by using Hierarchical Clustering Analysis (HCA) \cite{HCA2013}. In HCA, each stop is initially assigned to a cluster of its own, and at each step the two most similar clusters (based on distance) are joined, until a single cluster is left. The end result is a hierarchy of clusters organized in a tree structure. To obtain TACs at different geographical resolutions, it suffices to cut the tree at a given height: The resulting clusters have pairwise distances bounded approximately by that height, which is chosen to capture the desired distance threshold (see the sketch below). There are three major reasons behind this choice of TACs through HCA. First, riders may choose different origin and destination stops via different bus routes in a day, so aggregating the travel demand of spatially close stops is critical when constructing O-D matrices \cite{luo2017constructing}. Second, the stop-to-stop O-D matrix does not directly reveal the travel demand pattern like a zone-to-zone O-D matrix, which shows the equilibrium between demand and supply in the transit system and measures the access or egress times for transit trips \cite{tamblay2016zonal}. Third, traditional TAZs have boundaries on the streets, where most bus stops are located. Hence the TAZ boundaries might create artificial divisions between closely related stops and influence the estimation. We present the evaluation metrics at the stop level, the TAZ level, and the TAC level with varying radius to provide a comprehensive examination of model performance.
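The sketch below shows how such TACs could be generated with standard tooling; the complete-linkage choice and the projected mile-based coordinates are assumptions, as the paper does not specify these details.

\begin{verbatim}
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def make_tacs(stop_xy, radius_miles):
    """stop_xy: (n_stops, 2) array of projected stop coordinates in
    miles. Returns one cluster label per stop; cutting the dendrogram
    at 2 * radius bounds the pairwise distances within each cluster."""
    Z = linkage(pdist(stop_xy), method="complete")
    return fcluster(Z, t=2 * radius_miles, criterion="distance")
\end{verbatim}

\subsection{Experimental Results} The R-squared metric is used to evaluate the accuracy of the O-D flow estimates, as in previous studies \cite{Tavassoli2016HowCT} and \cite{EconometricsInTransportation}.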
Given a ground-truth square matrix $OD^{*}$ and its estimate $OD$, both of size $n \times n$, the R-squared metric is defined as $$ \theta = 1 - \frac{\sum_{i,j=1}^n (OD^{*}_{ij} - OD_{ij})^2 }{\sum_{i,j=1}^n (OD^{*}_{ij} - \overline{OD^{*}})^2 }, $$ \noindent where $\overline{OD^{*}}$ denotes the average of all entries of $OD^{*}$. The R-squared metric can be interpreted as the percentage of total variability (in terms of sum of squared errors) that can be explained by the estimated matrix, compared to a mean model. A higher R-squared value generally indicates a better estimation. As discussed, the models are evaluated at different geographical resolutions: The stop level, the TAZ level, and the TAC level with radius ranging from 0.5 miles to 2 miles in increments of 0.5 miles. The stop level can also be seen as the TAC level with a radius of 0 miles. The results at the TAZ level are now summarized. For the Go!Pass data, the R-squared evaluated against the ground-truths inferred from the trip-chaining method is equal to 88.71\% for the rounded QCP model and 95.57\% for the IP model. For the Period Pass data, the R-squared reaches 67.15\% for the rounded QCP model and 85.06\% for the IP model. Figure \ref{fig:rsq} illustrates the R-squared evaluated using both the Go!Pass and Period Pass data for TAC radii ranging from 0 miles to 2 miles. \begin{figure*}[t!] \includegraphics[width=16cm]{rsquared.png} \centering \caption{R-Squared Values at Varying Radius of TACs.} \label{fig:rsq} \end{figure*} Both approaches can be solved efficiently on a personal computer with an i7-7500U CPU, using Gurobi 8.0 through Python 3.6. The rounded QCP approach takes 121.78 seconds on the Go!Pass data (with 32,840 transactions) and 156.96 seconds on the Period Pass data (with 43,660 transactions); the IP approach solves faster, with a computational time of 2.34 seconds on the Go!Pass data and 5.97 seconds on the Period Pass data. There are three key general observations. First, the IP approach performs significantly better than the rounded QCP approach on both data sets. Second, the predictive power of both approaches improves with increasing TAC radius, but the rate of improvement typically decreases as the clusters expand. The R-squared of the rounded QCP approach on Go!Pass increases monotonically from 84.70\% at a 0-mile radius to 94.92\% at a 2-mile radius. A similar trend can be observed for the IP approach on the Go!Pass data, where $\theta$ increases monotonically from 92.39\% at a 0-mile radius to 97.91\% at a 2-mile radius. For both models, the same behavior is also observed on the Period Pass data. Furthermore, note that the 1-mile TAC radius is an important turning point, as shown in Figure \ref{fig:rsq}---the improvement rate below the 1-mile threshold is significantly larger than above it. Third, the model performance on the Period Pass data is worse than on the Go!Pass data, which might result from the irregular travel behavior of Period Pass users. Recall that Go!Pass is a special pass purchased by companies located in downtown areas for their daily commuting employees, who tend to have more consistent and predictable commuting patterns during the weekdays. Therefore, for the Go!Pass data, the daily variation in transfers at each stop may be smaller and better described by a single observed long-term transfer rate. In comparison, Period Pass targets a more general audience and offers special discounts for seniors and students, who tend to take more spontaneous and thus less predictable trip chains.
It is also observed that the true transfer rates for some days in the Period Pass data deviate dramatically from the long-term observation used in the optimization models. Also, the transfer rates for Period Pass are much higher than for Go!Pass, which might lead to more feasible transfers for each first-leg trip and add to the modeling complexity. These observations might indicate an inherent limitation in modeling transfer identification with inadequate information: Without unique passenger identifiers, it is expected that, when dealing with extremely high transfer probabilities (such as 50\% or above), the aggregate-level variability is difficult to capture in detail. \section{Conclusion} This paper presented optimization models to estimate the transit network O-D flow matrix based on time-stamped and location-based boarding and alighting counts, and observed or estimated proportions of transferring passengers at each transit center and at the other stops. It proposed a QIP approach, a two-stage approach based on a QCP relaxation of the QIP and a feasible rounding procedure, and an IP model that replaces the L2-norm of the QIP by an L1-norm. While the QIP is not tractable for real data sets, the QCP and IP approaches can be solved efficiently for the transit data provided by AAATA. Moreover, the IP model is superior to the QCP approach in terms of accuracy. When measured against the ground-truth calculated from trip-chaining methods using the R-squared metric, the IP model achieves up to 95.59\% at the TAZ level and 96.99\% at the 1-mile self-defined TAC level for the Go!Pass data (which exhibits more consistent travel behavior), and 85.06\% at the TAZ level and 90.61\% at the 1-mile TAC level for the Period Pass data (which exhibits more irregular travel patterns). There is also a clear improvement in predictive accuracy with lower spatial resolution. The results suggest that the IP model can produce accurate estimates for applications requiring varying levels of spatial resolution. In particular, the IP model can meet the needs of tasks ranging from predicting O-D flows among bus stops to constructing a zone-level transit-trip O-D matrix to inform future transit planning. The results indicate that the IP model is especially promising for cases with moderate or relatively low transfer rates and for populations with consistent transfer patterns. This is because the observed transfer probabilities are used as benchmarks, against which the deviation of the estimated transfer probabilities is minimized. Therefore, the capability of the observed transfer probabilities to capture a consistent pattern of transfer activities is a critical factor for accurate modeling. It is recommended to apply transfer probabilities observed or surveyed over a relatively long period (e.g., monthly or yearly), which reduces the potential inaccuracy brought by daily variations. If such information is absent, the expert judgment of transit operators could be used instead. This study can be further developed from the following perspectives. First, the IP model directly applies the parameters for the behavioral assumptions suggested in \cite{alsger2016validating} for their case study on the South-East Queensland public transport network in Australia. However, the parameters for such behavioral assumptions might differ across case studies due to differences in transit systems, built environments and socio-demographics of the regions under analysis.
A more comprehensive validation of the proposed methodology would therefore benefit from experimental results on additional case studies and from a sensitivity analysis on the choice of the maximum allowed transfer distance and transfer time. Second, the current IP formulation can be easily extended to account for multiple transfers. Future work can verify the effectiveness of integer programming for modeling multiple transfers. Also, the optimization models for transfer identification may have many symmetric solutions, leading to large deviations when transfer probabilities are high. As a result, it is important to clearly identify these equivalent solutions and conduct closer case-dependent analyses to obtain a more accurate prediction of the O-D pairs. This is a key direction for future research.

The methodology proposed in this study mainly serves to extend the current analysis of AVL/APC data and produce a network-level O-D matrix to inform transportation planning. Our models are also suitable for analyzing smart-card data or Automated Fare Collection (AFC) data whose unique identifying information has been withheld. Recently, as stated in \cite{pelletier2011smart}, the use of smart-card data has raised privacy concerns. One major problem is the vulnerability of the central database which stores smart-card transactions and user information, especially when the data is used for multiple purposes and is accessible to multiple groups. Withholding unique ID information when releasing the data to third parties could significantly reduce the risk of private-information disclosure. However, the lack of unique ID information would prevent the use of trip-chaining methods. Therefore, the transfer identification model provides a tangible tool for estimating travel demand from such data at an aggregate level.

\section*{Acknowledgment}

This research is funded by the Michigan Institute of Data Science (MIDAS) and by Grant 7F-30154 from the Department of Energy. The authors would like to thank Forest Yang from the AAATA for his assistance in providing the data. Findings presented in this paper do not necessarily represent the views of the funding agencies.

\ifCLASSOPTIONcaptionsoff \newpage \fi

\bibliographystyle{IEEEtran}
\section{Introduction}

Our main object of interest will be the Feigenbaum functions which are the solutions of the Feigenbaum-Coullet-Tresser fixed point equation \cite{CT}, \cite{feig0}, \cite{feig1}: \begin{equation}\label{equ:1hp,1} \tau H^2(x) = H(\tau x). \end{equation} $H$ is assumed to be a smooth unimodal map on some interval which contains $0$, with the critical point of order $\ell$, and normalized (following \cite{EW}, \cite{leswi:feig}) so that the critical value is at $0$ and its image at $1$. It is well known (and very non-trivial, see e.g. \cite{leswi:feig} for a historical account) that for each $\ell$ even and positive a unique solution $(H_{\ell},\tau_{\ell})$ exists and has the form $H_\ell(x)=E_\ell(x)^\ell$, where $E_\ell$ is a real-analytic mapping with strictly negative derivative on $[0,1]$ and with a unique zero $x_{0,\ell}$ (so that $x_{0,\ell}$ is the critical point of $H_\ell$ of order $\ell$). Furthermore, by~\cite{EpsLas}, \cite{leswi:feig}, $E_\ell$ extends to a univalent map (denoted again by $E_\ell$) from some Jordan domain $\Omega_\ell$ onto a slit complex plane (see Sect.~\ref{rev} for details). This implies in particular that $H_\ell$ has a polynomial-like extension onto some disk $D(0, R)$, $R>1$, with a single critical point of order $\ell$. Let $J_\ell$ be the Julia set of this polynomial-like mapping.

For every $\ell$ even and positive the {\em tower} map (cf.~\cite{profesorus}) $\hat{T}_\ell: \CC\to \CC$ is defined almost everywhere as follows. Introduce the fundamental annulus $A_\ell=\Omega_\ell\setminus \tau_\ell^{-1}\overline{\Omega_\ell}$ (geometrically, this is indeed an annulus domain for every finite $\ell$). For every $n\in \ZZ$ and every $z\in \tau_\ell^n A_\ell$, let $$\hat{T}_\ell(z)=\tau_\ell^n H_\ell \tau_\ell^{-n}(z).$$ Note that $\hat{T}_\ell(z)=H_\ell^{2^n}(z)$ for $n\ge 0$ and $z\in \tau_\ell^{-n}A_\ell$. By \cite{EW}, \cite{leswi:feig}, the quadruple $(H_\ell, \tau_\ell, \Omega_\ell, x_{0,\ell})$, as $\ell\to \infty$, has a well-defined non-trivial limit $(H_\infty, \tau_\infty, \Omega_\infty, x_{0,\infty})$, so that the limit tower $\hat{T}_\infty: \CC\to \CC$ and $A_{\infty} := \Omega_{\infty}\setminus \overline{\tau_{\infty}^{-1}\Omega_{\infty}}$ are defined as well.

The main results of the present paper are summarized in the following Theorems~\ref{m1}-\ref{m2}.

\begin{theo}\label{m1} For any $\ell\in 2\NN$ or $\ell=\infty$, there exists a unique measure $\mu_\ell$ supported on $A_{\ell}$ which satisfies the following conditions (1)-(2):

(1) $\mu_\ell$ is absolutely continuous w.r.t. the Lebesgue measure on the plane, $\mu_\ell(A_\ell)=1$, and its density is real-analytic and positive on $A_{\ell}$.

(2) $\hat{\mu}_\ell$ defined by $\hat{\mu}_{\ell}(S)=\mu_{\ell}\left(\tau_{\ell}^n S\right)$ for every Borel set $S\subset \tau_\ell^{-n}A_\ell$ and every $n\in \ZZ$ is a $\sigma$-finite measure on $\CC$ which is invariant under $\hat{T}_\ell: \CC\to \CC$. \end{theo}

Define the level function $\hat{m}: \CC\to \ZZ$ and the map $T_\ell: A_\ell\to A_\ell$ so that $\hat{m}(z)=n$ for $z\in\tau_\ell^{-n} A_\ell$ and $T_\ell=\tau_\ell^{\hat{m}\circ H_\ell}H_\ell$. Then Theorem~\ref{m1} means that $\mu_\ell$ is invariant under $T_\ell$.

Let $0<\ell< \infty$ be any even number.
Since $T_\ell$ is $\mu_\ell$-ergodic and $\hat{m}\circ H_\ell$ is integrable, by the Birkhoff Ergodic Theorem, for Lebesgue almost every $z\in \CC$ the following limit (called the {\em drift}) exists: $$\vartheta(\ell):=\lim_{N\to \infty} \frac{1}{N}\hat{m}(\hat{T}_\ell^N(z))=\lim_{N\to \infty} \frac{1}{N}\sum_{i=0}^{N-1}\hat{m}\circ H_\ell(T_\ell^i(y)) =\int_{A_\ell}\hat{m}\circ H_\ell(x)\,d\mu_\ell(x),$$ where $y=\tau_\ell^kz\in A_\ell$ for an appropriate $k\in \ZZ$. It follows from here, similarly to~\cite{leswi:limit}, that the Lebesgue measure of the Julia set $J_\ell$ is positive if and only if $\vartheta(\ell)>0$. We are interested in the behavior of $\vartheta(\ell)$ as $\ell$ tends to infinity.

\begin{theo}\label{m2} (1) The sequence of measures $\{\mu_\ell\}_{\ell\in 2\NN}$ tends strongly to $\mu_\infty$, i.e. the corresponding densities converge in $L_1(\CC,\Leb_2)$; moreover, their convergence is analytic on some disk which contains the critical point of $H_{\ell}$ for all $\ell$ large enough.

(2) The sequence of drifts $\{\vartheta(\ell)\}_{\ell\in 2\NN}$ converges to a finite number (the limit drift) \begin{equation}\label{limdrift} \vartheta(\infty)=-\frac{1}{\log\tau_\infty}\lim_{r\to 0}\int_{A_\infty\setminus B(x_{0,\infty},r)}\log\frac{|H_\infty(z)|}{|z|}d\mu_\infty(z). \end{equation} \end{theo}

Note that this integral exists only in the Cauchy sense given above and not as a Lebesgue integral on $A_{\infty}$, see~\cite{leswi:feig}.

The present paper is a sequel to \cite{leswi:feig},~\cite{leswi:hd},~\cite{leswi:measure},~\cite{leswi:common} and particularly \cite{leswi:limit}. In \cite{leswi:limit}, a formula for the limit drift similar to (\ref{limdrift}) is proved in a class of smooth covering circle maps. The proof in the present paper for the class of Feigenbaum maps follows similar lines, but is substantially more technical. \cite{leswi:limit} served as the basis for a computer-assisted evaluation of the limit drift in the class of circle covers. The result shows that the limit drift in this class is negative, which implies in particular that those maps of the circle with high enough criticalities do not have a wild attractor. In a recent preprint \cite{ds}, the authors present a computer-assisted proof that the area of $J_2$ is zero. The case of $\ell=2$ represents the opposite end of the range of possibilities compared with our interest in $\ell$ tending to $\infty$.

\section{The Feigenbaum Function}

\subsection{Review of known properties.}\label{rev}

We consider the Feigenbaum-Coullet-Tresser fixed point equation with the critical point of order $\ell$ even and positive, located at some point $x_{0,\ell}\in (0,1)$ and normalized so that the critical value is at $0$ and its image at $1$. The equation has the form of~(\ref{equ:1hp,1}), and $H$ is assumed to be unimodal of Feigenbaum topological type on some interval which contains $0$. It is well known that for each $\ell$ a unique solution $(H_{\ell},\tau_{\ell})$ exists.
We will now describe it following~\cite{leswi:feig}. $H_{\ell}$ is a holomorphic map defined on a domain $\Omega_{\ell}$ which is a bounded topological disk symmetric with respect to the real line, and mapping into $\CC$. $\Omega_{\ell}$ can be split into two disks by an arc $\mathfrak{w}_{\ell}$ which is tangent at $x_{0,\ell}$ to the line $\{ z :\: \Re z = x_{0,\ell}\}$ and mapped by $H_{\ell}$ into the real line. One can further observe that the image of $\mathfrak{w}_{\ell}$ is the positive half-line for $\ell$ divisible by $4$ and the negative half-line otherwise. The right connected component of $\Omega_{\ell}\setminus \mathfrak{w}_{\ell}$ will be denoted by $\Omega_{+,\ell}$ and the left one by $\Omega_{-,\ell}$. We will also write $H_{\pm,\ell}$ for $H$ restricted to $\Omega_{\pm,\ell}$.

\paragraph{Convergence as $\ell\rightarrow\infty$.}

When $\ell\rightarrow\infty$ the triples $(H_{\ell},\tau_{\ell}, x_{0,\ell})$ converge to a limit $(H_{\infty},\tau_{\infty}, x_{0,\infty})$, where $\tau_{\infty}>1$, $x_{0,\infty}\in(0,1)$, and $H_{\ell}$ converge to $H_{\infty}$ uniformly at least on the interval $[0,1]$. The mapping $H_{\infty}$ is unimodal with the critical point at $x_{0,\infty}$, and $(H_{\infty},\tau_{\infty})$ satisfy the Feigenbaum equation~(\ref{equ:1hp,1}). Furthermore, $H_{\infty}$ has a holomorphic continuation which is similar to that of $H_{\ell}$. Namely, its domain is $\Omega_{\infty}$ which is symmetric with respect to $\RR$ and is the union of two bounded disks $\Omega_{\pm,\infty}$ with closures intersecting exactly at $\{x_{0,\infty}\}$. We then define the restrictions $H_{\pm,\infty}$ to the corresponding $\Omega_{\pm,\infty}$.

\paragraph{Holomorphic continuation.}

These mappings can then be described by the following statement.

\begin{fact}\label{fa:1hp,1} For every $\ell$ even and positive, the mapping $H_{\ell}$ only takes the value $0$ at the critical point $x_{0,\ell}$, while the image of $H_{\infty}$ avoids $0$ altogether. Subsequently, using the principal branch of the logarithm one can consider the mappings \begin{eqnarray*} \phi_{-,\ell} & = &\log ( \tau^{-2}_{\ell} H_{-,\ell} ) \\ \phi_{+,\ell} & = &\log ( \tau_{\ell}^{-1} H_{+,\ell} ) \end{eqnarray*} for $\ell$ even or infinite. Then each $\phi_{\pm,\ell}$ maps the corresponding $\Omega_{\pm,\ell}$ onto the set \[ \varPi_{\ell} := \{ z\in\CC :\: |\Im z| < \frac{\ell\pi}{2} \} \setminus [0,+\infty) \] and is univalent. \end{fact}

We can now formulate the convergence of the mappings as $\ell\rightarrow\infty$.

\begin{fact}\label{fa:1hp,2} As $\ell$ tends to $\infty$, the mappings $(\phi_{\pm,\ell})^{-1}$ converge to $(\phi_{\pm,\infty})^{-1}$ uniformly on compact subsets of $\varPi_{\infty} := \CC \setminus [0,+\infty)$. \end{fact}

For $\ell$ finite we will also need an analytic continuation of the mappings $\phi_{\pm,\ell}$, which is described next.

\begin{fact}\label{fa:6hp,1} The transformations $\phi_{\pm,\ell}$ for $\ell$ finite each have two univalent analytic continuations, one with domain equal to $\Omega_{\ell} \cap \HH_+$ and another one with domain $\Omega_{\ell} \cap\HH_-$, with ranges $\{ z\in\CC :\: 0<\Im z<\ell\pi\}$ and $\{ z\in\CC :\: -\ell\pi<\Im z<0\}$, respectively. \end{fact}

\paragraph{Geometric properties of $\Omega_{\pm,\ell}$.}

Below we state a couple of properties which will be used.
\begin{fact}\label{fa:1hp,3} For any $\ell$ positive and even or infinite, \begin{itemize} \item \[ \overline{\Omega}_{\ell} \cap \RR = [ y_{\ell}, \tau_{\ell} x_{0,\ell} ] \] where $y_{\ell} < 0$ and $H_{\ell}(\tau_{\ell}^{-1}y_{\ell}) = \tau_{\ell}x_{0,\ell}$, \item \[ \tau_{\ell} \Omega_{-,\ell} \setminus \overline{\Omega}_{\ell} = \{\tau_{\ell}x_{0,\ell}\} \] \item \[ \overline{\Omega}_{\ell} \subset D(0,\tau_{\ell}) \; .\] \end{itemize} \end{fact}

\paragraph{Associated mapping.}

\begin{defi}\label{defi:1hp,1} For any $\ell$ positive and even or infinite, define the {\em associated mapping} \[ G_{\ell}(z) = H_{\ell}(\tau_{\ell}^{-1}z) \] where $z\in\tau_{\ell}\Omega_{\ell}$. We also define the {\em principal inverse branch} $\mathbf{G}^{-1}_{\ell}$, which is defined on $\CC\setminus \{ x\in\RR :\: x\notin [0,\tau^2_{\ell}]\}$ and fixes $x_{0,\ell}$. \end{defi}

We list key properties of the associated mapping.

\begin{fact}\label{fa:1hp,4} \begin{itemize} \item $G_{\ell}$ has a fixed point at $x_{0,\ell}$ which is attracting for $\ell$ finite and neutral for $\ell=\infty$. \item The range of the principal inverse branch $\mathbf{G}^{-1}_{\ell}$ is contained in $\tau_{\ell}\Omega_{-,\ell}$. \item \begin{eqnarray*} \mathbf{G}^{-1}_{\ell}(\Omega_{+,\ell}) &=& \Omega_{-,\ell} \\ \mathbf{G}^{-1}_{\ell}(\Omega_{-,\ell}\setminus(-\infty,0]) &=& \Omega_{+,\ell} \; . \end{eqnarray*} \item $\tau_{\ell}^{-1} H_{\ell} = H_{\ell} G_{\ell}$ on $\Omega_{\ell}$. \end{itemize} \end{fact}

\paragraph{Coverings.}

\begin{fact}\label{fa:3ha,1} A holomorphic mapping $\psi :\: U\rightarrow V$, where $U$ and $V$ are domains in $\CC$, is a covering if and only if for every $v\in V$, every simply-connected domain $W$ which contains $v$ and is compactly contained in $V$, and every $u :\: \psi(u)=v$, there exists a univalent inverse branch of $\psi$ defined on $W$ which sends $v$ to $u$. \end{fact}

\subsection{Analytic continuations.}

\begin{defi}\label{defi:3hp,1} Let us define for $k :\: 0\leq k\leq \infty$ and $0<\ell\leq\infty$ \[ \varPi^k_{\ell} := \{ z\in\CC :\: |\Im z| < \frac{\ell\pi}{2} \} \setminus \left( \left\{ 2j\log\tau_{\ell} :\: j=0,\cdots,k-1\right\} \cup [2k\log\tau_{\ell},+\infty) \right) \; .\] \end{defi}

Thus $\varPi_{\ell}^0 = \varPi_{\ell}$ in the notation of Fact~\ref{fa:1hp,1}, while \[ \varPi^{\infty}_{\ell} = \{ z\in\CC :\: |\Im z| < \frac{\ell\pi}{2} \} \setminus \{ 2j\log\tau_{\ell} :\: j=0,1,\cdots\} \; .\]

\begin{prop}\label{prop:2hp,1} For every $k\geq 0$ and every $\ell$ positive and even or infinite, there exist domains $\hat{\Omega}^k_{\pm,\ell}$, where $\hat{\Omega}^0_{\pm,\ell}=\Omega_{\pm,\ell}$, respectively. Furthermore, $\phi_{\pm,\ell}$ continue analytically to the corresponding $\hat{\Omega}^k_{\pm,\ell}$ with non-vanishing derivative and the claims below hold: \begin{itemize} \item $\hat{\Omega}^k_{+,\ell}$ and $\hat{\Omega}^k_{-,\ell}$ are disjoint, \item for $k>0$ \[ \hat{\Omega}^k_{+,\ell} = G^{-1}_{\ell}\left(\hat{\Omega}^{k-1}_{-,\ell} \right) \] for $\ell$ finite and \[ \hat{\Omega}^k_{+,\infty} = G^{-1}_{\infty}\left(\hat{\Omega}^{k-1}_{-,\infty}\right) \cap \tau_{\infty}\Omega_{-,\infty} \] for $\ell=\infty$, while for $k\geq 0$ one also has \[ \hat{\Omega}^k_{-,\ell} = \mathbf{G}^{-1}_{\ell}(\hat{\Omega}^{k}_{+,\ell}) \] for all $\ell$, where $\mathbf{G}^{-1}_{\ell}$ is the principal inverse branch, cf. Definition~\ref{defi:1hp,1}.
\item \[ \phi_{\pm,\ell} :\: \hat{\Omega}^k_{\pm,\ell}\setminus \phi_{\pm,\ell}^{-1}\bigl(\{j\log\tau_{\ell}^2 :\: 0\leq j<k\}\bigr) \rightarrow \varPi^k_{\ell} \] is a covering. \end{itemize} \end{prop}

\paragraph{Proof of Proposition~\ref{prop:2hp,1}.}

The proof will naturally proceed by induction with respect to $k$. For $k=0$ all claims are known; in particular the second one follows from Fact~\ref{fa:1hp,4} and the third from Fact~\ref{fa:1hp,1}. In the inductive step from $k-1$ to $k$, the domains $\hat{\Omega}^k_{\pm,\ell}$ are already defined by the second claim. The first one is easy, since each of the following inclusions implies the next one by the second claim: \begin{eqnarray*} z &\in& \hat{\Omega}^k_{+,\ell} \cap \hat{\Omega}^{k}_{-,\ell}\\ G_{\ell}(z) &\in& \hat{\Omega}^{k-1}_{-,\ell} \cap \hat{\Omega}^k_{+,\ell}\\ G^2_{\ell}(z) &\in& \hat{\Omega}^{k-1}_{+,\ell} \cap \hat{\Omega}^{k-1}_{-,\ell} \; , \end{eqnarray*} and the last intersection is empty by the inductive hypothesis. Thus, we need to prove the third claim. Let us begin with a lemma.

\begin{lem}\label{lem:2hp,1} For $\ell$ finite and even, \[ G_{\ell} :\: \tau_{\ell}\Omega_{\ell} \setminus \left( \{\tau_{\ell}x_{0,\ell}\} \cup G^{-1}_{\ell}\left([\tau_{\ell},\infty)\right)\right) \rightarrow \CC\setminus\left( \{0\} \cup [\tau_{\ell},\infty) \right) \] is a covering. For $\ell=\infty$, the corresponding claim is that \[ G_{\infty} :\: \tau_{\infty}\Omega_{-,\infty} \rightarrow \CC\setminus\left( \{0\} \cup [\tau_{\infty}^2,+\infty)\right) \] is a covering. \end{lem}

\begin{proof} Let us deal with the case of $\ell$ finite. Since $G_{\ell} = H_{\ell} \tau^{-1}_{\ell}$, $G_{\ell}$ can be viewed as composed of two branches, one defined on $\tau_{\ell}\Omega_{-,\ell}$ and the other on $\tau_{\ell}\Omega_{+,\ell}$, which match analytically on the common boundary $\tau_{\ell}\mathfrak{w}_{\ell}$. By Fact~\ref{fa:1hp,1}, $\log G_{\ell}$ is a univalent mapping of its domain with $\tau_{\ell}x_{0,\ell}$ removed onto $\CC$ with infinitely many slits of the form $\{x+iy :\: y=2\pi k,\, k\in \ZZ,\, x\geq X_k\}$, where $X_k$ is $\log\tau_{\ell}$ or $2\log\tau_{\ell}$ depending on which branch of $G_{\ell}$ acts. A projection by $\exp$ then yields the claim. A similar reasoning works for $\ell=\infty$, except that $G_{\infty}$ already maps $\tau_{\infty}\Omega_{-,\infty}$ univalently onto $\CC\setminus [\tau^2_{\infty},+\infty)$. \end{proof}

\subparagraph{Mapping $\phi_{+,\ell}$.}

Since $\hat{\Omega}^{k-1}_{-,\ell} \cap \RR \subset (-\infty,x_{0,\ell})$, \[ G_{\ell} :\: \hat{\Omega}^k_{+,\ell}=G^{-1}_{\ell}(\hat{\Omega}^{k-1}_{-,\ell}) \rightarrow \hat{\Omega}^{k-1}_{-,\ell} \setminus \{0\} \] is a covering. Since $\phi_{-,\ell}(0)=\log\tau^{-2}_{\ell}$, it follows that \[ G_{\ell} :\: \hat{\Omega}^{k}_{+,\ell} \setminus (\phi_{-,\ell}\circ G_{\ell})^{-1}\bigl( \{ \log \tau_{\ell}^{-2} \} \bigr) \rightarrow \hat{\Omega}^{k-1}_{-,\ell} \setminus \phi^{-1}_{-,\ell}\bigl(\{ \log \tau_{\ell}^{-2}\}\bigr) \] is a covering as well. Furthermore, \[ \phi_{-,\ell} :\: \hat{\Omega}^{k-1}_{-,\ell} \setminus \phi^{-1}_{-,\ell}(\{ \log \tau_{\ell}^{2j} :\: j=-1,\cdots,k-2 \} ) \rightarrow \varPi^{k-1}_{\ell}\setminus \{ \log \tau^{-2}_{\ell}\} = \varPi^k_{\ell} - 2\log\tau_{\ell}\] is also a covering. To prove that the composition is also a covering, take $z\in U \subset \varPi^k_{\ell} - 2\log\tau_{\ell}$ and recall Fact~\ref{fa:3ha,1}.
Then every inverse branch of $\phi_{-,\ell}$ defined on $U$ with values in $\hat{\Omega}^{k-1}_{-,\ell}$ has a range which is a disk in $\hat{\Omega}^{k-1}_{-,\ell} \setminus \phi^{-1}_{-,\ell}(\{ \log\tau_{\ell}^{-2} \})$, which therefore avoids $0$. Since $G_{\ell}$ is a covering of $\hat{\Omega}^{k-1}_{-,\ell}\setminus\{0\}$, for every such disk one can find an inverse branch of $G_{\ell}$. Hence, \[ \phi_{-,\ell}\circ G_{\ell} :\: \hat{\Omega}^k_{+,\ell} \setminus (\phi_{-,\ell}\circ G_{\ell})^{-1}\left(\{ \log \tau_{\ell}^{2j} :\: j=-1,\cdots,k-2 \} \right) \rightarrow \varPi^k_{\ell} - 2\log\tau_{\ell} \] is a covering. The functional equation \[ \phi_{+,\ell} = \phi_{-,\ell} \circ G_{\ell} + 2\log\tau_{\ell} \] then completes the proof of the third claim for $\phi_{+,\ell}$.

\subparagraph{Mapping $\phi_{-,\ell}$.}

The associated map $G_{\ell}$ maps $\hat{\Omega}^k_{-,\ell}$ univalently onto $\hat{\Omega}^k_{+,\ell}$, so that \[ \phi_{+,\ell}\circ G_{\ell} :\: \hat{\Omega}^k_{-,\ell} \setminus \mathbf{G}_{\ell}^{-1}\Bigl( \phi_{+,\ell}^{-1}\bigl(\{ \log \tau_{\ell}^{2j} :\: j=0,\cdots,k-1 \}\bigr)\Bigr)\rightarrow \varPi^k_{\ell} \] is clearly a covering. The functional equation \[ \phi_{-,\ell} = \phi_{+,\ell}\circ G_{\ell} \] then concludes the proof of Proposition~\ref{prop:2hp,1}.

Let us state the main result about the analytic continuation of $\phi_{\pm,\ell}$.

\begin{theo}\label{theo:3hp,1} For every $\ell$ positive and even or infinite there exist domains $\hat{\Omega}_{\pm,\ell}$, disjoint, simply-connected and symmetric with respect to $\RR$. The following inclusions hold: \begin{eqnarray*} \Omega_{-,\ell} \subset &\hat{\Omega}_{-,\ell} &\subset \tau_{\ell}\Omega_{-,\ell}\\ \Omega_{+,\ell} \subset & \hat{\Omega}_{+,\ell} &\subset \tau_{\ell}\Omega_{\ell}\\ \hat{\Omega}_{+,\infty} &\subset& \tau_{\infty}\Omega_{-,\infty}\; . \end{eqnarray*} Furthermore, $\phi_{\pm,\ell}$ continue analytically to $\hat{\Omega}_{\pm,\ell}$, respectively, with non-zero derivative, and the mappings \[ \phi_{\pm,\ell} :\: \hat{\Omega}_{\pm,\ell} \setminus \phi_{\pm,\ell}^{-1}\bigl( \{ \log \tau_{\ell}^{2j} :\: j=0,1,\cdots\}\bigr) \rightarrow \varPi^{\infty}_{\ell} \] are coverings, cf. Definition~\ref{defi:3hp,1}. \end{theo}

\paragraph{Proof of Theorem~\ref{theo:3hp,1}.}

From the second claim of Proposition~\ref{prop:2hp,1} one easily concludes that each of the sequences $\left(\hat{\Omega}^k_{\pm,\ell}\right)_{k=0}^{\infty}$ is a non-decreasing sequence of simply-connected domains, symmetric with respect to $\RR$ and contained in the appropriate component of the domain of $G_{\ell}$. If we set \[ \hat{\Omega}_{\pm,\ell} : = \bigcup_{k=0}^{\infty} \hat{\Omega}^k_{\pm,\ell} \] then the only claim of Theorem~\ref{theo:3hp,1} which is not obvious concerns the maps being coverings. We will use the criterion of Fact~\ref{fa:3ha,1}. Fix $z\in\varPi^{\infty}_{\ell}$ and let $U$ be its simply-connected neighborhood compactly contained in $\varPi^{\infty}_{\ell}$ and hence bounded. Then $U\subset \varPi^{k_0}_{\ell}$ for some finite $k_0$. Then pick a preimage $u$ of $z$, which is also contained in some $\hat{\Omega}_{\pm,\ell}^{k_1}$ with $k_1$ finite. Then by Proposition~\ref{prop:2hp,1} there is an inverse branch $\phi^{-1}_{\pm,\ell} :\: U \rightarrow \hat{\Omega}^{\max(k_0,k_1)}_{\pm,\ell}$ of $\phi_{\pm,\ell}$. Its range obviously avoids the set $\phi_{\pm,\ell}^{-1}\bigl( \{ \log \tau_{\ell}^{2j} :\: j=0,1,\cdots\}\bigr)$. So, the condition of Fact~\ref{fa:3ha,1} is satisfied.
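For later reference, we record an immediate consequence of the two functional equations used in the proof above (a direct computation): composing them yields
\[ \phi_{\pm,\ell} = \phi_{\pm,\ell}\circ G^2_{\ell} + 2\log\tau_{\ell}, \qquad\text{equivalently}\qquad \phi^{-1}_{\pm,\ell}\left(\zeta - \log\tau^2_{\ell}\right) = G^2_{\ell}\circ\phi^{-1}_{\pm,\ell}(\zeta), \]
so that in the coordinate given by $\phi_{\pm,\ell}$ the map $G^2_{\ell}$ acts as the translation by $-\log\tau^2_{\ell}$. This identity is used repeatedly in the convergence estimates below.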
\subsection{Special considerations for $\ell$ finite}

We will write for $0\leq k\leq \infty$ \[ \mathit{P}_{\ell}^{k} := \CC\setminus \left( \left\{ \tau_{\ell}^{2j} :\: 0\leq j<k\right\} \cup [\tau^{2k}_{\ell},+\infty)\right) .\]

\paragraph{Coverings and slits.}

From Theorem~\ref{theo:3hp,1} we derive the following corollary.

\begin{coro}\label{coro:6ha,1} For $\sigma=\pm$ the mappings \begin{equation*} \exp(\phi_{\sigma,\infty}) :\: \hat{\Omega}_{\sigma,\infty}\setminus \bigl(\exp(\phi_{\sigma,\infty})\bigr)^{-1}\bigl( \{ \tau^{2j}_{\infty} :\: j=0,1,\cdots \}\bigr) \rightarrow \mathit{P}^{\infty}_{\infty} \end{equation*} are coverings and their domains are contained in $\tau_{\infty}\Omega_{-,\infty}$. \end{coro}

\begin{proof} The mappings are seen to be coverings by Theorem~\ref{theo:3hp,1} using the criterion of Fact~\ref{fa:3ha,1}. The inclusion of domains also follows from Theorem~\ref{theo:3hp,1}. \end{proof}

However, an analogous statement for finite $\ell$ instead of $\infty$ would be false for two reasons. First, the inclusion of domains would fail since $\hat{\Omega}_{+,\ell}$ extends to the right of $\tau_{\ell}x_{0,\ell}$, and secondly $\exp$ is not a covering of the punctured plane when restricted to a horizontal strip of finite height. We will now address these difficulties one by one.

\paragraph{Restricting the domain of $H_{+,\ell}$.}

\begin{lem}\label{lem:6ha,1} Suppose that $\mathfrak{S}$ is either $(-\infty,0]$ or $[0,+\infty)$. Suppose $u_0\in\Omega_{+,\ell}$ is mapped into \[ \mathbf{\Pi} := \{ z\in \varPi^{\infty}_{\ell}\setminus\mathfrak{S} :\: |\Im z|<\pi\} \] by $\phi_{+,\ell}$. Then there is a covering of $\mathbf{\Pi}$ by the analytic continuation of $\phi_{+,\ell}$ restricted to a domain which is contained in $\hat{\Omega}_{+,\ell} \setminus [\tau_{\ell} x_{0,\ell},\infty)$ and contains $u_0$. \end{lem}

\begin{proof} Recall that $\phi_{+,\ell} = \phi_{-,\ell} \circ G_{\ell} + 2\log\tau_{\ell}$. Then $G_{\ell}$ maps $\hat{\Omega}_{+,\ell}\cap\RR = (x_{0,\ell},\tau^2_{\ell} x_{0,\ell})$ into $(0,\tau_{\ell})$. Subsequently, $(0,\tau_{\ell})$ is transformed by $\phi_{-,\ell}+2\log\tau_{\ell}$, insofar as it fits into its domain, into $(-\infty,0)$. Hence, when $\mathfrak{S} = (-\infty,0]$ the domain of the covering does not intersect $\RR$ and the claim follows. When $\mathfrak{S}=[0,+\infty)$, then $\mathbf{\Pi} = \{ z\in\CC :\: |\Im z|<\pi\} \setminus [0,+\infty)$ is simply connected and the covering is univalent. Clearly, $\phi_{+,\ell}(u_0)$ can be connected to any point $x$ in the negative half-line by a path inside $\mathbf{\Pi}$ which is otherwise disjoint from $\RR$. The lifting of this path by $\phi_{+,\ell}$ avoids $\partial\Omega_{\ell}$ except for the endpoint which is the preimage of $x$ by the covering. Then this preimage must be in the closure of $\Omega_{\ell}$, which avoids $(\tau_{\ell}x_{0,\ell},+\infty)$. Neither is $\tau_{\ell}x_{0,\ell}$ possible as the preimage, since it is sent to $-\infty$ by $\phi_{+,\ell}$. \end{proof}

\paragraph{Covering slit domains by $\tau^{-n}_{\ell}H_{\pm,\ell}$.}

Now we will address the second difficulty.

\begin{prop}\label{prop:6hp,1} Suppose that $\ell>0$ is even or infinite. Let $V$ be any one of $V^+,V^-,V^{\circ}$, where \begin{align*} V^- := & \CC\setminus \bigl( \{ \tau^{2j}_{\ell} :\: j=1,\cdots\} \cup (-\infty,1] \bigr)\\ V^+ := & \CC \setminus [0,+\infty)\\ V^{\circ} := & \CC\setminus \bigl( [1,+\infty) \cup (-\infty,0] \bigr) \end{align*} Let $\sigma=\pm$, $u_0\in\Omega_{\sigma,\ell}$ and suppose $\exp\left(\phi_{\sigma,\ell}(u_0)\right) \in V$.
Then there is a domain $U$ with $u_0\in U$ such that $\exp\phi_{\sigma,\ell} :\: U \rightarrow V$ is a covering. $U$ is contained in \begin{itemize} \item $\hat{\Omega}_{-,\ell}\cup\Omega_{+,\ell} \setminus [x_{0,\ell},+\infty) $ when $\sigma=-$, \item $\hat{\Omega}_{+,\ell}\cup\Omega_{-,\ell} \setminus \bigl( (-\infty,x_{0,\ell}] \cup [\tau_{\ell}x_{0,\ell},+\infty) \bigr) $ when $\sigma=+$, \item $\CC\setminus\RR$ provided that $|\Im\phi_{\sigma,\ell}(u_0)|\geq\pi$. \end{itemize} \end{prop}

\begin{proof} If $|\Im\phi_{\sigma,\ell}(u_0)|\geq\pi$, then $\phi_{\sigma,\ell}(u_0)$ belongs to a horizontal strip of width $2\pi$ which is mapped by $\exp$ univalently onto $V^+$ or onto $\CC\setminus (-\infty,0]$, which contains both $V^-$ and $V^{\circ}$. $U$ is the preimage of the strip by $\phi_{\sigma,\ell}$, which may require the use of the extension from Fact~\ref{fa:6hp,1}. Accordingly, $U\subset\Omega_{\ell}\cap \HH_{\pm}$. This implies the inclusions postulated by Proposition~\ref{prop:6hp,1}, and the covering in this case is just a univalent map. The case $V=V^+$ is similar, since here again $\phi_{\sigma,\ell}(u_0)$ is contained in a horizontal strip which is mapped onto $V^+$ by $\exp$, and $U$ is constructed in the same way. The remaining case is when $\left| \Im\phi_{\sigma,\ell}(u_0) \right| < \pi$ and $V=V^-,V^{\circ}$, which implies $V\subset \CC\setminus (-\infty,0]$. Then $V$ is equivalent by $\log$ to a subset of $\mathbf{\Pi} := \{ z\in \varPi^{\infty}_{\ell} :\: |\Im z|<\pi\}$ and $U$ is chosen as a subset of $U'$, where $U'$ covers $\mathbf{\Pi}$ by $\phi_{\sigma,\ell}$. By Theorem~\ref{theo:3hp,1} such a covering by $\phi_{\sigma,\ell}$ exists with $U' \subset \hat{\Omega}_{\sigma,\ell}$. Moreover, by Lemma~\ref{lem:6ha,1}, $U'$ does not intersect $[\tau_{\ell}x_{0,\ell},+\infty)$. \end{proof}

\subsection{Convergence estimates.}

Fact~\ref{fa:1hp,2} states almost uniform convergence of the mappings $\phi_{\pm,\ell}^{-1}$. That is not good enough for our purposes. The goal is the following statement.

\begin{prop}\label{prop:11jp,1} The mappings $\phi_{\pm,\ell}^{-1}$ converge to $\phi^{-1}_{\pm,\infty}$ uniformly, i.e. \[ \lim_{\ell\rightarrow\infty} \sup \bigl\{ \left|\phi^{-1}_{\pm,\ell}(\zeta)-\phi^{-1}_{\pm,\infty}(\zeta)\right| :\: |\Im\zeta|<\frac{\ell\pi}{2},\, \zeta\notin [0,+\infty) \bigr\} = 0 .\] Additionally, \[ \lim_{|\zeta|\rightarrow\infty} \phi^{-1}_{\pm,\infty}(\zeta) = x_{0,\infty} \; .\] \end{prop}

The proof will be achieved in a sequence of lemmas.

\begin{lem}\label{lem:17hp,1} There exists $R_0>0$ such that for every $0<r\leq R_0$ there are $\varepsilon(r)>0$ and $\ell(r)<\infty$ such that for every $\ell\geq\ell(r)$ even and every $z\in \overline{B(x_{0,\infty},R_0)} \setminus B(x_{0,\infty},r) $ the estimate $|G^2_{\ell}(z)-z| \geq \varepsilon(r)$ holds. \end{lem}

\begin{proof} For $\ell$ sufficiently large and $z$ in a neighborhood of $x_{0,\infty}$ we can represent \begin{equation}\label{equ:17hp,1} G^2_{\ell}(z) - z = a_{0,\ell} + a_{1,\ell}\zeta + a_{2,\ell}\zeta^2 + a_{3,\ell}\zeta^3 + g_{4,\ell}(\zeta)\zeta^4 \end{equation} where $\zeta:= z-x_{0,\infty}$, the coefficients $a_{0,\ell},a_{1,\ell},a_{2,\ell}$ all tend to $0$ as $\ell\rightarrow\infty$, $\lim_{\ell\rightarrow\infty} a_{3,\ell} = a_{3,\infty} < 0$, and $g_{4,\ell}(\zeta)$ is a sequence of analytic functions convergent in a neighborhood of $0$ to $g_{4,\infty}$. Then $R_0$ is chosen so that $C(x_{0,\infty},R_0)$ fits inside the required neighborhoods and $|a_{3,\infty}| > 3R_0 \sup\{|g_{4,\infty}(\zeta)| :\: |\zeta|\leq R_0\}$.
Then for $\ell$ large enough $|a_{3,\ell}| > 2R_0 \sup\{|g_{4,\ell}(\zeta)| :\: |\zeta|\leq R_0\}$. This implies that \[ |a_{3,\ell}\zeta^3 + g_{4,\ell}(\zeta)\zeta^4| > \frac{1}{2}|a_{3,\ell}|r^3 \] for $\zeta\in C(0,r)$, $r\leq R_0$. Set $\varepsilon(r) = \frac{r^3}{4} |a_{3,\infty}|$. The proof is finished by choosing $\ell(r)$ so that for all $\ell\geq \ell(r)$ \begin{align*} |a_{0,\ell}| + |a_{1,\ell}|R_0 + |a_{2,\ell}|R_0^2 < & \frac{\varepsilon(r)}{2}\\ \frac{|a_{3,\ell}|}{|a_{3,\infty}|} > & \frac{3}{4} . \end{align*} \end{proof}

\begin{lem}\label{lem:10jp,1} \begin{multline*} \forall R_0,\epsilon>0\; \exists L(\epsilon),\ell(\epsilon)<\infty\\ \forall \ell :\: \ell(\epsilon)\leq\ell\leq\infty\; \forall \zeta :\: |\Im\zeta|<\frac{\ell\pi}{2},\, \dist\left(\zeta, [0,+\infty)\right) > L(\epsilon) \\ \left| G^2_{\ell}\circ\phi^{-1}_{\pm,\ell}(\zeta) - \phi^{-1}_{\pm,\ell}(\zeta) \right| < \epsilon . \end{multline*} \end{lem}

\begin{proof} If $0\leq \Im\zeta < \frac{\ell\pi}{2}$, then the maximum of the hyperbolic distances between $\zeta$ and $\zeta-\log\tau_{\ell}^2$ in the domains \begin{equation}\begin{split} \left\{ \zeta\in\CC\setminus [0,+\infty) :\: |\Im\zeta| < \frac{\ell\pi}{2}\right\}\; \text{and}\\ \left\{ \zeta\in\CC\setminus [0,+\infty) :\: 0<\Im\zeta < \ell\pi \right\} \end{split}\end{equation} tends to $0$ uniformly in $\ell$ as the distance from $\zeta$ to the slit $[0,+\infty)$ tends to $\infty$. The first of those domains is mapped by $\phi^{-1}_{\pm,\ell}$ univalently onto $\Omega_{\pm,\ell}$ by Fact~\ref{fa:1hp,1}, and the other onto $\Omega_{\ell} \cap \HH_+$ by Fact~\ref{fa:6hp,1}. Since either of these domains misses $x_{0,\ell}$, the element of the hyperbolic metric at $z:=\phi_{\pm,\ell}^{-1}(\zeta)$ is bounded by $4|z-x_{0,\ell}|\:|dz|$, which for $\ell$ large enough is less than $2R_0\;|dz|$. Hence, also the Euclidean distance $\bigl|\phi_{\pm,\ell}^{-1}(\zeta) - \phi_{\pm,\ell}^{-1}\left(\zeta-\log\tau_{\ell}^2\right)\bigr|$ tends to $0$ uniformly in $\ell$ as $\dist\left(\zeta,[0,\infty)\right) \rightarrow\infty$. But $\phi_{\pm,\ell}^{-1}\left(\zeta-\log\tau_{\ell}^2\right) = G^2_{\ell}\circ\phi^{-1}_{\pm,\ell}(\zeta)$, which ends the proof. \end{proof}

\begin{lem}\label{lem:10jp,2} \begin{multline*} \forall r>0\; \exists L(r),\ell(r)<\infty\; \forall \ell\geq \ell(r)\; \forall \zeta :\: |\Im\zeta|<\frac{\ell\pi}{2},\, \dist\left(\zeta, [0,+\infty)\right) > L(r) \\ \left| \phi^{-1}_{\pm,\ell}(\zeta) - x_{0,\infty} \right| \leq r . \end{multline*} \end{lem}

\begin{proof} Fix $R_0$ from Lemma~\ref{lem:17hp,1} and take any $r\in(0,R_0)$. From Lemma~\ref{lem:17hp,1} we then get $\varepsilon(r)$ and set $\epsilon := \varepsilon(r)/2$ in Lemma~\ref{lem:10jp,1}. The bound $\ell(r)$ can now be fixed so that for $\ell\geq\ell(r)$ both lemmas apply and $|x_{0,\ell}-x_{0,\infty}|<\frac{r}{2}$. Set also $L(r) := L(\epsilon)$ given by Lemma~\ref{lem:10jp,1}. For any $\ell\geq\ell(r)$ we consider the set \[ S_{\ell} := \bigl\{ \zeta\in\CC :\: |\Im\zeta|<\frac{\ell\pi}{2},\,\dist\left(\zeta,[0,+\infty)\right)>L(\epsilon),\, \left|\phi_{\pm,\ell}^{-1}(\zeta) -x_{0,\infty}\right| \leq r \bigr\} .\] $S_{\ell}$ is obviously closed in $\bigl\{ \zeta\in\CC :\: |\Im\zeta|<\frac{\ell\pi}{2},\,\dist\left(\zeta,[0,+\infty)\right)>L(\epsilon)\bigr\}$ and also non-empty, since $\lim_{x\rightarrow-\infty} \phi^{-1}_{\pm,\ell}(x) = x_{0,\ell}$. The proof is finished once we have shown that $S_{\ell}$ is also open.
If $\zeta$ is a non-interior point of $S_{\ell}$, we must have $\left|\phi_{\pm,\ell}^{-1}(\zeta) -x_{0,\infty}\right| = r$. Then by Lemma~\ref{lem:17hp,1} we get $\left| G_{\ell}^2\circ\phi_{\pm,\ell}^{-1}(\zeta) - \phi^{-1}_{\pm,\ell}(\zeta)\right| \geq \varepsilon(r) > \epsilon$. But by Lemma~\ref{lem:10jp,1}, $\left| G_{\ell}^2\circ\phi_{\pm,\ell}^{-1}(\zeta) - \phi^{-1}_{\pm,\ell}(\zeta)\right| < \epsilon$ for all $\zeta\in S_{\ell}$. Hence, there are no non-interior points. \end{proof}

\begin{lem}\label{lem:11jp,2} For any $L,\epsilon>0$ define the set \[ V(L,\epsilon) := \bigl\{ \zeta\in\CC :\: \Re\zeta \leq L,\,|\Im\zeta|<\pi,\,\dist\left(\zeta,[0,\infty)\right)\geq \epsilon\bigr\} .\] Then the family $\left(\phi^{-1}_{-,\ell}\right)_{\ell}$, $\ell$ positive and even or infinite, is equicontinuous on $V(L,\epsilon)$ and converges uniformly to $\phi_{-,\infty}^{-1}$. \end{lem}

\begin{proof} Let us begin by proving that for each $\ell$, $\phi^{-1}_{-,\ell}$ is uniformly continuous on $V(L,\epsilon)$. That is clear on the set $V(L',L,\epsilon) := \{ \zeta\in V(L,\epsilon) :\: \Re\zeta \geq L'\}$ for any $L'$ by compactness, and in particular since $\phi_{-,\ell}^{-1}$ extends through each line $\{ \zeta :\: \Im\zeta = \pm \pi\}$ by Fact~\ref{fa:6hp,1}. It remains to see that \begin{equation}\label{equ:11jp,2} \lim_{\Re\zeta\rightarrow-\infty} \phi_{-,\ell}^{-1}(\zeta) = x_{0,\ell} . \end{equation} This is the case when $\zeta$ is real, and any other sequence of points remains within a bounded hyperbolic distance from $\RR$ in the extended domain $-\frac{\pi}{2} \leq \Im\zeta \leq \frac{3}{2}\pi$ or in its symmetric image. Now equicontinuity will follow if we show uniform convergence. That again is clear on $V(L',L,\epsilon)$ for any $L'$ by Fact~\ref{fa:1hp,2}. On the set $V(L,\epsilon)\setminus V(L',L,\epsilon)$ we conclude from Lemma~\ref{lem:10jp,2} that for any $r>0$ there are $L'(r)$ sufficiently close to $-\infty$ and $\ell_0(r)$ such that $\phi^{-1}_{-,\ell}\left( V(L,\epsilon)\setminus V(L'(r),L,\epsilon) \right) \subset D(x_{0,\infty},r)$ for all $\ell\geq\ell_0(r)$. Uniform convergence follows. \end{proof}

Recall the principal inverse branch $\mathbf{G}_{\ell}^{-1}$, cf. Definition~\ref{defi:1hp,1}.

\begin{lem}\label{lem:11jp,1} For any $L,\epsilon>0$ define the set \[ W(L,\epsilon) := \bigl\{ z\in\CC :\: |z|\leq L,\,\dist\left(z,[\tau_{\infty},\infty)\right)\geq \epsilon\bigr\} \setminus (-\infty,0] .\] Then for some $\ell_0$ the sequence $\left(\mathbf{G}_{\ell}^{-1}\right)_{\ell=\ell_0}^{\infty}$ is equicontinuous and converges to $\mathbf{G}^{-1}_{\infty}$ uniformly on $W(L,\epsilon)$. \end{lem}

\begin{proof} The basis of the proof is the representation \begin{equation}\label{equ:11jp,1} \mathbf{G}_{\ell}^{-1}(z) = \tau_{\ell}\phi_{-,\ell}^{-1}\left(\log(z) - \log\tau_{\ell}^2\right) \end{equation} where the principal branch of the $\log$ is used. Then uniform convergence follows from the representation~(\ref{equ:11jp,1}) and Lemma~\ref{lem:11jp,2}. It remains to show uniform continuity of $\mathbf{G}_{\ell}^{-1}$ for each $\ell$. Here $\ell_0$ should be chosen so that for $\ell\geq\ell_0$ the difference $\left|\log\tau^2_{\infty}-\log\tau^2_{\ell}\right|$ is less than $\epsilon/2$. Then uniform continuity also follows from Lemma~\ref{lem:11jp,2} on any set where $\log(z)$ is uniformly continuous, which is the case outside of $D(0,\eta)$ for any $\eta>0$.
Additionally, by the representation~(\ref{equ:11jp,1}) and the limit~(\ref{equ:11jp,2}), $\mathbf{G}_{\ell}^{-1}$ can be extended continuously to $0$ by setting $\mathbf{G}_{\ell}^{-1}(0) := \tau_{\ell}x_{0,\ell}$, and uniform continuity follows. \end{proof}

\begin{lem}\label{lem:11jp,3} There exist $\ell_0$ and $L_0,\epsilon_0>0$ such that for every $L, \epsilon>0$ and $\ell\geq\ell_0$ the distance from the set $\mathbf{G}_{\ell}^{-1}\left(W(L,\epsilon)\right)$ to $[\tau_{\infty},\infty)$ is at least $\epsilon_0$ and the set is contained in $\{ z\in\CC :\: \Re z < L_0\}$. \end{lem}

\begin{proof} We begin by observing that \[ \mathbf{G}_{\ell}^{-1}\left(W(L,\epsilon)\right)\subset\tau_{\ell}\Omega_{-,\ell} .\] For $\ell=\infty$, $\tau_{\infty}\Omega_{-,\infty}$ is compactly contained in $\CC\setminus [\tau_{\infty},+\infty)$, and hence the distance in the claim of this lemma is positive and the set is bounded on the right. By the uniform convergence from Lemma~\ref{lem:11jp,1} this situation persists for all $\ell$ sufficiently large. \end{proof}

\begin{coro}\label{coro:11jp,1} There exist $\ell_0, L_0<\infty$ and $\epsilon_0>0$ such that for every $L\geq L_0$, $\epsilon :\: 0<\epsilon\leq\epsilon_0$ and $k\in\ZZ :\: k>0$ the family $\left(\mathbf{G}_{\ell}^{-k}\right)_{\ell\geq\ell_0}$ is equicontinuous and uniformly convergent on $W(L,\epsilon)$. \end{coro}

\begin{proof} This follows from an inductive use of Lemma~\ref{lem:11jp,1} once we pick $L_0,\epsilon_0$ as in Lemma~\ref{lem:11jp,3}. \end{proof}

\paragraph{Wedge lemma.}

For every $\ell$ even, positive and finite there is a repelling orbit of period $2$ under $G_{\ell}$ which consists of points $x_{+,\ell}\in \HH_+$ and $x_{-,\ell}\in \HH_-$. When $\ell\rightarrow\infty$ the points $x_{\pm,\ell}$ tend to $x_{0,\infty}$. The key observation is that there are two inverse branches of $G^2_{\ell}$, which will be written as $\mathbf{G}^{-2}_{\pm,\ell}$, which map $\HH_{\pm}$ into itself, respectively. Then $x_{\pm,\ell}$ are fixed points of the corresponding $\mathbf{G}^{-2}_{\pm,\ell}$ which attract $\HH_{\pm}$. The lemma below is stated for the upper half-plane without loss of generality.

\begin{lem}\label{lem:15hp,1} Suppose that $z_0\in\HH_+$ and $0<\eta\leq \Im z_0$. Furthermore, assume that $\frac{1}{3}\pi < \arg(z_0-x_{0,\infty}) < \frac{2}{3}\pi$. For every $\eta>0$ there is $\ell(\eta)<\infty$ such that whenever $\ell(\eta)\leq \ell <\infty$, the forward orbit of $z_0$ under $\mathbf{G}^{-2}_{+,\ell}$ is contained in $D(x_{0,\ell},e\Im z_0)$. \end{lem}

\begin{proof} We choose $\ell(\eta)$ so that for every $\ell\geq \ell(\eta)$ the stunted wedge $\{ z\in\HH_+ :\: \Im z\geq\eta ,\, \frac{1}{3}\pi < \arg(z-x_{0,\infty}) < \frac{2}{3}\pi \}$ is contained in the wedge $\mathfrak{W} := \{ z\in \HH_+ :\: \frac{1}{4}\pi < \arg (z-x_{+,\ell}) < \frac{3}{4}\pi \}$. Then for any $z_0\in\mathfrak{W}$ the hyperbolic distance in $\HH_+$ from $z_0$ to $x_{+,\ell}$ is less than $\log\frac{\Im z_0}{\Im x_{+,\ell}} + 2$. It is not expanded by the action of $\mathbf{G}^{-2}_{+,\ell}$. Given a hyperbolic distance, the maximum Euclidean distance is obtained when the real parts coincide, which yields the estimate of the Lemma. \end{proof}

A consequence of the wedge lemma is the following estimate.
\begin{lem}\label{lem:11jp,4} For every $r,H>0$ there exist $K(r,H)$ and $\ell(r,H)$ such that for every $\ell\geq\ell(r,H)$ we get \[ \phi_{\pm,\ell}^{-1} \left( \{ \zeta\in\CC :\: |\Im\zeta|\leq H,\,\Re\zeta>K(r,H)\}\right) \subset D(x_{0,\infty},r) .\] \end{lem}

\begin{proof} Since for $\ell=\infty$ the mapping $\phi_{\pm,\infty}$ is a Fatou coordinate, its inverse maps every horizontal half-ray \[ \{ \zeta\in\CC :\: \Im\zeta=H, \Re\zeta>0\} \] to a curve convergent to $x_{0,\infty}$ and tangent to the repelling direction. Hence, given $r,H$, for some $K(r,H)$ the set $\left\{ \zeta\in\CC :\: |\Im\zeta|\leq H,\,\Re\zeta > K(r,H) \right\}$ is mapped into $\left\{ z\in\CC :\: |z-x_{0,\infty}|<\frac{r}{10},\, \arg(z-x_{0,\infty})^2 < \frac{\pi}{10} \right\}$. Now choose an integer $k(r)$ so that \begin{equation}\label{equ:11jp,3} \left(k(r)-1\right)\log\tau^2_{\infty} \in \left( K(r,H), K(r,H)+2\log\tau^2_{\infty}\right) . \end{equation} Then for any $\ell$ \[ W(r,H,\ell) := \phi_{\pm,\ell}^{-1}\bigl( \left\{ \zeta\in\CC :\: |\Im\zeta|\leq H,\, \left(k(r)-1\right)\log\tau^2_{\ell} \leq \Re\zeta \leq k(r)\log\tau^2_{\ell} \right\} \bigr) \] is contained in \[ \mathbf{G}^{-2\left(k(r)+1\right)}_{\ell}\Bigl(\phi_{\pm,\ell}^{-1}\bigl( \left\{ \zeta\in\CC :\: |\Im\zeta|\leq H,\, -2\log\tau^2_{\ell}\leq \Re\zeta\leq -\log\tau^2_{\ell}\right\}\bigr)\Bigr) .\] By Corollary~\ref{coro:11jp,1}, for all $\ell$ sufficiently large \[ W(r,H,\ell) \subset \left\{ z\in\CC :\: |z-x_{0,\infty}|<\frac{r}{3},\, \arg(z-x_{0,\infty})^2 < \frac{\pi}{3} \right\} .\] Then by Lemma~\ref{lem:15hp,1} all subsequent images of $W(r,H,\ell)$ by iterates of $\mathbf{G}_{\ell}^{-2}$ are contained in $D(x_{0,\infty},r)$. But that means that the entire set \[ \phi_{\pm,\ell}^{-1}\bigl( \left\{ \zeta\in\CC :\: |\Im\zeta|\leq H,\, \left(k(r)-1\right)\log\tau^2_{\ell} \leq \Re\zeta \right\}\bigr) \subset D(x_{0,\infty},r) . \] By the choice of $k(r)$, cf. expression~(\ref{equ:11jp,3}), resetting $K(r,H) := K(r,H)+2\log\tau^2_{\infty}$ yields the claim. \end{proof}

\begin{lem}\label{lem:11jp,5} For every $r>0$ there are $K(r)$ and $\ell(r)$ such that for any $\ell\geq\ell(r)$ and $\zeta :\: |\zeta|>K(r),\, |\Im\zeta| < \frac{\ell\pi}{2},\, \zeta\notin [0,+\infty)$ we get $\phi^{-1}_{\pm,\ell}(\zeta) \in D(x_{0,\infty},r)$. \end{lem}

\begin{proof} Let us begin with Lemma~\ref{lem:10jp,2}, which implies the claim for $\zeta :\: |\Im\zeta|<\frac{\ell\pi}{2},\, \dist\left(\zeta,[0,+\infty)\right) > L(r)$. Then invoke Lemma~\ref{lem:11jp,4} with $H:=L(r)$ to conclude that the claim also holds on the infinite half-strip $\bigl\{ \zeta\in\CC :\: |\Im\zeta|\leq L(r),\, \Re\zeta > K\left(r,L(r)\right)\bigr\}$. What remains is a bounded set. \end{proof}

\paragraph{Proof of Proposition~\ref{prop:11jp,1}.}

The limit at $\infty$ for $\phi^{-1}_{\pm,\infty}$ follows from Lemma~\ref{lem:11jp,5}. It remains to check uniform convergence. Fix $r>0$. By Lemma~\ref{lem:11jp,5}, for $|\zeta|>K\left(\frac{r}{2}\right)$ we get $\phi^{-1}_{\pm,\ell}(\zeta) \in D(x_{0,\infty},\frac{r}{2})$ for all $\ell$ large enough, and hence $\left|\phi^{-1}_{\pm,\ell}(\zeta)-\phi^{-1}_{\pm,\infty}(\zeta)\right| < r$. The remaining bounded set, after shifting by some multiple of $\log\tau^2_{\infty}$, is compactly contained in $\CC\setminus [0,+\infty)$. Hence uniform convergence follows from Fact~\ref{fa:1hp,2} and Corollary~\ref{coro:11jp,1}.

\paragraph{Diameter of $\mathfrak{w}_{\ell}$.}

Recall the arc $\mathfrak{w}_{\ell}$ which for finite $\ell$ separates $\Omega_{+,\ell}$ from $\Omega_{-,\ell}$.
$\mathfrak{w}_{\ell}\cap\HH_+$ is invariant under $\mathbf{G}_{+,\ell}^{-2}$.

\begin{lem}\label{lem:17ha,3} For every $\epsilon>0$ there is $\ell(\epsilon)$ such that for any $\ell\geq \ell(\epsilon)$, even and finite, and any $z\in \mathfrak{w}_{\ell}\cap\HH_+$, the hyperbolic diameter in $\HH_+$ of the subarc of $\mathfrak{w}_{\ell}$ between $z$ and $G^2_{\ell}(z)$ is bounded by $\epsilon$. \end{lem}

\begin{proof} Let $\mathfrak{w}(z)$ denote the segment of $\mathfrak{w}_{\ell}$ between $z$ and $G^2_{\ell}(z)$. Then its hyperbolic diameter is bounded by the hyperbolic diameter of $\mathfrak{w}\left(G^{2n}_{\ell}(z)\right)$ for any $n$ positive, by Schwarz' Lemma. On the other hand, $\mathfrak{w}_{\ell}$ is a preimage of a line by an analytic mapping, hence a smooth curve at $x_{0,\ell}$ tangent to the vertical line $x_{0,\ell}+\iota\RR$. It follows that the limit of the hyperbolic diameter of $\mathfrak{w}\left(G^{2n}_{\ell}(z)\right)$ as $n\rightarrow\infty$ is $-2\log |G'_{\ell}(x_{0,\ell})|$, which tends to $0$ as $\ell\rightarrow\infty$. \end{proof}

\begin{lem}\label{lem:17ha,2} \[ \lim_{\ell\rightarrow\infty} \diam (\mathfrak{w}_{\ell}) = 0 \; .\] \end{lem}

\begin{proof} It is enough to prove the claim for $\mathfrak{w}_{\ell} \cap \HH_+$. Fix $r>0$ and suppose that for arbitrarily large $\ell$ the arc $\mathfrak{w}_{\ell} \cap \HH_+$ intersects $C(x_{0,\infty},r)$ at $z_0$. Then by Lemma~\ref{lem:17hp,1}, for $\ell\geq \ell(r)$ the Euclidean diameter of the subarc of $\mathfrak{w}_{\ell}$ between $z_0$ and $G^2_{\ell}(z_0)$ is at least $\varepsilon(r)$. But by Lemma~\ref{lem:17ha,3} the hyperbolic diameter of the same arc tends to $0$ as $\ell\rightarrow\infty$, which yields a contradiction. \end{proof}

\section{Dynamics near an almost parabolic point.}

\subsection{Elementary estimates.}

\paragraph{Double wedge in $\Omega_{\ell}$.}

Start with the following fact:

\begin{fact}\label{fa:18ha,1} For any $\delta>0$ there is $r(\delta)>0$ such that the double wedge \[ \left\{ x_{0,\infty}+\zeta :\: |\zeta|<r(\delta),\, |\arg\zeta^2| < \pi-\delta \right\} \] is contained in $\Omega_{\infty}$. \end{fact}

We will now work to obtain a similar estimate for finite $\ell$, uniform in $\ell$.

\begin{defi}\label{defi:24ha,1} For $\delta>0$ and $0<r<R$ and $s\in\{+,-,0\}$ we will write \[W_s(\delta,r,R) := \left\{ x_{s,\infty}+\zeta :\: r<|\zeta|<R,\, |\arg\zeta^2| < \pi-\delta \right\} \; .\] \end{defi}

\begin{lem}\label{lem:18ha,1} For every $\delta>0$ and $s\in\{+,-,0\}$ there is $r(\delta)>0$ and, additionally, for every $r_1>0$ there is $\ell(\delta,r_1)<\infty$ such that \[ \forall\ell\geq\ell(\delta,r_1)\; W_s(\delta,r_1,r(\delta)) \subset \Omega_{\ell} \; ,\] cf. Definition~\ref{defi:24ha,1}. \end{lem}

\begin{proof} By Fact~\ref{fa:18ha,1}, for every $r_1>0$ and $r(\delta)$ taken from that Fact, the set $W_0(\delta,r_1,r(\delta))$ is compactly contained in $\Omega_{\infty}$. Moreover, for some $\epsilon(r_1,\delta)>0$, \[ \bigcup_{z\in W_0(\delta,r_1,r(\delta))} \overline{D\left(z,\epsilon(r_1,\delta)\right)} \] remains compactly contained in $\Omega_{\infty}$. By Fact~\ref{fa:1hp,2}, for $\ell$ large enough, the mappings $\phi_{\pm,\ell}^{-1} \circ \phi_{\pm,\infty}$ send $C\left(z,\epsilon(\delta,r_1)\right)$ to a Jordan curve which surrounds $z$. By the argument principle, $z$ also has a preimage by $\phi_{\pm,\ell}$. The claim for $s=\pm$ follows since $\lim_{\ell\rightarrow\infty} |x_{0,\ell}-x_{s,\ell}| = 0$.
\end{proof}

\subsection{Main theorem.}

\begin{defi}\label{defi:27ha,1} For an analytic function $g$, a point $z$ which can be forever iterated by $g$, and $\sigma>0$ define \[ P(g,z,\sigma) := \sum_{k=0}^{\infty} |Dg^k(z)|^{\sigma} \; .\] \end{defi}

We now state a general theorem whose hypotheses are satisfied by the functions $G_{\ell}$ we have considered so far. In particular, the geometric condition on $\Omega_{\ell}$ follows from Lemma~\ref{lem:18ha,1}. Recall that a mapping $g$ symmetric about $\RR$ and defined in $\CC$ doubly slit along the real axis is in the {\em Epstein class} if its derivative does not vanish on $\RR$ and it has an inverse branch defined on $\HH_+$ which maps into $\HH_+$ or $\HH_{-}$.

\begin{theo}\label{theo:27ha,1} Suppose that $(G_{\ell})$ is a sequence of mappings which are all defined on $\CC\setminus\left( (-\infty, X_1] \cup [ X_2,+\infty)\right)$, $X_1<X_2$, which are holomorphic, symmetric about $\RR$ and in the Epstein class. Next, for some sequence $(x_{0,\ell})$ of points contained in $(X_1,X_2)$ and convergent to $x_{0,\infty}\in (X_1,X_2)$, there is a representation \[ G_{\ell}(z+x_{0,\ell})-x_{0,\ell} = \sum_{k=1}^{\infty} \alpha_{k,\ell} z^k \] where $\forall \ell\; \alpha_{1,\ell}\in (-1,0)$ and $\lim_{\ell\rightarrow\infty} \alpha_{1,\ell} = -1$. Suppose finally that $G_{\ell}$ converge almost uniformly in their domain. For every $\ell$, let $\Omega_{\ell}$ be a domain which is fully invariant under $G_{\ell}$ and assume further \begin{multline*} \exists \delta_0 > \frac{\pi}{2}\; \exists R_0>0\; \forall r>0\; \exists \ell_0(r)<\infty \; \forall \ell\geq\ell_0(r)\; \\ \left\{ x_{0,\infty}+z :\: r<|z|<R_0,\, |\arg z^2| < \delta_0\right\} \subset \Omega_{\ell} \; . \end{multline*} Then, for some $R_1>0$ and every $\sigma>\frac{4}{3}$, the integrals \[ \int_{D(x_{0,\infty},R_1)\setminus\Omega_{\ell}} P(\mathbf{G}_{\ell}^{-2},x+\iota y,\sigma)\, dx\, dy\] are uniformly bounded for all $\ell$, where $\mathbf{G}^{-2}_{\ell}$ is the inverse branch of $G^2_{\ell}$ which fixes $x_{0,\ell}$. \end{theo}

From these hypotheses, for every $\ell$ we get a repelling periodic orbit $\{x_{\pm,\ell}\}$ of period $2$ under $G_{\ell}$. The next lemma is stated for $\HH_+$ without loss of generality, since by symmetry the analogous statement holds in the lower half-plane.

\begin{lem}\label{lem:24ha,1} For some $\delta<\frac{\pi}{4}$ there is $r_0>0$ such that for every $r :\: 0<r<r_0$ there exists $\ell(r)<\infty$ so that the following claim holds. If $u\in \HH_+ \cap D\left(x_{+,\ell},\frac{r}{2}\right)$ and $u\notin\Omega_{\ell}$, then for some positive $n$ and all $\ell\geq\ell(r)$ \[ G^{2n}_{\ell}(u) \in \left\{ x_{+,\ell}+\iota z\in \HH_+ :\: \frac{r}{2} < |z| < r,\, |\arg z| <\delta\right\} .\] \end{lem}

\begin{proof} Initially choose $\ell(r)$ so large that $|x_{0,\ell}-x_{+,\ell}| < \frac{r}{2}$ for all $\ell\geq\ell(r)$. Additionally, when $r$ is small enough and $\ell$ large, then $G_{\ell}^2\left(D(x_{+,\ell},\frac{r}{2}) \cap \HH_+\right) \subset \HH_+$. Consider the orbit of $u$ under $G_{\ell}^2$. First we show that for some $n$ it must leave $D(x_{+,\ell},\frac{2r}{3})$. Suppose not. Since $G_{\ell}^2$ expands the hyperbolic metric of $\HH_+$, the orbit must eventually leave every compact neighborhood of $x_{+,\ell}$. It follows that $\lim_{n\rightarrow\infty} \Im G^{2n}_{\ell}(u) = 0$. By choosing $r$ small, we can make sure that $[x_{0,\ell}-r,x_{0,\ell}+r]\subset\Omega_{\ell}$ for all $\ell$.
Thus, $G^{2n}_{\ell}(u) \in\Omega_{\ell}$, which contradicts the hypothesis of Theorem~\ref{theo:27ha,1}, by which $\Omega_{\ell} \cap \HH_+$ is completely invariant under $G^2_{\ell}$. Now we see that for some $n\geq 0$ we have \begin{align*} \left|G^{2n}_{\ell}(u)-x_{+,\ell}\right| \leq & \frac{r}{2},\, \text{but}\\ \left|G^{2(n+1)}_{\ell}(u)-x_{+,\ell}\right| > & \frac{r}{2} . \end{align*} Since $G_{\ell} \rightarrow G_{\infty}$ uniformly on compact neighborhoods of $x_{0,\infty}$ and the derivative is $1$ at that point, by choosing $r$ small and $\ell$ large, we can have \[\left| \frac{G^{2n}_{\ell}(u)-x_{+,\ell}}{G^{2(n+1)}_{\ell}(u)-x_{+,\ell}} \right| > \frac{1}{2} .\] Hence \[ G^{2(n+1)}(u) \in \left\{ z\in \HH_+ :\: \frac{r}{2} < |z-x_{+,\ell}| < r\right\} \setminus \Omega_{\ell} \; .\] The condition on the argument follows from the geometric hypothesis of Theorem~\ref{theo:27ha,1}. The possibility of $\arg z$ being close to $\pi$ can be ruled out when $\ell$ is made sufficiently large so that $|x_{0,\ell}-x_{+,\ell}|$ becomes small compared to $r$. \end{proof}

\subsection{Generalized Fatou coordinate.}

Let us write \[ \mathbf{G}^{-2}_{\ell}(x_{0,\ell}+z) - x_{0,\ell} = \sum_{k=1}^{\infty} a_{k,\ell} z^k .\] For $a_{2,\ell}$ the condition of dominant convergence is satisfied and so it can be removed by a change of coordinate which for all $\ell$ belongs to a compact family of diffeomorphisms of a fixed neighborhood of $x_{0,\infty}$, see the proof of Theorem 7.2 in~\cite{profesorus1}. With a slight abuse of notation we internalize this change of coordinate, simply assuming $a_{2,\ell}=0$. Next, we write $a_{1,\ell}=1+\frac{\rho_{\ell}}{4}$ where $\rho_{\ell}>0$. We also know that $\lim_{\ell\rightarrow\infty} a_{3,\ell} = a_{3,\infty} > 0$. Now $x_{+,\ell}=x_{0,\ell}+\iota\sqrt{\frac{\rho_{\ell}}{4a_{3,\ell}}}\mathfrak{E}(\ell)$, where we shall write $\mathfrak{E}(\ell) := \exp\bigl(O\left(\sqrt{\rho_{\ell}}\right)\bigr)$. Consider the development of $\mathbf{G}^{-2}_{\ell}$ at $x_{+,\ell}$: \begin{equation*} \mathbf{\Gamma}_{\ell}(z) := \iota^{-1}\left(\mathbf{G}^{-2}_{\ell}(x_{+,\ell}+\iota z)-x_{+,\ell}\right) = \sum_{k=1}^{\infty} \hat{a}_{k,\ell} z^k \end{equation*} where \begin{equation}\label{equ:25hp,2} \begin{split} \hat{a}_{1,\ell} = & \left(1-\frac{\rho_{\ell}}{2}\mathfrak{E}(\ell)\right)\\ \hat{a}_{2,\ell} = & - \frac{3}{2}\sqrt{a_{3,\ell}\rho_{\ell}}\mathfrak{E}(\ell)\\ \hat{a}_{3,\ell} = & -a_{3,\ell}\mathfrak{E}(\ell) . \end{split} \end{equation} Observe that the analysis of $\mathbf{\Gamma}_{\ell}$ is non-standard in that the quadratic term cannot be removed or neglected as $\ell\rightarrow\infty$, i.e. there is no dominant convergence in the sense of~\cite{profesorus1}.

\paragraph{Definition of the generalized Fatou coordinate.}

\begin{defi}\label{defi:30hp,1} Define $\zeta_{\ell} :\: \CC \rightarrow \hat{\CC}\setminus\{0\}$ by \[ \zeta_{\ell}(z) = \frac{1}{2A_{\ell} z^2} \; \] where $A_{\ell} = -\frac{\hat{a}_{3,\ell}}{\hat{a}_{1,\ell}} + 3\frac{\hat{a}^2_{2,\ell}}{\hat{a}^2_{1,\ell}} = a_{3,\ell}\mathfrak{E}(\ell)$. \end{defi}

If we denote $\zeta := \zeta_{\ell}(z)$, we get the representation: \begin{equation}\label{equ:25hp,1} \zeta_{\ell}\left(\Gamma_{\ell}(z)\right) = \gamma_{\ell}\left(\sqrt{\zeta}\right) := \hat{a}^{-2}_{1,\ell}\zeta+1 +\sqrt{\frac{9}{2}\rho_{\ell}}\mathfrak{E}(\ell)\sqrt{\zeta} + O\left(|\zeta|^{-1/2}\right) \; .
\end{equation} In order for equation~(\ref{equ:25hp,1}) to hold, the correct branch of $\sqrt{\zeta}$ needs to be chosen, by substituting $\sqrt{\zeta} = z^{-1} \left(2A_{\ell}\right)^{-1/2}$. In particular, for $\Re z > 0$ one should choose the principal branch of $\sqrt{\zeta}$. When the principal branch of $\sqrt{\zeta}$ is used, we will talk of the principal branch $\mathbf{\gamma}_{\ell}$.

\begin{lem}\label{lem:25hp,1} There exist constants $\ell_0<\infty$ and $R_0,K_1,K_2,K_3$ such that for any $\zeta :\: |\zeta| \geq R_0$ and $\ell\geq\ell_0$ \[ \left|\gamma_{\ell}(\sqrt{\zeta})-\zeta-\rho_{\ell}\zeta-\sqrt{\frac{9}{2}\rho_{\ell}\zeta}-1\right| \leq K_1 \rho_{\ell}^{3/2} |\zeta| + K_2 \rho_{\ell} \sqrt{|\zeta|} + K_3 |\zeta|^{-1/2} \; .\] \end{lem}

\begin{proof} From formula~(\ref{equ:25hp,1}), the linear term in $\gamma_{\ell}(\sqrt{\zeta})-\zeta$ is $(\hat{a}_{1,\ell}^{-2}-1)\zeta=\rho_{\ell}\zeta + O(\rho_{\ell}^{3/2}) \zeta$, which gives rise to the term of order $|\zeta|$ in the claim of the Lemma. The root term in formula~(\ref{equ:25hp,1}) is \[ \sqrt{\frac{9}{2}\rho_{\ell}}\mathfrak{E}(\ell)\sqrt{\zeta} = \sqrt{\frac{9}{2}\rho_{\ell}}\sqrt{\zeta} + O\left(\rho_{\ell}\sqrt{|\zeta|}\right) \] and the $O\left(|\zeta|^{-1/2}\right)$ term is directly copied. \end{proof}

\subsection{Dynamics of $\mathbf{\gamma}_{\ell}$.}

Although the goal of Theorem~\ref{theo:27ha,1} is an estimate uniform in $\ell$, the description of the dynamics will be split into cases depending on $\ell$: the mid-range case of $\zeta = O\left(\rho^{-1}_{\ell}\right)$, which is generally reminiscent of a parabolic point, and the far range for larger $\zeta$, where the true nature of the fixed point at $x_{+,\ell}$ becomes evident.

\begin{lem}\label{lem:27hp,1} For any $\delta :\: 0<\delta<\frac{\pi}{2}$ and $Q\geq 1$ there are $r(\delta)$ and $\ell_0(\delta,Q)$ such that for every $\ell\geq\ell_0(\delta,Q)$, if $\zeta :\: r(\delta)<\Re\zeta<Q\rho^{-1}_{\ell},\, |\arg\zeta|<\delta$, then $\Re \mathbf{\gamma}_{\ell}(\sqrt{\zeta}) > \Re \zeta + \frac{1}{2}$ and $|\arg \mathbf{\gamma}_{\ell}(\sqrt{\zeta})| < \delta$. \end{lem}

\begin{proof} According to Lemma~\ref{lem:25hp,1}, \[ \mathbf{\gamma}_{\ell}(\sqrt{\zeta})-\zeta= \rho_{\ell}\zeta + \sqrt{\frac{9\rho_{\ell}}{2}\zeta} + 1 + \text{corrections} \; .\] Both the linear and the root terms help the estimate of the Lemma, by increasing the real part of the expression and bringing its argument closer to $0$, so we ignore them. What is left is $1$ and the corrections. We make each of the corrections less than $\frac{\delta}{30}$. For the $K_3$ term this requires making $\zeta$ sufficiently large depending on $\delta$. The next term is bounded by $K_2\sqrt{Q}\rho_{\ell}^{1/2}$ and requires $\ell$ large enough depending on $\delta,Q$, and the first term is estimated similarly. Thus, \[ \mathbf{\gamma}_{\ell}(\sqrt{\zeta})-\zeta= 1 + E(\delta) + \text{terms of $\Re>0$ and $|\arg| < \delta$} \] with $\left|E(\delta)\right| < \frac{\delta}{10}$. The claim of the Lemma follows.
\end{proof}

\begin{lem}\label{lem:27hp,2} With the same notations as in the previous lemma, for every $\delta>0$ there are $r(\delta)>0, L(Q)<\infty$ such that if $r(\delta) < \Re\zeta$, $|\arg\zeta|<\delta$ and for every $j=0,\cdots, k$, $\Re \mathbf{\gamma}^j_{\ell}(\zeta) < Q\rho^{-1}_{\ell}$, then \[ \forall \ell\geq\ell_0(\delta,Q)\; \left| D_{\zeta}\mathbf{\gamma}^k_{\ell}(\zeta) \right| < L(Q) \; .\] \end{lem}

\begin{proof} We choose $r(\delta)$ at least as large as in Lemma~\ref{lem:27hp,1}, and as a consequence of Lemma~\ref{lem:25hp,1} and Cauchy's estimates we get \[ \bigl|\log\left(D_{\zeta}\gamma_{\ell}(\sqrt{\zeta})\right)\bigr| \leq K_1\rho_{\ell} + K_2\rho_{\ell}|\zeta|^{-1/2} + K_3|\zeta|^{-3/2} \] for $|\zeta|, \ell$ greater than some constants. By Lemma~\ref{lem:27hp,1}, $|\mathbf{\gamma}_{\ell}^j(\sqrt{\zeta})|\geq r(\delta)+\frac{j}{2}$. If $r(\delta)>1$, this leads to the following estimate: \[ \bigl|\log\left(D_{\zeta}\gamma^k_{\ell}(\sqrt{\zeta})\right)\bigr| \leq K_1k \rho_{\ell} + K_2\rho_{\ell}\sqrt{k} + 2K_3\sum_{j=1}^{\infty} j^{-3/2} \; . \] At the same time, since $\Re\mathbf{\gamma}_{\ell}^k(\zeta)<Q\rho^{-1}_{\ell}$, we have $k<2Q\rho^{-1}_{\ell}$, which yields the claim of the Lemma. \end{proof}

\paragraph{Far-range dynamics.}

Here we assume $|\zeta|\geq Q\rho_{\ell}^{-1}$.

\begin{lem}\label{lem:27hp,3} For every $\eta>0$ there are $Q(\eta) :\: 1<Q(\eta)<\infty$ and $\ell_0(\eta)$ such that for every $\ell\geq \ell_0(\eta)$ and $\zeta :\: |\zeta| \geq Q(\eta)\rho_{\ell}^{-1}$ \begin{itemize} \item \[ \left|\gamma_{\ell}(\sqrt{\zeta})\right| \geq \left| \zeta\right|(1+\rho_{\ell})^{1-\eta} \; ,\] \item \[ \left|D_{\zeta}\gamma_{\ell}(\sqrt{\zeta})\right| \leq (1+\rho_{\ell})^{1+\eta} \; .\] \end{itemize} \end{lem}

\begin{proof} From Lemma~\ref{lem:25hp,1} we conclude that for $|\zeta| \geq Q\rho_{\ell}^{-1}$, $\ell\geq\ell_0$, \[ \left|\frac{\gamma_{\ell}\left(\sqrt{\zeta}\right)}{\zeta+\rho_{\ell}\zeta}\right| \geq 1-Q^{-1}\rho_{\ell} - \rho_{\ell}\sqrt{\frac{9}{2Q}} - KQ^{-1/2} \rho^{3/2}_{\ell} = 1 - K(\ell,Q)\rho_{\ell}\] where $\forall \ell_0\; \lim_{Q\rightarrow\infty} \sup\{ K(\ell,Q) :\: \ell\geq\ell_0 \} = 0$. For $\rho_{\ell}$ small enough and $K(\ell,Q)\leq 1$ this leads to \[ \left|\frac{\gamma_{\ell}\left(\sqrt{\zeta}\right)}{\zeta+\rho_{\ell}\zeta}\right| \geq \left(1+\rho_{\ell}\right)^{-2K(\ell,Q)} \] and it suffices to choose $Q(\eta)$ so that $\sup\left\{ K(\ell,Q(\eta)) :\: \ell\geq\ell_0\right\} < \frac{\eta}{2}$ in order to obtain the first claim. For the second claim, we similarly get from Lemma~\ref{lem:25hp,1} that \[ \left| D_{\zeta}\gamma_{\ell}(\sqrt{\zeta})\right| \leq 1+ \rho_{\ell} + \rho_{\ell} \sqrt{\frac{9}{2Q}} + K_1\rho_{\ell}^{3/2} \] for $\ell$ and $Q$ suitably bounded below. Similarly to the previous case, the right side can be bounded above by $\left(1+\rho_{\ell}\right)^{1+2K'(\ell,Q)}$ and the second claim follows. \end{proof}

\paragraph{Joint estimates.}

We will now write general estimates on the absolute value and derivative of the iterates of $\gamma_{\ell}$.
\begin{lem}\label{lem:30hp,1} For every $\delta :\: 0<\delta<\frac{\pi}{2}$ there is $r_0(\delta)>0$ and for every $\eta>0$ there are $\ell_0(\delta,\eta), L(\eta), Q(\eta)>1$ such that \begin{equation*} \forall \ell\geq \ell_0(\delta,\eta)\; \forall \zeta\in\CC :\: |\zeta|>r_0,\,|\arg\zeta|<\delta \; \exists k(\zeta,\ell)\; \Re\gamma_{\ell}^{k(\zeta,\ell)}(\sqrt{\zeta})\geq Q(\eta)\rho_{\ell}^{-1} :\: \end{equation*} \begin{equation*} \begin{split} \forall 0\leq k\leq k(\zeta,\ell)\; & \Re\gamma^k_{\ell}(\sqrt{\zeta})\geq \Re\zeta+\frac{k}{2} \\ \forall k\geq 0\; & \bigl|\gamma^k_{\ell}\left(\sqrt{\zeta}\right)\bigr| \geq \bigl(\Re\gamma_{\ell}^{\min\left(k,k(\zeta,\ell)\right)}(\sqrt{\zeta})\bigr)(1+\rho_{\ell})^{\max\left(k-k(\zeta,\ell\right),0)(1-\eta)}\\ \forall k\geq 0\; & \bigl|D_{\zeta}\gamma^k_{\ell}\left(\sqrt{\zeta}\right)\bigr| \leq L(\eta) (1+\rho_{\ell})^{\max\left(k-k(\zeta,\ell),0\right)(1+\eta)} . \end{split} \end{equation*} \end{lem} \begin{proof} By Lemma~\ref{lem:27hp,1} when $\zeta$ is chosen in the specified set, it will move inside the same set by at least $\frac{1}{2}$ to the right by each iterate of $\gamma_{\ell}$, which then must be the principal branch. $Q(\eta)$ is chosen by Lemma~\ref{lem:27hp,3}. The key point is the choice of $k(\zeta,\ell)$ which the smallest $k$ for which $\Re\gamma_{\ell}^k(\zeta) \geq Q(\eta)\rho_{\ell}^{-1}$. Until that point the dynamics is controlled by Lemma~\ref{lem:27hp,1} and the estimate of Lemma~\ref{lem:27hp,2} on the derivative, while afterwards the dynamics becomes complicated, but simple estimates of Lemma~\ref{lem:27hp,3} hold. \end{proof} Now we draw conclusions for iterates of $\mathbf{G}_{\ell}^{-2}$. \begin{lem}\label{lem:30hp,2} For every $\delta :\: 0<\delta<\frac{\pi}{4}$ there is $r_0(\delta)>0$ and for every $\eta>0$ and $r :\: 0<r<r_0$ there are $\ell_0(\delta,\eta), L(\eta,r)$ such that \begin{equation*} \forall \ell\geq \ell_0(\delta,\eta)\; \forall z \in \bigl\{ z\in\CC :\: r<|z-x_{+,\ell}|<r_0,\,\left|\arg \iota^{-1}(z-x_{+,\ell})\right|<\delta \bigr\}\; \exists k(z,\ell) \end{equation*} \begin{equation*} \begin{split} \forall 0\leq k \leq k(z,\ell)\; & \left| D_z\mathbf{G}_{\ell}^{-2k}(z)\right| \leq L(\eta,r) \bigl(1 + \frac{k}{2}\bigr)^{-3/2}\\ \forall k\geq k(z,\ell)\; &\left| D_z\mathbf{G}_{\ell}^{-2k}(z)\right| \leq L(\eta,r)\rho_{\ell}^{3/2}\left(1+\rho_{\ell}\right)^{(-\frac{1}{2}+3\eta)\max\left(k-k(z,\ell),0\right)} . \end{split} \end{equation*} \end{lem} \begin{proof} This is a consequence of Lemma~\ref{lem:30hp,1} and the change of coordinate $\zeta := \zeta_{\ell}(z)$, cf. Definition~\ref{defi:30hp,1}. The derivative of that change of coordinate is bounded below in terms of $r$ and the derivative of the inverse change is bounded above by a constant times $|\zeta|^{-3/2}$. The bound on the argument $\delta$ doubles by the generalized Fatou coordinate, hence different values in the hypotheses of the Lemmas. Now the claim follows directly from Lemma~\ref{lem:30hp,1}, except that by taking $r_0(\delta)$ small enough we guarantee $\Re\zeta\geq 1$. \end{proof} \subsection{Estimates of the Poincar\'{e} series.} Define a domain $W(r,\delta) : = \{ x_{+,\ell}+\iota z \in\CC\setminus\Omega_{\ell}:\: \frac{r}{2} < |z| < r,\, |\arg z| < \delta \}$ where $\delta_0<\frac{\pi}{4}$. For $z\in W(r,\delta_0)$ we define \begin{equation}\label{equ:30hp,2} \hat{P}(z,\sigma) := \sum_{k=1}^{\infty}\sum_{j=1}^{k-1} |D\mathbf{G}_{\ell}^{-2j}(z)|^{2-\sigma} |D\mathbf{G}_{\ell}^{-2k}(z)|^{\sigma} \; . 
\end{equation} \begin{lem}\label{lem:28hp,1} For some $r_0>0$ and $0<\delta<\frac{\pi}{4}$, any $0<r<r_0$ and every $\ell\geq \ell_0(r)$, \[ \int_{D(x_{0,\infty},\frac{r}{4})\setminus\Omega_{\ell}} P(\mathbf{G}_{\ell}^{-2},x+\iota y,\sigma) \leq 2 \int_{W(r,\delta)} \hat{P}(x+\iota y,\sigma)\,dx\,dy \; .\] \end{lem} \begin{proof} Constants $r_0$ and $\delta$ are chosen from Lemma~\ref{lem:24ha,1} which asserts that $W(r,\delta)$ is a fundamental domain such that every orbit which starts in $\HH_+ \cap D\left(x_{+,\ell},\frac{r}{2}\right)\setminus\Omega_{\ell}$ passes through it under the forward iteration by $G^2_{\ell}$. For $\ell$ sufficiently large depending on $r$, $D\left(x_{0,\infty},\frac{r}{4}\right) \subset D\left(x_{+,\ell},\frac{r}{2}\right)$. Taking into account the symmetry about $\RR$, the claim of the Lemma is reduced to \begin{equation}\label{equ:30hp,1} \int_{D(x_{+,\ell},\frac{r}{2}) \cap \HH_+\setminus\Omega_{\ell}} P(\mathbf{G}_{\ell}^{-2},x+\iota y,\sigma) \leq \int_{W(r,\delta)} \hat{P}(x+\iota y,\sigma)\,dx\,dy \; . \end{equation} By the fundamental domain property \begin{multline*} \int_{D(x_{+,\ell},\frac{r}{2}) \cap \HH_+\setminus\Omega_{\ell}} P(\mathbf{G}_{\ell}^{-2},x+\iota y,\sigma) \leq \\ \int_{W(r,\delta)} \sum_{j=1}^{\infty} P\left(\mathbf{G}_{\ell}^{-2},\mathbf{G}_{\ell}^{-2j}(x+\iota y),\sigma\right) \left|D_z\mathbf{G}_{\ell}^{-2j}(x+\iota y)\right|^2\,dx\,dy \; . \end{multline*} Representing the Poincar\'{e} series from the definition, we evaluate the sum under the second integral: \begin{multline*} \sum_{j=1}^{\infty} P\left(\mathbf{G}_{\ell}^{-2},\mathbf{G}_{\ell}^{-2j}(x+\iota y),\sigma\right) \left|D_z\mathbf{G}_{\ell}^{-2j}(x+\iota y)\right|^2 =\\ \sum_{j=1}^{\infty} \sum_{p=1}^{\infty} \bigl| D_z\mathbf{G}_{\ell}^{-2p}\left( \mathbf{G}_{\ell}^{-2j}(x+\iota y)\right)\bigr|^{\sigma} \left|D_z\mathbf{G}_{\ell}^{-2j}(x+\iota y)\right|^2 = \\ \sum_{j=1}^{\infty} \sum_{k=j+1}^{\infty} \bigl| D_z\mathbf{G}_{\ell}^{-2k}\left(x+\iota y\right)\bigr|^{\sigma} \left|D_z\mathbf{G}_{\ell}^{-2j}(x+\iota y)\right|^{2-\sigma} \end{multline*} with $k:=j+p$ and estimate~(\ref{equ:30hp,1}) follows by interchanging the order of summation. \end{proof} \paragraph{Proof of Theorem~\ref{theo:27ha,1}.} The proof will follow from Lemmas~\ref{lem:30hp,2} and~\ref{lem:28hp,1}. We begin by setting the parameters, starting with $\delta$ of Lemma~\ref{lem:28hp,1}. Given that, we choose $2r$ in Lemma~\ref{lem:30hp,2} then $r_0(\delta)$ as well as $r_0$ of Lemma~\ref{lem:28hp,1}. Now $\eta$ is fixed so that $3\eta<\frac{1}{2}$, thus $\eta:=\frac{1}{8}$ will do. Then all the bounds $\ell_0(\delta,\eta),L(\eta,r)$ of Lemma~\ref{lem:30hp,2} become constants and will be written simply as $\ell_0,Q,L$. Only the dependence of $\ell$ through $k(z,\ell)$ and $\rho_{\ell}$ remains. By lemma~\ref{lem:30hp,2} for all $z\in W(r,\delta), \ell\geq\ell_0$ and $k\geq 0$, $D_z \mathbf{G}_{\ell}^{-2k}(z)$ are uniformly bounded above. Then, by inspecting the formula of Definition~\ref{defi:27ha,1} for $g:=G_{\ell}^{-2}$ we see that increasing $\sigma$ increases the sum of the Poincar\'{e} series at most by a uniform constant for any $z\in W(r,\delta)$. Hence, without loss of generality we can restrict our considerations to $\frac{4}{3} < \sigma < 2$. 
Then \[ \hat{P}(z,\sigma) \leq K \sum_{k=1}^{\infty} k\left| D_z\mathbf{G}^{-2k}_{\ell}(z) \right|^{\sigma} \; .\] First we estimate the sum for $k\leq k(z,\ell)$: \[ \sum_{k=1}^{k(z,\ell)} k\left| D_z\mathbf{G}^{-2k}_{\ell}(z) \right|^{\sigma} \leq L^{\sigma} k\left(1+\frac{k}{2}\right)^{-\frac{3\sigma}{2}} \leq K \sum_{k=1}^{\infty} k^{1-\frac{3}{2}\sigma} \leq K(\sigma) \] for $\sigma>\frac{4}{3}$. Now we deal with \[ \sum_{k>k(z,\ell)} k\left| D_z\mathbf{G}^{-2k}_{\ell}(z) \right|^{\sigma} \leq L^{\sigma}\rho_{\ell}^{3\sigma/2}\sum_{k=0}^{\infty} k\left(1+\rho_{\ell}\right)^{-k\sigma/8}\] using the estimate of Lemma~\ref{lem:30hp,2} with $\eta=\frac{1}{8}$. Since $2>\sigma>\frac{4}{3}$, $\rho_{\ell}^{3\sigma/2}\leq \rho_{\ell}^{2+\sigma'}$ where $\sigma':=\frac{3}{2}\sigma-2>0$, while $L^{\sigma}$ is just another constant $L'$. For $\rho_{\ell}$ sufficiently small \[ \left(1+\rho_{\ell}\right)^{-\sigma/8} \leq 1 - \frac{\sigma\rho_{\ell}}{9} .\] Hence, for all $\ell$ sufficiently large, \begin{multline*} \sum_{k>k(z,\ell)} k\left| D_z\mathbf{G}^{-2k}_{\ell}(z) \right|^{\sigma} \leq L'\rho_{\ell}^{2+\sigma'} \sum_{k=0}^{\infty} k\left(1-\frac{\sigma\rho_{\ell}}{9}\right )^k = \\L'\rho_{\ell}^{2+\sigma'} \left(1-\frac{\sigma\rho_{\ell}}{9}\right)\left(\frac{9}{\sigma\rho_{\ell}} \right)^2 \leq \frac{81 L' \rho_{\ell}^{\sigma'}}{\sigma^2} \end{multline*} which tends to $0$ as $\ell\rightarrow\infty$. So, $\hat{P}(z,\sigma)$ is uniformly bounded for all $z\in W(r,,\delta)$ and $\ell$ large enough. For such $\ell$ Theorem~\ref{theo:27ha,1} follows from Lemma~\ref{lem:28hp,1}. For each of the remaining finitely many $\ell$ the point $x_{+,\ell}$ is a hyperbolic attractor for $\mathbf{G}_{\ell}^{-2}$, so the Poincar\'{e} series is integrable as well. \section{Induced maps} \subsection{Induced mapping $T_{\ell}$.} \begin{defi}\label{defi:3hp,2} For every $\ell$ finite and even or infinite, consider the {\em fundamental annulus} $A_{\ell} := \Omega_{\ell}\setminus\tau_{\ell}^{-1}\overline{\Omega}_{\ell} $. We further define {\em fundamental half-annuli} $A_{\pm,\ell} := A_{\ell} \cap \mathbb{H}_{\pm}$. \end{defi} For $\ell=\infty$, the fundamental annulus is not in fact a topological annulus since it is pinched at $x_0$. However fundamental half-annuli are always topological disks by Fact~\ref{fa:1hp,3}. \begin{defi}\label{defi:3hp,3} For any $z\in A_{\ell}$ define $T_{\ell}(z) = \tau^{n(z)} H_{\ell}(z)$ where $n(z)$ is chosen so that $T_{\ell}(z) \in A_{+,\ell} \cup A_{-,\ell}$. The domain of $T_{\ell}$ is the set of all $z\in A_{\ell}$ for which such $n(z)$ exists. \end{defi} From the definition of the fundamental annulus at most one such $n(z)$ exists for each $z$. Moreover, it can always by found if the condition is relaxed to $T_{\ell}(z) \in \overline{A}_{\ell}$. Hence, $T_{\ell}$ is defined on $\Omega_{\ell}$ except for a countable union of analytic arcs. \paragraph{Branches of $T$.} Since $A_{\pm,\ell}$ is simply connected and avoids the singularities of $H_{\ell}$ which are all on $\RR$, the mapping $\tau_{\ell}^{-n} H_{\ell}$ has univalent inverse branches whose ranges for all $n$ cover the domain of $T$. Thus, the domain of $T_{\ell}$ is a countable union of topological disks. The restriction of $T_{\ell}$ to any connected component of its domain will be called a {\em branch} of $T_{\ell}$. 
Any branch $\mathfrak{z}:=\mathfrak{z}_{\sigma,s,n,p,\ell}$ can be uniquely determined by its \begin{itemize} \item {\em side} $\sigma$ which can by $+$ or $-$ depending on whether the domain of $\mathfrak{z}$ is in $\Omega_{+,\ell}$ or $\Omega_{-,\ell}$, \item {\em sign} $s$ which can be $+$ or $-$ depending on whether the domain of $\mathfrak{z}$ lies in the upper or lower half-plane, \item {\em level} $n$ defined by $\mathfrak{z} = \tau_{\ell}^{n}\exp(\phi_{\sigma,\ell})$ where $\sigma$ is the side of the branch and \item {\em height} $p$. To determine the height map the domain of $\mathfrak{z}$ by $\phi_{\sigma,\ell}$. Since the range of $\mathfrak{z}$ which is equal to $\exp(\phi_{\sigma,\ell})$ rescaled by a power of $\tau_{\ell}$ avoids $\RR$, $\phi_{\sigma,\ell}\left(\Dm\mathfrak{z}\right)$ is contained in a horizontal strip $\{ z\in\CC :\: p\pi < \Im z < (p+1)\pi\}$ if the sign $s=+$, or $\{ z\in\CC :\: (-p-1) \pi < \Im z < -p\pi\}$ if $s=-$. \end{itemize} So, the range of $\mathfrak{z}_{\sigma,s,n,p,\ell}$ is $A_{+,\ell}$ if and only if $s(-1)^p=1$, since the statement holds true for $p=0$ and $s=+$ and then flips each time $s$ changes or $p$ changes by $1$. \begin{defi}\label{defi:3hp,4} A branch is called {\em inner} if its height is positive. \end{defi} \begin{lem}\label{lem:3hp,1} $T_{\ell}$ has no branches of side $-$, height $0$ and level greater than $1$. \end{lem} \begin{proof} Level greater than $1$ means that the image of the domain of the branch by $H_{\ell}$ is inside $A_{\ell}$. Thus, the domains of such branches when mapped by $\phi_{-,\ell}$ would lie inside the semi-infinite strip with imaginary part in $(-\pi,\pi)$ and the real part bounded above by the image of the boundary of $\tau^{-2}_{\ell}\Omega_{\ell}$ by $\log$. In other words, they lie inside a set which is mapped by $H_{\ell}$ univalently onto $\Omega_{\ell}\setminus(-\infty,0]$. From Fact~\ref{fa:1hp,4} we know that $G_{\ell} = H_{\ell}\tau^{-1}_{\ell}$ maps $\Omega_{\ell}$ univalently onto $\Omega_{\ell}\setminus (-\infty,0]$. Thus, the domains of our branches are contained in $\tau^{-1}\Omega_{\ell}$ which is excluded from $A_{\ell}$. \end{proof} \paragraph{Generic branches.} So far we consider the mapping $T_{\ell}$ and its branches which all depend of $\ell$. However, since a branch is uniquely defined by its symbol $(\sigma,s,n,p)$ we can also talk of $T$ as ``generic mapping'' independent of $\ell$ and consider its generic branches defined by their symbols. The only limitation is on the height $p\leq \frac{\ell}{2}$. 
\subsection{Extensibility of compositions of branches.} \begin{defi}\label{defi:3hp,5} Define for any $s\in\ZZ$ and $\ell$ positive and even or infinite: \begin{align*} Z_{\ell} := & [0,\tau^{-1}_{\ell}] \cup \{1,\tau_{\ell}\} \cup [\tau^2_{\ell},+\infty)\\ Z_{+,\ell}(s) := & [0,+\infty)\\ Z_{-,\ell}(s) := & Z_{\ell} \cup (-\infty,\tau_{\ell}^s]\\ Z_{\circ,\text{small},\ell}(s) := & (-\infty,0] \cup [\tau_{\ell}^s,+\infty)\\ Z_{\circ,\ell}(s) := & Z_{\ell} \cup Z_{\circ,\text{small},\ell}(s) \end{align*} \end{defi} \begin{lem}\label{lem:3hp,2} Consider any composition of branches in the form \[ \xi := \mathfrak{z}_{\sigma_k,s_k,n_k,p_k,\ell}\circ \ldots \circ \mathfrak{z}_{\sigma_1,s_1,n_1,p_1,\ell}\; .\] Then, there exists $s\in \ZZ$ such that for every $\mathfrak{m}\in\{+,-,\circ\}$ and every $\hat{s}\in \ZZ$ there exists $\hat{\mathfrak{m}}\in \{+,-,\circ\}$ and the mapping $\xi$ continues analytically to a covering of the set $V^{\mathfrak{m}}_{\ell}(s):=\CC\setminus Z_{\mathfrak{m},\ell}(s)$ defined on a domain which is contained in: \begin{itemize} \item \[ \hat{\Omega}_{-,\ell}\cup\Omega_{+,\ell} \setminus \bigl( [x_{0,\ell},+\infty) \cup Z_{\hat{\mathfrak{m}},\ell}(\hat{s}) \bigr) \] when $\sigma_1=-$, or \item \[ \hat{\Omega}_{+,\ell}\cup\Omega_{-,\ell} \setminus \bigl( (-\infty,x_{0,\ell}] \cup [\tau_{\ell}x_{0,\ell},+\infty) \cup Z_{\hat{\mathfrak{m}},\ell}(\hat{s}) \bigr)\] when $\sigma=+$. \end{itemize} Furthermore, if the final symbol in the composition $(\sigma_k,s_k,n_k,p_k) = (+,\pm,2,0)$, then the claim can be strengthened for $\mathfrak{m}=\circ$ by saying that $\xi$ continues analytically to a covering of the set $V^{\circ}_{\text{large},\ell}(s) := \CC\setminus Z_{\circ,\text{small},\ell}(s)$. \end{lem} \begin{proof} The proof will proceed by induction with respect to $k$. \paragraph{Verification for $k=1$.} We begin be representing the branch as $\tau_{\ell}^{n}\exp(\phi_{\sigma_1,\ell})$ and observing that $V^{\mathfrak{m}}_{\ell}(n) \subset \tau^n_{\ell} V^{\mathfrak{m}}$ in the notations of Proposition~\ref{prop:6hp,1}. Also, $V^{\circ}{\text{large},\ell}(n) = \tau^n_{\ell} V^{\circ}$. Hence, the claim of that Proposition holds regarding the existence of a covering and inclusions of its domain. What is left to do is checking that domain is disjoint from $Z_{\hat{\mathfrak{m}},\ell}(\hat{s})$. \subparagraph{Inner branches.} We set $s:=n$. Since the branch is inner, $|\Im\phi_{\sigma_1,\ell}|>\pi$ on its domain and hence the domain of the extension from Proposition~\ref{prop:6hp,1} is disjoint from the real line which contains any $Z_{\hat{\mathfrak{m}},\ell}(\hat{s})$. \subparagraph{Branches of height $0$ and side $-$.} By Lemma~\ref{lem:3hp,1} they have positive level which means $n\leq 1$. We set $s:=n$ in this case, too. Then, by Proposition~\ref{prop:6hp,1} there is a covering defined on some domain contained in $\hat{\Omega}_{-,\ell} \cup \Omega_{+,\ell} \setminus [x_{0,\ell},+\infty)$. We need to choose $\hat{\mathfrak{m}}$ so that this domain is disjoint from $Z_{\mathfrak{m},\ell}(\hat{s})$. The appropriate choice here is $\hat{\mathfrak{m}},+$, since then $Z_{\mathfrak{m},\ell}(\hat{s})\in [0,+\infty)$, The possible intersection of $[0,+\infty)$ with the domain of covering is at most $[0,x_{0,\ell})$ whose image under $\tau^s_{\ell}\exp(\phi_{-,\ell})$ is $(0,\tau^{s-2}_{\ell}] \subset Z_{\ell}$. This is always contained in $Z_{\ell}$. \subparagraph{Branches of height $0$ and side $+$.} Let is first assume that $\hat{s}\neq 0$. 
In this case we also specify $s:=n$ and choose \[ \hat{\mathfrak{m}}= \left\{ \begin{array}{ccc} - &\mbox{if}& \hat{s}<0\\ \circ&\mbox{if}&\hat{s}>0 \end{array} \right. \; .\] By this choice, \[ Z_{\hat{\mathfrak{m}},\ell}(\hat{s}) \cap (x_{0,\ell}, \tau_{\ell} x_{0,\ell}) = \{1\} \] and so $1$ is the only possible point of $Z_{\hat{\mathfrak{m}},\ell}(\hat{s})$ in the domain of the covering. However, $\tau_{\ell}^s\exp\phi_{+,\ell}(1) = \tau_{\ell}^{n-2} \in Z_{\ell}$. So, $1$ is mapped by the branch outside of $\CC\setminus Z_{\mathfrak{m},\ell}(s)$ and the domain of the covering is disjoint from $Z_{\hat{m},\ell}(\hat{s})$. So from now on $\hat{s}=0$. The analysis is further split depending on $n$. \begin{itemize} \item $n\leq 1$. In this case, set $s:=n$ and $\hat{\mathfrak{m}}:=-$. Then $ Z_{-,\ell}(0) \cap (x_{0,\ell},\tau_{\ell}x_{0,\ell}) = (x_{0,\ell},1]$. The image of this under the branch is contained $(0,\tau_{\ell}^{-1}] \subset Z_{\ell}$ and hence disjoint from the domain of the covering from Proposition~\ref{prop:6hp,1}. \item $n\geq 4$. We also set $s:=n$ and now $\hat{\mathfrak{m}}:=\circ$. This leads to $Z_{\circ,\ell}(0) \cap (x_{0,\ell},\tau_{\ell}x_{0,\ell}) = [1,\tau_{\ell}x_{0,\ell})$ being excluded from the domain of the covering by the claim of the Lemma. Indeed, this set is mapped by the branch to $[\tau^{n-2}_{\ell},\tau^n_{\ell}) \subset Z_{\ell}$. \item $n=2$. In that case we will set $s:=0$. It is still true that $Z_{\mathfrak{m},\ell}(0) \supset \CC\setminus\tau^2_{\ell}V^{\mathfrak{m}}$ when $\mathfrak{m}=+,\circ$, but not when $\mathfrak{m}=-$, Instead, \[ Z_{-,\ell}(0)= (-\infty,1] \cup \{\tau_{\ell}\} \cup [\tau^2_{\ell},+\infty) \supset (-\infty,0] \cup [\tau^2_{\ell},+\infty) = \CC\setminus\tau^2_{\ell}V^{\circ} \; .\] Hence, Proposition~\ref{prop:6hp,1} is applicable again and a covering of $V^{\mathfrak{m}}_{\ell}(0)$ exists by an extension of branch to a domain whose intersection with $\RR$ is contained in $(x_{0,\ell},\tau_{\ell}x_{0,\ell})$. We must set $\hat{\mathfrak{m}} := -$ if $\mathfrak{m}=-$ and $\hat{\mathfrak{m}} := \circ$ otherwise. Then \begin{equation}\label{equ:10ha,1} \begin{split} Z_{-,\ell}(0) \cap (x_{0,\ell},\tau_{\ell}x_{0,\ell}) = & (x_{0,\ell},1] \\ Z_{\circ,\ell}(0) \cap (x_{0,\ell},\tau_{\ell}x_{0,\ell}) = & [1,\tau_{\ell}x_{0,\ell}) \end{split} \end{equation} which are mapped by the branch to $(0,1]\subset Z_{-,\ell}(0)$ or $[1,\tau^2_{\ell})\subset Z_{\circ,\text{small},\ell}(0) \subset Z_{\circ,\ell}(0) \subset Z_{+,\ell}(0)$, respectively. The additional claim of Lemma~\ref{lem:3hp,2} concerns this type of branches. Indeed, we observe that $V^{\circ}_{\text{large},\ell}(0) \subset \tau^2_{\ell} V^{\circ}$ and the inclusion for the image of $Z_{\circ,\ell}(0)$ by the branch has already been observed. \item $n=3$. In this case $s:=1$ and as in the previous case one checks that $Z_{\mathfrak{m},\ell}(1) \supset \CC\setminus\tau^3_{\ell}V^{\mathfrak{m}}$ when $\mathfrak{m}=+,\circ$ and $Z_{-,\ell}(1) \supset \CC\setminus\tau_{\ell}^3 V^{\circ}$. We pick $\hat{\mathfrak{m}}=-$ if $\mathfrak{m}=-$ and $\circ$ otherwise, as in the preceding case, which leads to inclusions~(\ref{equ:10ha,1}). Then the branch maps $(x_{0,\ell},1]$ to $(0,\tau_{\ell}]\subset Z_{-,\ell}(1)$ and $[1,\tau_{\ell}x_{0,\ell})$ to $[\tau_{\ell},\tau^3_{\ell})\subset Z_{\circ,\ell}(1) \subset Z_{+,\ell}(1)$, respectively. \end{itemize} \paragraph{The inductive step.} We decompose $\xi = \xi'\circ\mathfrak{z}$. 
By the inductive claim applied to $\xi'$, $\CC\setminus Z_{\mathfrak{m},\ell}(s)$ is covered by an extension of $\xi'$ restricted to a domain which then itself is covered by an extension of $\mathfrak{z}$. This yields a covering by Fact~\ref{fa:3ha,1} whose domain is contained in the domain of the extension of $\mathfrak{z}$. \end{proof} Let us conclude with a technical observation. \begin{lem}\label{lem:10hp,1} For any $s\in \ZZ$ each of the sets $Z_{+,\ell}(0), Z_{\circ,\ell}(0), Z_{-,\ell}(0)\cup [\tau_{\ell},+\infty)$ contains $Z_{\mathfrak{m},\ell}(s)$ for some $\mathfrak{m}$, where $\mathfrak{m}$ is generally different for each of the three cases. \end{lem} \begin{proof} Certainly $Z_{+,\ell}(0) \supset Z_{+,\ell}(s)$ for any $s$ since this domain is independent of $s$. When $s<0$, $Z_{-,\ell}(s)=(-\infty,\tau^{-1}_{\ell}] \cup \{1,\tau_{\ell}\} \cup [\tau^2_{\ell},+\infty)$ which is contained in both $Z_{\circ,\ell}(0)$ and $Z_{-,\ell}(0)$. When $s=0$ the statement is obvious. For $s>0$ we get $Z_{\circ,\ell}(0) \supset Z_{\circ,\ell}(s)$ as well as $Z_{-,\ell}(0)\cup [\tau_{\ell},+\infty) \supset Z_{\circ,\ell}(s)$. \end{proof} \paragraph{Univalent extensibility.} While Lemma~\ref{lem:3hp,2} provides a general statement which was suitable for a proof by induction, the goal of a dynamicist is to work with univalent extensions. We will proceed to derive them. \begin{defi}\label{defi:10hp,1} For $0\leq \theta_0,\theta_1 \leq \pi$ define the domain \begin{multline*} \tilde{V}(\theta_0,\theta_1) :=\\ \CC \setminus \bigl( [0,\tau_{\ell}^{-1}] \cup [\tau_{\ell},+\infty) \cup \{ r\exp(-\iota\theta_0) :\: r\geq 0\} \cup \{ 1 + r\exp(-\iota\theta_1) :\: r\geq 0\} \bigr) \; . \end{multline*} By definition, $V(\theta_0,\theta_1)$ is the connected component of $\tilde{V}(\theta_0,\theta_1)$ which contains $\HH_+$. \end{defi} \begin{prop}\label{prop:10hp,1} Let $\xi$ be any composition of branches of $T$ with the range equal to $A_+$, without loss of generality. Suppose that the domain $\xi$ is contained $\Omega_{\sigma_1,\ell}$, $\sigma_1=+,-$. Then, for any $0\leq\theta_0,\theta_1\leq\pi$ the branch $\xi$ has an analytic continuation which maps univalently onto $V(\theta_0,\theta_1)$ and the domain the extension satisfies the inclusion from the claim of Lemma~\ref{lem:3hp,2}. \end{prop} As a consequence of Lemma~\ref{lem:3hp,2} and Lemma~\ref{lem:10hp,1}, $\xi$ has three different covering extensions: $\xi_+$ with the range $\CC\setminus Z_{+,\ell}(0)$, $\xi_{\circ}$ with the range $\CC \setminus Z_{\circ,\ell}(0)$ and $\xi_{-}$ to the range $\CC \setminus \bigl( Z_{-,\ell}(0) \cup [\tau_{\ell},+\infty) \bigr)$. The domains of those extensions satisfy the inclusions from the claim of Lemma~\ref{lem:3hp,2}. Since each of the ranges is a simply-connected domain, each covering reduces to a univalent map. They all coincide on the preimage of $\HH_+$. Now $V(\theta_0,\theta_1)\setminus \HH_+$ splits into three connected components: $V_-(\theta_0,\theta_1)$ which contains the interval $(1,\tau_{\ell})$, $V_{\circ}(\theta_0,\theta_1)$ containing $(\tau_{\ell}^{-1},1)$ and $V_+(\theta_0,\theta_1)$ which contains $(-\infty,0)$. Define the domain of the desired extension as the union of $\xi^{-1}(\HH_+)$ and $\xi_{\mathfrak{m}}^{-1}\bigl( V_{\mathfrak{m}}(\theta_0,\theta_1) \bigr)$. Since all three extensions coincide on the preimage of $\HH_+$ and map the remaining parts of the domain into disjoint sets, the extension of $\xi$ on this domain is analytic and one-to-one. 
It is also proper, since each of the three extensions was a homeomorphism, thus univalent. This concludes the proof of Proposition~\ref{prop:10hp,1}. \subsection{Uniform tightness.} In this section we consider a generic mapping $S$ induced by the generic mapping $T$. \begin{defi}\label{defi:22ja,1} A generic mapping $S$ induced by $T$ is a collection of finite sequences of symbols $(\sigma_j,s_j,n_j,p_j)_{j=1}^r$. Given $\ell\leq\infty$ the induced mapping $S_{\ell}$ is obtained first by defining the {\em return time} $r_{S,\ell}(z)$ as the length $r$ of the longest sequence in $S$ which observes the limitation $0\leq p_j \leq \frac{\ell}{2}$ and such that the composition \[ \mathfrak{z}_{\sigma_r,s_r,n_r,p_r}\circ\cdots\circ \mathfrak{z}_{\sigma_1,s_1,n_1,p_1} \] is applicable at $z$. Then one obtains the induced mapping $S_{\ell} :\: S_{\ell}(z) := T^{r_{S,\ell}(z)}(z)$. \end{defi} For example, $T$ itself is the collection of all possible sequences $(\sigma,s,n,p)$ of length $1$ and the empty collection determines the identity map. \begin{defi}\label{defi:3jp,1} A generic induced mapping $S$ will be called {\em uniformly tight} if for every $\epsilon>0$ there is $\ell_0(\epsilon)$ and finite set of generic branches $\mathfrak{Z}(S,\epsilon)$ of $S$ such that if we define $\omega_{\ell}(\mathfrak{Z}):=\bigcup_{\mathfrak{z}\in\mathfrak{Z}(S,\epsilon)} \Dm(\mathfrak{z}_{\ell})$, then for all $\ell\geq\ell_0(\epsilon)$ \[ \int_{\Omega_{\ell}\setminus\omega_{\ell}(\mathfrak{Z})} r_{S,\ell}(x+\iota y)\,dx\,dy < \epsilon .\] \end{defi} \paragraph{A fact about convergence in measure.} \begin{fact}\label{fa:4np,1} Suppose that $(W_n),\, n=1,\cdots,\infty$ are bounded open sets and $\overline{W}_n \rightarrow \overline{W}_{\infty}$ in the Hausdorff topology. If $|\partial W_{\infty}|=0$, then \[ \lim_{n\rightarrow\infty} \left|(\overline{W}_{\infty}\setminus W_n) \cup (\overline{W}_n\setminus W_{\infty}) \right| = 0 .\] \end{fact} \begin{proof} Consider an open neighborhood with arbitrarily small measure which contains $\partial W_{\infty}$. \end{proof} \begin{lem}\label{lem:21ja,1} \[ \lim_{\ell\rightarrow\infty} \int \|\chi_{\Omega_{\pm,\ell}} - \chi_{\Omega_{\pm,\infty}}\|\, d\Leb_2 = 0 \] and likewise if $\mathfrak{z} := \mathfrak{z}_{\sigma_k,s_k,n_k,p_k}\circ\cdots\circ \mathfrak{z}_{\sigma_1,s_1,n_1,p_1}$, then \[ \lim_{\ell\rightarrow\infty} \int\left|\chi_{\Dm(\mathfrak{z}_{\ell})} - \chi_{\Dm(\mathfrak{z}_{\infty})}\right|\, d\Leb_2 = 0 .\] \end{lem} \begin{proof} By Proposition~\ref{prop:11jp,1} we observe that the closures of the sets under consideration converge in the Hausdorff topology and the claim follows from Fact~\ref{fa:4np,1}. \end{proof} \begin{coro}\label{coro:21ja,1} For any generic induced mapping $S$ and for every $\epsilon>0$ there is $\ell_0(\epsilon)$ and finite set of generic branches $\mathfrak{Z}(S,\epsilon)$ of $S$ such that for every $\ell\geq\ell_0$ \[ \sum_{\mathfrak{z}\notin\mathfrak{Z}(S,\epsilon)} \left|\Dm(\mathfrak{z}_{\ell})\right| < \epsilon .\] \end{coro} \begin{proof} This is an easy consequence of Lemma~\ref{lem:21ja,1}. \end{proof} \paragraph{Uniform tightness under composition.} \begin{defi}\label{defi:4ja,1} We recall that a univalent mapping $\varphi :\: U \rightarrow V$ has {\em distortion bounded} by $Q$ onto $Z\subset V$ provided that $\sup \bigl\{ \left|\log\frac{D\varphi(z_1)}{D\varphi(z_2)}\right| :\: z_1,z_2\in \varphi^{-1}(Z) \bigr\} \leq Q$. 
\end{defi} The next lemma immediately generalizes by induction to any finite composition of uniformly tight mappings. By the {\em domain} of an induced map we understand the set where its return time is positive. \begin{lem}\label{lem:4ja,1} Suppose $S_1,S_2$ are generic mappings induced by $T$ and for every $\ell\geq\ell_0$ the image of every branch of $S_{1,\ell}$ contains the domain of $S_{2,\ell}$; moreover there is $Q$ such that the distortion of every branch of $S_{1,\ell}$, for every $\ell\geq\ell_0$ is bounded by $Q$. Assume also that there is $\mu>0$ such that for all $\ell\geq\ell_0$ the range of every branch of $S_{1,\ell}$ intersected with the domain of $S_{2,\ell}$ has Lebesgue measure at least $\mu$. Then if $S_1, S_2$ are uniformly tight, so is $S_2 \circ S_1$. \end{lem} \begin{proof} Fix an $\epsilon>0$. The hypothesis of uniformly bounded distortion implies that $S_1$ transports the Lebesgue measure with a Jacobian uniformly bounded above by $Q$ and below by $Q^{-1}$. By the uniform tightness of $S_2$ and Corollary~\ref{coro:21ja,1} we find a finite set $\mathfrak{Z}_2$ of branches of $S_2$ such that any $\epsilon_1,\epsilon_2>0$ and all $\ell$ sufficiently large \marginpar{??} \begin{equation*} \begin{split} \sum_{\mathfrak{z}\notin\mathfrak{Z}_2} \left|\Dm(\mathfrak{z}_{\ell})\right| & < Q^{-1}\mu\epsilon_1\\ \int_{\Omega_{\ell}\setminus\bigcup_{\mathfrak{z}\in\mathfrak{Z}_2}\Dm(\mathfrak{z}_{\ell})} r_{S_2,\ell}(x+\iota y)\,dx\,dy & < Q^{-1}\mu\epsilon_2 \end{split} \end{equation*} Now for every branch $\mathfrak{z}$ of $S_{1,\ell}$ the preimages of the domains of branches not in $\mathfrak{Z}_2$ occupies at most $Q\epsilon_1$-part of $\Dm(\mathfrak{z})$. We get the estimate \[ \int_{\Dm(\mathfrak{z}_{\ell})\setminus\mathfrak{z}_{\ell}^{-1}\left(\bigcup_{\mathfrak{w}\in\mathfrak{Z}_2}\Dm(\mathfrak{w}_{\ell})\right)} r_{S_2\circ S_1,\ell}(x+\iota y)\,dx\,dy \leq \bigl( \epsilon_1 r_{S_1,\ell}\left(\Dm(\mathfrak{z}_{\ell})\right) + \epsilon_2\bigr) \left| \Dm(\mathfrak{z}_{\ell}) \right| .\] Summing up over all branches of $\mathfrak{z}$ of $S_1$ we arrive at \[ \int_{\Omega_{\ell}\bigcup_{\mathfrak{w}\in\mathfrak{Z}_2}\Dm(\mathfrak{w}_{\ell}\circ\mathfrak{z}_{\ell})} r_{S_2\circ S_1,\ell}(x+\iota y)\,dx\,dy \leq \epsilon_1 \int_{\Omega_{\ell}} r_{S_1,\ell}(x+\iota y)\,dx\,dy + \epsilon_2 \left| \Omega_{\ell} \right| .\] Uniform tightness of $S_1$ implies that $\int_{\Omega_{\ell}} r_{S_1,\ell}(x+\iota y)\,dx\,dy$ is uniformly bounded for all $\ell$ large enough. Hence, $\epsilon_1$ and $\epsilon_2$ can be chosen so that \begin{equation}\label{equ:22jp,2} \forall\ell\geq\ell_0\; \int_{\Omega_{\ell}\bigcup_{\mathfrak{w}\in\mathfrak{Z}_2}\Dm(\mathfrak{w}_{\ell}\circ\mathfrak{z}_{\ell})} r_{S_2\circ S_1,\ell}(x+\iota y)\,dx\,dy < \frac{\epsilon}{2} . \end{equation} Now use the uniform tightness of $S_1$ to find a finite set $\mathfrak{Z}_1$ of its branches such that \begin{equation}\label{equ:22jp,1} \begin{split} \sum_{\mathfrak{z}\notin\mathfrak{Z}_1} \left|\Dm(\mathfrak{z}_{\ell})\right| & < \epsilon_3 \\ \int_{\Omega_{\ell}\setminus\bigcup_{\mathfrak{z}\in\mathfrak{Z}_1}\Dm(\mathfrak{z})} r_{S_1,\ell}(x+\iota y)\,dx\,dy &< \frac{\epsilon}{4} . \end{split} \end{equation} Since $\mathfrak{Z}_2$ is a finite set, the return time of all its branches is bounded by $R_2<\infty$. 
Hence, for any branch $\mathfrak{z}\notin \mathfrak{Z}_1$, we estimate \[ \int_{\Dm(\mathfrak{z}_{\ell})\setminus\mathfrak{z}_{\ell}^{-1}\left(\bigcup_{\mathfrak{w}\in\mathfrak{Z}_2}\Dm(\mathfrak{w}_{\ell})\right)} r_{S_2\circ S_1,\ell}(x+\iota y)\,dx\,dy \leq \bigl( r_{S_1,\ell}\left(\Dm(\mathfrak{z}_{\ell})\right) + R_2\bigr) \left| \Dm(\mathfrak{z}_{\ell}) \right| \] which after summing up over all $\mathfrak{z}\notin \mathfrak{Z}_1$ leads to \begin{multline*} \int_{\bigcup_{\mathfrak{z}\notin\mathfrak{Z}_1}\Dm(\mathfrak{z}_{\ell})\setminus\bigcup_{\mathfrak{w}\in\mathfrak{Z}_2,\mathfrak{z}\notin\mathfrak{Z}_1}\Dm(\mathfrak{w}_{\ell}\circ\mathfrak{z}_{\ell})} r_{S_2\circ S_1,\ell}(x+\iota y)\,dx\,dy \leq \\ \int_{\bigcup_{\mathfrak{z}\notin\mathfrak{Z}_1} \Dm(\mathfrak{z})} r_{S_1,\ell}(x+\iota y)\,dx\,dy + R_2 \sum_{\mathfrak{z}\notin\mathfrak{Z}_1} \left| \Dm(\mathfrak{z}_{\ell}) \right| \leq \frac{\epsilon}{4} +R_2\epsilon_3 \end{multline*} where the final estimates come from inequalities~(\ref{equ:22jp,1}). We now take $\epsilon_3$ so small that $R_2\epsilon_3 < \frac{\epsilon}{4}$ and together with estimate~(\ref{equ:22jp,2}) we obtain \[ \int_{\Omega_{\ell}\setminus\bigcup_{\mathfrak{w}\in\mathfrak{Z}_2,\mathfrak{z}\in\mathfrak{Z}_1}\Dm(\mathfrak{w}_{\ell}\circ\mathfrak{z}_{\ell})} r_{S_2\circ S_1,\ell}(x+\iota y)\,dx\,dy < \epsilon .\] \end{proof} \subsection{Post-singular branches.} The singular value of many branches which is contained in the fundamental annulus is $1$. That singular value is adjacent to the domains of two branches $\mathfrak{z}_{+,\pm,2,0,\ell}$ for all $\ell$. These branches will be described as post-singular and will require special attention if we want to induce a uniformly hyperbolic map. Post-singular branches both have the form $\tau_{\ell}H_{\ell}$. One quickly sees that this is conjugated to the multiplication by $\tau_{\ell}$ by $H_{\ell}$: \[ \mathbf{H}_{\ell}^{-1}\tau_{\ell} H^2_{\ell} = \mathbf{H}_{\ell}^{-1} \tau_{\ell} H_{\ell} G_{\ell} \tau_{\ell} = \mathbf{H}_{\ell}^{-1} H_{\ell} \tau_{\ell} = \tau_{\ell} \] where $\mathbf{H}_{\ell}^{-1}$ is the inverse branch defined on $\CC\setminus \left( (-\infty,0] \cup \tau^2_{\ell},+\infty)\right)$. Define {\em exit time} at $\mathfrak{E}_{\text{sing},\ell}(z)$ as the smallest non-negative number of iterates of $\tau_{\ell} H_{\ell}$ needed to map $z$ outside the union of domains of the post-singular branches. \begin{lem}\label{lem:10kp,1} There exists a constant $K$ such that for some $\ell_0$, all $\ell\geq\ell_0$ and all $z\in\CC$ \[ \mathfrak{E}_{\text{sing},\ell}(z) \leq K\log\max\left(\frac{1}{z-1},2\right) .\] \end{lem} \begin{proof} For $\ell$ large enough $\mathbf{H}_{\ell}^{-1}$ maps $\Omega_{+,\ell}$ for $\ell$ into $D(0,R_0)$, for a fixed $R_0$, and with uniformly bounded distortion. Then the exit time is bounded as follows: \[ \mathfrak{E}_{\text{sing},\ell}(z) \leq \frac{\log \frac{R_0}{\left|\mathbf{H}^{-1}(z)\right|}}{\log\tau_{\ell}} .\] Since $|\mathbf{H}^{-1}(z)| > K_1 |z-1| $ with $K_1>0$ because of the bounded distortion, the estimate of the Lemma follows. \end{proof} \begin{defi}\label{defi:7mp,1} Define the generic {\em post-singularly refined} map $\tsing$ as consisting of sequences (cf. Definition~\ref{defi:22ja,1}) which begin with any symbol {\em other than} $(+,\pm,2,0)$ and followed by any, possibly empty, sequence consisting of the two post-singular symbols $(+,\pm,2,0)$. 
\end{defi} Dynamically, $\tsing$ is $T$ restricted to the complement of the domains of the post-singular branches followed by the first exit map from the union of the domains of the post-singular branches. Our main goal will be the following. \begin{prop}\label{prop:7mp,1} Mapping $\tsing$ is uniformly tight, cf. Definition~\ref{defi:3jp,1}. \end{prop} For convenience, in the proof we will use \[ \dsing(\ell) := \Dm(\mathfrak{z}_{+,+,2,0,\ell}) \cup \Dm(\mathfrak{z}_{+,-,2,0,\ell}),\] i.e. the union of the domains of the post-singular branches. \paragraph{Reduction to pointy branches.} A natural way to approach the proof is by using Lemma~\ref{lem:4ja,1}. Here $S_{2,\ell}$ is the first exit map from $\dsing(\ell)$. $S_2$ is uniformly tight by Lemma~\ref{lem:10kp,1}, however $S_1$ cannot be made $T$, since not all branches of $T_{\ell}$ map onto $\dsing(\ell)=\Dm(S_{2,\ell})$ with distortion uniformly bounded for large $\ell$. Since $\exp(\phi_{\pm,\ell})$ extend to coverings of $\CC\setminus \left(\{0\} \cup \{ \tau_{\ell}^{2n} :\: n=0,1,\cdots\}\right)$ by Theorem~\ref{theo:3hp,1}, all branches of odd level can be continued univalently to map onto $\Omega_{+\,ell}$. The same can be said of all branches of positive level or non-zero height. Thus, if in $T$ one eliminates all symbols except for $(\pm,\pm,-2k,0) :\: k=0,1,\cdots$ such a map $S_{1,\ell}$ for every $\ell$ large enough will map onto $\dsing(\ell)$ with uniformly bounded distortion and $S_2\circ S_1$ is uniformly tight by Lemma~\ref{lem:10kp,1}. For the proof of Proposition~\ref{prop:7mp,1} it will suffice to demonstrate that $S_2 \circ S'_{1}$ is uniformly tight where $S'_1$ consists of sequences of length $1$ of ``pointy'' symbols $(\pm,\pm,-2k,0) :\: k=0,1,\cdots$. The use of adjective ``pointy'' is based on the fact that the domains of those branches for any $\ell$ are exactly though that touch the cusps in the boundary of $\Omega_{\ell}$. Those cusps are critical points, or essential singularities in the case of $\ell=\infty$, of analytic continuations of those branches for which $1$ is the critical, respectively asymptotic, value. Since the limiting singularities are flat, the uniform integrability $\mathfrak{E}_{\text{sing},\ell}\circ S'_{1,\ell}$ is far from obvious and will require estimates. \paragraph{A uniform estimate with respect to $\ell$.} Let us write \[ \mathfrak{Q}(\lambda,\ell) := \{z\in\Omega_{-,\ell} :\: \Re \phi_{-,\ell}(z) < -\lambda,\, \left|\Im\phi_{-,\ell}(z)\right| < \pi\} .\] \begin{lem}\label{lem:12ma,1} \[ \exists \ell_0<\infty\; \lim_{\lambda_0\rightarrow\infty} \sup_{\ell\geq\ell_0} \int_{\mathfrak{Q}(\lambda_0,\ell)} \Re \phi_{-,\ell}(z)\, d\Leb_2(z) = 0 .\] \end{lem} \begin{proof} The problem of uniformity with respect to $\ell$ here is different from the situation treated in the proof of Theorem~\ref{theo:27ha,1}. It is described in literature as ``dominant convergence'', see~\cite{profesorus1} Thms. 8.1-8.3. The result in our notations can be stated as follows. \begin{fact}\label{fa:12mp,1} For every $K>1$ there exist $\lambda(K)>0$ and $\ell_0(K)<\infty$ such that the mapping $\frac{1}{\phi_{,\ell}(z)}$ on the set $\{z\in\Omega_{-,\ell} :\: \Re\phi(z) < -\lambda(K)\}$ for all $\ell\geq\ell_0$ takes form $\frac{1}{\phi_{-,\ell}(z)} = \Upsilon_{\ell}(2C_{\ell}z^2)$ where $C_{\ell} = -D^3G_{\ell}(x_0,\ell) > 0$ and $\Upsilon_{\ell}$ is a $K$-quasi-conformal mapping of $\hat{\CC}$ fixing $0,1,\infty$. 
\end{fact} This will now be used to estimate the Lebesgue measure of $\mathfrak{Q}(r,\ell)$ for $\lambda>\lambda(K)$. The image of the set $\{ u\in\CC : -2\lambda<\Re u< -\lambda, -\pi < \arg u <\pi \}$ by the complex inversion is easily seen to have measure bounded above by $K_1 \lambda^{-3}$. Consider now the set $\Upsilon_{\ell}^{-1}\left(\{ u\in\CC : -2\lambda < \Re u< -\lambda, -\pi < \arg u <\pi \}\right)$. Since $\Upsilon_{\ell}$ belongs to a compact family of quasi-conformal mappings the constant in the change of area theorem of Bojarski, see~\cite{lehvi} Theorem 5.2, is uniform and the measure of that set is bounded above by $K_1 \lambda^{-\frac{5}{2}}$ for all $\ell\geq\ell_0(K)$ provided that $K>1$ was chosen close enough to $1$. Additionally, by the H\"{o}lder continuity of quasiconformal mappings in the usual sense, that set is disjoint from $D\left(0, K_3 \lambda^{-\frac{5}{4}}\right)$. Next, we take a preimage of the same set by the mapping $2C_{\ell}z^2$ observing that $\C_{\ell}$ is bounded below by $\frac{C_{\infty}}{2} > 0$ for all $\ell$ large enough. The Jacobian of the inverse mapping is bounded above by $K_4\lambda^{\frac{5}{4}}$ which leads to \[ \left|\mathfrak{Q}(\lambda,\ell)\setminus \mathfrak{Q}(2\lambda,\ell) \right| \leq K_6 \lambda^{-\frac{5}{4}} \] for all $\ell\geq\ell_0$ which by summing up a geometric progression leads to \begin{equation}\label{equ:12mp,1} \left|\mathfrak{Q}(\lambda,\ell) \right| \leq 2K_6 \lambda^{-\frac{5}{4}} . \end{equation} Let us write $q_{\ell}(\lambda) = \|\mathfrak{Q}(\lambda,\ell)\|$. Then the integral in the claim of the Lemma can be written as \[ \int_{\mathfrak{Q}(\lambda_0,\ell)} \Re \phi_{-,\ell}(z)\, d\Leb_2(z) = \int_{\lambda_0}^{\infty} \lambda dq_{\ell}(\lambda) = \left. \lambda q_{\ell}(\lambda) \right|_{\lambda_0}^{\infty} - \int_{\lambda_0}^{\infty} q_{\ell}(\lambda)\,d\lambda = O\left(\lambda_0^{-\frac{1}{4}}\right) \] independently of $\ell\geq\ell_0$ by estimate~(\ref{equ:12mp,1}). \end{proof} \paragraph{The primary pair of pointy branches.} In this fragment we consider the generic induced map $S_2 \circ\mathfrak{z}_{+,\pm,0,0}$. It consists of sequences of symbols which begin with $(+,\pm,0,0)$ and are followed by a sequence of post-singular symbols $(+,\pm,2,0)$ of any positive finite length. For any $\ell$ \begin{equation}\label{equ:12mp,3} \mathfrak{z}_{+,\pm,0,0,\ell} = \tau^{-1}_{\ell} H_{+,\ell} = H_{-,\ell} \circ G_{\ell} = H_{-,\ell}\circ H_{-,\ell}\circ \tau^{-1}_{\ell} . \end{equation} \begin{lem}\label{lem:12mp,1} The generic mapping $S_2\circ\mathfrak{z}_{+,\pm,0,0}$ is uniformly tight. \end{lem} \begin{proof} The final action by $H_{-,\ell}$ in the representation~(\ref{equ:12mp,3}) with the image $\dsing(\ell)$ has distortion uniformly bounded in terms of $\ell$. Taking into account Lemma~\ref{lem:10kp,1} we conclude that $\mathfrak{E}_{\text{sing},\ell} \circ \mathfrak{z}_{+,\pm,0,0}(z) \leq K_1 \max\left( -\phi_{-,\ell}(\tau^{-1}_{\ell} z),1\right)$. Also, since the part of the border $\Omega_{+,\ell}$ adjacent to $\tau_{\ell} x_{0,\ell}$ consists of preimages of segments in the positive half-line, we have $|\Im\phi_{-,\ell}(\tau^{-1}_{\ell} z)|<\pi$ for all $z\in\Dm\mathfrak{z}_{+,\pm,0,0,\ell}$. 
By Lemma~\ref{lem:12ma,1}, for some $\ell_0<\infty$ any $\epsilon>0$ there exists $r(\epsilon)>0$ and all $\ell\geq \ell_0$ \[ \int_{\Dm\mathfrak{z}_{+,\pm,0,0,\ell}} \left( \mathfrak{E}_{\text{sing},\ell}\cdot \chi_{|z-1|>r(\epsilon)}\right)\circ\mathfrak{z}_{+,\pm,0,0,\ell}(z)\, d\Leb_2(z) < \epsilon .\] By Lemma~\ref{lem:10kp,1} there is an upper limit $K_2$, independent of $\ell$ sufficiently large, on $\mathfrak{E}_{\text{sing},\ell}$ for branches not contained in $D(1,r(\epsilon))$ and since only two post-singular symbols are allowed that translates to a number of branches bounded depending on $\epsilon$ for all such $\ell$. \end{proof} \paragraph{Proof of Proposition~\ref{prop:7mp,1}.} All remaining pointy branches have the form $\tau_{\ell}^{-n}H$ for $n>1$ and hence are in the form $\mathfrak{z}_{+,\pm,0,0,\ell} \circ G_{\ell}^{n-1}$. Since $G$ maps as a covering of $\CC\setminus\left(\{0\}\cup [\tau^2_{\ell},+\infty)\right)$ and its post-singular set under iteration on $\tau_{\ell}\Omega_{-,\ell}$ is contained in $[0,1]$, the mapping by iterates of $G_{\ell}$ onto $\Dm\mathfrak{z}_{+,\pm,0,0,\ell}$ has distortion uniformly bounded independently of $\ell$ sufficiently large. Thus, for any particular pointed branch its composition with $S_2$ is uniformly tight by Lemma~\ref{lem:4ja,1}. On the other have, the closures of domains of pointy branches for $\ell=\infty$ converge to $\{x_{0,\infty}\}$ is Hausdorff topology so for $\ell \geq \ell(\epsilon)$ only a fixed number are not contained in $D(x_{0,\infty},\epsilon)$ and hence their joint measure is bounded by $\pi\epsilon^2$. As a corollary to Lemma~\ref{lem:12mp,1} the integral $\int_{\Dm\mathfrak{z}_{=,\pm,0,0.\ell}} \mathfrak{E}_{\text{sing},\ell}\circ \mathfrak{z}_{+,\pm,0,0.\ell}(z)\,d\Leb_2(z) \leq K_1$ for all $\ell$ sufficiently large. Consequently, on the domain of any pointy branch the integral of the return time of $\tsing{\ell}$ is bounded by $K_2$ times the measure of that domain of the branch. Hence, all pointy branches except for finitely many can carry arbitrarily small integral of the return time. \subsection{Parabolic branches.}\label{sec:4qa,1} Another pair branches which cause problems are {\em parabolic branches} with symbols $(-,\pm,1,0)$. They have the form $\tau_{\ell}^{-1}H_{\ell}$ which is just $G_{\ell}$ conjugated by $\tau_{\ell}$ and has a fixed point $\tau_{\ell}^{-1}x_{\pm,\ell}$ which is on the boundary of the domain of such a branch. Hence the name, since when $\ell\rightarrow\infty$ this develops into a parabolic fixed point for compositions of such branches. We will proceed to get rid of them by inducing, much in the way we dealt with the post-singular branches, except that now $\tsing$ rather than $T$ is our starting point. Define $\dpar(\ell)$, $\ell\leq\infty$ as the union of the domains of two parabolic branches. Next, the {\em exit time} $\mathfrak{E}_{\text{par},\ell}(z)$ as the smallest non-negative number of iterates of $\tau_{\ell}^{-1}H_{-,\ell}$ needed to take $z$ outside $\dsing(\ell)$. \begin{defi}\label{defi:21ma,1} Define the generic {\em parabolically refined} map $\tpar$ as consisting of sequences of $\tsing$, cf. Definition~\ref{defi:7mp,1} which begin with any symbol {\em other than} a parabolic one $(-,\pm,1,0)$ and followed by any, possibly empty, sequence consisting of the two parabolic symbols $(-,\pm,1,0)$. \end{defi} Our goal is \begin{prop}\label{prop:21ma,1} Mapping $\tpar$ is uniformly tight, cf. Definition~\ref{defi:3jp,1}. 
\end{prop} Here $S_2 := S_{\text{par}}$ is the first exit map from the parabolic branches, given by all non-empty sequences of parabolic symbols $(-,\pm,1,0)$ and $S_1$ is $\tsing$ restricted by excluding sequences with initial parabolic symbols as in Definition~\ref{defi:21ma,1}. Since after such exclusion the distortion of the map in uniformly bounded, then we are in the position to use Lemma~\ref{lem:4ja,1}. The bounded distortion follows from the additional claim of Lemma~\ref{lem:3hp,2} by which the branches of $\tsing(\ell)$ extend univalently onto $V^{\circ}_{\text{large},\ell}(0)$ which compactly contains $\dpar(\ell)$ and the nesting is uniform for large $\ell$ by Proposition~\ref{prop:11jp,1}. Hence, Proposition~\ref{prop:21ma,1} is reduced to the uniform tightness of $S_{\text{par}}$. \paragraph{Connection with Theorem~\ref{theo:27ha,1}.} That theorem will be our main tool, since after conjugation by $\tau_{\ell}$ the pair of parabolic branches becomes $G_{\ell}$ and $\tau_{\ell}\dpar(\ell)$ is contained in the complement of $\Omega_{\ell}$. For $N$ natural define $\dpar(\pm,N,\ell) := \{ z\in\dpar(\ell)\cap\HH_{\pm} :\: \mathfrak{E}_{\text{par},\ell}(z) \geq N \}$. \begin{lem}\label{lem:21ma,1} For any $r>0$ there are $\ell(r), N(r)<\infty$ such that for every $\ell\geq\ell(r)$ the inclusion $\dpar\left(\pm,N(r),\ell\right) \subset D(x_{0,\infty}\tau_{\infty}^{-1},r)$ holds. \end{lem} \begin{proof} If not, then by taking convergent subsequences we construct at point $z_0$ whose complete forward orbit by $G_{\infty}$ is contained in a bounded set and avoids a wedge $\{ x_0+\zeta :\: |\arg\zeta^2|<\frac{\pi}{4}, 0<|\zeta|<R\}$ with some $R>0$. This is not consistent with the action of $G_{\infty}$ in a half-plane under which every bounded orbit tends to $x_{0,\infty}$ tangentially to the real line. \end{proof} Denote by $G_{1,\ell} = \tau^{-1}_{\ell}G\tau_{\ell}$ and write $\mathbf{G}_{1,\ell}^{-1}$ for its principal inverse branch, cf. Definition~\ref{defi:1hp,1}. Now estimate for any $n\geq 1$ \begin{multline}\label{equ:21mp,1} \int_{\dpar(+,n,\ell) \cup \dpar(-,n,\ell)} \mathfrak{E}_{\text{par},\ell}(z)\,d\Leb_2(z) = \\ (n-1)\left(|\dpar(+,n,\ell)|+|\dpar(-,n,\ell)|\right) + \\\sum_{k=0}^{\infty} \left( |\dpar(+,n+k,\ell)| + |\dpar(-,n+k,\ell)| \right) = \\ (n-1)\left(|\dpar(+,n,\ell)|+|\dpar(-,n,\ell)|\right) +\\ \sum_{k=0}^{\infty} \left| \mathbf{G}_{1,\ell}^{-2k}\left(\dpar(+,n,\ell) \cup \dpar(+,n+1,\ell)\right)\right| +\\ \sum_{k=0}^{\infty} \left| \mathbf{G}_{1,\ell}^{-2k}\left(\dpar(-,n,\ell) \cup \dpar(-,n+1,\ell)\right)\right| \leq \\ (n-1)\left(|\dpar(+,n,\ell)|+|\dpar(-,n,\ell)|\right) +\\ 2\sum_{k=0}^{\infty} \left| \mathbf{G}_{1,\ell}^{-2k}\left(\dpar(+,n,\ell)\right)\right|+2\sum_{k=0}^{\infty} \left| \mathbf{G}_{1,\ell}^{-2k}\left(\dpar(-,n,\ell)\right)\right| =\\ \sum_{s=+,-} \Bigl[ (n-1)|\dpar(s,n,\ell)| +2\int_{\dpar(s,n,\ell) } P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) \Bigr] \end{multline} introducing the Poincar\'{e} series, cf. Definition~\ref{defi:27ha,1}. By symmetry, we will fix $s=+$ in the final estimate of~(\ref{equ:21mp,1}) and show that the quantity tends to $0$ as $n\rightarrow\infty$. \begin{lem}\label{lem:21mp,1} Suppose that $0<n'<n$ and $n-n'$ is even. 
Then, for any $\ell$, \[ \frac{n-n'+2}{2} | \dpar(+,n,\ell) | \leq \int_{\dpar(+,n',\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) .\] \end{lem} \begin{proof} By the change of variable formula \[ |\dpar(+,n,\ell)| = \int_{\dpar(+,n,\ell)\setminus\dpar(+,n+2,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) .\] For the same reason, for $k>0$ \begin{multline*} \int_{\dpar(+,n-2k,\ell)\setminus\dpar(+,n-2k+2,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) \geq\\ \int_{\dpar(+,n,\ell)\setminus\dpar(+,n+2,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) \end{multline*} and the Lemma~\ref{lem:21mp,1} follows. \end{proof} \begin{coro}\label{coro:21mp,1} For $n\geq 2$, \[ (n-1)|\dpar(+,n,\ell) \leq 5 \int_{\dpar(+,\lfloor\frac{n}{2}\rfloor,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) .\] \end{coro} The main estimate is given by the next Lemma. \begin{lem}\label{lem:21mp,2} \[ \lim_{n\rightarrow\infty} \sup \left\{ \int_{\dpar(+,n,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2)\, d\Leb_2(z) :\: \ell=2,4,\cdots,\infty\right\} = 0 . \] \end{lem} \begin{proof} Let $\sigma$ be either $2$ or $2-\delta$ for some $0<\delta<\frac{2}{3}$. Using Lemma~\ref{lem:21ma,1} fix $N$ to use Theorem~\ref{theo:27ha,1} and assert that for all $\ell$ \begin{equation}\label{equ:30zp,1} \int_{\dpar(+,N,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,\sigma)\, d\Leb_2(z) \leq K_1 . \end{equation} For $\ell\geq\ell_0$ all sets $\dpar(+,N,\ell)$ are contained in a compact subset of $\HH_+$. Since $\mathbf{G}^{-2}_{1,\ell}$ for $\ell\leq\infty$ is a contraction in the Poincar\'{e} metric of $\HH_+$ with the limit $x_{+,\ell}$, by taking convergent subsequences we get that $\lim_{n\rightarrow\infty} d_n = 0$, where \[ d_n := \inf \left\{ |D_z\mathfrak{G}^{-2n}_{1,\ell}(z)| :\: z\in\dpar(+,N,\ell),\, \ell=2,4,\cdots,\infty \right\} .\] For $n\geq N$ and of the same parity and every $\ell$, we get \[ \int_{\dpar(+,n,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2) = \sum_{k\geq\frac{n-N}{2}} \int_{\dpar(+,N,\ell} |D_z\mathbf{G}_{1,\ell}^{-2k}(z)|^2\, d\Leb_2(z) .\] On the other hand, for $\delta:\:0<\delta<\frac{2}{3}$, cf. estimate~(\ref{equ:30zp,1}), \begin{multline*} K_1 \geq\int_{\dpar(+,N,\ell)} P(\mathbf{G}_{1,\ell}^{-2},z,2-\delta) \geq\\ \sum_{k\geq 0}\int_{\dpar(+,N,\ell)} |D_z\mathbf{G}_{1,\ell}^{-2k}(z)|^{2-\delta}\, d\Leb_2(z)| \geq\\ \sup \left\{ d^{-\delta}_m : m\geq \frac{n-N}{2}\right\} \int_{\dpar(+,N,\ell)} |D_z\mathbf{G}_{1,\ell}^{-2k}(z)|^2\, d\Leb_2(z) . \end{multline*} Since $d_m\rightarrow 0$, the Lemma~\ref{lem:21mp,2} follows. \end{proof} \paragraph{Conclusion of the proof of Proposition~\ref{prop:21ma,1}.} By formula~(\ref{equ:21mp,1}, Corollary~\ref{coro:21mp,1} and Lemma~\ref{lem:21mp,2} \[ \lim_{n\rightarrow\infty} \sup\left\{ \int_{\dpar(+,n,\ell) \cup \dpar(-,n,\ell)} \mathfrak{E}_{\text{par},\ell}(z)\,d\Leb_2(z) :\: \ell=2,4,\cdots,\infty \right\} = 0 .\] The are only $2^{n-1}$ ways to compose parabolic branches with return time less than $n$. Uniform tightness thus follows. \subsection{Outer branches.} Mappings $\tpar(\ell)$ already have uniformly bounded distortion by Lemma~\ref{lem:3hp,2}, since for any branch and $s$ it is possible to choose $\mathfrak{m}$ so that its domain is uniformly nested in $V^{\mathfrak{m}}_{\ell}(s)$. However, we would like to have a uniform expanding Markov structure. 
Such a structure is suggested by Proposition~\ref{prop:10hp,1} since any composition of branches can be extended univalently to map onto a slit plane $\CC\setminus \left(-\infty,\tau_{\ell}^{-1}]\cup [1,+\infty) \right)$. Let us choose a finite set $\mathfrak{B}$ of branches $T$ which contains symbols $(+,\pm,2,0)$ and $(-,\pm,1,0)$ which correspond to post-singular and parabolic branches discussed before. \begin{defi}\label{defi:23mp,1} Define the generic {\em hyperbolic induced} mapping $\thyp(\mathfrak{B})$ as consisting of sequences which begin with any symbol not in $\mathfrak{B}$ and are followed by any, possibly empty, sequence consisting of exclusively of symbols from $\mathfrak{B}$. \end{defi} For any $\ell$, the domain of $\thyp\left(\mathfrak{B}\right)_{\ell}$ is the subset of $A_{\ell}$ with domains of the branches from $\mathfrak{B}$ removed. Let $\vhyp$ be a bounded Jordan domain with smooth boundary chosen so that $x_{0,\infty} \in \vhyp$ and \[ \overline{\vhyp} \subset \CC\setminus \left(-\infty,\tau_{\ell}^{-1}]\cup [1,+\infty)\right) \] for all $\ell\geq\ell(\vhyp)$, where $\ell(\vhyp)<\infty$. \begin{theo}\label{theo:23mp,1} Fix any domain $\vhyp$ as specified above. Also choose a finite set of branches $\mathfrak{B}$ which contains post-singular and parabolic branches. Then the following properties hold. \begin{itemize} \item $\thyp(\mathfrak{B})$ is uniformly tight, cf. Definition~\ref{defi:3jp,1}. \item For every $\ell\geq\ell(\vhyp)$ any composition of branches of $\thyp(\ell)$ extends univalently onto \[ \CC\setminus\left( (-\infty,\tau_{\ell}^{-1}]\cup [1,+\infty) \right) .\] \item There exist a compact set $F_{\text{hyp}} \subset \vhyp$, a particular choice of $\mathfrak{B}$ and $\ell_0<\infty$ such that for every $\ell\geq\ell_0$ and every branch $\mathfrak{z}\in\thyp(\mathfrak{B})$, the inclusion $\mathfrak{z}_{\ell}^{-1}(\vhyp) \subset F_{\text{hyp}}$ holds, where $\mathfrak{z}_{\ell}^{-1}$ should be taken in the sense of the univalent extension of $\mathfrak{z}$ postulated by the previous claim. \end{itemize} \end{theo} \paragraph{Uniform tightness of exit maps.} Recall that for any generic branch $\mathfrak{z}$ the {\em first exit map} from $\mathfrak{z}$ consists of sequences which repeat the symbol of $\mathfrak{z}$ an arbitrary number of times. \begin{lem}\label{lem:26ma,1} If $\mathfrak{z}$ is not post-singular or parabolic, then the first exit map from $\mathfrak{z}$ is uniformly tight. \end{lem} \begin{proof} For any $\ell$ let the {\em block} mean the union of domains of $\mathfrak{z}_{\ell}$ and the adjacent branch of the same side, sign and level and height greater by $1$. Use Proposition~\ref{prop:10hp,1} to verify that $\mathfrak{z}_{\ell}$ maps with distortion that is bounded independently of $\mathfrak{z}$ and $\ell$ onto the block. Indeed, in the Proposition choose $\theta_0=\theta_1=\pm\frac{\pi}{2}$ with the sign depending on the sign of $\mathfrak{z}$. Then by Proposition~\ref{prop:11jp,1} the distance from the block to the slits is uniformly bounded away from $0$. But then the measure of of the set of points which do not exit by the $n$-th iterate of $\mathfrak{z}_{\ell}$ shrinks uniformly exponentially with $n$ and uniform tightness follows. \end{proof} \paragraph{Proof of the first claim.} One can construct $\thyp(\mathfrak{B})$ by successively inducing on branches one by one. That is, we set $T(0) = \tpar$ and then $T(n+1)$ is the first exit map from the next branch followed by $T(n)$. 
Each of those maps is uniformly tight by Lemma~\ref{lem:26ma,1} and Lemma~\ref{lem:4ja,1} and since the set $\mathfrak{B}$ was assumed finite, that includes $\thyp(\mathfrak{B})$. \paragraph{Proof of the second claim.} This follows immediately from Proposition~\ref{prop:10hp,1} taken with $\theta_0=\pi, \theta_1=0$. \paragraph{Proof of the third claim.} $\vhyp$ has a finite hyperbolic diameter in $\CC\setminus\left( (-\infty,\tau_{\ell}^{-1}] \cup [1,+\infty) \right)$ and therefore $\mathfrak{z}^{-1}(\vhyp)$ has bounded hyperbolic diameter in the appropriate extension domain $\hat{\Omega}_{\ell}\setminus (-\infty,x_{0,\ell}]$ or $\hat{\Omega}_{\ell}\setminus [x_{0,\ell},+\infty)$, cf. Lemma~\ref{lem:3hp,2}. Note that $x_{0,\ell}$ is on the boundary of that domain and hence Euclidean diameters of $\mathfrak{z}^{-1}(\vhyp)$ tend to $0$ as a uniform function of the distance from $\Dm(\mathfrak{z})_{\ell}$ to $x_{0,\ell}$. By Proposition~\ref{prop:11jp,1} for any $r>0$ the domains of all branches of $T_{\ell}$ except for finitely many are contained in $D(x_{0,\infty},r)$ for all $\ell\geq \ell(r)$. By what was just observed, the same holds for perhaps larger sets $\mathfrak{z}_{\ell}^{-1}(\vhyp)$. We choose $r$ so small that $\overline{D(x_{0,\infty},r)} \subset\vhyp$ and set $F_{\text{hyp}} := \overline{D(x_{0,\infty},r)}$. As $\mathfrak{B}$ we pick precisely the finite set of branches $\mathfrak{z}$ characterized by the condition $\exists \ell\geq\ell(r)\;\;\mathfrak{z}_{\ell}^{-1}(\vhyp) \not\subset F$. Then $\ell_0 := \max\left(\ell(\vhyp),\ell(r)\right)$. \section{Invariant densities} \paragraph{Choice of the domain.} We fix some $\vhyp$ in Theorem~\ref{theo:23mp,1} which implies a choice of $\mathfrak{B}$. To unclutter notation, we will write $\thyp$ for $\thyp(\mathfrak{B})$ and $\thyp(\ell)$ for the instance of $\thyp$ for a particular $\ell$. \paragraph{The Perron-Frobenius operator.} For all $\ell$ sufficiently large the Perron-Frobenius operator can be defined on $L_{1}(\Dm\left(\thyp(\ell)\right),\Leb_2,\RR)$ by \[ (\mathfrak{P}_{\ell} g)(u) =\sum_{\mathfrak{z}\in\thyp} |D\mathfrak{z}^{-1}_{\ell}(u)|^2 g\left(\mathfrak{z}_{\ell}^{-1}(u)\right) \] where we identified a generic induced map $\thyp(\ell)$ with the set of its branches. The term {\em density} will be used for a non-negative function with integral $1$. \begin{fact}\label{fa:24mp,1} The operator $\mathfrak{P}_{\ell}$ is {\em stochastically stable} meaning that there is a invariant density $g^{\infty}_{\ell}$ and for any other density $g\in L_{1}\bigl(\Dm(\left(\thyp(\ell)\right),\Leb_2,\RR\bigr)$, $\lim_{n\rightarrow\infty}\|\mathfrak{P}_{\ell}^ng - g_{\ell}^{\infty}\|_1 = 0$ holds. Additionally, if $\gamma\in L_{1}\bigl(\Dm\left(\thyp(\ell)\right),\Leb_2,\RR\bigr)$ is a fixed point of $\mathfrak{P}_{\ell}$, then $\gamma = c g_{\ell}^{\infty}$, $c\in\RR$. \end{fact} Our goal will be to show that densities $g_{\ell}^{\infty}$ are real-analytic and converge analytically to $g_{\infty}^{\infty}$ when $\ell\rightarrow\infty$. \subsection{The transfer operator.} \begin{defi}\label{defi:24mp,1} Let $X$ denote the space of complex-valued holomorphic functions of two variables defined on $\vhyp \times \vhyp$, continuous to the closure, and real on the diagonal: \[ \forall z\in\vhyp \; \forall f\in X\; f(z,\overline{z}) \in\RR .\] Endow $X$ with the the sup-norm. \end{defi} Then $X$ is a Banach space over $\RR$. 
\begin{defi}\label{defi:24mp,2} The {\em transfer operator} ${\cal P}_{\ell} :\: X \rightarrow X$ is defined by \[ {\cal P}_{\ell} f(z,w) = \sum_{\mathfrak{z}\in\thyp(\ell)} D\mathfrak{z}^{-1}(z) D\mathfrak{z}^{-1} (w) f\left(\mathfrak{z}^{-1}(z),\mathfrak{z}^{-1}(w)\right) \] where univalent extensions of branches onto $\overline{\vhyp}$ are used, cf. Theorem~\ref{theo:23mp,1}. \end{defi} It is not immediately clear that the transfer operator is continuous or even well-defined. Observe that, at least formally, when $w=\overline{z}$, then \begin{multline*} {\cal P}_{\ell} f(z,\overline{z}) = \sum_{\mathfrak{z}\in\thyp(\ell)} D\mathfrak{z}^{-1}(z) D\mathfrak{z}^{-1} (\overline{z}) f\left(\mathfrak{z}^{-1}(z),\mathfrak{z}^{-1}(\overline{z})\right) =\\ \sum_{\mathfrak{z}\in\thyp(\ell)} D\mathfrak{z}^{-1}(z) \overline{D\mathfrak{z}^{-1}} (z) f\left(\mathfrak{z}^{-1}(z),\overline{\mathfrak{z}^{-1}}(z)\right) \end{multline*} which means that, acting on the diagonal $\gamma(z) := f(z,\overline{z})_{|z\in\Dm\left(\thyp(\ell)\right)}$, the transfer operator reduces to the Perron-Frobenius operator $\mathfrak{P}_{\ell}\gamma$. To establish basic properties of the transfer operator, introduce {\em branch operators} for a generic branch $\mathfrak{z}$: \begin{equation}\label{equ:27mp,1} {\cal P}_{\mathfrak{z},\ell} f(z,w) = D\mathfrak{z}^{-1}_{\ell}(z)D\mathfrak{z}^{-1}_{\ell}(w) f\left(\mathfrak{z}^{-1}_{\ell}(z),\mathfrak{z}^{-1}_{\ell}(w)\right) . \end{equation} Because of uniformly bounded distortion, cf. Theorem~\ref{theo:23mp,1}, we get an estimate \begin{equation}\label{equ:27mp,2} \| {\cal P}_{\mathfrak{z},\ell} \|\leq K_{\text{norm}} \left|\Dm(\mathfrak{z}_{\ell})\right| \end{equation} for all $\ell$. \begin{lem}\label{lem:24mp,1} For every generic branch $\mathfrak{z}$ of $\thyp$ and $\ell \geq \ell(\mathfrak{z})$, the branch operator ${\cal P}_{\mathfrak{z},\ell}$ is compact. \end{lem} \begin{proof} Let $X_{\mathfrak{z},\ell}$ denote the space of functions $f\in X$ restricted to $\mathfrak{z}_{\ell}^{-1}\left(\overline{\vhyp}\right)$, still with the $\sup$-norm. Then, by formula~(\ref{equ:27mp,1}), the operator ${\cal P}_{\mathfrak{z},\ell}$ can be represented as the composition of a continuous operator on $X_{\mathfrak{z},\ell}$ and the restriction operator from $X$ to $X_{\mathfrak{z},\ell}$. Since $\mathfrak{z}_{\ell}^{-1}\left(\overline{\vhyp}\right)$ is a compact subset of $\vhyp$ by the last claim of Theorem~\ref{theo:23mp,1}, the restriction operator is compact by Cauchy's integral formula and the Arzel\`{a}-Ascoli theorem. \end{proof} \begin{lem}\label{lem:27mp,2} For some $\ell_0<\infty$ and every $\ell\geq\ell_0$ the series in Definition~\ref{defi:24mp,2} converges in operator norm and ${\cal P}_{\ell}$ is a compact operator. Furthermore, \[ \sup \left\{ \|{\cal P}_{\ell}^n\| :\: n\geq 0,\, \ell\geq\ell_0 \right\} < \infty .\] \end{lem} \begin{proof} The series satisfies Cauchy's condition in operator norm by estimate~(\ref{equ:27mp,2}). The compactness of the limit then follows from Lemma~\ref{lem:24mp,1}. As to the additional claim, observe that for any $n>1$ the operator ${\cal P}_{\ell}^n$ is given by a formula analogous to Definition~\ref{defi:24mp,2} except that the summation extends over $\mathfrak{z}\in\thyp^n$. Since estimate~(\ref{equ:27mp,2}) is only based on bounded distortion, it extends to branches of $\thyp^n$.
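To make the uniform bound explicit: distinct branches of $\thyp^n$ have pairwise disjoint domains contained in $\Dm\left(\thyp(\ell)\right)$, so summing estimate~(\ref{equ:27mp,2}) over them gives \[ \|{\cal P}_{\ell}^n\| \leq K_{\text{norm}} \sum_{\mathfrak{z}\in\thyp^n} \left|\Dm(\mathfrak{z}_{\ell})\right| \leq K_{\text{norm}} \left|\Dm\left(\thyp(\ell)\right)\right| , \] a bound independent of $n$.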
\end{proof} \begin{lem}\label{lem:27mp,3} \[ \lim_{\ell\rightarrow\infty} \|{\cal P}_{\ell} - {\cal P}_{\infty}\| = 0 .\] \end{lem} \begin{proof} Observe first that for every generic branch $\mathfrak{z}$ the branch operators ${\cal P}_{\mathfrak{z},\ell}$ converge in operator norm to ${\cal P}_{\mathfrak{z},\infty}$, cf. Proposition~\ref{prop:11jp,1}. By the uniform tightness of $\thyp$ and estimate~(\ref{equ:27mp,2}), for every $\epsilon>0$ there is a finite set of branches $\mathfrak{B}(\epsilon)$ such that for every $\ell$ sufficiently large \[ \bigl\| \left( \sum_{\mathfrak{z}\in\mathfrak{B}(\epsilon)} {\cal P}_{\mathfrak{z},\ell} \right) - {\cal P}_{\ell} \bigr\| < \epsilon .\] The claim of the lemma now follows by a $3\epsilon$ argument: for $\ell$ large enough, \[ \|{\cal P}_{\ell} - {\cal P}_{\infty}\| \leq \bigl\| {\cal P}_{\ell} - \sum_{\mathfrak{z}\in\mathfrak{B}(\epsilon)} {\cal P}_{\mathfrak{z},\ell} \bigr\| + \sum_{\mathfrak{z}\in\mathfrak{B}(\epsilon)} \bigl\| {\cal P}_{\mathfrak{z},\ell} - {\cal P}_{\mathfrak{z},\infty} \bigr\| + \bigl\| \sum_{\mathfrak{z}\in\mathfrak{B}(\epsilon)} {\cal P}_{\mathfrak{z},\infty} - {\cal P}_{\infty} \bigr\| < 3\epsilon .\] \end{proof} \paragraph{Fixed points of transfer operators.} Let us consider the set $D_X$ which consists of all $f\in X$ such that $\gamma_f(z) := f(z,\overline{z})$ is a density when restricted to $z\in \Dm\left(\thyp(\ell)\right)$. \subparagraph{An identity principle for two complex variables.} \begin{fact}\label{fa:30zp,1} Suppose that $U$ is a domain in $\CC$ and $F :\: U\times U\rightarrow\CC$ is holomorphic. If $F(z,\overline{z})=0$ for all $z$ in an open subset of $U$, then $F$ vanishes identically. \end{fact} This Fact is a particular case of Theorem 7, p. 36, in~\cite{bochner}. \begin{prop}\label{prop:27mp,1} For every $\ell$ large enough ${\cal P}_{\ell}$ has a unique fixed point $f^{\infty}_{\ell} \in D_X$. Additionally, $f^{\infty}_{\ell}(z,\overline{z}) = g^{\infty}_{\ell}(z)$ for $z\in\Dm\left(\thyp(\ell)\right)$, cf. Fact~\ref{fa:24mp,1}. Moreover, \[ \lim_{\ell\rightarrow\infty} \| f^{\infty}_{\ell} - f_{\infty}^{\infty} \|_X = 0 .\] \end{prop} \paragraph{Proof of Proposition~\ref{prop:27mp,1}.} Let $f\in D_X$. By Fact~\ref{fa:24mp,1} the functions $\mathfrak{P}_{\ell}^n \gamma_f$ converge to $g_{\ell}^{\infty}$ in $L_1\bigl(\Dm\left(\thyp(\ell)\right)\bigr)$. By the compactness and uniform bound of Lemma~\ref{lem:27mp,2}, the sequence ${\cal P}_{\ell}^n f$ is contained in a compact subset of $X$. Take any two convergent subsequences ${\cal P}_{\ell}^{n_p} f$. Since their limits are the same on the diagonal $(z,\overline{z}) :\: z\in\Dm\left(\thyp(\ell)\right)$, they are the same in $\vhyp\times\vhyp$ by Fact~\ref{fa:30zp,1}. Hence, the entire sequence ${\cal P}_{\ell}^n f$ converges to a fixed point $f_{\ell}^{\infty}$ of ${\cal P}_{\ell}$. Since the transfer operator preserves $D_X$, then $f_{\ell}^{\infty}\in D_X$. The same argument based on the identity principle shows that the limit is independent of $f$ and hence unique for each $\ell$. Since the initial $f$ can be chosen constant on $\vhyp$, it also follows that the set $\{ f_{\ell}^{\infty} :\: \ell\geq\ell_0\}$ is bounded by some $K_1$. It remains to show the convergence of $f_{\ell}^{\infty}$ to $f_{\infty}^{\infty}$. Since ${\cal P}_{\infty}$ is compact, the set $\{ {\cal P}_{\infty} f_{\ell}^{\infty} :\: \ell\geq\ell_0 \}$ is pre-compact in $X$. Observe that \[ \lim_{\ell\rightarrow\infty} \|{\cal P}_{\infty} f_{\ell}^{\infty} - f_{\ell}^{\infty} \|_X \leq \lim_{\ell\rightarrow\infty} \|{\cal P}_{\infty}-{\cal P}_{\ell}\| K_1 = 0\] by Lemma~\ref{lem:27mp,3}. Let $\hat{f}^{\infty}$ be the limit for any convergent subsequence $\ell_p$ of ${\cal P}_{\infty} f_{\ell}^{\infty}$. Then, by what has just been observed, $\hat{f}^{\infty} = \lim_{p\rightarrow\infty} f_{\ell_p}^{\infty}$.
Then for any $p$ \[ {\cal P}_{\infty} \hat{f}^{\infty} - \hat{f}^{\infty} = {\cal P}_{\infty} (\hat{f}^{\infty} - f_{\ell_p}^{\infty}) + ({\cal P}_{\infty}-{\cal P}_{\ell_p}) f_{\ell_p}^{\infty} + (f_{\ell_p}^{\infty} - \hat{f}^{\infty}) .\] Since every term on the right-hand side tends to $0$ as $p\rightarrow\infty$, $\hat{f}^{\infty}$ is a fixed point of ${\cal P}_{\infty}$ and $\hat{f}^{\infty} = f_{\infty}^{\infty}$ by the uniqueness of the fixed point. Proposition~\ref{prop:27mp,1} has been proved. \subsection{Invariant measures for original mappings $T_{\ell}$.} The general construction for passing from invariant measures for induced maps $\thyp(\ell)$ to measures for $T_{\ell}$ is well known. Let $r_{\thyp}(\mathfrak{z})$ denote the return time for branch $\mathfrak{z}$ of $\thyp$, i.e. the number of iterates of $T$ which compose to $\mathfrak{z}$, so that $\thyp(\ell)$ coincides with $T_{\ell}^{r_{\thyp}(\mathfrak{z})}$ on the domain of each branch $\mathfrak{z}$. If $\mu_{\text{hyp},\ell}$ is an invariant measure for $\thyp(\ell)$, then \[ \mu_{\ell} := \sum_{\mathfrak{z}\in \thyp} \sum_{j=0}^{r_{\thyp}(\mathfrak{z})-1} \left( T^j_{\ell|\Dm(\mathfrak{z}_{\ell})}\right)_* \mu_{\text{hyp},\ell} \] is immediately seen to be invariant under the push-forward by $T_{\ell}$. We will work in the spaces $L_p := L_p(\CC,\Leb_2,\RR)$. \begin{defi}\label{defi:28mp,1} Given a set $B$ of branches of $\thyp$ the {\em propagation operator} \[ \hat{\mathfrak{P}}_{B,\ell} :\: L_{\infty} \rightarrow L_1 \] is defined by \[ \hat{\mathfrak{P}}_{B,\ell} g(u) = \sum_{\mathfrak{z}\in B} \sum_{j=0}^{r_{\thyp}(\mathfrak{z})-1} |DT_{\ell}^{-j}(u)|^2 (g\cdot\chi_{\Dm(\mathfrak{z}_{\ell})})\left(T_{\ell}^{-j}(u)\right) .\] When $B$ is not specified, it is assumed to be the set of all branches of $\thyp$. \end{defi} The convergence of this sum will be addressed later. Define {\em simple propagation operators} \begin{equation}\label{equ:6na,1} \hat{\mathfrak{P}}_{\mathfrak{z},j,\ell} g(u) := |DT_{\ell}^{-j}(u)|^2 (g\cdot\chi_{\Dm(\mathfrak{z}_{\ell})})\left(T_{\ell}^{-j}(u)\right) \end{equation} where $\mathfrak{z}$ is any branch of $\thyp$ and $0\leq j<r_{\thyp}(\mathfrak{z})$. By bounded distortion they satisfy \begin{equation}\label{equ:3np,1} \| \hat{\mathfrak{P}}_{\mathfrak{z},j,\ell}\| \leq K_{\text{prop}} \left|\Dm\left(\mathfrak{z}_{\ell}\right)\right| . \end{equation} \begin{lem}\label{lem:4np,1} For any branch $\mathfrak{z}$ of $\thyp$ and $0\leq j<r_{\thyp}(\mathfrak{z})$, \[ \lim_{\ell\rightarrow\infty} \hat{\mathfrak{P}}_{\mathfrak{z},j,\ell} g_{\ell}^{\infty} = \hat{\mathfrak{P}}_{\mathfrak{z},j,\infty} g_{\infty}^{\infty} \] in $L_1$, cf. Fact~\ref{fa:24mp,1}. \end{lem} \begin{proof} By Proposition~\ref{prop:27mp,1} the densities $g_{\ell}^{\infty}(u)$ extend to real analytic functions \[ \hat{g}_{\ell}(u) := f_{\ell}^{\infty}(u,\overline{u}) \] which converge uniformly on $\vhyp$. Replacing $g_{\ell}^{\infty}$ with $\hat{g}_{\ell}$ in the claim of the Lemma does not change its meaning, since the characteristic functions of $\Dm(\mathfrak{z}_{\ell})$ force the restriction to appropriate domains.
In connection with formula~(\ref{equ:6na,1}) write \begin{multline*} |DT_{\ell}^{-j}|^2 \left(\hat{g}_{\ell}\cdot\chi_{\Dm(\mathfrak{z}_{\ell})}\right)\circ T^{-j}_{\ell} - |DT^{-j}_{\infty}|^2\left(\hat{g}_{\infty}\cdot \chi_{\Dm(\mathfrak{z}_{\infty})}\right)\circ T^{-j}_{\infty} =\\ \left( |DT^{-j}_{\ell}|^2\hat{g}_{\ell}\circ T^{-j}_{\ell}-|DT^{-j}_{\infty}|^2 \hat{g}_{\infty}\circ T^{-j}_{\infty}\right) \chi_{T_{\infty}^{-j}\left(\Dm(\mathfrak{z}_{\infty})\right)} +\\ \left(\chi_{\Dm(\mathfrak{z}_{\ell})}\circ T^{-j}_{\ell}-\chi_{\Dm(\mathfrak{z}_{\infty})}\circ T_{\ell}^{-j}\right) \hat{g}_{\ell}\circ T^{-j}_{\ell} |DT^{-j}_{\ell}|^2 . \end{multline*} To see that the first term goes to $0$ in $L_1$, observe that $\hat{g}_{\ell} \rightarrow \hat{g}_{\infty}$ and $T^{-j}_{\ell} \rightarrow T^{-j}_{\infty}$ uniformly on compact subsets of $\vhyp$. Next, $\mathfrak{z}_{\ell}$ maps onto $\bigcup_{h\notin\mathfrak{B}} \Dm(h_{\ell})$, cf. Theorem~\ref{theo:23mp,1}. For any fixed $h$, $\chi_{\Dm(h_{\ell}\circ\mathfrak{z}_{\ell})}$ converges to $\chi_{\Dm(h_{\infty}\circ\mathfrak{z}_{\infty})}$ in $L_1$ by Lemma~\ref{lem:26ma,1}. Then the sums over all $h$ also converge by the dominated convergence theorem. After changing variables by $T^{-j}_{\ell}$, we estimate the $L_1$-norm of the second term as follows: \begin{multline*} \left| \left(\chi_{\Dm(\mathfrak{z}_{\ell})}\circ T^{-j}_{\ell}-\chi_{\Dm(\mathfrak{z}_{\infty})}\circ T_{\ell}^{-j}\right) \hat{g}_{\ell}\circ T^{-j}_{\ell} |DT^{-j}_{\ell}|^2 \right|_1 =\\ \left| (\chi_{\Dm(\mathfrak{z}_{\ell})} - \chi_{\Dm(\mathfrak{z}_{\infty})}) \hat{g}_{\ell}\right|_1 . \end{multline*} Since the $\hat{g}_{\ell}$ are uniformly bounded, cf. Proposition~\ref{prop:27mp,1}, the second term goes to $0$ as $\ell\rightarrow\infty$ by Fact~\ref{fa:4np,1}. \end{proof} \begin{theo}\label{theo:28mp,1} The mappings $T_{\ell}$ have invariant densities $\gamma_{\ell} \in L_1\left(\Leb_2\right)$, each supported on the corresponding $\Dm(T_{\ell})$. The convergence $\lim_{\ell\rightarrow\infty} \|\gamma_{\ell}-\gamma_{\infty}\|_1 = 0$ holds. Additionally, for some $R_{\text{analytic}}>0$ and all $\ell$ sufficiently large the $\gamma_{\ell}$ extend to holomorphic functions of two complex variables on \[ D(x_{0,\infty},R_{\text{analytic}})\times D(x_{0,\infty},R_{\text{analytic}})\] which converge uniformly on this set. \end{theo} \paragraph{Proof of $L_1$ convergence.} The densities $\gamma_{\ell}$ are given by $\gamma_{\ell} := \hat{\mathfrak{P}}_{\ell} g_{\ell}^{\infty}$. We will now address the convergence of the propagation operator. All inverse branches in formula~(\ref{equ:6na,1}) have uniformly bounded distortion since they map into the domains of branches of $\thyp(\ell)$, which are all contained in $F_{\text{hyp}}$. So, for some $K_{\text{propag}}$ independent of $\mathfrak{z},\ell,j$ \[ \| \hat{\mathfrak{P}}_{\mathfrak{z},j,\ell} \|_1 \leq K_{\text{propag}} \left|\Dm(\mathfrak{z}_{\ell})\right| .\] If we sum this up over all $j :\: 0\leq j<r_{\thyp}(\mathfrak{z})$ we get a factor $r_{\thyp}(\mathfrak{z})$, and if we further sum up over all branches $\mathfrak{z}$ in some set $B$, then, since the domains are disjoint, we get \[ \|\hat{\mathfrak{P}}_{B,\ell}\|_1 \leq K_{\text{propag}} \int_{\bigcup_{\mathfrak{z}\in B} \Dm(\mathfrak{z}_{\ell})} r_{\thyp,\ell}(u)\, d\Leb_2(u) .\] Let $\epsilon>0$ be arbitrary. By uniform tightness in Theorem~\ref{theo:23mp,1}, cf.
Definition~\ref{defi:3jp,1}, there is a set of branches $B(\epsilon)$ including all but finitely many branches of $\thyp$ such that \begin{equation}\label{equ:6np,1} \|\hat{\mathfrak{P}}_{B(\epsilon),\ell}\|_1 \leq K_{\text{propag}} \epsilon \end{equation} uniformly for all $\ell\geq\ell(\epsilon)$. Then we estimate \begin{multline*} \limsup_{\ell\rightarrow\infty} \left\| \hat{\mathfrak{P}}_{\ell} g_{\ell}^{\infty} - \hat{\mathfrak{P}}_{\infty} g_{\infty}^{\infty}\right\|_1 \leq \limsup_{\ell\rightarrow\infty} \bigl\|\sum_{\mathfrak{z}\notin B(\epsilon)}\sum_{j=0}^{r(\mathfrak{z})-1}\left( \hat{\mathfrak{P}}_{\mathfrak{z},j,\ell} g_{\ell}^{\infty} - \hat{\mathfrak{P}}_{\mathfrak{z},j,\infty} g_{\infty}^{\infty}\right)\bigr\|_1 +\\ \limsup_{\ell\rightarrow\infty} \left\| \hat{\mathfrak{P}}_{B(\epsilon),\ell} g_{\ell}^{\infty} - \hat{\mathfrak{P}}_{B(\epsilon),\infty} g_{\infty}^{\infty}\right\|_1 \leq 0 + K_{\text{propag}}\epsilon\left(\|g_{\ell}^{\infty}\|_{\infty} + \|g_{\infty}^{\infty}\|_{\infty}\right) \end{multline*} where in the final estimate we used Lemma~\ref{lem:4np,1} and inequality~(\ref{equ:6np,1}). Since $\epsilon$ was arbitrary and the $\|g_{\ell}^{\infty}\|_{\infty}$ are uniformly bounded, cf. Proposition~\ref{prop:27mp,1}, it follows that \[ \lim_{\ell\rightarrow\infty} \left\| \hat{\mathfrak{P}}_{\ell} g_{\ell}^{\infty} - \hat{\mathfrak{P}}_{\infty} g_{\infty}^{\infty}\right\|_1 = 0 \] which is the first claim of Theorem~\ref{theo:28mp,1}. To prove the claim about $R_{\text{analytic}}$, start with the observation that $\thyp(\mathfrak{B})$ is the first return map to the union of the domains of branches not in $\mathfrak{B}$. Then the formula of Definition~\ref{defi:28mp,1} implies that $\gamma_{\ell}=g_{\ell}^{\infty}$ on the domains of such branches, since the only possibility to get something other than $0$ for $u\in \Dm(\mathfrak{x}_{\ell})$ with $\mathfrak{x}\notin\mathfrak{B}$ is when $\mathfrak{z}=\mathfrak{x}$ and $j=0$. On the other hand, the set $\mathfrak{B}$ was finite and the domains of its branches are disjoint from $D(x_{0,\ell},R_{\text{analytic}})$ for some $R_{\text{analytic}}>0$. Then $\gamma_{\ell} = g_{\ell}^{\infty}$ continues analytically through $f_{\ell}^{\infty}$ and the second claim of Theorem~\ref{theo:28mp,1} follows from Proposition~\ref{prop:27mp,1}. \subsection{Geometric properties of the boundary of $\Omega_{\ell}$.} Recall the arc $\mathfrak{w}_{\ell}$ which joins $x_{\pm,\ell}$ to $x_{0,\ell}$ and is invariant under $G_{\ell}$. Define $\hat{\HH}_{\pm,\ell} := \HH_{\pm}\setminus \mathfrak{w}_{\ell}$. We can also take $\hat{\HH}_{\pm,\infty} = \hat{\HH}_{\pm}$. Then the $\hat{\HH}_{\pm,\ell}$ are swapped by the action of the principal inverse branch $\mathbf{G}_{\ell}^{-1}$: \[ \mathbf{G}_{\ell}^{-1}(\hat{\HH}_{\sigma,\ell}) \subset \hat{\HH}_{-\sigma,\ell} ,\, \sigma=\pm .\] \paragraph{Fundamental segments in the boundary of $\Omega_{\ell}$.} Recall the point $y_{\ell} := \mathbf{G}_{\ell}^{-1}(\tau_{\ell} x_{0,\ell})$. The first segment of the boundary of $\Omega_{+,\ell}$ is composed of two arcs of the form $G_{\pm,\ell}^{-1}[y_{\ell},0)$, where $G_{\pm}$ denotes the inverse branch which maps $\CC\setminus [0,+\infty)$ into $\HH_{\pm}$ while sending $(y_{\ell},0)$ into the boundary of $\Omega_{+,\ell}$. Then the rest of the boundary consists of images of these two arcs by $\mathbf{G}_{\ell}^{-n}$ for $n>0$.
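In other words, apart from endpoints and accumulation points of arcs, $\partial\Omega_{+,\ell}$ is exhausted by \[ \bigcup_{n\geq 0} \mathbf{G}_{\ell}^{-n}\left( G_{+,\ell}^{-1}[y_{\ell},0) \cup G_{-,\ell}^{-1}[y_{\ell},0) \right) .\]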
\begin{defi}\label{defi:24xp,1} We will call all arcs of the form $\mathbf{G}_{\ell}^{-n}\left(G_{\pm,\ell}^{-1}[y_{\ell},0)\right)$, $n=0,1,\dots$ {\em fundamental segments} of $\partial\Omega_{\ell}$ of {\em order} $n$. \end{defi} \begin{lem}\label{lem:31na,1} There exist finite constants $K_{\text{arc}}$ and $\ell_0$ such that for every $n\geq 0$ and $\ell_0\leq\ell\leq\infty$, if $\mathfrak{u}$ is a fundamental segment of the boundary of $\Omega_{\ell}$ with endpoints $u_1$ and $u_2$, then \[ \diam(\mathfrak{u}) \leq K_{\text{arc}} |u_1 - u_2| \; .\] \end{lem} \begin{proof} For $\ell=\infty$, the arcs of orders $2$ and $3$ are compactly contained in $\hat{\HH}_{\pm,\infty}$. This persists for large $\ell$ by Proposition~\ref{prop:11jp,1} and Lemma~\ref{lem:17ha,3}. For those arcs the estimate holds. Arcs of larger orders are obtained by taking inverse branches $\mathbf{G}_{\ell}^{-1}$ with uniformly bounded distortion. \end{proof} \begin{lem}\label{lem:27va,1} There are a constant $K_{\ref{lem:27va,1}}$ and an integer $\ell_{\ref{lem:27va,1}}$ with the following property. Let $\ell\geq\ell_{\ref{lem:27va,1}}$ and $v$ belong to a fundamental segment of order $n$ with endpoints $u_1,u_2$ in the boundary of $\Omega_{\sigma_1,\ell} \cap \HH_{\sigma_2}$, where $\sigma_1,\sigma_2=\pm$. Then there is $\hat{v} \in \partial\Omega_{-\sigma_1,\ell}\cap\HH_{\sigma_2}$ which belongs to a fundamental segment of order $n+1$ and \[ |\hat{v}-v| \leq K_{\ref{lem:27va,1}} |u_1-u_2| .\] \end{lem} \begin{proof} Without loss of generality $n\geq 2$; let $v_0 = \mathbf{G}_{\ell}^{n-2}(v)$. By Proposition~\ref{prop:11jp,1}, for $\ell$ sufficiently large the fundamental segment of order $2$ which contains $v_0$ and a point $\hat{v}_0$ which belongs to the fundamental segment of order $3$ can be enclosed in a disk of fixed hyperbolic diameter in $\hat{\HH}_{(-1)^n\sigma_2}$. This configuration is then mapped by $\mathbf{G}_{\ell}^{-n+2}$ with bounded distortion, which yields the claim of the Lemma. \end{proof} \paragraph{Action of $\mathbf{G}_{\ell}^{-2}$ near fixed points.} \begin{lem}\label{lem:28na,2} Consider a point $u+x_{0,\ell}$ in the boundary of $\Omega_{\ell}$. Then, for every $\epsilon>0$ and integer $k$, possibly negative, there is $r(\epsilon,k)>0$ and for every $r: 0<r\leq r(\epsilon,k)$ there is $\ell(r,\epsilon,k)<\infty$ so that for every $\ell(r,\epsilon,k)\leq\ell\leq\infty$ the estimate \[ \left| \mathbf{G}_{\ell}^{-2k}(u+x_{0,\ell}) - x_{0,\ell} - u(1-ka|u|^2) \right| < \epsilon |u|^3 \] holds wherever $r \leq |u| \leq r(\epsilon,k)$, where \[ a := -\frac{1}{3}SG_{\infty}(x_{0,\infty}) = \frac{1}{6}D^3\mathbf{G}^{-2}_{\infty}(x_{0,\infty}) > 0 .\] \end{lem} \begin{proof} We will first consider the case of $k=\pm 1$. We have the expansion \[ \mathbf{G}_{\ell}^{-2k}(x_{0,\ell}+u) - x_{0,\ell} = u + ka u^3 + \psi_{\ell,k}(u) + u^4 O_{\ell}(1) \] where $\left| O_{\ell}(1) \right|$ is bounded for all $0<|u|<r_0$ and $\ell\geq\ell_0$ while $\lim_{\ell\rightarrow\infty} \psi_{\ell,k}(u) = 0$ for all $0<|u|<r_0$. Since $u+x_{0,\ell}$ is in the boundary of $\Omega_{\ell}$, by Lemma~\ref{lem:18ha,1} for any $\eta>0$ there is $r(\eta)>0$ and if $r<|u|<r(\eta)$ as well as $\ell\geq\hat{\ell}(\eta,r)$, then $|\arg u^2-\pi|<\eta$. Consequently, \begin{equation}\label{equ:30vp,1} \left| u^2 + |u|^2\right| \leq 2\sin\frac{\eta}{2}\,|u|^2 < \eta |u|^2 \end{equation} when $\eta$ is small enough.
Thus, \[ \left| \mathbf{G}_{\ell}^{\mp 2}(u+x_{0,\ell}) - x_{0,\ell} - u(1 \mp a|u|^2) \right| \leq \eta |u|^3 + |\psi_{\ell,k}(u)| + |u|^4 |O_{\ell}(1)| .\] This leads to fixing $r(\epsilon,k)$ so that for $|u|\leq r(\epsilon,k)$ the last term is less than $\frac{\epsilon}{4}|u|^3$ and, in addition, $r(\epsilon,k) \leq r(\epsilon/4)$ with $r(\cdot)$ taken from the previous estimate. Then, \[ \left| \mathbf{G}_{\ell}^{\mp 2}(u+x_{0,\ell}) - x_{0,\ell} - u(1 \mp a|u|^2) \right| \leq \frac{\epsilon}{2} |u|^3 + |\psi_{\ell,k}(u)| .\] Now as soon as $r$ has been specified in the claim of the Lemma and $|u|>r$, $|\psi_{\ell,k}(u)|$ can be made less than $\frac{\epsilon}{4} |u|^3$ by choosing $\ell(r,\epsilon,k)$ suitably large. Additionally, we need $\ell(r,\epsilon,k) \geq \hat{\ell}(\epsilon/4,r)$ which was needed to secure estimate~(\ref{equ:30vp,1}). The general case follows by induction with respect to $k$. To fix attention, we will describe the inductive step for $k>0$. Let $u_k = \mathbf{G}_{\ell}^{-2k}(u+x_{0,\ell}) - x_{0,\ell}$. Then \begin{equation}\label{equ:31vp,1} \left| \mathbf{G}_{\ell}^{-2}(u_k+x_{0,\ell}) - x_{0,\ell} - u_k(1-a|u_k|^2) \right| < \epsilon_1 |u_k|^3 . \end{equation} By the inductive hypothesis \[ \left| u_k - u(1-ka|u|^2) \right| < \epsilon_2 |u|^3 . \] In particular, $u_k$ and $u$ differ only by $O(|u|^3)$ and $u_k|u_k|^2 = u|u|^2 + O(|u|^5)$. Furthermore, \begin{multline*} \left| u_k(1-a|u_k|^2) - u(1-(k+1)a|u|^2) \right| = \\ \left| u_k - u(1-ka|u|^2) - a\left( u_k|u_k|^2 - u|u|^2 \right) \right| \leq \epsilon_2 |u|^3 + O(|u|^5) . \end{multline*} From formula~(\ref{equ:31vp,1}), we get \[ \left| \mathbf{G}_{\ell}^{-2}(u_k+x_{0,\ell}) - x_{0,\ell} - u(1-(k+1)a|u|^2) \right| \leq \epsilon_1 |u_k|^3 + \epsilon_2|u|^3 +O(|u|^5) .\] Given $\epsilon>0$, if we take $\epsilon_1=\epsilon_2=\epsilon/3$ and $u$ small enough, we get the claim. \end{proof} \begin{lem}\label{lem:2xp,1} Consider a fundamental segment $\mathfrak{u}$ in the boundary of $\Omega_{\ell}$ with endpoints $v$ and $\mathbf{G}^{-2}_{\ell}(v)$. Let $r>0$. There are constants $K_{\ref{lem:2xp,1}}$ and $\ell(r)$ such that if $\ell\geq\ell(r)$ and $\mathfrak{u}$ intersects $C(x_{0,\ell},r)$, then $|v-\mathbf{G}_{\ell}^{-2}(v)| < K_{\ref{lem:2xp,1}}r^3$. \end{lem} \begin{proof} In Lemma~\ref{lem:28na,2} set $\epsilon=a$ and $k=1$. Let $\rho:=|v-x_{0,\ell}|$. To use Lemma~\ref{lem:28na,2} we need to have $\rho$ suitably small. This is true provided that the order of $\mathfrak{u}$ is sufficiently large and $\ell$ is large as well, by Proposition~\ref{prop:11jp,1}. The estimate of the Lemma can be easily met for finitely many orders based on the same Proposition. Then for $\ell\geq\ell(\rho)$ Lemma~\ref{lem:28na,2} implies that \begin{equation}\label{equ:2xp,1} |v-\mathbf{G}^{-2}_{\ell}(v)| \leq 2a\rho^3 . \end{equation} By Lemma~\ref{lem:31na,1} \[ \rho(1-2aK_{\text{arc}}\rho^2) \leq r \leq \rho(1+2aK_{\text{arc}}\rho^2) .\] Again, for $\rho < \sqrt{\frac{1}{4aK_{\text{arc}}}}$, this reduces to $\frac{r}{2}\leq\rho\leq 2r$. Hence, we can replace the condition $\ell\geq\ell(\rho)$ with $\ell\geq\ell(\frac{r}{2})$ and rewrite estimate~(\ref{equ:2xp,1}) as \[ |v-\mathbf{G}^{-2}_{\ell}(v)| \leq 16 a r^3 .\] \end{proof} \paragraph{Sections of $\partial\Omega_{\ell}$ by circles.} Fix $\sigma_1,\sigma_2=\pm$ and $r>0$.
Then we will write \[ X_{\sigma_1,\sigma_2,\ell}(r) := \left\{ v\in\partial\Omega_{\sigma_1,\ell}\cap\HH_{\sigma_2} :\: |v-x_{0,\ell}|=r \right\} .\] \begin{lem}\label{lem:31np,2} There exist $r_{\ref{lem:31np,2}}>0$ and an integer constant $N_{\ref{lem:31np,2}}<\infty$ with the following property. For every $r :\: 0<r<r_{\ref{lem:31np,2}}$ there is $\ell(r)<\infty$ and if $\ell(r)\leq\ell\leq\infty$, $\sigma_1,\sigma_2=\pm$, then $X_{\sigma_1,\sigma_2,\ell}(r)$ is contained in some $N_{\ref{lem:31np,2}}$ consecutive fundamental segments in the boundary of $\Omega_{\sigma_1,\ell}\cap\HH_{\sigma_2}$. \end{lem} \begin{proof} Let $v_0$ be the point of $X_{\sigma_1,\sigma_2,\ell}(r)$ which is furthest from $x_{\sigma_2,\ell}$ in the ordering of the arc. Suppose that $k_0$ is an integer chosen so that the fundamental segment which contains $\mathbf{G}_{\ell}^{-2k_0}(v_0)$ also intersects $X_{\sigma_1,\sigma_2,\ell}(r)$ and contains its point $v_1$ which is closest to $x_{\sigma_2,\ell}$ in the ordering of the arc. Use Lemma~\ref{lem:28na,2} with $u=v_0-x_{0,\ell}$ and $\epsilon:=a$. For $r$ sufficiently small and $\ell$ large depending on $r,k_0$, we obtain \[ |\mathbf{G}_{\ell}^{-2k_0}(v_0)-x_{0,\ell}| \leq r\left( 1-(k_0-2)ar^2 \right) . \] Then the diameter of the fundamental segment which contains $\mathbf{G}_{\ell}^{-2k_0}(v_0)$ is at least $(k_0-2)ar^3$. On the other hand, the diameter of any fundamental segment which intersects $C(x_{0,\ell},r)$ for $\ell\geq\ell(r)$ is bounded by $K_{\ref{lem:2xp,1}} K_{\text{arc}} r^3$ in view of Lemmas~\ref{lem:2xp,1} and~\ref{lem:31na,1}. Then $k_0 \leq a^{-1}K_{\ref{lem:2xp,1}} K_{\text{arc}}+2$ and $N_{\ref{lem:31np,2}}$ is that bound increased by $1$. \end{proof} Results on sections are summarized by the following Proposition. \begin{prop}\label{prop:3xp,1} There exist $R_{\ref{prop:3xp,1}}>0$, $Q_{\ref{prop:3xp,1}}<\infty$ and an integer constant $M_{\ref{prop:3xp,1}}$ with the following property. For every $r :\: 0<r<R_{\ref{prop:3xp,1}}$ there is $\ell(r)<\infty$ and if $\ell\geq\ell(r)$, then the set $X_{\sigma,\ell}(r) = X_{+,\sigma,\ell}(r)\cup X_{-,\sigma,\ell}(r)$ is \begin{itemize} \item covered by fundamental segments in $\partial\Omega_{\ell}\cap\HH_{\sigma}$ with orders that vary by no more than $M_{\ref{prop:3xp,1}}$, \item contained in a Euclidean disk of radius $Q_{\ref{prop:3xp,1}} r^3$. \end{itemize} \end{prop} \subparagraph{Proof of Proposition~\ref{prop:3xp,1}.} For definiteness, let $v \in X_{-,\sigma,\ell}(r)$ belong to a fundamental segment of order $n$. By Lemma~\ref{lem:27va,1} we find $\hat{v}$ in a fundamental segment of order $n+1$ in $\partial\Omega_{+,\ell}$ with \[ |v-\hat{v}| \leq K_{\ref{lem:2xp,1}} K_{\ref{lem:27va,1}} r^3 \] provided that $\ell\geq\ell(r)$, by Lemma~\ref{lem:2xp,1}. Again, by making $r$ small we can ensure that $|\hat{v}-x_{0,\ell}|>\frac{r}{2}$. Then we use Lemma~\ref{lem:28na,2} with $\epsilon=a$. What we get is that when $k> \frac{K_{\ref{lem:2xp,1}} K_{\ref{lem:27va,1}}}{2a}+1$, then $|\mathbf{G}_{\ell}^{k}(\hat{v})-x_{0,\ell}| > r$ while $|\mathbf{G}_{\ell}^{-k}(\hat{v})-x_{0,\ell}| < r$ provided $r$ is small enough depending on $k$ and $\ell$ is large enough depending on $k,r$. In any case, a fundamental segment in $\partial \Omega_{+,\ell}$ with order between $n-k$ and $n+k+2$ intersects $C(x_{0,\ell},r)$. In other words, $X_{+,\sigma,\ell}(r)$ intersects a fundamental segment whose order differs from $n$ by no more than $k+2$.
Now the first claim of Proposition~\ref{prop:3xp,1} follows from Lemma~\ref{lem:31np,2}. The second claim is derived from Lemma~\ref{lem:2xp,1}, since $X_{\sigma,\ell}(r)$ can be connected by a bounded number of fundamental segments and the interval from $v$ to $\hat{v}$. \subsection{K\"{o}nig's coordinate.} For $\ell<\infty$ the point $x_{+,\ell}$ is an attracting fixed point for $\mathbf{G}_{\ell}^{-2}$ and the basin of attraction contains the entire $\HH_+$. \begin{defi}\label{defi:19np,1} The {\em K\"{o}nig coordinate} $\mathfrak{k}_{\pm,\ell}$ is the univalent map from $\HH_{\pm}$ into $\CC$ given by \[ \mathfrak{k}_{\pm,\ell}(u) = \lim_{n\rightarrow\infty}\bigl[ \left(DG^2_{\ell}(x_{\pm,\ell})\right)^{n} \left(\mathbf{G}_{\ell}^{-2n}(u) - x_{\pm,\ell}\right) \bigr] .\] \end{defi} It is worth noting that since $x_{\pm,\ell}$ form a cycle under $G_{\ell}$, we get \[ DG^2_{\ell}(x_{\pm,\ell}) = DG_{\ell}(x_{+,\ell}) DG_{\ell}(x_{-,\ell}) = \left| DG_{\ell}(x_{\pm,\ell})\right |^2 \] since $DG_{\ell}(x_{+,\ell}) = \overline{DG_{\ell}(x_{-,\ell})}$. We get the functional equation for $u \in \HH_{\pm}$ \[ \mathfrak{k}_{\mp,\ell} \circ \mathbf{G}^{-1}_{\ell}(u) = \left(DG_{\ell}(x_{\pm,\ell})\right)^{-1} \mathfrak{k}_{\pm,\ell}(u)\; . \] Our interest is in the behavior of K\"{o}nig's coordinate in $D(x_{0,\infty},R_{\text{analytic}})\setminus\Omega_{\ell}$. Recall the arc $\mathfrak{w}_{\ell}$ which joins $x_{\pm,\ell}$ to $x_{0,\ell}$, is the common boundary component of $\Omega_{\pm,\ell}$, and is invariant under $G_{\ell}$. It is convenient to restrict the domain of $\mathfrak{k}_{\pm,\ell}$ to $\HH_{\pm}\setminus\mathfrak{w}_{\ell}$ and then take the logarithm $\log \mathfrak{k}_{\pm,\ell}$, which will map into some horizontal strip of width $2\pi$. Set \begin{equation}\label{equ:2qp,1} t_{\ell} := \frac{\log \left|DG_{\ell}(x_{\pm,\ell})\right|^2}{\log\tau_{\ell}^2} . \end{equation} Then the functions $\psi_{\ell} = \log H_{\ell}$ and $\psi_{\ell} = t_{\ell}^{-1}\log \mathfrak{k}_{\pm,\ell}$ satisfy the same functional equation \begin{equation}\label{equ:4qa,1} \psi_{\ell} \circ \mathbf{G}_{\ell}^{-2}(u) = \psi_{\ell}(u) - \log\tau^2_{\ell} ; \end{equation} indeed, applying the functional equation for $\mathfrak{k}$ twice gives $\log\mathfrak{k}_{\pm,\ell}\circ\mathbf{G}_{\ell}^{-2} = \log\mathfrak{k}_{\pm,\ell} - \log\left|DG_{\ell}(x_{\pm,\ell})\right|^2 = \log\mathfrak{k}_{\pm,\ell} - t_{\ell}\log\tau_{\ell}^2$. \paragraph{Repelling Fatou coordinate.} When $\ell=\infty$ the K\"{o}nig coordinate is replaced with the exponential of the repelling Fatou coordinate denoted by $\mathfrak{k}_{\pm,\infty}$. The functional equation is $\log\mathfrak{k}_{\pm,\infty}\circ\mathbf{G}^{-2}_{\infty} = 1 + \log \mathfrak{k}_{\pm,\infty}$. In that case we put \[ \psi_{\infty}:=-\log\tau_{\infty}^2\,\log\mathfrak{k}_{\pm,\infty} \] and equation~(\ref{equ:4qa,1}) will be satisfied. We will speak of a {\em generalized} K\"{o}nig coordinate to include this case. \paragraph{Estimates of the variation of the generalized K\"{o}nig coordinate.} We will write $d_{\pm,\ell}$ for the hyperbolic metric of $\hat{\HH}_{\pm,\ell}$. \begin{lem}\label{lem:28na,1} For every $R>0$ there are $\ell_{\ref{lem:28na,1}}<\infty$ and $K_{\ref{lem:28na,1}}(R)<\infty$ for which the following statement holds true for every $\ell\geq\ell_{\ref{lem:28na,1}}$. Fix a fundamental segment in the boundary of $\Omega_{\ell}\cap\HH_{\pm}$ and denote its endpoints by $u_1,u_2$.
Let $\Delta$ be a disk which contains that fundamental segment and whose hyperbolic diameter is less than $R\cdot d_{\pm,\ell}(u_1,u_2)$. Then, whenever $z_1,z_2\in\Delta$, \[ t_{\ell}^{-1}\bigl| \log\mathfrak{k}_{\pm,\ell}(z_1)-\log\mathfrak{k}_{\pm,\ell}(z_2)\bigr| < K_{\ref{lem:28na,1}}(R) .\] \end{lem} \begin{proof} $\ell_{\ref{lem:28na,1}}$ should be chosen so that for every $\ell\geq\ell_{\ref{lem:28na,1}}$ the hyperbolic diameter of the fundamental arcs of orders $2,3$ in the boundary of $\Omega_{\ell}$ is bounded by some $K$. Then the same bound holds for all orders, since $\mathbf{G}_{\ell}^{-2}$ is a hyperbolic contraction. Thus, the hyperbolic diameter of $\Delta$ is bounded by $KR$ and so the distortion of $\log \mathfrak{k}_{\pm,\ell}$ is bounded on $\Delta$ in terms of $R$. Let $u_1$ and $u_2$ be the endpoints of the fundamental segment. Then, \[ \bigl|\log\mathfrak{k}_{\pm,\ell}\left(z_1\right) - \log\mathfrak{k}_{\pm,\ell}\left(z_2\right) \bigr| \leq K_1(R) \left|\log\mathfrak{k}_{\pm,\ell}(u_1)-\log\mathfrak{k}_{\pm,\ell}(u_2)\right| = K_1(R)t_{\ell}\log\tau_{\ell}^2 \] where the last equality follows from formula~(\ref{equ:2qp,1}). \end{proof} \paragraph{The filler map.} $\mathfrak{k}_{\pm,\ell}$ is defined up to a multiplicative constant. Let us choose it so that $t_{\ell}^{-1}\log\mathfrak{k}_{\pm,\ell}$ and $\log H_{\ell}$ are equal at an endpoint of the arc of order $2$ in the boundary of $\Omega_{\ell}$. Then we get: \begin{coro}\label{coro:8xp,1} There exists $K_{\ref{coro:8xp,1}}$ such that for every $\ell\geq\ell_{\ref{lem:28na,1}}$ and every point $u$ in the boundary of $\Omega_{\ell}$, \[ \left| t_{\ell}^{-1}\log\mathfrak{k}_{\pm,\ell}(u) - \log H_{\ell}(u)\right| \leq K_{\ref{coro:8xp,1}} .\] \end{coro} This follows from Lemma~\ref{lem:28na,1}, since it is enough to establish the estimate for $u$ in arcs of orders $2$ and $3$. \begin{lem}\label{lem:31np,3} There are $R_{\ref{lem:31np,3}}>0$ and $K_{\ref{lem:31np,3}}<\infty$, and for every $r :\: 0<r\leq R_{\ref{lem:31np,3}}$ one can choose $\ell_{\ref{lem:31np,3}}(r)<\infty$ with the following property. For every $\ell :\: \ell_{\ref{lem:31np,3}}(r) \leq\ell\leq\infty$ there is an arc which contains the set $X_{\sigma,\ell}(r)$, $\sigma=\pm$, cf. Proposition~\ref{prop:3xp,1}, such that at every two points of this arc the values of $t_{\ell}^{-1}\log\mathfrak{k}_{\sigma,\ell}$ differ by no more than $K_{\ref{lem:31np,3}}$. \end{lem} \begin{proof} By Proposition~\ref{prop:3xp,1} the convex hull of $X_{\sigma,\ell}(r)$ on the circle $C(x_{0,\ell},r)$ is contained in a Euclidean disk $\hat{\Delta}$ of radius $2Q_{\ref{prop:3xp,1}} r^3$. On the other hand, if $u\in X_{\sigma,\ell}(r)$ then $|u-\mathbf{G}_{\ell}^{-2}(u)|\geq \frac{a}{2} r^3$ provided that $r$ is sufficiently small and $\ell$ large enough depending on $r$; to see this, refer to Lemma~\ref{lem:28na,2} with $\epsilon=\frac{a}{2}$. Additionally, the distance from $u$ to $\RR$ is bigger than $r/2$ under the same conditions on $r,\ell$. Hence, $t_{\ell}^{-1}\log\mathfrak{k}_{\sigma,\ell}$ maps $\hat{\Delta}$ with uniformly bounded distortion and the claim follows as in Lemma~\ref{lem:28na,1}. \end{proof} \subsection{The drift integral.} Let us recall the fundamental annulus $A_{\ell}$, cf. Definition~\ref{defi:3hp,2}. The drift integral is \[ \vartheta(\ell) = -\frac{1}{\log\tau_{\ell}} \Re \int_{A_{\ell}} \log\frac{H_{\ell}(u)}{u}\, \gamma_{\ell}(u)\, d\Leb_2(u) ,\] cf. Lemma 3.2 in~\cite{leswi:limit} and Definition~\ref{defi:3hp,2}.
The function $\log\frac{H_{\ell}(u)}{u}$ is bounded in $A_{\ell}$ except in neighborhoods of $x_{0,\ell}$. Its growth there can be controlled by the functional equation $H_{\ell}\circ G_{\ell} = \tau^{-2}_{\ell} H_{\ell}$. This shows that for $\ell$ finite the magnitude of that function exceeds $M$ on sets which are exponentially small in $M$ and hence the drift integral is well-defined. For $\ell=\infty$ this argument breaks down and the drift integral is not defined; see Proposition 3.5 in~\cite{leswi:measure}. Our goal is to prove the following result. \begin{theo}\label{theo:7np,1} There exists a finite limit \[ \lim_{\ell\rightarrow\infty} \vartheta(\ell) = -\frac{1}{\log\tau_{\infty}}\lim_{r\rightarrow 0^+} \Re \int_{A_{\infty}\setminus D(x_{0,\infty},r)} \log\frac{H_{\infty}(u)}{u}\gamma_{\infty}(u)\, d\Leb_2(u) .\] \end{theo} For all $\ell\geq \ell(r)$ the complement of $D(x_{0,\infty},r)$ meets the domains of only finitely many branches of $T_{\ell}$. For that reason the functions $\log\frac{H_{\ell}(u)}{u} \cdot \chi_{A_{\ell}\setminus D(x_{0,\infty},r)}$ are bounded uniformly with respect to $\ell$ and converge pointwise to $\log\frac{H_{\infty}(u)}{u} \cdot \chi_{A_{\infty}\setminus D(x_{0,\infty},r)}$. By the Lebesgue dominated convergence theorem and Theorem~\ref{theo:28mp,1} \begin{multline}\label{equ:17np,1} \lim_{\ell\rightarrow\infty} -\frac{1}{\log\tau_{\ell}} \Re \int_{A_{\ell}\setminus D(x_{0,\infty},r)} \log\frac{H_{\ell}(u)}{u}\gamma_{\ell}(u)\, d\Leb_2(u) =\\ -\frac{1}{\log\tau_{\infty}} \Re \int_{A_{\infty}\setminus D(x_{0,\infty},r)} \log\frac{H_{\infty}(u)}{u}\gamma_{\infty}(u)\, d\Leb_2(u) .\end{multline} \paragraph{Stokes' formula.} We can assume $r<R_{\text{analytic}}$, cf. Theorem~\ref{theo:28mp,1}, and hence all $\gamma_{\ell}$ are analytic. \begin{defi}\label{defi:10np,1} Let $\psi_{\ell}(u)$ satisfy \begin{equation}\label{equ:7np,2} \partial_{\overline{u}} \psi_{\ell}(u) = \gamma_{\ell}(u), \end{equation} normalized so that the linear part at $x_{0,\infty}$ is $\gamma_{\ell}(x_{0,\infty})\overline{u-x_{0,\infty}}$. \end{defi} Stokes' formula is $\int_D F(u)\gamma_{\ell}(u)\, d\Leb_2(u) = \frac{1}{2\iota} \int_{\partial D} F(u)\psi_{\ell}(u)\, du$ for $F$ holomorphic in $D$ and continuous up to the closure; indeed, $d\left( F\psi_{\ell}\, du\right) = \partial_{\overline{u}}\left(F\psi_{\ell}\right)\, d\overline{u}\wedge du = F\gamma_{\ell}\, d\overline{u}\wedge du = 2\iota\, F\gamma_{\ell}\, d\Leb_2$. \begin{defi}\label{defi:10xp,1} For every $\ell$ including $\infty$ define $\Phi_{\ell}$ on some fixed neighborhood of $x_{0,\infty}$ by \[ \Phi_{\ell}(z) := \left\{ \begin{array}{ccc} \log \frac{H_{\ell}(z)}{z} &\text{if}& z\in\Omega_{\ell}\\ t_{\ell}^{-1} \log\mathfrak{k}_{\pm,\ell}(z) - \log z& \text{if} & z\notin \Omega_{\ell} \end{array}\right. \] \end{defi} Then, let us also define \begin{multline}\label{equ:10xp,1} \Theta_{\ell}(r) := \Re \left[ \frac{\iota}{2} \left( \int_{C(x_{0,\infty},r)} \Phi_{\ell}(u)\psi_{\ell}(u)\, du \right.\right.+\\ \left.\left.\int_{\partial\Omega_{\ell}\cap D(x_{0,\infty},r)} \left(\log\frac{H_{\ell}(u)}{u} - \Phi_{\ell}(u)\right)\psi_{\ell}(u)\, du \right) \right] . \end{multline} For $\ell<\infty$ we claim that \begin{equation}\label{equ:11xp,1} \Theta_{\ell}(r) = -\int_{D(x_{0,\infty},r)} \Re\Phi_{\ell}(u)\gamma_{\ell}(u)\,d\Leb_2(u) . \end{equation} Observe first that the singularities of $\Phi_{\ell}$ and $\log H_{\ell}$ at $x_{\sigma,\ell}$, $\sigma=+,-,0$, are logarithmic and therefore integrable. Then take into account that $\Phi_{\ell}$ is discontinuous, and hence Stokes' formula has to be used separately on $\Omega_{\ell}\cap D(x_{0,\infty},r)$ and $D(x_{0,\infty},r)\setminus\Omega_{\ell}$.
The boundaries of those sets can be complicated, but their contributions add up to the integral over $C(x_{0,\infty},r)$ and subtract along $\partial\Omega_{\ell} \cap D(x_{0,\infty},r)$, which corresponds to the second term in formula~(\ref{equ:10xp,1}). For $\ell=\infty$ the convergence of $\Theta_{\infty}$ is not clear and will be shown later. Assuming it holds, in all cases including $\ell=\infty$, we get for $0<\rho<r$ \begin{equation}\label{equ:11xp,2} \Theta_{\ell}(r)-\Theta_{\ell}(\rho) = \int_{\{ u :\: \rho<|u-x_{0,\infty}|<r\}}\Re\Phi_{\ell}(u)\gamma_{\ell}(u)\,d\Leb_2(u) . \end{equation} \begin{prop}\label{prop:10xp,1} For every $\epsilon>0$ there is $r(\epsilon)>0$ and for every $0<r\leq r(\epsilon)$ there is $\ell_{\ref{prop:10xp,1}}(r)<\infty$ such that \[ \forall \ell\;\; \ell_{\ref{prop:10xp,1}}(r)\leq\ell\leq\infty\implies \left| \Theta_{\ell}\left(r\right)\right| < \epsilon .\] This includes the claim that $\Theta_{\infty}(r)$ is convergent. \end{prop} The proof of this Proposition will require some preparatory estimates. \paragraph{Estimates on circles.} \begin{lem}\label{lem:10na,1} For every $r>0$ there is $\ell(r)<\infty$ so that for all $\ell(r)\leq\ell\leq\infty$ and $u :\: |u-x_{0,\infty}|=r$ \[ \left|\psi_{\ell}(u) - \gamma_{\ell}(x_{0,\infty}) \overline{u-x_{0,\infty}}\right| \leq O_{\psi}(r^2) \] where $O_{\psi}(r^2)$ is independent of $\ell$ and $\limsup_{r\rightarrow 0^+} r^{-2} O_{\psi}(r^2) < \infty$. \end{lem} \begin{proof} Change variables to $z := u-x_{0,\infty}$. By Definition~\ref{defi:10np,1}, $\psi_{\ell}(z) = \gamma_{\ell}(x_{0,\infty})\overline{z} + \psi_{1,\ell}(z)$ where the linear part of $\psi_{1,\ell}$ vanishes at $z=0$. By the analytic convergence claim of Theorem~\ref{theo:28mp,1}, it follows that $|\psi_{1,\ell}(z)| \leq K_1 |z|^2$ for all $\ell$ sufficiently large and $z$ in a fixed neighborhood of $0$. Hence, $|\psi_{\ell}(z) - \gamma_{\ell}(x_{0,\infty}) \overline{z}| \leq K_1 |z|^2$. \end{proof} \begin{lem}\label{lem:11na,1} There exist a function $o_{\text{Fatou}}(r^{-2})$ with $\lim_{r\rightarrow 0^+} r^2\, o_{\text{Fatou}}(r^{-2}) = 0$ and a positive constant $C_{\text{Fatou}}$ such that \begin{multline*} \forall\epsilon>0\; \exists r(\epsilon)>0\; \forall 0<r<r(\epsilon) \exists \ell(r) < \infty\; \forall \ell(r)\leq\ell\leq\infty\\ \log H_{\ell}(u) = -\frac{C_{\text{Fatou}}}{(u-x_{0,\infty})^2} + o_{\text{Fatou}}(r^{-2}) \end{multline*} for $u :\: |u-x_{0,\infty}|=r,\, -\pi +\epsilon<\arg (u-x_{0,\infty})^2<\pi-\epsilon$. In particular, it holds for $u$ in $\Omega_{\ell}$. \end{lem} \begin{proof} For $\ell=\infty$ recall Fact~\ref{fa:18ha,1} by which the arc of values of $u$ in the claim of the Lemma is a compact subset of $\Omega_{\infty}$. Then the claim of the Lemma follows from the form of the Fatou coordinate, with $C_{\text{Fatou}} = -\frac{3\log \tau_\infty^2}{D^3 G_{\infty}(x_{0,\infty})}$. For $\ell$ finite, write \[ \log H_{\ell}(u) = \log H_{\infty} \left(\phi_{\pm,\infty}^{-1}\circ\phi_{\pm,\ell}(u)\right) .\] When $r$ has been fixed, the composition in parentheses goes to the identity uniformly on a neighborhood of the arc $u :\: |u-x_{0,\infty}|=r,\, -\pi +\epsilon<\arg (u-x_{0,\infty})^2<\pi-\epsilon$ by Proposition~\ref{prop:11jp,1}. Hence, by choosing $\ell(r)$ large enough we can make $|\log H_{\infty}(u)-\log H_{\ell}(u)|$ smaller than some $o(r^{-2})$.
\end{proof} \begin{lem}\label{lem:11na,2} For some $K_{\ref{lem:11na,2}}<\infty$ and every $r>0$ there is $\ell(r)<\infty$ such that for all $\ell :\: \ell(r)\leq\ell\leq\infty$ and $u\in \Omega_{\ell} \cap C(x_{0,\infty},r)$ the estimate $|\log H_{\ell}(u)| \leq K_{\ref{lem:11na,2}} r^{-2}$ holds. \end{lem} \begin{proof} Under $\phi^{-1}_{\pm,\infty}$ vertical lines are mapped to arcs which tend to $x_{0,\infty}$ with tangents at angle $\pi/4$ with the real line. Hence, for $L_1$ large enough and positive, the preimage of $L_1+\iota\RR$ is in the domain of the repelling Fatou coordinate of $G_{\infty}$. Its image by the repelling Fatou coordinate is contained in some right half-plane $\Re z > L_2$, $L_2>0$. For $n>0$ the image of $L_1+n\log\tau_{\infty}^2+\iota\RR$ is contained in $\Re z > L_2 + n$. From the asymptotics of the repelling Fatou coordinate, the preimage of $L_1+n\log\tau_{\infty}^2+\iota\RR$ by $H_{\infty}$ is contained in $D(x_{0,\infty},Kn^{-1/2})$. Hence the desired estimate for $\ell=\infty$ follows on the set $\phi_{\pm,\infty}^{-1}\left(\{ u :\: \Re u> L_1+\log\tau_{\infty}^2\}\right)$. For $\ell$ large enough it is then derived from Proposition~\ref{prop:11jp,1}. On the other hand, when $\Re u$ is bounded, for $\ell$ sufficiently large the preimage by $\phi_{\pm,\ell}$ is contained in the wedge $|\arg (u-x_{0,\infty})^2| < \frac{3}{4}\pi$ and Lemma~\ref{lem:11na,1} applies with a stronger claim. \end{proof} \begin{lem}\label{lem:10xp,1} For some $K_{\ref{lem:10xp,1}}<\infty$ and every $r>0$ there is $\ell_{\ref{lem:10xp,1}}(r)<\infty$ such that for all $\ell :\: \ell_{\ref{lem:10xp,1}}(r)\leq\ell\leq\infty$ and $u\in C(x_{0,\infty},r)$ the estimate $|\Phi_{\ell}(u)| \leq K_{\ref{lem:10xp,1}} r^{-2}$ holds. \end{lem} \begin{proof} The term $\log u$ from Definition~\ref{defi:10xp,1} is bounded and can be ignored. Now in view of Lemma~\ref{lem:11na,2} the estimate needs to be established for $u$ outside of $\Omega_{\ell}$. But then the difference between $t_{\ell}^{-1}\log\mathfrak{k}_{\pm,\ell}(u)$ and $\log H_{\ell}(u)$ at a point of $X_{\pm,\ell}(r)$, cf. Proposition~\ref{prop:3xp,1}, is uniformly bounded by Lemma~\ref{lem:31np,3} and Corollary~\ref{coro:8xp,1}. \end{proof} \begin{lem}\label{lem:11np,1} In the setting of Proposition~\ref{prop:10xp,1}, for every $\epsilon>0$, $0<r\leq r(\epsilon)$ and $\ell\geq\ell(r)$ \[ \left| \Re \left[ \frac{\iota}{2}\int_{C(x_{0,\infty},r)} \Phi_{\ell}(u)\psi_{\ell}(u)\,du \right] \right| < \frac{\epsilon}{2} .\] \end{lem} \begin{proof} We pick an $\eta>0$ having in mind the statement of Lemma~\ref{lem:11na,1} and then split the circle $C(x_{0,\infty},r)$ into the arcs $C_{\pm}$, which are contained in the sector $u :\: |u-x_{0,\infty}|=r,\, -\pi +\eta<\arg (u-x_{0,\infty})^2<\pi-\eta$ and in $\Omega_{\ell}$, and the arcs $c_{\pm}$ which form the rest. The angular measure of $c_{\pm}$ does not exceed $\eta$. If we take into account that $|\psi_{\ell}(u)|\leq K_1|u-x_{0,\infty}|$ by Lemma~\ref{lem:10na,1} and combine with the estimate of Lemma~\ref{lem:10xp,1}, both holding when $\ell\geq\ell(r)$, then for such $\ell$ \begin{equation}\label{equ:15np,3} \bigl| \int_{c_{\pm}} \Phi_{\ell}(u) \psi_{\ell}(u)\, du \bigr| \leq K_1 \eta . \end{equation} We want to have $K_1 \eta(\epsilon) = \frac{\epsilon}{6}$ which sets a value $\eta(\epsilon)$. Now we pass to estimating the integral along $C_{\pm}$. We will rely on Lemma~\ref{lem:11na,1} which requires $r < r(\eta(\epsilon)) := r(\epsilon)$.
Then, by Lemma~\ref{lem:11na,1}, \begin{multline}\label{equ:15np,1} \bigl| \Re \left[ \iota\int_{C_{\pm}} \log H_{\ell}(u) \psi_{\ell}(u)\, du \right] \bigr| \leq \\\bigl| \Re \left[ \iota\int_{C_{\pm}} \gamma_{\ell}(x_{0,\infty})C_{\text{Fatou}} \frac{\overline{z}}{z^2} \, dz \right] \bigr| + C_{\text{Fatou}} r^{-1} O_{\psi}(r^2) + K_2 r^2 o_{\text{Fatou}}(r^{-2}) .\end{multline} The residual terms in estimate~(\ref{equ:15np,1}) tend to $0$ as $r\rightarrow 0^+$ and, by taking $r(\epsilon)$ sufficiently small, we can ensure that they add up to less than $\frac{\epsilon}{3}$. The main term is evaluated directly: \begin{equation}\label{equ:15np,2} \Re \left[ \iota \int_{C_{\pm}} \frac{\overline{z}}{z^2} \, dz \right] = 2\Re \left[ \frac{1}{2\iota} \exp(-2\iota\theta) \Big|_{\theta_1}^{\theta_2} \right] \end{equation} where $z=r\exp(\iota\theta)$ and $\theta_1$, $\theta_2$ are of the form $\pm\left(\frac{\pi}{2} - \frac{\eta}{2}\right)$. Inserting $\pm\frac{\pi}{2}$ for $\theta_1,\theta_2$ results in a purely real difference and hence zero contribution to the real part of the main integral. What remains has absolute value bounded by $\eta$. So, this time by possibly decreasing $\eta$, we get less than $\frac{\epsilon}{6}$. This, together with estimates~(\ref{equ:15np,3}) and~(\ref{equ:15np,1}), yields the claim of the Lemma. \end{proof} \paragraph{Length of the boundary arcs.} Let us write $w(\sigma,s,\ell)$ for $\partial{\Omega}_{\sigma,\ell}\cap \HH_{s}$ where $\sigma,s$ can be any combination of $+,-$. For $r>|x_{+,\ell}-x_{0,\ell}|$ we will write $w_r(\sigma,s,\ell)$ for the smallest connected subarc of $w(\sigma,s,\ell)$ which touches $x_{s,\ell}$ and contains $w(\sigma,s,\ell)\cap D(x_{0,\ell},r)$. \begin{lem}\label{lem:3qp,1} For every $\varepsilon>0$ there exist $\ell(\varepsilon)<\infty$ and $r(\varepsilon)>0$ so that for every $\ell\geq\ell(\varepsilon)$ and $\sigma,s=\pm$, we get $r(\varepsilon)>|x_{\pm,\ell}-x_{0,\ell}|$ and the Euclidean length $|w_{r(\varepsilon)}(\sigma,s,\ell)|<\varepsilon$. \end{lem} \begin{proof} We start by observing that the length of the basic arc $G_{\pm,\ell}^{-1}[y_{\ell},0)$ is uniformly bounded for all $\ell$ sufficiently large. That arc is the preimage under $\phi_{\sigma,\ell}$ of the horizontal ray $x+\iota\pi :\: -\infty<x<\log |y_{\ell}|$. For $\ell=\infty$ it is an analytic arc of finite length. As $\ell\rightarrow\infty$, the maps $\phi^{-1}_{\sigma,\ell}$ converge uniformly to $\phi^{-1}_{\sigma,\infty}$, together with the derivatives, by Cauchy estimates. Then $w(\sigma,s,\ell)$ is formed by taking images under the inverse map $\mathbf{G}_{\ell}^{-1}$. These mappings all have uniformly bounded distortion for $\ell$ large enough and hence we can estimate the length by taking the sum of absolute values of the derivatives $D_z\mathbf{G}_{\ell}^{-n}\left(G_{\pm,\ell}^{-1}(y_{\ell})\right)$. The requisite estimates are provided by Lemma~\ref{lem:30hp,2}. The point $z$ in that Lemma will be chosen as \begin{equation}\label{equ:2fa,1} z(\sigma,s,\ell):=\mathbf{G}_{\ell}^{-n(\sigma,s)}\left(G_{s',\ell}^{-1}(y_{\ell})\right) \end{equation} where $s'=\pm$ and is equal to $s$ if and only if $n(\sigma,s)$ is even. This will do for $n(\sigma,s)$ and $\ell$ large enough, since $z$ needs to be close enough to $x_{s,\ell}$ and then the condition on the argument of $z-x_{s,\ell}$ is also satisfied by Lemma~\ref{lem:18ha,1}. Then Lemma~\ref{lem:30hp,2} specifies $k(z(\sigma,s,\ell),\ell)$ which will be written as $k(\sigma,s,\ell)$.
First look at the estimate for $\bigl|D_z\mathbf{G}_{\ell}^{-k}\left(z(\sigma,s,\ell)\right)\bigr|$ for $k\geq k(\sigma,s,\ell)$. Recall that $\lim_{\ell\rightarrow\infty} \rho_{\ell} = 0$. Take $\eta=\frac{1}{4}$, while $r$ in Lemma~\ref{lem:30hp,2} can be fixed since $z(\sigma,s,\ell)$ is given by formula~(\ref{equ:2fa,1}). For $\rho_{\ell}$ small enough, $(1+\rho_{\ell})^{-\frac{1}{8}} < 1-\frac{\rho_{\ell}}{9}$ and hence the sum of those derivatives is bounded by $9L\sqrt{\rho_{\ell}}$, which can be made arbitrarily small by taking $\ell$ large enough. For $\hat{k}<k(\sigma,s,\ell)$ the sum of the derivatives of iterates between $\hat{k}$ and $k(\sigma,s,\ell)$ is bounded by $\frac{L'}{\sqrt{\hat{k}}}$. It remains to show that as $r\rightarrow 0$ in the statement of the present Lemma, $\mathbf{G}_{\ell}^{-2k}\left(z(\sigma,s,\ell)\right) \in D(x_{0,\ell},r)$ implies $k\geq \hat{k}(r)$ and $\hat{k}(r)$ can be made as large as needed by making $r$ small. This is indeed so, since the hyperbolic distance between $z(\sigma,s,\ell)$ and $\mathbf{G}_{\ell}^{-2}\left(z(\sigma,s,\ell)\right)$ is fixed and then shrunk by iterates. Finally, the condition $r(\varepsilon)>|x_{\pm,\ell}-x_{0,\ell}|$ can be satisfied by again specifying $\ell(\varepsilon)$ sufficiently large. \end{proof} \paragraph{Proof of Proposition~\ref{prop:10xp,1}.} In view of Lemma~\ref{lem:11np,1} it remains to estimate \[ \int_{\partial\Omega_{\ell}\cap D(x_{0,\infty},r)} \left( \log\frac{H_{\ell}(u)}{u} - \Phi_{\ell}(u)\right)\psi_{\ell}(u)\, du .\] The integrand is uniformly bounded for all $\ell$ sufficiently large by Corollary~\ref{coro:8xp,1} and Lemma~\ref{lem:10na,1}. The length of $\partial\Omega_{\ell}\cap D(x_{0,\infty},r)$ can be made arbitrarily small, for all $\ell$ large enough including $\infty$, by making $r$ small, by Lemma~\ref{lem:3qp,1}. Proposition~\ref{prop:10xp,1} has been established. \paragraph{Integral of $\Phi_{\ell}$ outside of $A_{\ell}$.} \begin{prop}\label{prop:11xp,1} For every $\epsilon>0$ there is $r(\epsilon)>0$ and for every $0<r\leq r(\epsilon)$ there is $\ell_{\ref{prop:11xp,1}}(r)<\infty$ such that \[ \forall \ell\;\; \ell_{\ref{prop:11xp,1}}(r)\leq\ell\leq\infty\implies \int_{D(x_{0,\infty},r)\setminus A_{\ell}} \left|\Re\Phi_{\ell}(u)\right|\gamma_{\ell}(u)\,d\Leb_2(u) < \epsilon .\] \end{prop} \paragraph{Proposition~\ref{prop:11xp,1} for the complement of $\Omega_{\ell}$.} Let $W_{\pm,\ell} := \HH_{\pm} \setminus \overline{\Omega}_{\ell}$. For $r$ small and $\ell$ large given $r$, $D(x_{0,\infty},r)\setminus A_{\ell}$ contains two sets \[ W_{\pm,\ell}(r) := D(x_{0,\infty},r) \cap W_{\pm,\ell} .\] We will prove the estimate of Proposition~\ref{prop:11xp,1} first for these sets, starting with the case of $\ell=\infty$. Under the repelling Fatou coordinate $W_{\pm,\infty}$ is a strip of bounded horizontal width. Thus the measure of $W_k = \left\{ u\in W_{\pm,\infty}(r) :\: k-1 \leq |t_{\infty}^{-1}\log\mathfrak{k}_{\pm,\infty}(u)| \leq k\right\}$ is $O(k^{-3})$. Thus, \[ \left|\int_{W_{\pm,\infty}(r)} \Phi_{\infty}(u)\gamma_{\infty}(u)\,d\Leb_2(u)\right| \leq K \sum_{k\geq k(r)} k^{-2} \] where $k(r)$ is the smallest $k$ for which $W_k$ is non-empty. Since $\lim_{r\rightarrow 0^+} k(r) = \infty$ this can be made less than $\epsilon/2$ by taking $r(\epsilon)$ small.
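Let us sketch the bound $\Leb_2(W_k) = O(k^{-3})$ used above, assuming the quadratic asymptotics of the repelling Fatou coordinate near $x_{0,\infty}$, cf. Lemma~\ref{lem:11na,1}. If $w$ denotes the value of the coordinate, then $w \approx -C_{\text{Fatou}}\,(u-x_{0,\infty})^{-2}$, hence $u-x_{0,\infty} \approx \left(-C_{\text{Fatou}}/w\right)^{1/2}$ and \[ \left| \frac{du}{dw} \right|^2 \asymp |w|^{-3} . \] Since $W_k$ is the image of a region of bounded area located at $|w|\asymp k$, its Lebesgue measure is indeed $O(k^{-3})$.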
For $\ell$ finite we observe first that, since $\log H_{\ell}$ is real on $\partial{\Omega}_{\ell}$, then by Corollary~\ref{coro:8xp,1} the imaginary part of $t_{\ell}^{-1} \log\mathfrak{k}_{\pm,\ell}$ is bounded on $\partial\Omega_{\ell}$, as well as on the arc of $C(x_{0,\infty},r)$ which joins the components of $\partial\Omega_{\ell}$, by Lemma~\ref{lem:31np,3}. By the maximum principle for harmonic functions the imaginary part is thus bounded on $W_{\pm,\ell}(r)$ for $\ell$ large enough depending on $r$. The real part, on the other hand, by the functional equation is just the exit time from $W_{\pm,\ell}(r)$ under $G_{\ell}$ up to constants. In the notations of Section~\ref{sec:4qa,1}, \[ \left| \Re t_{\ell}^{-1}\log\mathfrak{k}_{\pm,\ell}(u) \right| \leq K_2 \mathfrak{E}_{\text{par},\ell}(\tau_{\ell}^{-1}u) + K_3 .\] The integral of this over $u \in W_{\pm,\ell}(r) :\: |\Re t_{\ell}^{-1}\log\mathfrak{k}_{\pm,\ell}(u)| \geq k(r)$ tends to $0$ as $k(r)\rightarrow\infty$ by Proposition~\ref{prop:21ma,1}, and since $\lim_{r\rightarrow 0^+} k(r) = \infty$ as in the case of $\ell=\infty$, the proof is finished. \paragraph{Proposition~\ref{prop:11xp,1} for $\tau_{\ell}^{-1}\Omega_{\ell}$.} The rest of $D(x_{0,\infty},r)\setminus A_{\ell}$ is the set $\tau_{\ell}^{-1}\Omega_{+,\ell} \cap D(x_{0,\infty},r)$. $\log H_{\ell}$ is defined in this sector. This time we will write $w_{\ell}(r) := \tau^{-1}_{\ell} \Omega_{+,\ell} \cap D(x_{0,\infty},r)$. The imaginary part of $\log H_{\ell}$ is bounded by $\pi$ on $w_{\ell}(r)$ and the integrals of the real part of $\log H_{\ell} = \log\tau_{\ell}^2 + \phi_{-,\ell}$ over the sets \[ \mathfrak{Q}(\lambda,\ell) := \left\{u\in\Omega_{-,\ell} :\: \Re \log H_{\ell}(u) < \lambda, |\Im \log H_{\ell}(u)|<\pi\right\} \] tend to $0$ as $\lambda\rightarrow-\infty$, uniformly with respect to $\ell$, cf. Lemma~\ref{lem:12ma,1}. Since for every $\lambda$ there is $r(\lambda)>0$ such that for all $\ell$ $w_{\ell}\left(r(\lambda)\right) \subset \mathfrak{Q}(\lambda,\ell)$, the integral can be made arbitrarily small for all $\ell$ by making $r$ small enough. Proposition~\ref{prop:11xp,1} has been proved. \paragraph{Proof of Theorem~\ref{theo:7np,1}.} \subparagraph{Convergence of the right-hand side.} From formula~(\ref{equ:11xp,2}) and Propositions~\ref{prop:10xp,1} and~\ref{prop:11xp,1} in the case of $\ell=\infty$ we conclude that for every $\epsilon>0$ there is $r_{\infty}(\epsilon)>0$ such that whenever $0<\rho<r<r_{\infty}(\epsilon)$, then \[ \left| \int_{\{ u:\: \rho<|u-x_{0,\infty}|<r\}\cap A_{\infty}} \Re\log\frac{H_{\infty}(u)}{u}\,\gamma_{\infty}(u)\,d\Leb_2(u) \right| < \epsilon .\] Hence, the limit on the right-hand side of the formula in Theorem~\ref{theo:7np,1} exists and will be denoted by $\vartheta_{\infty}$. \subparagraph{Convergence for finite $\ell$.} For finite $\ell$, from the same Propositions and formula~(\ref{equ:11xp,1}) it follows that for every $\epsilon>0$ there is $r(\epsilon)>0$ and for every $0<r\leq r(\epsilon)$ there is $\ell(r)$ such that for all $\ell\geq\ell(r)$ \begin{equation}\label{equ:14xp,1} \left|\vartheta(\ell) + \frac{1}{\log\tau_{\ell}} \int_{A_{\ell}\setminus D(x_{0,\infty},r)} \Re \log\frac{H_{\ell}(u)}{u}\,\gamma_{\ell}(u)\,d\Leb_2(u)\right| <\epsilon .
\end{equation} \subparagraph{The link between finite and infinite $\ell$.} Now take $\rho :\: r(\epsilon)\geq\rho>0$ such that \begin{equation}\label{equ:14xp,2} \left| \vartheta_{\infty} + \frac{1}{\log\tau_{\infty}} \int_{A_{\infty}\setminus D(x_{0,\infty},\rho)} \Re\log\frac{H_{\infty}(u)}{u}\,\gamma_{\infty}(u)\,d\Leb_2(u) \right| < \epsilon . \end{equation} By estimate~(\ref{equ:17np,1}) there is $\hat{\ell}(\rho)<\infty$ such that if $\ell\geq\hat{\ell}(\rho)$, then \begin{multline*} \left| - \frac{1}{\log\tau_{\ell}} \int_{ A_{\ell}\setminus D(x_{0,\infty},\rho)} \Re\log\frac{H_{\ell}(u)}{u}\,\gamma_{\ell}(u)\,d\Leb_2(u) +\right. \\ \left.\frac{1}{\log\tau_{\infty}} \int_{ A_{\infty}\setminus D(x_{0,\infty},\rho)} \Re\log\frac{H_{\infty}(u)}{u}\,\gamma_{\infty}(u)\,d\Leb_2(u) \right| < \epsilon . \end{multline*} When $\ell\geq\max\left(\ell(\rho),\hat{\ell}(\rho)\right)$, from the estimate above and~(\ref{equ:14xp,1}) we conclude that \[ \left |\vartheta(\ell) + \frac{1}{\log\tau_{\infty}} \int_{ A_{\infty}\setminus D(x_{0,\infty},\rho)} \Re\log\frac{H_{\infty}(u)}{u}\,\gamma_{\infty}(u)\,d\Leb_2(u) \right| < 2\epsilon . \] When~(\ref{equ:14xp,2}) is taken into account, we get $|\vartheta(\ell)-\vartheta_{\infty}| <3\epsilon$, which ends the proof. \subsection{Main Theorems.} Theorem~\ref{m1} follows from the first claim of Theorem~\ref{theo:28mp,1}. The convergence claim in Theorem~\ref{m2} follows from the convergence in Theorem~\ref{theo:28mp,1} and the drift formula of Theorem~\ref{theo:7np,1}.
\section{Introduction} The stellar populations within a galaxy hold information about how that system formed and evolved. The abundances of various chemical elements in the atmospheres of its constituent stars, encoded in the galaxy's integrated light, provide insights into its past. Comparing observations of galaxies to models can reveal these clues and allows for the determination of stellar population properties including age, metallicity, chemical abundances and star formation history, all of which provide details about their formation and evolution. The framework for such models was developed by Tinsley (\citealt{Tinsley68}; \citealt{Tinsley80}), in which the time-evolution of stellar population colours and chemical abundances was predicted and matched to observations. These first models provided the basis of modern evolutionary stellar population synthesis (SPS), which is widely used to fit spectral indices or full spectra of unresolved populations in external galaxies (e.g. \citealt{Bruzual83}; \citealt{Worthey94}; \citealt{Vaz96,Vaz99Model,Vaz2010,Vaz2015}; \citealt{Coelho07}; \citealt{Conroy2012a}). A key component in the generation of SPS models is the stellar library used to convert the predictions of stellar evolutionary calculations, values of surface gravity (log g) and effective temperature ($\textrm{T}_{\textrm{eff}}$) at different metallicities, into spectra. An effective library would contain stars of various evolutionary stages, covering a large range of $\textrm{T}_{\textrm{eff}}$, log g and metallicity (e.g. characterised by [Fe/H]\footnote{[A/B]=$\log[{n(A)/n(B)}]_{*}$ - $\log[{n(A)/n(B)}]_{\odot}$, where $n(A)/n(B)$ is the number abundance ratio of element A relative to element B.}). More recent work has also covered abundance patterns (e.g. [Mg/Fe] in \citealt{Milone2011} and [$\alpha$/Fe] in \citealt{Yan2019}). Stellar libraries can consist of theoretical spectra (e.g. \citealt{Coelho2014}) or observed spectra (e.g. \citealt{SanchezBlazquez2006}). Theoretical spectra have the advantage of covering a wide parameter space and do not suffer from the typical observational limitations. They are, however, limited by the underlying calculations, which rely on multiple simplifying physical assumptions. The treatments of convection, microturbulence, atmospheric geometry and local thermodynamic equilibrium (LTE) all involve choices that limit their accuracy. Several theoretical libraries that cover a wide parameter range have been produced for large spectroscopic surveys or Single Stellar Population (SSP) modelling (\citealt{Coelho05}; \citealt{Coelho2014}; \citealt{Bohlin17}; \citealt{Allende18}). SPS models computed using only theoretical spectra have been used in the literature (e.g. \citealt{Maraston05}; \citealt{Coelho07}, at low and high resolution, respectively). Although observational spectra correctly represent all of the physics and spectral features present in stars, they suffer from observational constraints such as limited wavelength coverage, spectral resolution, atmospheric absorption, emission residuals from the sky and noise. A major issue affecting empirical libraries is the limited parameter space covered, which is unavoidable because spectra are drawn from samples of stars in the vicinity of the solar neighbourhood and will therefore be representative of the Milky Way's chemical evolution.
It is possible to obtain spectra of stars at greater distances with differing chemical abundance patterns, but long exposure times limit these observations to small, bright samples. A historical review of empirical libraries is presented in \cite{Trager2012}. A very popular empirical library is the Medium-resolution Isaac Newton Telescope Library of Empirical Spectra (MILES) (\citealt{SanchezBlazquez2006}), which consists of $\sim$1000 flux-calibrated stars between $3500-7500\,${\AA}. Coverage of empirical libraries in effective temperature and surface gravity is good. However, the abundance patterns sampled in the solar neighbourhood are limited, constraining the range of stellar populations that can be accurately modelled. The abundance patterns of other galaxies, and even of other regions within our Galaxy, are not always the same as those of the solar neighbourhood (e.g. \citealt{Edvardsson93}; \citealt{Holtzman2015}). These empirical libraries, and the SSP models that can be generated from them, are therefore limited. Examples of SSP models computed using empirical stars can be found in \cite{Vaz99Model} and \cite{Vaz2010}. Another approach is to use combinations of empirical and theoretical spectra to increase the wavelength coverage of stellar population models (e.g. \citealt{Bruzual03}; \citealt{Maraston11}). An analysis of the impact of using theoretical or empirical stellar spectra in the generation of stellar population models is presented in \cite{Coelho20}. The elemental abundance patterns of galaxies highlight the time-scales on which their constituent stellar populations were formed. Even moderate resolution spectra contain details that allow for measurements of individual chemical abundances (e.g. R$\sim$2000 in \citealt{Parikh19}, using MaNGA, \citealt{Blanton17}). A useful abundance ratio to measure is [$\alpha$/Fe], because the sources and time-scales of interstellar medium (ISM) enrichment differ between $\alpha$-capture and iron-peak elements. The ISM is polluted with $\alpha$-elements by Type II supernovae on shorter time-scales than iron-peak elements, which mostly originate from Type Ia supernovae. The overabundance of [Mg/Fe], relative to the solar neighbourhood, observed in early-type galaxies (ETGs) is usually attributed to short formation time-scales (e.g. see the review of \citealt{Trager98} and references therein). If one can quantify how stellar spectra are sensitive to elemental abundances, it is possible to build stellar spectral libraries, and therefore SPS models, which contain abundance patterns different from the solar neighbourhood. This is motivated by the different abundance patterns seen in external systems such as ETGs and Dwarf Spheroidal galaxies (dSphs) (e.g. see the review of \citealt{Conroy13} and references therein, in addition to \citealt{Letarte2010}; \citealt{Conroy2014}; \citealt{WortheyTangServen2014}; \citealt{Sen2018}). To account for non-solar abundance patterns in SSP models, a hybrid approach can be taken, in which the predictions from theoretical spectra (calculated from stellar spectral models or fully theoretical SSP models) are combined with the accuracy of empirical spectra. A prediction of how abundances affect spectral lines is applied to empirical spectra to account for different abundance patterns; this is known as a differential correction. These corrections can be performed on specific spectral lines, presented in the form of response functions (e.g. \citealt{Tripicco1995}; \citealt{Korn2005}), or can be calculated for the full spectrum.
SSP models can be generated using a fully empirical library as the base, with differential corrections made from theoretical models to account for different abundance patterns. Some of the first works to take a differential abundance pattern approach in full spectrum SPS modelling were those of \cite{Prugniel07} and \cite{Cervantes07}, followed by \cite{Walcher09}. This work was then expanded by \cite{Conroy2012a}, who calculated the response of SSP spectra to element abundance variations, at fixed metallicity, near the solar value. This method of using the abundance pattern predictions of models can be applied to individual stars in empirical libraries, which are then used to generate SSP spectra (e.g. \citealt{LaBarbera17} for [Na/Fe] variations), or to fully empirical SSP spectra directly from fully theoretical SSP models (e.g. \citealt{Vaz2015} for [$\alpha$/Fe] variations). In this work we build a stellar spectral library of stars whose atmospheric abundances can encompass a range of extragalactic environments. We use state-of-the-art theoretical spectra and apply their abundance predictions to existing empirical MILES stars. The result is a library of semi-empirical star spectra, covering a broad range of stellar parameter space, including [$\alpha$/Fe] variations spanning a larger range, with finer sampling, than the previously computed \cite{Vaz2015} SSP models. Our aim is to produce a database of stars with different abundance patterns, which can then be directly used in the construction of new SSP models. We make the semi-empirical stellar library available for public use in both population synthesis and stellar applications. We chose to base the semi-empirical library on the widely-used MILES empirical library, for which SSP modelling methods already exist. The structure of this paper is as follows. Section~\ref{sec:ModelSpectra} describes the generation and processing of a new theoretical stellar library, for use in stellar population modelling. Section~\ref{sec:ModelTesting} tests this new library through comparisons to other published theoretical libraries. Section~\ref{sec:MILESstars} outlines the underlying empirical MILES stellar library used in the calculations and the stellar parameters that we adopt. Section~\ref{sec:sMILESstars} describes the interpolation to create theoretical MILES stars, plus the differential correction process used in the creation of semi-empirical MILES star spectra with different [$\alpha$/Fe] abundances. Section~\ref{sec:TestingObs} tests the star spectra through comparisons to observations. Section~\ref{sec:Summary} presents our summary and conclusions. \section{Models of Stellar Spectra} \label{sec:ModelSpectra} To address the limitations of using purely empirical stellar spectra in SSP models, we use theoretical spectra with varying abundance patterns. By taking ratios of theoretical spectra and applying them to existing MILES stars, we create a library of semi-empirical MILES star spectra with different [$\alpha$/Fe] abundances that can be used to compute semi-empirical SSPs. This approach, making use of both models and observations, builds upon the work of \cite{LaBarbera17}, combining the accuracy of empirical spectra with the differential abundance pattern predictions of theoretical spectra.
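To fix ideas, the differential correction itself is a simple operation once all spectra share a common wavelength grid and resolution. The following Python sketch (illustrative only; the array names are placeholders rather than our actual pipeline variables) shows the core step: the empirical spectrum is multiplied by the ratio of two theoretical spectra that differ only in the abundance of interest.
\begin{verbatim}
import numpy as np

def differential_correction(flux_empirical,
                            flux_model_enhanced,
                            flux_model_base):
    # All inputs: 1D flux arrays on the same wavelength
    # grid, at the same spectral resolution. The ratio of
    # the enhanced to the base theoretical spectrum carries
    # the differential abundance prediction; applying it to
    # the empirical spectrum yields a semi-empirical star.
    response = flux_model_enhanced / flux_model_base
    return flux_empirical * response

# Toy example (placeholders for real spectra):
wave   = np.linspace(3540.5, 7409.6, 4300)  # MILES-like grid (Angstrom)
f_emp  = np.ones_like(wave)                 # empirical MILES star
f_base = np.ones_like(wave)                 # model at [alpha/Fe] = 0.0
f_enh  = 1.0 - 0.05 * np.exp(-0.5 * ((wave - 5175.0) / 5.0) ** 2)
f_semi = differential_correction(f_emp, f_enh, f_base)
\end{verbatim}
Because only the ratio of the two theoretical spectra enters, systematic errors common to both models largely cancel, which is the motivation for working differentially rather than with absolute model fluxes.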
Using only differential predictions from theoretical spectra has been shown to reproduce observations of abundance pattern effects more accurately than fully theoretical spectra, particularly for wavelengths below $\textrm{Mg}_{\textrm{b}}$ (e.g. see figure 11 of \citealt{Knowles19} or \citealt{Martins07}; \citealt{Bertone08}; \citealt{Coelho2014}; \citealt{Villaume17}; \citealt{Allende18}). This approach requires a theoretical library of stellar spectra, from which abundance pattern predictions are taken. Rather than use an existing library, whose stellar parameter, abundance pattern and wavelength coverage are tailored to a specific application, such as the H band investigated using APOGEE (e.g. \citealt{Zamora2015}), we compute a new, high resolution theoretical stellar spectral library that is specifically designed for this project. In \cite{Knowles19} we tested theoretical spectra from three different groups of modellers, who used different software, and found that the differences between models are smaller than the differences between models and observations. Therefore, for this work we chose a method with which we were most familiar and that achieved good results in the comparisons to observations. Based on the results obtained from testing in \cite{Knowles19}, we follow the calculation method presented in detail in \cite{Mezaros2012} and \cite{Allende18}. This section summarises the computation methods and parameter choices for our new library, covering UV to near infrared wavelengths. \subsection{Computation Method} \label{sec:CompMethod} The production of theoretical stellar spectra requires two main processes: model atmosphere calculation, followed by radiative transfer through the atmosphere to produce an emergent spectrum. The latter requires a synthetic spectrum code together with appropriate opacities, including a list of atomic and molecular absorption transitions and a specification of element abundances. The self-consistent approach would be to exactly match the chemical abundances in both stages of the production. To reduce computation time, a simplification is typically made in which only the dominant sources of opacity are varied in the model atmosphere, whilst more elements are varied in the detailed synthetic spectrum calculation. The model atmospheres used in this project were generated using ATLAS9 (\citealt{Kurucz1993}), for which recently computed opacity distribution functions (ODFs) already existed. These ODFs cover the main sources of line opacity variations in stellar atmospheres, including variations in metallicity, $\alpha$-element and carbon abundances. The ODFs and model atmospheres used in this paper are described in \cite{Mezaros2012}. They are publicly available\footnote{\url{http://research.iac.es/proyecto/ATLAS-APOGEE//}} and were used in the APOGEE analysis pipeline (\citealt{Ana2016}). The $\alpha$-elements we included are: O, Ne, Mg, Si, S, Ca and Ti. The ODFs and model atmospheres used in this work adopt \cite{Asplund2005} solar abundances and a microturbulent velocity of $2\,\mathrm{km\,s^{-1}}$. We note here that this fixed value of microturbulent velocity is only used in the atmospheric model generation. In the spectral synthesis stage we use a microturbulent velocity that is dependent on effective temperature and surface gravity. The model atmospheres consist of 72 plane parallel layers from $\log\tau_{\mathrm{Ross}}=-6.875$ to $+2.00$ in steps of $\Delta\log\tau_{\mathrm{Ross}}=0.125$ (\citealt{Castelli03}).
Alternative model atmosphere calculation methods would include the opacity sampling regimes of both MARCS (\citealt{Gustafsson75}; \citealt{Plez92}; \citealt{Gustafsson08}) and ATLAS12 (\citealt{Kurucz2005a}; \citealt{Castelli2005a}), which have been found to produce similar predictions to ATLAS9 models (e.g. \citealt{bon11}; \citealt{Mezaros2012}; \citealt{Knowles19}). The stellar atmospheres used in this work have metallicities ranging from [M/H]=$-$2.5 to +0.5, for the range of carbon and $\alpha$ abundances presented later in this section, covering a large section of the MILES empirical stellar library. This region is deemed reliable to interpolate within when computing stellar population models, given the distribution of MILES stars (see figure 10 of \citealt{Milone2011}). This reliability is expressed through a Quality Number (Q$_n$), defined in \cite{Vaz2010} and shown in figure 6 of \cite{Vaz2015}. Q$_n$ gives a quantifiable measure of SSP spectra reliability, based on the density of stars around the isochrone locations used in SSP calculations, with higher densities resulting in larger Q$_n$ values. For the radiative transfer stage of this work, we use ASS$\epsilon$T (Advanced Spectrum SynthEsis Tool) (\citealt{Koesterke2009}). ASS$\epsilon$T is a package of Fortran programs providing fast and accurate calculations of LTE and non-LTE spectra from 1{\small D} or 3{\small D} models. Ideally we would calculate cool star models with 3{\small D} geometry and account for NLTE. However, we note that 1{\small D}, LTE modelling normally handles the opacity in more detail than existing 3{\small D} and NLTE codes, which are computationally costly (e.g. \citealt{bon11}). Therefore, we caution that our 1{\small D}, LTE models will be increasingly poorer representations of real stars at lower temperatures, below about 4000{\small K}. Future work might investigate whether more complex models would produce better estimates of differential element responses in the spectra of the coolest stars. To generate a large number of theoretical spectra, each covering a broad wavelength range, we use the 1{\small D}, LTE mode of ASS$\epsilon$T, with the input ATLAS9 atmospheres, to produce a library of synthetic spectra at air wavelengths. Calculations were done in ``ONE-MOD'' mode within ASS$\epsilon$T, with the opacities computed exactly for each model at every atmospheric depth. Several important aspects of the models are summarised below. \begin{itemize} \item{\textbf{Solar Abundances} - To maintain abundance consistency in the computation, we define abundances relative to \cite{Asplund2005} solar abundances in both ATLAS9 and ASS$\epsilon$T.} \item {\textbf{Abundance Definitions} - The models were computed with variable metallicity ([M/H]), $\alpha$-element ([$\alpha$/M]) and carbon ([C/M]) abundances. [M/H] here is defined as: \begin{equation} \textrm{[M/H]}=\log[{n(M)/n(H)}]_{*} - \log[{n(M)/n(H)}]_{\odot}, \label{MHDefEq} \end{equation} where $n(M)$ is the number of nuclei of any individual element with atomic number greater than two (e.g. iron, lithium or potassium), not a summation over all such elements. [M/H] here is therefore a scaled metallicity, in which all metals, apart from the $\alpha$-elements and carbon if they are also non-solar, are scaled by the same factor from the solar mixture (e.g. [M/H]=0.2 implies [Fe/H]=[Li/H]=0.2).
This definition also means that [$\alpha$/M]=[$\alpha$/Fe] and [C/M]=[C/Fe].} \item {\textbf{ODFs} - To avoid complex computation of new ODFs with variable abundances, we generate models on a grid for which ODFs existed. Therefore, we are constrained to generate synthetic spectra on the existing grid points from \cite{Mezaros2012}. These grid points dictate the abundance pattern sampling of the current library.} \item{\textbf{Line lists} - The line lists used in the calculations are described in detail in \cite{Allende18}. In summary, metal and molecular transitions are obtained from Kurucz\footnote{\url{http://kurucz.harvard.edu/}}. Molecules present in the calculation include H$_2$, CH, $\textrm{C}_{2}$, CN, CO, NH, OH, MgH, SiH, and SiO. TiO transitions are only included for stars below 6000{\small K}, as explained in Section~\ref{sec:NewGrids}.} \end{itemize} Models were computed at the grid points described in Section~\ref{sec:NewGrids}. The wavelength range of the models was guided by the starting value of the extended MILES library ($\sim1680\,${\AA}) (\citealt{Vazdekis16}) and the inclusion of the calcium triplet (CaT) features (at 8498, 8542 and $8662\,${\AA}), to allow for investigation of IMF variations in ETGs. The result is a high resolution theoretical library spanning the wavelength range 1680-$9000\,${\AA}. However, for the semi-empirical library, we are limited to producing semi-empirical stellar spectra within the current MILES library wavelength range of 3500-$7500\,${\AA}, as described in Section~\ref{sec:MILESstars}. \subsection{Element Abundance Variation} \label{sec:ElementVariations} The total number of models generated is based on the number of elements varied, their range of variation and the number of steps taken, as well as the sampling in the other stellar parameters. We specify which element groups are varied in each component of the model computation. \begin{itemize} \item \textbf{Model Atmosphere (ATLAS9)} - [M/H], [$\alpha$/M] and [C/M] \item \textbf{Radiative Transfer (ASS$\epsilon$T)} - [X/H], where X can be any element from atomic number 2 to 99 \end{itemize} The variation of elements is driven by the available ODFs and by observations of abundance patterns in external systems (e.g. see \citealt{WortheyTangServen2014}; \citealt{Sen2018}). Therefore, we vary the abundances in the following way. \begin{itemize} \item{\textbf{[M/H]} from $-$2.5 to +0.5 in steps of 0.5 dex, where [M/H] is defined in equation~(\ref{MHDefEq})} \item {[\boldmath{$\alpha$}/\textbf{M}] from $-$0.25 to +0.75 in steps of 0.25 dex (where $\alpha$ = O, Ne, Mg, Si, S, Ca and Ti, to be consistent with the model atmosphere variations)} \item {$[\textbf{\textrm{C}/\textrm{M}}]$ from $-$0.25 to +0.25 in steps of 0.25 dex. Carbon abundance has a large impact on stellar spectra: the balance of C and O is important in the molecular equilibrium of cool stars, and the entire atmospheric structure changes significantly when C/O approaches one, producing carbon stars (\citealt{Mezaros2012}; \citealt{Gonneau2016}). With ODFs computed with carbon variations, it is possible to consistently change carbon in both the model atmosphere and spectral synthesis components.} \end{itemize} Other element variations that could be synthesised and would be useful in stellar population studies include nitrogen and sodium.
However, in this work we present the first stage of this stellar library and focus on $\alpha$ and carbon variations, which are known to have the largest impact on stellar structure and on stellar spectra when changes in their ratios to iron are considered. Considering these two will lead to significant improvements in fitting the spectra of stars and stellar populations. Sodium variations were considered in \cite{LaBarbera17}, at the star and SSP level, for a limited number of models using the same methods described here, albeit with abundance variations made only in the radiative transfer component of the computation. \subsection{Microturbulence} \label{sec:Microturbulence} \begin{figure*} \centering \subfloat{{\includegraphics[width=8.5cm]{Plots/APOGEE_NewEq_OldEq_vturb_v3.pdf}}} \qquad \subfloat{{\includegraphics[width=8.5cm]{Plots/APOGEE_NewEq_2012Eq_vturb_v3.pdf}}} \caption{Left Panel: Microturbulent velocity as a function of log g for the original \cite{DF2016} equation (DF16, red lines) and the modified version of the equation (DF16Mod, black lines) for four values of $\textrm{T}_{\textrm{eff}}$. For $\textrm{T}_{\textrm{eff}}$=5500{\small K}, DF16=DF16Mod. The main difference can be seen at lower temperatures, where DF16Mod avoids dropping to such low values of microturbulent velocity. This modification better represents the trends found in observations (e.g. \citealt{Ramirez2013}, their figure 5). Right Panel: Microturbulent velocity as a function of log g for the modified DF16 equation (DF16Mod, black lines), the \cite{Thygesen2012} equation (T12, blue lines) and the APOGEE equation (APOGEE, green lines). Although the T12 equation appears to follow the linear behaviour of the APOGEE calibration well, problems arise at higher $\textrm{T}_{\textrm{eff}}$, where the equation does not reach the higher values of microturbulent velocity observed at low log g.} \label{fig:DF16_old_vs_new_plus_APOGEE_vs_DF16Mod_vs_T12} \end{figure*} An important parameter in the computation of one dimensional stellar spectra is the microturbulent velocity. Because classical 1{\small D} models cannot fully treat the velocity fields present in stellar photospheres, microturbulence is included to match the observed broadening of spectral lines (e.g. \citealt{Struve34}; \citealt{vanParadijs72}). Treated as motions of mass on scales below the mean free path of photons, microturbulence is usually modelled as a Gaussian velocity distribution, which in turn produces Doppler shifts that mimic the effect of thermal motions. For weak lines, which have typically Gaussian profiles, the effect of microturbulence is to increase the width and reduce the depth of the absorption line, producing no change in equivalent width. However, for stronger, saturated lines, for which absorption can occur in the damping wings of line profiles, microturbulence expands the wavelength range of possible absorption, reducing saturation and therefore increasing the total absorption. The choice of this parameter is therefore important, because it can affect the resulting line-strengths when calculating synthetic spectra. Although the available ODFs, and therefore model atmospheres, were computed at $2\,\mathrm{km\,s^{-1}}$, microturbulent velocity can be varied in ASS$\epsilon$T and therefore we considered the effect of this parameter on the theoretical grid. The effects of microturbulence on the absolute and differential application of theoretical line-strengths are discussed in \cite{Knowles19}.
The results of these tests are summarised here. In general, we found that absolute differences in line-strength indices can be large even for relatively small differences in the adopted microturbulent velocity (of $1\,\mathrm{km\,s^{-1}}$ and $2\,\mathrm{km\,s^{-1}}$). These differences are largest in cool giant spectra, with line-strengths differing by of order 1-$2\,${\AA} for a change of microturbulent velocity from $1\,\mathrm{km\,s^{-1}}$ to $2\,\mathrm{km\,s^{-1}}$. We refer interested readers to section 4.1 of \cite{Knowles19} for more details. We and other authors have shown that, in absolute terms, microturbulence can have a large effect on spectra (\citealt{Conroy2012a}; \citealt{Knowles19}). Therefore, for any absolute application of the model library, careful consideration of this parameter will be important. Two typical options for this parameter, common in previous libraries, are to compute spectra at fixed microturbulent velocity (e.g. \citealt{Conroy2012a}) or to have a varying microturbulent velocity grid dimension (e.g. \citealt{Allende18}). To reduce computation time, but also to incorporate the microturbulent velocity values observed in real stars, we have taken a different approach, in which spectra are computed with different microturbulent velocity values depending on the fundamental stellar parameters $\textrm{T}_{\textrm{eff}}$ ({\small K}) and log g ($\mathrm{cm \,s}^{-2}$). We considered three literature representations of how microturbulent velocity (vturb) varies with the physical parameters of stars. These relations were: \begin{align} \small{\textrm{vturb} (\mathrm{km\,s^{-1}})=2.478-0.325\hspace{2pt}\textrm{log g}} \label{APOGEEvturEq} \end{align} \begin{multline} \small{\textrm{vturb} (\mathrm{km\,s^{-1}})=0.871-2.42\times 10^{-4}(\textrm{T}_{\textrm{eff}}-5700)}\\\small{-2.77\times 10^{-7}(\textrm{T}_{\textrm{eff}}-5700)^{2}-0.356(\textrm{log g}-4)} \label{Thygesen2012Eq} \end{multline} \begin{multline} \small{\textrm{vturb} (\mathrm{km\,s^{-1}})=0.998+3.16\times10^{-4}(\textrm{T}_{\textrm{eff}}-5500)-0.253(\textrm{log g}-4)}\\\small{-2.86\times10^{-4}(\textrm{T}_{\textrm{eff}}-5500)(\textrm{log g}-4)}\small{+0.165(\textrm{log g} -4)^2} \label{DF2016Eq} \end{multline} Equation~(\ref{APOGEEvturEq}) was used by APOGEE (\citealt{Holtzman2015}) and was derived using a calibration subsample of red giants, but did not account for any relationship between ${\textrm{T}}_{\textrm{eff}}$ and vturb. Equation~(\ref{Thygesen2012Eq}) is from \cite{Thygesen2012}, using a sample of 82 red giants in the Kepler field. Although this accounted for both effective temperature and surface gravity effects, it was limited to red giants in a small $\textrm{T}_{\textrm{eff}}$ range ($\approx$4000-5000{\small K}). In the figures we refer to equation~(\ref{Thygesen2012Eq}) as T12. Equation~(\ref{DF2016Eq}), from \cite{DF2016}, was derived using a sample of cool dwarfs and giants in the Hyades cluster and calibrated to the predictions of 3{\small D} models. In the figures below we refer to equation~(\ref{DF2016Eq}) as DF16. In general, based on the observations mentioned above, the behaviour of vturb with $\textrm{T}_{\textrm{eff}}$ and log g satisfies the following criteria: \begin{itemize} \item{vturb is large ($\approx4\,\mathrm{km\,s^{-1}}$) for high $\textrm{T}_{\textrm{eff}}$ ($\approx$6000{\small K}) and low log g ($\approx$2) (figures 7 and 9 in \citealt{Gray2001}; figure 1 in \citealt{Montalban2007}).
This is larger than the values reached by the APOGEE relation and therefore it would be unwise to use that relation for our large parameter space.} \item{vturb is smaller ($\ll4\,\mathrm{km\,s^{-1}}$) and can be as small as $<1\,\mathrm{km\,s^{-1}}$ at lower $\textrm{T}_{\textrm{eff}}$ ($\approx$5000{\small K}) and high log g ($\approx$4.5) (figure 5 in \citealt{Ramirez2013}).} \item {vturb$\approx$2-$3\,\mathrm{km\,s^{-1}}$ at high $\textrm{T}_{\textrm{eff}}$ ($\approx$7500{\small K}) and high log g ($\approx$4.0) (figures 7 and 9 in \citealt{Gray2001}; figure 5 in \citealt{Niemczura2015}; figure 5 in \citealt{Ramirez2013}). Generally this value is lower than that present at high $\textrm{T}_{\textrm{eff}}$ ($\approx$7000{\small K}) and low log g ($\approx$2.5) (figure 1 in \citealt{Montalban2007}), as well as lower than the values present at low $\textrm{T}_{\textrm{eff}}$ and low log g (figure 7 of \citealt{Gray2001}).} \item{As seen in all the observations considered, giants have higher vturb than dwarfs.} \end{itemize} Because our model grids span a wide range of stellar parameter space, it was important to include, at least in the sense observed, the trends found in all three of the literature relations (equations~\ref{APOGEEvturEq},~\ref{Thygesen2012Eq} and~\ref{DF2016Eq}) considered. The DF16 equation was calibrated using a sample of both giant and dwarf stars and included both the $\textrm{T}_{\textrm{eff}}$ and log g parameters. Therefore, we used this form of equation~(\ref{DF2016Eq}), but with a slight modification of the cross term, such that: \begin{multline} \small{\textrm{vturb ($\mathrm{km\,s^{-1}}$)}=0.998+3.16\times10^{-4}(\textrm{$\textrm{T}_{\textrm{eff}}$}-5500)-0.253(\textrm{log g}-4)}\\\small{-2\times10^{-4}(\textrm{$\textrm{T}_{\textrm{eff}}$}-5500)(\textrm{log g}-4)}\small{+0.165(\textrm{log g} -4)^2} \label{DF16ModEq} \end{multline} The cross term coefficient was modified from $2.86\times10^{-4}$ to $2\times10^{-4}$ to better follow the trends of equation~(\ref{APOGEEvturEq}) in the parameter range of APOGEE and to satisfy the above criteria. Figure~\ref{fig:DF16_old_vs_new_plus_APOGEE_vs_DF16Mod_vs_T12} (Left Panel) shows the difference between the original DF16 (red lines) and modified DF16Mod (black lines) relations, for different values of $\textrm{T}_{\textrm{eff}}$. For $\textrm{T}_{\textrm{eff}}$=5500{\small K} the equations are the same, so those two lines overlap. Figure~\ref{fig:DF16_old_vs_new_plus_APOGEE_vs_DF16Mod_vs_T12} (Right Panel) plots our modified equation (black lines) and the T12 equation (blue lines) for different values of $\textrm{T}_{\textrm{eff}}$, along with the APOGEE calibration (green line) from equation~(\ref{APOGEEvturEq}). \begin{figure} \begin{center} \includegraphics[width=\linewidth, angle=0]{Plots/MILESDF15ModvsAPOGEE_v3.pdf} \caption[Comparison of DF16Mod and APOGEE microturbulent velocity relations in the \cite{Cenarro07} MILES parameter range]{Microturbulent velocity as a function of log g using the modified DF16 equation (DF16Mod, black points) and the APOGEE equation (APOGEE, green points) for the MILES stars, with stellar parameters from \cite{Cenarro07}. We also present the RMS scatter between the two estimates. For dwarf stars, DF16Mod agrees well with APOGEE, with larger deviations seen in giant stars.} \label{fig:APOGEE_vs_DF16Mod_MILES} \end{center} \end{figure} We conclude that it is important to include both effective temperature and surface gravity in the parameterisation, because observations and analyses (e.g.
references given above) suggest that trends are present in both. The modified relation~(\ref{DF16ModEq}) approximately follows the trends found in these studies, as well as those present in the APOGEE relation~(\ref{APOGEEvturEq}). We used our modified equation~(\ref{DF16ModEq}) for $\textrm{T}_{\textrm{eff}}$ from 3500 to 6000{\small K}; for temperatures higher than this we lock the microturbulent velocity to relation~(\ref{DF16ModEq}) evaluated at a fixed $\textrm{T}_{\textrm{eff}}$=6000{\small K}. To test our parameterisation, we show the difference and RMS scatter between the APOGEE calibration and our relation, for the MILES parameters from \cite{Cenarro07}, in Figure~\ref{fig:APOGEE_vs_DF16Mod_MILES}. This RMS scatter is small compared to the typical values of $1-2\,\mathrm{km\,s^{-1}}$ found for microturbulent velocity in APOGEE (\citealt{Ana2016}). However, we note that whilst there can be large absolute differences in spectral line-strengths due to microturbulence, we showed in \cite{Knowles19} that the effects on the differential application of models were small ($\approx0.02\,${\AA}) compared to typical observational errors on line-strengths ($\approx0.1\,${\AA}). Therefore, for work involving the semi-empirical library, which uses the models only in a differential sense, the choice of microturbulent velocity is not as important as it is for the absolute predictions of models. We nevertheless attempt to match the microturbulent velocity to observations in the generation of theoretical stellar spectra, by using equation~(\ref{DF16ModEq}). \subsection{New Theoretical Star Grids} \label{sec:NewGrids} Due to the coverage in log g of the available ODFs, the models were split into three sub-grids, based on ranges in {$\textrm{T}_{\textrm{eff}}$}. All of the models described below were generated under LTE assumptions in both the atmosphere and spectral synthesis components. \subsubsection{3500-6000{\small{K}} Grid} For the lowest temperature grid, models were computed with the following parameter steps: \begin{itemize} \item {$\textrm{T}_{\textrm{eff}}$=3500{\small K} to 6000{\small{K}} in steps of 250{\small K}} \item {log g=0 to 5 in steps of 0.5 dex} \item {[M/H]=$-$2.5 to +0.5 in steps of 0.5 dex} \item {[$\alpha$/M]=$-$0.25 to +0.75 in steps of 0.25 dex. We note here that we are making the assumption that these elements increase in lockstep, which is not exactly true in the Milky Way (e.g. \citealt{Bensby2014}; \citealt{Zasowski19})} \item{[C/M]=$-$0.25 to +0.25 in steps of 0.25 dex} \end{itemize} Thus, the number of models computed in this grid is \begin{center} Number of Models = $\textrm{T}_{\textrm{eff}}$ steps $\times$ log g steps $\times$ Element Variations \\ = N($\textrm{T}_{\textrm{eff}}$) $\times$ N(log g) $\times$ N([M/H]) $\times$ N([$\alpha$/M]) $\times$ N([C/M])\\ = 11 $\times$ 11 $\times$ 7 $\times$ 5 $\times$ 3 = 12705 models \end{center} Of these 12705 models, seven were missing ODFs or did not converge. In order to maintain the regularity of the grid, the missing models were computed using a linear interpolation of the models at the nearest available grid points. These seven models were all at the lowest $\textrm{T}_{\textrm{eff}}$ (3500{\small K}), high surface gravity (log g=4.0, 4.5, 5.0), low metallicity ([M/H]=$-$1.5 or $-$2.0) and high $\alpha$ abundance ([$\alpha$/M]=0.75) grid points. The parameters of these seven stars are specified in \cite{Knowles19_Thesis} (section 3.3.2).
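As an illustration of these choices, the short Python sketch below (illustrative only, not our production code) implements the modified microturbulence relation of equation~(\ref{DF16ModEq}), including the fixed $\textrm{T}_{\textrm{eff}}$=6000{\small K} evaluation used above 6000{\small K}, and reproduces the model count of this grid. Evaluating it at ($\textrm{T}_{\textrm{eff}}$, log g) = (4000, 2.0), (4000, 4.0) and (5500, 4.0) recovers the vturb values of 1.09, 0.524 and $0.998\,\mathrm{km\,s^{-1}}$ quoted later in Table~\ref{ATK_vs_CAP_RMS}.
\begin{verbatim}
import itertools
import numpy as np

def vturb_df16mod(teff, logg):
    # Modified DF16 relation (equation DF16Mod), in km/s.
    # For Teff > 6000 K the relation is evaluated at a fixed
    # Teff = 6000 K, as adopted for the hotter grids.
    t = min(teff, 6000.0) - 5500.0
    g = logg - 4.0
    return (0.998 + 3.16e-4 * t - 0.253 * g
            - 2.0e-4 * t * g + 0.165 * g ** 2)

# Grid points of the 3500-6000 K sub-grid:
teff_grid  = np.arange(3500.0, 6000.0 + 1.0, 250.0)  # 11 values
logg_grid  = np.arange(0.0, 5.0 + 0.1, 0.5)          # 11 values
mh_grid    = np.arange(-2.5, 0.5 + 0.1, 0.5)         #  7 values
alpha_grid = np.arange(-0.25, 0.75 + 0.01, 0.25)     #  5 values
cm_grid    = np.arange(-0.25, 0.25 + 0.01, 0.25)     #  3 values

n_models = (teff_grid.size * logg_grid.size * mh_grid.size
            * alpha_grid.size * cm_grid.size)
assert n_models == 12705

# Each (Teff, log g) point receives its own microturbulence,
# which is passed to the spectral synthesis:
vturb = {(t, g): vturb_df16mod(t, g)
         for t, g in itertools.product(teff_grid, logg_grid)}
\end{verbatim}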
\begin{figure} \begin{center} \includegraphics[width=82mm, angle=0]{Plots/Theory_Grid.png} \caption{Top Panel: Abundance pattern coverage in the [C/M] vs [$\alpha$/M] plane. Bottom Panel: 3{\small D} stellar parameter coverage of the 3500-6000{\small K} grid. Each point in the [C/M] vs [$\alpha$/M] plane represents 11 $\times$ 11 $\times$ 7 = 847 models in this lowest $\textrm{T}_{\textrm{eff}}$ grid.} \label{fig:TheoreticalGrid_Coverage} \end{center} \end{figure} For illustration, the parameter coverage of the lowest effective temperature grid is presented in Figure~\ref{fig:TheoreticalGrid_Coverage}. \begin{figure*} \begin{center} \includegraphics[width=\linewidth, angle=0]{Plots/LineList_v5.pdf} \caption[Effect of removing TiO lines at different effective temperatures in the high resolution theoretical stellar library]{Effect of removing TiO lines from the molecular line list at different temperatures, for the fixed binning, high-resolution library described in Section 2.6. The red and blue spectra represent stars with the TiO line list included and removed, respectively, for each temperature. Fluxes are normalised to the maximum flux value of each spectrum. The green line represents the residual obtained from a division of the full line list and short line list spectra. Differences in the top panel ($\textrm{T}_{\textrm{eff}}$=4000{\small K}) are seen in locations known to be affected by TiO absorption (see figure 5a of \citealt{Kirkpatrick91}; figure 1 of \citealt{Plez98}; figure 1 of \citealt{Allard00}).} \label{fig:Linelist_Test} \end{center} \end{figure*} To help minimise the number of models, we split our higher temperature models into two sub-grids: a grid of models from 6250-8000{\small K} and a grid from 8250-10000{\small K}. The upper temperature limit was chosen to cover the region of the existing MILES library containing the stars that carry the most information about abundance patterns. The available ODFs and model atmospheres also impose cuts in surface gravity at the higher temperatures, where increasing radiation pressure makes the lowest surface gravity models unstable (e.g. see figure 2 of \citealt{Mezaros2012}). The numbers of models in our higher $\textrm{T}_{\textrm{eff}}$ sub-grids are given in Sections~\ref{sec:62508000Grid} and~\ref{sec:825010000Grid}. \subsubsection{6250-8000{\small K} Grid} \label{sec:62508000Grid} \begin{itemize} \item{$\textrm{T}_{\textrm{eff}}$=6250{\small K} to 8000{\small K}, in steps of 250{\small K}} \item{log g=1 to 5, in steps of 0.5 dex} \item{[M/H]=$-$2.5 to +0.5, in steps of 0.5 dex} \item {$[\alpha/\textrm{M}]$=$-$0.25 to +0.75, in steps of 0.25 dex} \item{[C/M]=$-$0.25 to +0.25, in steps of 0.25 dex} \end{itemize} Thus, the number of models computed in the 6250-8000{\small K} grid is \begin{center} N($\textrm{T}_{\textrm{eff}}$) $\times$ N(log g) $\times$ N([M/H]) $\times$ N([$\alpha$/M]) $\times$ N([C/M])\\ = 8 $\times$ 9 $\times$ 7 $\times$ 5 $\times$ 3 = 7560 models \end{center} To avoid excessive computation times, careful consideration of the number of spectra, the wavelength coverage, the line lists used and the number of abundance steps was necessary. One way to decrease computation time is to reduce the number of input atomic and molecular transitions. For $\textrm{T}_{\textrm{eff}}$ above 6000{\small K}, we therefore removed a significant molecular contributor to the line lists, TiO, which is prevalent in stellar spectra at low temperatures but whose absorption features become weak at higher temperatures.
TiO band strengths are used in unresolved stellar population analysis, in particular as Initial Mass Function (IMF) probes (e.g. TiO$_2$, defined in \citealt{Trager98}). For example, \cite{LaBarbera16} use TiO index measurements to investigate radial variations of the IMF in ETGs. This index strength increases as effective temperature decreases, and therefore the IMF sensitivity arises from the ratio of low mass (low effective temperature) to high mass stars on the main sequence (\citealt{Fontanot18}). Figure~\ref{fig:Linelist_Test} shows an example of the effect of removing TiO transitions from our models at various temperatures. As expected, TiO bands are extremely prevalent in the lowest $\textrm{T}_{\textrm{eff}}$ spectrum, and the differences between the higher temperature models in the grid are very small. \subsubsection{8250-10000{\small K} Grid} \label{sec:825010000Grid} \begin{itemize} \item{$\textrm{T}_{\textrm{eff}}$=8250{\small K} to 10000{\small K} in steps of 250{\small K}} \item{log g=2 to 5, in steps of 0.5 dex} \item{[M/H]=$-$2.5 to +0.5, in steps of 0.5 dex} \item {$[\alpha/\textrm{M}]$=$-$0.25 to +0.75, in steps of 0.25 dex} \item{[C/M]=$-$0.25 to +0.25, in steps of 0.25 dex} \end{itemize} Thus, the number of models computed in the 8250-10000{\small K} grid is \begin{center} N($\textrm{T}_{\textrm{eff}}$) $\times$ N(log g) $\times$ N([M/H]) $\times$ N([$\alpha$/M]) $\times$ N([C/M]) =\\ 8 $\times$ 7 $\times$ 7 $\times$ 5 $\times$ 3 = 5880 models \end{center} No models in the two higher $\textrm{T}_{\textrm{eff}}$ grids had missing ODFs or convergence issues. \subsubsection{[Ca/Fe]=0 Grid} We also compute a small model grid with [Ca/Fe]=0.0, to match the results of integrated light studies of ETGs in which calcium was found to track the iron-peak elements (\citealt{Vazdekis97}; \citealt{Trager98}; \citealt{Thomas03Ca}; \citealt{Schiavon2007}; \citealt{Johansson12}; \citealt{Conroy2014}). \begin{itemize} \item{$\textrm{T}_{\textrm{eff}}$=3500{\small K} to 6000{\small K}, in steps of 250{\small K}} \item{log g=0 to 5, in steps of 0.5 dex} \item{[M/H]=$-$2.5 to +0.5, in steps of 0.5 dex} \item{[$\alpha$/M]=0.25, where $\alpha$ is O, Ne, Mg, Si, S and Ti} \item{[C/M]=0.25, as was found by \cite{Conroy2014}} \end{itemize} Thus, the number of models computed in the [Ca/Fe]=0.0 grid is \begin{center} N($\textrm{T}_{\textrm{eff}}$) $\times$ N(log g) $\times$ N([M/H]) $\times$ N([$\alpha$/M]) $\times$ N([C/M]) =\\ 11 $\times$ 11 $\times$ 7 $\times$ 1 $\times$ 1 = 847 models \end{center} \subsection{Processing} We now describe the methods and procedures for processing the raw spectra from ASS$\epsilon$T into three libraries at different resolutions: a high resolution library in which the resolving power (R=$\lambda$/d$\lambda$, based on equation~\ref{ASSETbinEq}) is fixed within a spectrum but each spectrum has a different resolving power and sampling; a high resolution theoretical library in which all spectra are binned to a common wavelength range and sampling; and a MILES resolution theoretical library used in the differential correction process. ASS$\epsilon$T generates a spectrum in wavelength (in \AA) and flux density measured at the stellar surface (in erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$). Spectra are computed at fixed resolving power, resulting in a sampling that is constant in $d(\log_{10}\lambda)$ but with increasing $d\lambda$ for increasing $\lambda$.
As default, ASS$\epsilon$T samples the spectrum based on the formula: \begin{equation} \textrm{d}(\log_{10}\lambda)=0.3\sqrt{(v_{Micro}^{2}+v_{TM}^{2})}, \label{ASSETbinEq} \end{equation} where $v_{Micro}$ is the microturbulent velocity and $v_{TM}$ is the thermal Doppler width computed in ASS$\epsilon$T at the coolest layer of the atmosphere. This formula ensures the sampling of at least three wavelength points across the expected line width of the spectrum, but means that every spectrum was computed at a different sampling. This is the first theoretical library generated, in which each spectrum has a unique sampling and a fixed resolving power based on equation~(\ref{ASSETbinEq}). The IRAF task `dispcor' was then used to resample the spectra, with fifth order polynomial interpolation, to a common start and end wavelength, as well as a common number of wavelength points. Flux density was conserved throughout the resampling process. The common sampling was taken as the largest sampling value of all the spectra generated. This resulted in a final, high resolution library consisting of spectra with $\lambda_{\textrm{start}}=1677.10\,${\AA}, d$\lambda=0.05\,${\AA} and number of wavelength points $n_{\lambda}$=146497. This is the second library mentioned above, in which high resolution theoretical spectra are produced with all spectra at a common wavelength range, sampling and resolution. To create synthetic spectra that replicate existing MILES stars, with which differential corrections will be performed, the theoretical library was matched to the existing MILES empirical library in terms of wavelength range, sampling and resolution. IDL routines were used to smooth and rebin\footnote{IDL routines were from \url{https://ascl.net/1708.005}, plus our own IDL routine for rebinning by summing and renormalising to relative flux density.} the fixed sampling, high resolution theoretical library to match the MILES empirical spectra, resulting in a third library with a wavelength range, sampling and resolution of $3540.5-7409.6\,${\AA}, $0.9\,${\AA} and $2.5\,${\AA}, respectively. Models of existing MILES stars, and of MILES stars with different abundance patterns, are created via interpolation in these MILES-specific theoretical libraries, as described in Section~\ref{sec:sMILESstars}. \subsection{Theoretical Library Summary} In summary, three grids of theoretical stellar spectra were computed, covering different $\textrm{T}_{\textrm{eff}}$ ranges. The first grid consists of spectra covering effective temperatures from 3500 to 6000{\small K}, surface gravities from 0 to 5 dex and metallicities ([M/H]) that cover a large proportion of the MILES empirical library. Models in this first grid were computed with a microturbulent velocity according to equation~(\ref{DF16ModEq}). The second grid was computed with the same coverage in metallicity as the first, but with an effective temperature coverage from 6250 to 8000{\small K} and a surface gravity coverage from 1 to 5 dex, to avoid unstable model atmospheres caused by radiation pressure. The third grid was also computed with the same coverage in metallicity as the first, but with an effective temperature coverage from 8250 to 10000{\small K} and a surface gravity coverage from 2 to 5 dex, for the same reason. Both the 6250-8000{\small K} and 8250-10000{\small K} grids were computed with a reduced line list, in which TiO was removed, in order to shorten computation times.
Models in the second and third grids were computed with a microturbulent velocity according to equation~(\ref{DF16ModEq}), with $\textrm{T}_{\textrm{eff}}$ fixed at 6000{\small K}. All three grids were computed with [$\alpha$/M] variations that cover the range of $\alpha$ abundance variation observed in external systems such as ETGs and dSphs, as well as [C/M] variations in a range that covers the observations in previous integrated light studies (e.g. \citealt{Conroy2014}; \citealt{WortheyTangServen2014}). Example sequences of theoretical spectra for the parameters $\textrm{T}_{\textrm{eff}}$, [M/H], [$\alpha$/M] and [C/M] are presented in the supplementary data provided. Each of these restricted temperature grids exists at three different resolution and sampling values. The first library (a collection of three temperature grids) is one in which each spectrum has a unique sampling and resolving power based on equation~(\ref{ASSETbinEq}). The second library consists of spectra with a common wavelength range and sampling, such that $\lambda_{\textrm{start}}=1677.10\,${\AA}, d$\lambda=0.05\,${\AA} and $n_{\lambda}$=146497. This fixed binning, high-resolution library is publicly available to download at \url{http://uclandata.uclan.ac.uk/178/}. This library consists of three grids: a low temperature grid ($\textrm{T}_{\textrm{eff}}$=3500-6000{\small K}), an intermediate temperature grid ($\textrm{T}_{\textrm{eff}}$=6000-8000{\small K}) and a high temperature grid ($\textrm{T}_{\textrm{eff}}$=8000-10000{\small K}). The two higher grids include repeats of the highest temperature spectra from the grid below, to maintain continuous coverage in $\textrm{T}_{\textrm{eff}}$. Finally, a MILES-specific theoretical library exists, with spectra smoothed and resampled to match the current MILES empirical library. The result is a medium resolution library with spectra that have a wavelength range, sampling and resolution (FWHM) of $3540.5-7409.6\,${\AA}, $0.9\,${\AA} and $2.5\,${\AA}, respectively. This library and its predictions will be used in the later sections of this work to create semi-empirical stellar spectra. We refer to our computed models as the ATK set in later sections of this work. \section{Testing Theoretical Library} \label{sec:ModelTesting} We now make comparisons between our models and other published libraries of theoretical stellar spectra. \begin{table*} \caption{Theoretical star spectra compared for our current models and those published in \protect\cite{Allende18}. The vturb values are those used (see Section~\ref{sec:Microturbulence}) and the Allende Prieto models were interpolated to those values. We have tested a giant (G) and two dwarf (D$_1$, D$_2$) stars. Also shown is the RMS scatter about the 1:1 agreement line between the differential predictions ([$\alpha$/M]=0.25/[$\alpha$/M]=0.0) of our models and the Allende Prieto models.
RMS is calculated for the ratio of our (ATK) and Allende Prieto (CAP) differential predictions of spectra with different abundance patterns.} \begin{tabular}{ccc} \hline Star Type & RMS ($\lambda$<$3000\,${\AA}) & RMS ($\lambda$>$3000\,${\AA})\\ \hline $\textrm{[M/H]}=0.0$&&\\ G (T$_\textrm{eff}$=4000K, log g=2.0, vturb=$1.09\,\mathrm{km\,s^{-1}}$) & $\mathrm{1.33\times10^{-2}}$ & $\mathrm{9.00\times10^{-4}}$ \\ D$_1$ (T$_\textrm{eff}$=4000K, log g=4.0, vturb=$0.524\,\mathrm{km\,s^{-1}}$) & $\mathrm{6.75\times10^{-3}}$ & $\mathrm{7.34\times10^{-4}}$ \\ D$_2$ (T$_\textrm{eff}$=5500K, log g=4.0, vturb=$0.998\,\mathrm{km\,s^{-1}}$) & $\mathrm{2.68\times10^{-3}}$& $\mathrm{2.57\times10^{-4}}$ \\ \\ $\textrm{[M/H]}=-1.0$&&\\ G (T$_\textrm{eff}$=4000K, log g=2.0, vturb=$1.09\,\mathrm{km\,s^{-1}}$) &$\mathrm{1.20\times10^{-2}}$ & $\mathrm{6.35\times10^{-4}}$ \\ D$_1$ (T$_\textrm{eff}$=4000K, log g=4.0, vturb=$0.524\,\mathrm{km\,s^{-1}}$) &$\mathrm{4.12\times10^{-3}}$ & $\mathrm{3.96\times10^{-4}}$ \\ D$_2$ (T$_\textrm{eff}$=5500K, log g=4.0, vturb=$0.998\,\mathrm{km\,s^{-1}}$) & $\mathrm{1.70\times10^{-3}}$ & $\mathrm{1.19\times10^{-4}}$\\ \hline \end{tabular} \label{ATK_vs_CAP_RMS} \end{table*} \subsection{Comparison to Allende Prieto models} \label{sec:ATKvsCAP} To check the accuracy of our theoretical spectra, we first compare to the library of \cite{Allende18}, which covers a wide range of star types, metallicities, [$\alpha$/Fe] values and microturbulent velocities. We refer to these models as the CAP set throughout this work. We focus on the differential abundance pattern predictions of both model sets. The abundance pattern prediction is taken as the ratio of an $\alpha$-enhanced ([$\alpha$/M]=0.25) and a solar abundance pattern ([$\alpha$/M]=0.0) star. \cite{Allende18} models were interpolated in microturbulent velocity, using a quadratic B\'ezier function, to match the stars in our library. Options for interpolations within model grids are discussed in Appendix~\ref{sec:InterpChoice}. Table~\ref{ATK_vs_CAP_RMS} lists the star types compared and their parameters. \begin{figure*} \begin{center} \includegraphics[width=152mm, angle=0]{Plots/ATK_vs_CAP_3stars_v3.pdf} \caption{Comparisons of enhanced-over-base star spectra for stars in our theoretical library, labelled ATK, versus stars from \citealt{Allende18} (interpolated in vturb), labelled CAP. The top plot in each block shows spectra for [$\alpha$/M]=+0.25 divided by [$\alpha$/M]=0.0. The lower plot in each block shows the division of these two ratios (ATK/CAP). Blocks show comparisons for a giant star (upper) and two dwarf stars (middle and lower), all at solar metallicity, with parameters detailed in Table~\ref{ATK_vs_CAP_RMS}.} \label{fig:ATK_vs_CAP_MH0.0} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=86mm, angle=0]{Plots/CAP_vturb_DiffCorr_v3.pdf} \caption{Effect of vturb on CAP model abundance pattern predictions. The effect of vturb on the differential correction is strongest in the UV, with large differences present at the shortest wavelengths between vturb=$1.09\,\mathrm{km\,s^{-1}}$ and vturb=$2\,\mathrm{km\,s^{-1}}$.} \label{CAP_vturb_Test} \end{center} \end{figure} Both model sets were degraded to a spectral resolution of $2.5\,${\AA} FWHM and resampled to $0.3\,${\AA} bins, in order to compare spectra across the full wavelength range available (2000 to $9000\,${\AA}). Figure~\ref{fig:ATK_vs_CAP_MH0.0} shows the comparisons between the abundance pattern predictions, at solar metallicity.
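For reference, the RMS statistic quoted in Table~\ref{ATK_vs_CAP_RMS} and below can be computed as in the following sketch (illustrative; the variable names are placeholders), where each model set's differential prediction is the ratio of its $\alpha$-enhanced to its base spectrum, and the scatter of the ATK/CAP ratio of those predictions is measured about the 1:1 agreement line.
\begin{verbatim}
import numpy as np

def rms_about_unity(ratio_atk, ratio_cap, wave, wmin, wmax):
    # ratio_atk, ratio_cap: flux(alpha=+0.25) / flux(alpha=0.0)
    # for each model set, on a common wavelength grid `wave`
    # (Angstrom). Returns the RMS of their ratio about 1:1
    # within the wavelength window (wmin, wmax).
    sel = (wave > wmin) & (wave < wmax)
    agreement = ratio_atk[sel] / ratio_cap[sel]  # 1.0 = perfect
    return np.sqrt(np.mean((agreement - 1.0) ** 2))

# e.g. rms_uv  = rms_about_unity(r_atk, r_cap, wave, 2000., 3000.)
#      rms_opt = rms_about_unity(r_atk, r_cap, wave, 3000., 9000.)
\end{verbatim}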
In all cases the difference between predictions above $3000\,${\AA} is small, with RMS values about the 1:1 model agreement of 0.000900, 0.000734 and 0.000257 for the cool giant (G), coolest dwarf (D$_1$) and cool dwarf (D$_2$) star, respectively. The largest deviations are found below $3000\,${\AA}, with RMS values of 0.0133, 0.00675 and 0.00268 for the G, D$_1$ and D$_2$ star, respectively. Similar results were found for the same analysis at [M/H]=$-$1.0, with the RMS values summarised in Table~\ref{ATK_vs_CAP_RMS}. As both \cite{Allende18} and the current set of models use similar methods in the computation of stellar spectra, it is important to show that they produce very similar predictions. The exception to this is found in the UV, where the differences between the models are larger. These differences may be due to a combination of four effects. Firstly, the fine grids of \cite{Allende18} (described in section 2.4 of that work) use cubic interpolations of the opacity as a function of density and temperature to reduce computation times, whereas our models use the ``ONE-MOD'' mode in ASS$\epsilon$T to compute the opacity for each model at every depth. This difference is expected to be largest in the UV, where more metal lines are present. Secondly, the models of \cite{Allende18} were computed with the outermost layers of the stellar atmospheres removed; these layers are less reliable for stars with T$_\textrm{eff}$<5000{\small K} (\citealt{Mezaros2012}). Thirdly, the method of handling microturbulent velocity in the two model sets may also cause small differences in the predictions. Lastly, there is the inclusion of neon in the $\alpha$-elements of our models. The opacity treatment and outer layer removal are expected to be the dominant effects and can cause flux differences of the order of a few per cent in the UV, in agreement with the values shown in Figure~\ref{fig:ATK_vs_CAP_MH0.0}. To highlight the impact of vturb on the UV CAP model predictions, we plot a comparison between the differential corrections predicted with vturb=1, 1.09 and $2\,\mathrm{km\,s^{-1}}$ in Figure~\ref{CAP_vturb_Test}. The vturb=1 and $2\,\mathrm{km\,s^{-1}}$ spectra are existing grid points in the published grids. The vturb=$1.09\,\mathrm{km\,s^{-1}}$ spectra were generated using interpolations within the CAP grid. As shown, the effect of microturbulence is largest at UV wavelengths, with significant differences found. In our previous work (\citealt{Knowles19}) we showed that uncertainties in vturb can have large effects on the absolute predictions of spectral features, but only small effects on differential corrections in the MILES wavelength range. Our current work shows that differential corrections are more strongly affected by vturb below $\sim$ $3500\,${\AA}. Also in \cite{Knowles19}, we showed that spectral models are more similar to each other than they are to real stars. We illustrate some comparisons between our models and real stars in Section~\ref{sec:TestingObs}, to show where they agree well and where work is most needed. In summary, comparisons between the ATK and CAP predictions of abundance pattern effects have shown that they agree well in the MILES wavelength range, which is important for the generation of the semi-empirical stars described later. Small differences between the model predictions are found for wavelengths below $\sim$ $3000\,${\AA}, which may be attributed to differences in the method of opacity treatment and to interpolation effects when generating CAP models with the same microturbulent velocity as ATK models.
Differences in microturbulence can have large effects (up to $\sim$10 per cent) on model abundance pattern predictions below $\sim$ $3500\,${\AA}. The inclusion of neon in the $\alpha$-elements of the ATK models may also create small differences between the differential corrections. Further work, beyond the scope of this paper, is required to fully assess these small differences between the models in the UV. \subsection{Comparison to PHOENIX Models} \label{ATK_vs_PHOENIX} \begin{table*} \centering \caption{Methods used in the generation of theoretical stellar spectra, for our model grid (ATK) and the PHOENIX model grid (\citealt{Husser2013}).} \begin{tabular}{cp{20mm}p{15mm}p{45mm}p{15mm}p{25mm}p{15mm}} \hline Model & Atmosphere Code & Spectrum Code & Equation of State & vturb & Solar Abundance Reference & $\alpha$-elements\\ \hline ATK & ATLAS9, LTE, plane-parallel (\citealt{Kurucz1993}) & ASS$\epsilon$T & Synspec (\citealt{Hubeny2017}) for the first 99 atoms and 338 molecules (\citealt{Tsuji64,Tsuji73,Tsuji76}), with partition functions from \cite{Irwin81} and updates & Equation~\ref{DF16ModEq} in Section~\ref{sec:Microturbulence} & \cite{Asplund2005} & O, Ne, Mg, Si, S, Ca, Ti \\ PHOENIX & PHOENIX, LTE, spherical, based on \cite{Hauschildt99} & PHOENIX & Astrophysical Chemical Equilibrium Solver (ACES, see \citealt{Husser2013}) for 839 species (84 elements, 289 ions, 249 molecules, 217 condensates) & Section 2.3.3 and equation 7 of \cite{Husser2013} & \cite{Asplund2009} & O, Ne, Mg, Si, S, Ar, Ca, Ti \\ \hline \end{tabular} \label{ATK_PHOENIX} \end{table*} We now compare to another up-to-date and widely-used theoretical stellar spectral library, that of \cite{Husser2013}, hereafter referred to as the PHOENIX library. Again, we test the relative changes due to variations in atmospheric abundances, rather than focusing on the absolute predictions, which are already known to have limitations, as described in Section~\ref{sec:ModelSpectra}. The PHOENIX library consists of high resolution stellar spectra that cover a wide range of stellar parameters and [$\alpha$/Fe] abundances, making it an ideal set to compare to our models. PHOENIX spectra were generated from an updated version of the PHOENIX stellar atmosphere code, described in \cite{Husser2013} and references therein. We use the publicly available distribution of the PHOENIX library\footnote{\url{http://phoenix.astro.physik.uni-goettingen.de/}} in the comparisons. We compare our models to the medium resolution (FWHM=$1\,${\AA}) version of the PHOENIX library. There are several differences between the computation methods of our models and the PHOENIX grid, which we summarise in Table~\ref{ATK_PHOENIX}. Both sets of models use the same definitions of [Fe/H] and [$\alpha$/Fe], as described in Section~\ref{sec:CompMethod}, in that the total metallicity (Z) is not conserved when [$\alpha$/Fe] is changed. These definitions mean that [$\alpha$/Fe] and [$\alpha$/M], with M defined in equation~(\ref{MHDefEq}), are equivalent and can be used interchangeably. However, the solar abundances adopted in PHOENIX are from \cite{Asplund2009}, compared to the \cite{Asplund2005} values adopted in the ATK models. We compare the differential predictions of atmospheric $\alpha$ abundance variations, in the same way as in Section~\ref{sec:ATKvsCAP}. This is done for two representative star types: a giant (T$_\textrm{eff}$=4500{\small K}, log g=1.5, [M/H]=0.0) and a dwarf (T$_\textrm{eff}$=5500{\small K}, log g=4.0, [M/H]=0.0) star.
The microturbulent velocities in the ATK and PHOENIX models are very similar, minimising any differences due to this parameter. We generate models to match the PHOENIX [$\alpha$/Fe] enhancement of 0.20, using a quadratic interpolation within FER\reflectbox{R}E\footnote{Publicly available at \url{https://github.com/callendeprieto/ferre}.} (\citealt{Allende2006}). We test the predictions of how an [$\alpha$/Fe] change affects spectra through ratios of enhanced and solar abundance pattern stars ([$\alpha$/Fe]=0.2/[$\alpha$/Fe]=0.0) for both model sets. ATK models were degraded to $1\,${\AA} resolution and resampled to match the PHOENIX spectra. PHOENIX spectra were also converted to air wavelengths to match the ATK models, using the conversion described in section 2.4 (their equations 8, 9 and 10) of \cite{Husser2013}, which is based on \cite{Ciddor96}. \begin{figure*} \begin{center} \includegraphics[width=154mm,angle=0]{Plots/ATK_vs_PHOENIX_2star_Spec_v6.pdf} \caption{Comparison between predicted differential corrections of ATK and PHOENIX models. Both sets of models are smoothed to $1\,${\AA} FWHM and sampled in air wavelengths. Blue and red lines represent ATK and PHOENIX model predictions, respectively. Top panel: A comparison of a giant star differential correction. Bottom panel: A comparison of a dwarf star differential correction.} \label{fig:ATK_vs_PHOENIX_Spec} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=154mm,angle=0]{Plots/ATK_vs_PHOENX_Lick_Giant_v7.pdf} \caption{Comparison between ATK and PHOENIX giant star model predictions of the change of Lick indices due to an atmospheric enhancement of $\alpha$-elements. This is done for Lick indices that are measured in {\AA}, including H$\beta_{\textrm{o}}$ from \protect\cite{Cervantes09}. Top panel: Change of Lick indices due to an $\alpha$ enhancement of 0.2. Bottom panel: The difference between the changes in ATK and PHOENIX models. The 1:1 agreement between model predictions is plotted as a dashed horizontal line. Lick indices are labelled for illustration. Note that in this comparison [Fe/H] is kept constant.} \label{fig:ATK_vs_PHOENIX_Lick_Giant} \end{center} \end{figure*} Figure~\ref{fig:ATK_vs_PHOENIX_Spec} shows the comparison of the model predictions. For both star types, the general shapes of the ATK and PHOENIX differential predictions are similar. However, there are offsets, which are generally larger at shorter wavelengths, where metal lines are more prevalent. For the giant star, ATK models predict a smaller differential correction (i.e. a smaller reduction in flux due to an atmospheric $\alpha$-enhancement), with larger offsets between ATK and PHOENIX models seen below $\sim5000\,${\AA}. For the dwarf star, the opposite behaviour is found, with ATK models predicting a larger reduction in flux at the shortest wavelengths. Above $\sim5000\,${\AA} in both star types, there is reasonable agreement between the models, with the exception of three features at $\sim$6318, 6343 and $6362\,${\AA}, at which ATK models predict a much larger change than the PHOENIX models. These are known calcium-sensitive lines and are found to be Ca auto-ionization lines, observed as broad lines in late-type stars (\citealt{Culver67}; \citealt{Barbuy15}). On closer inspection, the PHOENIX models include the first two of these features but appear to be missing the reddest line.
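For readers wishing to reproduce the wavelength matching step, the sketch below implements a vacuum-to-air conversion of the form used by \cite{Husser2013} (their equations 8-10, based on \citealt{Ciddor96}); the numerical constants given here are the standard ones for that formula, but should be checked against the original paper before use.
\begin{verbatim}
import numpy as np

def vacuum_to_air(wave_vac):
    # Vacuum -> air wavelengths (Angstrom), using the
    # standard-air refractive index in the Ciddor (1996)
    # form adopted by Husser et al. (2013):
    # lambda_air = lambda_vac / n.
    wave_vac = np.asarray(wave_vac, dtype=float)
    sigma2 = (1.0e4 / wave_vac) ** 2      # (1/micron)^2
    n = (1.0 + 0.05792105 / (238.0185 - sigma2)
             + 0.00167917 / (57.362 - sigma2))
    return wave_vac / n

# e.g. vacuum_to_air(5000.0) -> ~4998.6 Angstrom
\end{verbatim}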
Given that the spectral models differ in every component of the computation, from the atmosphere and radiative transfer modelling through to the equation of state, line lists and even the reference solar abundance, finding the main cause of the offsets is a difficult task and beyond the scope of this work. It is likely that every difference in the calculations contributes to these offsets. Despite the differences in methodology, we find that, generally, ATK models predict a differential correction of similar shape and magnitude to PHOENIX models across the full wavelength range tested. In Figure~\ref{fig:ATK_vs_PHOENIX_Lick_Giant} we investigate how Lick indices (\citealt{Worthey94}; \citealt{Worthey97}; \citealt{Trager98}) change for an [$\alpha$/Fe] enhancement in ATK and PHOENIX models, for the giant star in Figure~\ref{fig:ATK_vs_PHOENIX_Spec}. This is performed for the standard Lick indices that are measured in {\AA}, including H$\beta_{\textrm{o}}$ defined in \cite{Cervantes09}. In Figure~\ref{fig:ATK_vs_PHOENIX_Lick_Giant} the change is now represented as a subtraction, rather than a ratio as in Figure~\ref{fig:ATK_vs_PHOENIX_Spec}. The top panel in Figure~\ref{fig:ATK_vs_PHOENIX_Lick_Giant} shows a direct comparison between ATK and PHOENIX model predictions of changes in Lick indices, and the bottom panel shows the difference of predicted changes between models. In the bottom panel we also show the RMS scatter about the 1:1 agreement line (dashed horizontal line). The model predictions are similar, with an RMS value ($0.108\,${\AA}) comparable to typical observational uncertainties in Lick line strengths ($\sim0.1\,${\AA}; e.g. see Table 2 of \citealt{Sansom2013}). The analysis is also performed for the dwarf star in Figure~\ref{fig:ATK_vs_PHOENIX_Spec} and an RMS value of $0.0718\,${\AA} is found. Larger differences between model predictions are seen for a few indices, including C$_2$4668 and Mg$_b$, for both giant and dwarf stars. One significant difference between models is the inclusion of spherical geometry in the atmospheric structure of PHOENIX models, compared to the 1{\small D} calculations of ATK. \cite{Bergemann12Fe,Bergemann17Mg} show that low-excitation FeI and Mg lines are sensitive to atmospheric structure, and that the effect of NLTE on line strengths and abundance predictions can vary depending on whether the underlying atmosphere is calculated in 1{\small D} or 3{\small D}. In those works they find that for a giant star, with T$_\textrm{eff}$ and log g values similar to those tested here, the 1{\small D} LTE models predict a slightly larger metallicity and magnesium abundance than 3{\small D} LTE models. We note however that in those works the atmosphere calculations are not fully 3{\small D}, but are computed through time and spatial averages (<3{\small D}>) of full hydrodynamical simulations. Systematic errors in abundance determinations were found for <3{\small D}> LTE models in these works. In this work, we find that for Fe5270 and Fe5335 ATK models predict smaller Lick indices than PHOENIX in both the solar and $\alpha$-enhanced giant star. For Mg$_\textrm{b}$, we find that the Lick indices for solar abundance stars are similar for both ATK and PHOENIX models, but the Mg$_\textrm{b}$ index for the $\alpha$-enhanced model is larger for ATK. Another potential issue with modelling Mg$_\textrm{b}$ is the presence of MgH bands in the Mg$_\textrm{b}$ index region (\citealt{Gregg94}).
The strength of this molecular band is affected by 3{\small D} effects, with 1{\small D} models significantly underestimating features compared to equivalent 3{\small D} models (\citealt{Thygesen17}). The disagreement between ATK and PHOENIX predictions of C$_2$4668 indices may also be attributed to differences in the treatment of C$_2$ Swan bands (\citealt{Swan1857}; \citealt{Gonneau2016}), as is discussed in \cite{Knowles19}. The effect of geometry on Balmer lines can also be large, as discussed in Section~\ref{sec:TestingObs}. In summary, these comparisons show that in terms of general spectral shape and Lick line strengths, ATK and PHOENIX models predict similar differential corrections for [$\alpha$/Fe] enhancements, albeit for only two star types at solar metallicity and for a small range in [$\alpha$/Fe]. This, along with the results of \cite{Knowles19}, gives us confidence to use our models in a differential sense to correct MILES empirical stellar spectra in later sections of this work. A more important test of our models, for their application, is how well they match real star spectra. We provide further tests of our models against two different, widely-used libraries of empirical stellar spectra (see Section~\ref{sec:TestingObs}). We next describe the empirical stellar spectra that we use in the generation of a new semi-empirical library. \section{Empirical MILES Spectra and Parameters} \label{sec:MILESstars} The empirical stellar spectra used in this project are from the Medium resolution Isaac Newton Library of Empirical Spectra (MILES) (\citealt{SanchezBlazquez2006}; \citealt{Falcon2011}). Whilst stars from our Galaxy do not cover the full abundance parameter range of stars in other galaxies, they do cover a broad range in stellar parameters. MILES stars have a typical signal-to-noise ratio of over 100\,{\AA}$^{-1}$, apart from stars which are members of stellar clusters. MILES is a stellar library for which we know the attributes $\textrm{T}_{\textrm{eff}}$, log g, [Fe/H] and [$\alpha$/Fe] for a large proportion of the whole library. Of the 985 stars in the MILES library, \cite{Milone2011} measured the [Mg/Fe] abundances for 752 stars. We use their [Mg/Fe] measurements as a proxy for all [$\alpha$/Fe] abundances in these 752 stars for the first set of interpolations, matching MILES stars (see Section~\ref{sec:sMILESstars}). For the remaining MILES stars without [Mg/Fe] estimates, we made approximate estimates ([Mg/Fe] values of 0.0, 0.2 or 0.4) using measurements from both \cite{Milone2011} (their figure 10) and \cite{Bensby2014} (their figure 15); a schematic version of this assignment is sketched below. The \cite{Bensby2014} pattern is derived from a study of dwarf stars in the Milky Way disk. We assigned the mean [Mg/Fe] value expected for the [Fe/H] value of the star, according to the patterns found in \cite{Milone2011} and \cite{Bensby2014}. For any cluster stars that were not included in the \cite{Milone2011} work, we adopted a mean value determined for the other stars of the same cluster (see \citealt{Cenarro07} for discussion of the clusters in MILES). The choice of which MILES stellar parameters ($\textrm{T}_{\textrm{eff}}$, log g and [Fe/H]) to use is particularly important in this work, because it will determine how well the theoretical stellar spectra and resultant semi-empirical (sMILES) spectra can represent the empirical MILES stars. These parameters will be used in interpolations within the model library to create sets of theoretical MILES stars with which to make differential corrections to empirical MILES spectra.
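For illustration, the proxy assignment described above can be written schematically as below. We stress that the [Fe/H] break points in this sketch are purely illustrative placeholders and do not reproduce the exact trends of figure 10 of \cite{Milone2011} and figure 15 of \cite{Bensby2014} that were actually used:
\begin{verbatim}
def approx_mg_fe(fe_h):
    # Schematic [Mg/Fe] proxy from [Fe/H]; the break points below are
    # illustrative only, not the exact Milone et al. (2011) /
    # Bensby et al. (2014) relations adopted in this work.
    if fe_h >= -0.1:
        return 0.0   # near-solar, thin-disc-like stars
    elif fe_h >= -0.7:
        return 0.2   # intermediate metallicity
    else:
        return 0.4   # metal-poor, halo-like stars
\end{verbatim}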
The two most widely-used works for MILES stellar parameters are those of \cite{Cenarro07} and Prugniel \& Sharma (\citealt{Prugniel2011}; \citealt{Sharma16}). Both sets of parameters have their benefits. In summary, the Prugniel \& Sharma parameter set has the advantage of being derived in a homogeneous fashion, from a well-tested and characterised library of empirical templates, with improved methodologies for lower temperature stars and a good understanding of the biases involved. However, the work is limited by the use of interpolation of sometimes sparsely sampled data, particularly at the lowest temperatures, where not many good star templates are available. From a bibliographic compilation, \cite{Cenarro07} produced a high-quality standard reference of atmospheric parameters for the full library of 985 MILES stars. The process involved calibrations, linked to a high-resolution reference system, and corrections of systematic differences between different sources, to produce an averaged source of final atmospheric parameters from the literature, corrected to a common reference system. Because we plan to use the existing \cite{Vaz2015} SSP methodology in the next stage of this project, a final choice was made to use the \cite{Cenarro07} parameters, as was done previously in that work. An important reason for using \cite{Cenarro07} parameters comes from the good agreement that those parameters show with the colour-temperature-metallicity scaling of \cite{Alonso96} and \cite{Alonso99}. The SSP methodology is therefore internally consistent with the \cite{Cenarro07} parameters. In future work, there will be the possibility to use [$\alpha$/Fe] measurements currently being made for MILES stars (\citealt{GarciaPerez21}), rather than relying on the [Mg/Fe] proxy to which we are currently limited (the [Mg/Fe] measurements of \citealt{Milone2011}). A subsample of MILES stars was previously found not to be representative of their tagged stellar parameters. Stars were identified as problematic by comparing each observed spectrum with a spectrum computed for the given stellar parameters, using the interpolator described in \cite{Vaz2010}. If the match between interpolated and observed spectrum was poor, the target star was removed from the sample or given reduced weighting in any SSP calculation that used it. These are stars with a range of issues including: low quality spectra; erroneous spectra that may have been contaminated; pointing errors; spectroscopic binarity; large uncertainties in stellar parameters; incorrect extinction estimates; continuum shape problems; possible carbon stars; or segments that correspond to a wrong source. These inspections are described and presented in sections 2.2 of \cite{Vaz2010} and 2.3.1 of \cite{Vaz2015}. This resulted in a final library of 925 stars, for which measures of effective temperature, surface gravity and metallicity ([Fe/H]) were taken from \cite{Cenarro07}, and [Mg/Fe] measures were taken from \cite{Milone2011}, with estimates from \cite{Bensby2014}, as described above. \section{Semi-Empirical MILES Library} \label{sec:sMILESstars} Next, we create a library of semi-empirical stellar spectra, based on application of the \textit{differential} abundance predictions of the theoretical library. This process can be split into the following steps: \begin{enumerate} \item{Interpolations in the theoretical MILES resolution model library to generate theoretical MILES stars.
The interpolation generates spectra that exactly match MILES stars in the four atmospheric parameters of effective temperature, surface gravity, metallicity ([Fe/H]) and $\alpha$ abundance ([$\alpha$/M]=[Mg/Fe]). These are referred to as MILES theoretical base star spectra ($M_{TB}$).} \item{Other interpolations in the MILES model library are then made to generate theoretical MILES stars that have different abundance patterns. This interpolation matches the MILES stars in effective temperature, surface gravity and metallicity, but with different $\alpha$ abundances. These are referred to as MILES theoretical enhanced (or deficient) star spectra ($M_{T(\alpha=x)}$), where $x$ gives the [$\alpha$/Fe] abundance. For this work, $x=-0.20, 0.0, 0.20, 0.40, 0.60$.} \item{Differential Corrections, for each star, are then computed through: \begin{equation} \textrm{Differential Correction } (DC)=\frac{M_{T(\alpha=x)}}{M_{TB}} \label{DiffCorrEq} \end{equation} and are applied to empirical MILES stars to create semi-empirical MILES stars, with fluxes converted as follows, with wavelength $\lambda$: \begin{equation} \label{sMILESCalcEq} \textrm{sMILES}(\lambda)=\textrm{DC}(\lambda) \times \textrm{MILES}(\lambda) \end{equation} (a minimal code sketch of this step is given below)} \end{enumerate} \begin{figure*} \begin{center} \includegraphics[width=160mm,angle=0]{Plots/m0067_V15.pdf} \caption{The differential correction method followed for computing $\alpha$-enhanced and $\alpha$-deficient semi-empirical (sMILES) star spectra. MILES star m0067 is shown as an example. Fully theoretical $\alpha$-enhanced ([$\alpha$/Fe]=+0.6; shown in green) and base (shown in blue) star spectra, are divided to obtain a differential correction (in red). This correction is applied to the corresponding empirical MILES star spectrum (shown in black). The result is a semi-empirical MILES star spectrum (shown in cyan) with a different [$\alpha$/Fe] ratio from the original empirical star.} \label{m0067_V15} \end{center} \end{figure*} This method produces families of semi-empirical star spectra (referred to as sMILES spectra) with the same stellar parameters (T$_\textrm{eff}$, log g and [Fe/H]) as the existing empirical MILES stars, but with different abundance patterns ([$\alpha$/Fe]) equal to the $M_{T(\alpha=x)}$ correction values of -0.2, 0.0, 0.2, 0.4 and 0.6. [C/Fe]=0.0 was assumed. An illustration of this process is shown in Figure~\ref{m0067_V15}, demonstrating how we apply this differential process to individual stars (rather than SSPs as in \citealt{Vaz2015}, their figure 4). We chose to perform differential corrections on stars rather than SSPs to produce a publicly available library for the community to use in their own population synthesis calculations. An alternative method for the application of differential corrections would be to produce an [$\alpha$/Fe] correction for each sampled point in a given isochrone. However, this would be dependent on the isochrone choice, and on the resolution in age and metallicity of those isochrones, which may be subject to change as updates are provided. The main limitation of our chosen method here is that we are dependent on the spectral range and stellar parameter choices of the underlying empirical stellar library, which may vary, as is the case for the extended MILES library (\citealt{Vazdekis16}) and the various determinations discussed in Section~\ref{sec:MILESstars}. In Sections~\ref{sec:Interpolations} and \ref{sec:DiffCorr} we discuss the interpolations in the model library and the differential corrections, respectively.
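Before detailing those steps, we note that the correction itself is numerically simple. The following is a minimal sketch of equations~(\ref{DiffCorrEq}) and (\ref{sMILESCalcEq}), assuming flux arrays sampled on a common wavelength grid (the array names are illustrative, not our production code):
\begin{verbatim}
def make_smiles(miles_flux, model_enhanced, model_base):
    # Equation (DiffCorrEq): ratio of theoretical enhanced (or
    # deficient) spectrum to theoretical base spectrum ...
    dc = model_enhanced / model_base
    # ... Equation (sMILESCalcEq): applied to the empirical star.
    return dc * miles_flux
\end{verbatim}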
\subsection{Interpolation of Theoretical Stellar Spectra} \label{sec:Interpolations} With MILES star parameters chosen in Section~\ref{sec:MILESstars}, the next step was to interpolate in the model library to generate theoretical spectra that match MILES stars. To create synthetic spectra that replicate existing MILES stars, we use the interpolation mode of the software package FER\reflectbox{R}E. Designed to match spectral models to observed data in order to obtain the best-fitting parameters of stars, FER\reflectbox{R}E contains routines that allow interpolation within model grids. FER\reflectbox{R}E was used to interpolate in the MILES-specific theoretical library. Ratios between enhanced or deficient and base MILES star models provide the differential spectral correction (equation~\ref{DiffCorrEq}). The interpolation was performed using the quadratic B\'ezier function within FER\reflectbox{R}E, apart from in a few cases discussed later. A quadratic B\'ezier function is a parametric curve that is defined by three points in parameter space (e.g. in our case, the wavelength, flux density, T$_\textrm{eff}$, log g, [M/H], [$\alpha$/M] and [C/M]). The 925 stars were split into three groups according to their parameters, such that they fell in the parameter range of one of the three MILES resolution and wavelength range sub-grids described in Section~\ref{sec:NewGrids}. Any stars that fell outside, or on the upper or lower grid edges, were not used in the semi-empirical library. The result of these cuts was that 587, 169 and 45 stars were computed via interpolation in the 3500-6000{\small K}, 6250-8000{\small K} and 8250-10000{\small K} grids, respectively. This means that the final semi-empirical library consists of families of 801 stars with different [$\alpha$/Fe] abundances. The first group of interpolations resulted in the MILES Theoretical Base stars, used as the denominator in the differential correction (see parameter $M_{TB}$ in equation~\ref{DiffCorrEq}). These base stars were generated by interpolating to the MILES parameters of T$_\textrm{eff}$, log g, [Fe/H] and [Mg/Fe]. Problems were found for 11 low T$_\textrm{eff}$ giant stars, for which linear interpolations were used, as described in Appendix \ref{sec:11probStars}. The next set of interpolations were made to produce theoretical MILES enhanced (or deficient) star spectra, used in the numerator of equation~(\ref{DiffCorrEq}). Spectra were computed with quadratic B\'ezier interpolations, in the T$_\textrm{eff}$, log g and [Fe/H] values of the existing MILES stars, but with [$\alpha$/M] values of -0.20, 0.0, 0.20, 0.40 and 0.60. This choice of [$\alpha$/M] steps reduced problems with interpolation at the grid edges, found previously. The range and sampling of [$\alpha$/Fe] abundances computed here represent an improvement over previously calculated SSP models (e.g. \citealt{Thomas05Mod}, \citealt{Conroy2012a}, \citealt{Vaz2015}) and directly relate to the range of values found in stars residing in external galaxies (e.g. figure 4 of \citealt{Sen2018}), as well as values found from unresolved stellar population studies of massive ETGs (e.g. \citealt{Conroy2014}, \citealt{McDermid15}). The 11 problem stars in the base family were also computed using linear interpolations for their $\alpha$ enhancements.
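To make the interpolation scheme concrete, a one-dimensional quadratic B\'ezier interpolant through three grid values can be sketched as follows. This is a minimal illustration only, not the FER\reflectbox{R}E implementation, which applies the construction in the full multi-dimensional parameter space (\citealt{Allende2006}):
\begin{verbatim}
def bezier2(y0, y1, y2, t):
    # Quadratic Bezier curve through three equally spaced grid values
    # y0, y1, y2, with t in [0, 1]. The control point c is chosen so
    # that the curve passes through the middle value y1 at t = 0.5.
    c = 2.0*y1 - 0.5*(y0 + y2)
    return (1 - t)**2 * y0 + 2*(1 - t)*t*c + t**2 * y2
\end{verbatim}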
The model spectra with [$\alpha$/Fe] different from those found in the local solar neighbourhood cannot easily be compared directly with real stars, because such stars do not exist in the empirical MILES library or any other empirical libraries based on stars in the local solar neighbourhood. The result was six families of theoretical MILES stars, all determined by interpolation of the model grids: one with all the existing MILES parameters, and five with the same fundamental parameters but different [$\alpha$/Fe] abundances on a regular grid, at MILES resolution and wavelength range. \subsection{Differential Corrections} \label{sec:DiffCorr} \begin{figure*} \centering \includegraphics[width=\linewidth, angle=0]{Plots/Diff_Corr_Grid_v5_m0067.pdf} \caption{Example differential corrections, which are applicable for MILES star m0067 (=HD010700: T$_\textrm{eff}$=5264{\small K}, log g=4.36, [Fe/H]=-0.50, [Mg/Fe]=0.40). The left panel compares the resulting spectra of theoretical enhanced (or deficient) ($M_{T(\alpha=x)}$) and theoretical base $M_{TB}$ stars. In these plots ($\alpha$=x) is short for ([$\alpha$/Fe]=x). Flux Density is in units of erg/s/$\textrm{cm}^2$/\AA. The right panel shows the resulting differential correction ($\textrm{DC}_{(\alpha=x)}$), derived from equation~(\ref{DiffCorrEq}), for each of the output [$\alpha$/Fe] abundances. Note that for this star, the differential correction for [$\alpha$/Fe]=0.40 is 1, because the empirical MILES star is already at [Mg/Fe]=0.40.} \label{ATK_DiffCorr_m0067} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth, angle=0]{Plots/Diff_Corr_Grid_v5_m0923.pdf} \caption{Example differential corrections, which are applicable for MILES star m0923 in the globular cluster M3 (=M3 IV 25: T$_\textrm{eff}$=4367{\small K}, log g=1.27, [Fe/H]=-1.34, [Mg/Fe]=0.30). This empirical MILES star has an [Mg/Fe] value of 0.3. The left panel compares the resulting spectra of the theoretical enhanced (or deficient) ($M_{T(\alpha=x)}$) and theoretical base $M_{TB}$ stars. In these plots ($\alpha$=x) is short for ([$\alpha$/Fe]=x). Flux Density is in units of erg/s/$\textrm{cm}^2$/\AA. The right panel shows the resulting differential correction ($\textrm{DC}_{(\alpha=x)}$), derived from equation~(\ref{DiffCorrEq}), for each of the output [$\alpha$/Fe] abundances.} \label{ATK_DiffCorr_m0923} \end{figure*} Python routines performed the division of the flux of the enhanced (or deficient) spectra by that of the base spectra, described in equation~(\ref{DiffCorrEq}), for each wavelength. In equation~(\ref{DiffCorrEq}), $\alpha$ indicates the [$\alpha$/Fe] abundance of the sMILES star that will be produced if the differential correction is applied to the empirical MILES star. Two example sequences of differential corrections are shown for MILES stars m0067 (T$_\textrm{eff}$=5264{\small K}, log g=4.36, [Fe/H]=$-$0.50, [Mg/Fe]=0.40) and m0923 (T$_\textrm{eff}$=4367{\small K}, log g=1.27, [Fe/H]=$-$1.34, [Mg/Fe]=0.30) in Figures~\ref{ATK_DiffCorr_m0067} and ~\ref{ATK_DiffCorr_m0923}, respectively. As shown, the differential correction is smallest for abundance patterns closest to the measured value of the empirical star. The largest differential corrections are found for wavelengths below $\sim4500\,${\AA}. This is likely due to the many strong metal-line and molecular features that increasingly accumulate below $\sim4500\,${\AA}, such as the G-band, Ca H\&K, CH and CN contributions.
\cite{Coelho05} (their figure 16) shows the increasing contributions from atomic lines and certain molecular bands at these shorter wavelengths. Figure 2 of our supplementary data demonstrates the effect of metallicity at these shorter wavelengths in our own models. The effects of [$\alpha$/Fe] enhancements shown here are in agreement with previous works (e.g. \cite{Cassisi2004}, their figure 2). Another noticeable feature in the corrections is present around the Mg$_\textrm{b}$ index region, which again increases as the [$\alpha$/Fe] abundances differ from the measured abundance of the empirical star. The appropriate differential correction was then applied to the corresponding empirical star spectrum via equation~(\ref{sMILESCalcEq}). The result was 801 spectra in each of the five [$\alpha$/Fe] bins, with a wavelength coverage of $3540.5 - 7409.6\,${\AA}, sampled at $0.9\,${\AA}. \begin{figure*} \centering \includegraphics[width=140mm, angle=0]{Plots/sMILES_801_Library_v4_Bensby.pdf} \caption[Final sMILES stellar library produced in this work in $\alpha$ abundance and metallicity, in comparison to the empirical MILES library] {Final semi-empirical MILES (sMILES) stellar library coverage in the [$\alpha$/Fe] vs [Fe/H] plane. The coloured points that lie in horizontal lines represent the families of 801 sMILES stars and black points represent the corresponding 801 empirical MILES stars, with black squares representing those stars with [Mg/Fe] values estimated from a Milky Way relation derived in \cite{Bensby2014} and black circles representing the 752 MILES stars for which \cite{Milone2011} provides [Mg/Fe] values. T$_\textrm{eff}$, log g and [Fe/H] values were taken from \cite{Cenarro07}.} \label{sMILES_Library} \end{figure*} To summarise the sMILES library, we plot the locations of sMILES stars in the [$\alpha$/Fe] vs [Fe/H] plane in Figure~\ref{sMILES_Library}, to show the final coverage in these parameters, together with the 801 empirical MILES stars, which show the well-known distribution of abundances for stars in the local solar neighbourhood. Each horizontal coloured line represents a family of 801 sMILES stars at a given [$\alpha$/Fe]. Similar figures are provided in the supplementary material to show the coverage of sMILES stars in the [$\alpha$/Fe] vs T$_\textrm{eff}$ and log g planes. Next, we test our new theoretical spectra and differential corrections against existing observed spectra from different empirical libraries. \section{Testing Model Spectra and Differential Corrections against Real Stars} \label{sec:TestingObs} As indicated in Section~\ref{sec:sMILESstars}, it is difficult to test the full range of our sMILES library, because not all such parameter combinations can be found in nearby stars (see Figure~\ref{sMILES_Library}). The fact that a wider range of abundance parameter combinations do appear to exist elsewhere in the Universe (e.g. in dwarf spheroidals and giant ellipticals) is the reason why we wished to generate these sMILES spectra. However, we can do some limited tests. We first compare our theoretical grid to empirical MILES stellar spectra. Then we compare our theoretical spectra and differential corrections to spectra selected from the empirical MaStar stellar library. \subsection{MILES Comparisons} \begin{figure*} \centering \includegraphics[width=147mm, angle=0]{Plots/ATK_vs_MILES_5stars_v2.pdf} \caption{Comparison of ATK model and empirical MILES stars for m0009, m0059, m0077, m0117, m0317.
MILES star parameters from \protect\cite{Cenarro07} and \protect\cite{Milone2011} are given in each panel. Spectra are degraded to $2.5\,${\AA}, sampled at $0.9\,${\AA} and normalised to unity area. ATK (red lines) and MILES (black lines) spectra are scaled up by a factor of 2000 and shifted onto the plots. Ratios between ATK models and MILES stars are given in the lower panel of each plot (green lines), with no scaling or shifting applied. The vertical axes on the plots are to scale for the ratios between ATK models and MILES stars. The 1:1 agreement between ATK models and MILES empirical stars is represented as a dashed horizontal line.} \label{ATK_vs_MILES_5stars} \end{figure*} \begin{figure*} \centering \includegraphics[width=150mm, angle=0]{Plots/ATK_vs_MILES_LickIndices_v4.pdf} \caption{Comparison of Lick indices predicted by our interpolated MILES models (labelled ATK) to empirical MILES stars. This is for indices that are measured in {\AA}, including H$\beta_{\textrm{o}}$ from \protect\cite{Cervantes09}, and for five examples of stars present in the MILES library. Lick indices are labelled for illustration and the 1:1 agreement between model and observation is plotted as a dashed horizontal line. Disagreements between models and observations are generally larger in the blue. RMS scatter (in {\AA}) about the 1:1 agreement line is given for each star.} \label{ATK_vs_MILES_Lick} \end{figure*} Although the MILES stars will reflect the Milky Way abundance pattern, checks can still be made to test the model grid in various parts of parameter space. To compare models directly with MILES stars, we use the theoretical MILES base stars, generated through quadratic interpolations within the model grids, as described in Section~\ref{sec:sMILESstars}. In Figure~\ref{ATK_vs_MILES_5stars}, we show comparisons of these models to MILES stars for various star types, specifically with varying metallicities and [Mg/Fe] values. The cool stars show increasingly larger differences below about $4200\,${\AA} (e.g. m0059, m0117). The sharp cores of the H$\alpha$ lines are not well reproduced in any of the theoretical spectra. Balmer lines in general are poorly fit for the higher temperature stars (e.g. m0317). Ca H\&K lines are stronger in the theoretical models for cool stars than in the MILES stars (e.g. m0059, m0117). The coolest star model (m0059) also shows a mismatch in the red, with molecular features stronger in the theoretical model compared with the MILES star. These results are in agreement with the findings of \cite{Knowles19}, with differences between observations and models identified for cool stars. In Figure~\ref{ATK_vs_MILES_Lick} we also show the differences between predicted Lick indices for our interpolated MILES models and the equivalent empirical MILES stars. In general, the agreement between models and MILES stars is worst at the bluer wavelengths of the MILES range, with reasonable agreement found above $\sim4500\,${\AA}. This is as expected from previous direct comparisons, which have also shown wavelength-dependent disagreements between theoretical models and observed spectra (e.g. \citealt{Martins07}; \citealt{Bertone08}; \citealt{Coelho2014}; \citealt{Villaume17}; \citealt{Allende18}; \citealt{Knowles19}). The models tested here are generated using versions of ATLAS; therefore, spherical geometry and non-LTE effects have been ignored. These assumptions may explain the lack of agreement between models and observations, particularly for cool stars.
The absolute effect of spherical geometry, in the form of convection, on Balmer lines can be large, resulting in differences between 3{\small{D}} LTE and 1{\small{D}} LTE temperature estimates of late-type stars of up to $\approx$200{\small{K}} (Table 4 of \citealt{Amarsi2018}). Balmer lines modelled under LTE conditions are known to match the line wings, but cannot reproduce the cores of the lines (e.g. figures 5 and 6 in \citealt{Amarsi2018} and section 4.2 in \citealt{Martins07}). However, the effect of non-LTE in the cooler temperature regimes tested here is smaller than the 3{\small D} effects, particularly for higher order Balmer features (Table 4 of \citealt{Amarsi2018}). Generally, non-LTE effects become more important in the very lowest and highest temperature stars, in addition to very metal poor stars or those with low surface gravity (e.g. \citealt{Hauschildt99a}; \citealt{Martins2005}; \citealt{Hansen2013} and references therein). The disagreement between model and observed hydrogen line indices (see Figure~\ref{ATK_vs_MILES_Lick}) may also be partly explained by the presence of chromospheres, which can reduce the absorption or even produce emission in the cores of Balmer lines (e.g. \citealt{Leenaarts12}), and by limitations in the atomic data in the region. The Balmer lines in cool stars can be weak and the region could be affected by poorly-known, uncalibrated metal lines. We highlight again here that in the generation of sMILES stars (Section~\ref{sec:sMILESstars}), we use the models in a differential sense only. In the application of predictions of these models, we have shown that using the models' differential predictions of abundance pattern effects produces a better agreement with observations than using the absolute predictions, particularly at bluer wavelengths (\citealt{Knowles19}, their figure 11). The differential predictions of some hydrogen features are scattered by a factor of $\sim$2 less than the absolute predictions, and a large reduction in scatter between the two approaches is also seen in the G4300 and C$_{2}$4668 indices. Another potential source of disagreement between models and observations here is any abundance differences other than [$\alpha$/Fe], such as C and N, which might affect the empirical stars but are not changed from scaled-solar in the interpolated models. C and N have quite a large effect on the spectra, particularly in the blue (see Response Tables of \citealt{Knowles19}). Future improvements would involve modelling more individual elements in the theoretical models and more accurate measurements of their abundances in empirical stellar spectral libraries. Next, we test our theoretical models against a more recent set of observations. \subsection{MaStar Comparisons} A recent large survey of stars in our Solar neighbourhood is that of the MaStar empirical stellar spectral library (\citealt{Yan2019}). These spectra, covering $3622-10354\,${\AA}, were observed using the BOSS spectrograph on the 2.5m Sloan telescope at Apache Point Observatory. They obtained good quality spectra for 3321 stars, with spectral sampling of $\Delta\log(\lambda\,$(\AA)$)=1\times10^{-4}$, corrected to rest-frame vacuum wavelengths and flux calibrated, but uncorrected for foreground Galactic extinction. The spectral resolution varies with wavelength, and between observations, as shown in figure 10 of \cite{Yan2019}.
Typically, the spectral resolution of the MaStar observations is $\sim3\,${\AA} (FWHM) at wavelengths up to $\sim6000\,${\AA}, and increases non-linearly to $\sim5\,${\AA} (FWHM) at the reddest wavelengths. There are 1589 MaStars with [$\alpha$/Fe] measurements, in addition to [Fe/H], $\textrm{T}_{\textrm{eff}}$ and log g measurements, available from their input stellar parameter catalogues from the APOGEE, SEGUE and LAMOST surveys (see \citealt{Yan2019} for details). These stellar parameter measurements for 1589 stars make the MaStar spectral catalogue a potentially useful resource for comparing with our theoretical star spectra, independently of the MILES stellar library. Therefore we compare MaStar spectra, extracted from the MaStar good spectral catalogue\footnote{\url{https://data.sdss.org/sas/dr16/manga/spectro/mastar/v2\_4\_3/v1\_0\_2}}, with our new theoretical star spectra. \begin{table} \caption{Selection parameters showing values for four theoretical stars and ranges about those values (last row) for selection of observed stars from the MaStar good spectral catalogue, with good quality flag MJDQUAL=0. Cool giant (CG) and cool dwarf (CD) stars are listed. CG\_e and CD\_e are more enhanced cool giant and cool dwarf stars.} \begin{tabular}{llllll} \hline \multicolumn{4}{l}{VALUES AND (RANGES)} & \multicolumn{2}{l}{OBSERVED STARS}\\ T$_\textrm{eff}$ & [Fe/H] & log g & [$\alpha$/Fe] & Number and & Number\\ ({\small K}) & (dex) & (dex) & (dex) & Type of Stars & of Spectra\\ \hline 4750 & -0.4 & 2.5 & +0.05 & 4 CG & 12\\ & & & +0.20 & 6 CG\_e & 12\\ & & 4.5 & +0.05 & 4 CD & 15\\ & & & +0.20 & 7 CD\_e & 19\\ ($\pm$100) & ($\pm$0.1) & ($\pm$0.2) & ($\pm$0.06) &&\\ \end{tabular} \label{Example MaStars} \end{table} To investigate the effects of individual parameters, we selected groups of MaStars that lie within small errors from specific theoretical stars. Errors on abundance parameters ([Fe/H] and [$\alpha$/Fe]) are large for any one star, typically $\sim\pm$0.05 to $\pm$0.1 dex (e.g. for SEGUE spectra in \citealt{Lee2011}), plus uncertain systematic errors. By selecting groups of similar stars we aim to reduce the uncertainty in their average abundances. The parameters chosen were guided by the wish to test differential effects of [$\alpha$/Fe]. This constraint limits the parameter space from which we can select groups of stars in the Solar neighbourhood, because the range of [$\alpha$/Fe] is small at any given value of [Fe/H]. In \cite{Yan2019}, their figure 13, we see that the best place to look for groups of similar stars is at the slightly sub-solar metallicity of [Fe/H]$\sim-0.4$, where there is a group of stars at [$\alpha$/Fe]$\sim$+0.05 and another group at [$\alpha$/Fe]$\sim$+0.2, which we hereafter refer to as the enhanced group. We selected cool MaStars ($\textrm{T}_{\textrm{eff}}\sim$4750{\small K}) with values and ranges detailed in Table~\ref{Example MaStars}, around these abundances, and sampled two values of log g. Four theoretical spectra with parameters given in Table~\ref{Example MaStars} were created, using FER\reflectbox{R}E interpolation of the model grids as elsewhere in this paper. Although the MaStar spectra are flux calibrated, they show variations between multiple observations of the same star that need to be removed in order to make the comparisons with our theoretical spectra. We chose a weighting for the continuum fit that would de-emphasise the absorption features.
Therefore, the spectra were processed as follows, using Python code and IRAF routines: \begin{itemize} \item{Flattened by division of a fourth order Legendre polynomial fit, weighted by flux squared (MaStar and theoretical spectra; a code sketch of this step is given below).} \item{Smoothed to $3\,${\AA} FWHM resolution (theoretical spectra), to approximately match the MaStar resolution.} \item{Converted to air wavelengths (MaStar spectra), so that all spectra are on the same wavelength scale.} \item{Binned to $1.0\,${\AA} linear bins (MaStar and theoretical spectra).} \end{itemize} \begin{figure*} \centering \vspace{-0.85cm} \subfloat{\includegraphics[width=96mm, angle=0]{Plots/Spectra_CG_Ldiv_d.pdf}} \subfloat{\hspace{-0.9cm}\includegraphics[width=96mm, angle=0]{Plots/Spectra_CG_e_Ldiv_d.pdf}} \vspace{-0.85cm} \subfloat{\includegraphics[width=96mm, angle=0]{Plots/Spectra_CD_Ldiv_d.pdf}} \subfloat{\hspace{-0.9cm}\includegraphics[width=96mm, angle=0]{Plots/Spectra_CD_e_Ldiv_d.pdf}} \caption{Flattened spectra of our theoretical stars (blue line) overlaid on those of empirical MaStar spectra (multiple coloured thin lines), for the four spectral types listed in Table~\ref{Example MaStars}. The upper two panels show cool giant stars at [$\alpha$/Fe]=+0.05 (left panel) and [$\alpha$/Fe]=+0.20 (right panel). The lower two panels show cool dwarf stars at [$\alpha$/Fe]=+0.05 (left panel) and [$\alpha$/Fe]=+0.20 (right panel).} \label{ATK_vs_MaStar_Spec} \end{figure*} \begin{figure*} \centering \includegraphics[width=125mm, angle=0]{Plots/Differentials_plot_CGstars_air.pdf} \includegraphics[width=125mm, angle=0]{Plots/Differentials_plot_CDstars_air.pdf} \caption{Top plot: Differential enhancements in cool giant stars, shown by flux density ratios of enhanced to less-enhanced flattened star spectra for our theoretical stars (dark blue lines, labelled ATK ratio) and for averaged cool giant MaStars (orange lines, labelled MaStar ratio). The lower panel (black line) shows the division of these ratios, highlighting residual mismatches between theory and observations in their differential changes due to [$\alpha$/Fe] enhancements. Bottom plot: The same, but for cool dwarf stars.} \label{ATK_vs_MaStar_DiffCorr} \end{figure*} In Figure~\ref{ATK_vs_MaStar_Spec} we show the resultant theoretical star spectrum (dark blue line), overlaying the corresponding MaStar spectra (multiple coloured, thin lines), for each of the four star types listed in Table~\ref{Example MaStars}. Figure~\ref{ATK_vs_MaStar_Spec} shows that the spectral structures agree well, after flattening, and the difference between giant (upper row) and dwarf (lower row) stars is clear, for both the theoretical and observed star spectra. These trends of deepening features around the magnesium band and sodium doublet lines in cool dwarfs are the same as seen in \cite{Knowles19_Thesis} (figures 3.13 and 3.14), whilst the near-IR calcium triplet lines go in the opposite sense, getting weaker at higher surface gravity, as in \cite{Knowles19_Thesis} (figure 3.16). Any differences in the spectra due to [$\alpha$/Fe] are more subtle. Therefore, to try to illustrate any such differences, Figure~\ref{ATK_vs_MaStar_DiffCorr} shows the ratio of enhanced to less-enhanced spectra for averaged MaStar spectra (orange line) of a given type ($\textrm{T}_{\textrm{eff}}$, [Fe/H] and log g) and compares this with the same ratio for the theoretical star spectra (dark blue line). These divisions of spectra represent differential corrections to go from less-enhanced to enhanced spectra.
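For reference, the flattening step listed above can be sketched as follows. This is a minimal illustration (our actual processing used a combination of Python and IRAF routines, and the function name is ours):
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

def flatten(wav, flux):
    # Fit a fourth-order Legendre polynomial with weights proportional
    # to the flux squared (de-emphasising absorption features) and
    # divide the spectrum by the fit.
    cont = legendre.Legendre.fit(wav, flux, deg=4, w=flux**2)
    return flux / cont(wav)
\end{verbatim}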
The division of these ratios is shown in the lower panels of each plot in Figure~\ref{ATK_vs_MaStar_DiffCorr}. In Figure~\ref{ATK_vs_MaStar_DiffCorr}, some differential features due to [$\alpha$/Fe] changes are qualitatively followed in both the theoretical and observed stars, particularly at short wavelengths, where large changes due to abundance pattern variations are seen (e.g. \citealt{Cassisi2004}, their figure 2; \citealt{Sansom2013}, their figure 4; also Figures~\ref{ATK_DiffCorr_m0067} and ~\ref{ATK_DiffCorr_m0923} of this work). However, specific features, such as the region around Mg$_{\textrm{b}}$, show the expected differential behaviour in the theoretical ratio, but this is not well followed by the observed ratio, particularly for the CD stars. A lack of agreement between different SSP models is also found in this broad spectral region, as illustrated in the recent paper by \cite{Liu2020}, and might be due to uncertainties in MgH molecular band contributions that are particularly important in cool stars. The MaStar spectra that we compare our star spectral models to are also likely to suffer from residual continuum differences, due to the way that we have had to flatten the spectra in order to compare them with our models. Quantitatively, for CG stars, the root-mean-square scatters about unity for the three ratios shown in Figure~\ref{ATK_vs_MaStar_DiffCorr} are: RMS=0.0097, 0.0175 and 0.0141 for the ATK ratio, MaStar ratio and (ATK ratio/MaStar ratio), respectively. For CD stars, the corresponding values are: RMS=0.0137, 0.0160 and 0.0145 for the ATK ratio, MaStar ratio and (ATK ratio/MaStar ratio), respectively. These values exclude the first and last $200\,${\AA}, where the continuum fits deviate most. The reductions in RMS values on dividing the two ratios (ATK ratio/MaStar ratio) indicate that the MaStar differential enhancements partially follow the theoretical differential enhancements, but not completely, for both CG and CD stars. Some of the residual mismatches are due to noise in the MaStar data and errors in their abundance estimates. The MaStar CD stars, selected to have the same parameters, show quite a wide range of spectral shapes around the Mg molecular bands and systematic deviations from the theoretical spectrum (Figure~\ref{ATK_vs_MaStar_Spec}, lower panels), suggestive of errors on the [$\alpha$/Fe] measurements of some of those MaStars. This test illustrates the difficulty in testing our theoretically predicted spectral ratios against observations of real stars. The [$\alpha$/Fe] enhancement range available (+0.05 to +0.20 dex) is not much larger than typical errors on [$\alpha$/Fe] enhancements ($\sim\pm$0.1 dex). Large [$\alpha$/Fe] enhancement variations at a given metallicity are not available in the empirical stellar libraries of stars in our Galaxy. Given the limitations of empirical star datasets, our match to observed stars seems reasonable, as shown in Figure~\ref{ATK_vs_MaStar_Spec}, for the MaStars selected to be of similar types. In future work, improved versions of the MaStar library, with uniform spectral resolution and consistent parameter measurements (rather than heterogeneous ones from the literature, as in \citealt{Yan2019}), may help to improve these comparisons between theoretical and observed star spectra. \section{Summary} \label{sec:Summary} This work presents new theoretical and semi-empirical stellar spectral libraries, useful for the analysis of stars and stellar populations.
First, a new high resolution ($\mathrm{R}\sim10^5$) library of theoretical stellar spectra was created, covering a range in stellar parameters including effective temperature, surface gravity and metallicity ($-$2.5$\leq$[M/H]$\leq$+0.5), and covering abundance ratios for $\alpha$-elements ($-$0.25$\leq$[$\alpha$/M]$\leq$+0.75) and carbon ($-$0.25$\leq$[C/M]$\leq$+0.25) (where [M/H]=[Fe/H]). This new library covers the parameter ranges of a large proportion of the empirical MILES stars. To minimise the number of models generated, we used an analytical representation of microturbulent velocity as a function of effective temperature and surface gravity, based on observational trends found in the literature. These models were generated with consistent abundances of [M/H], [$\alpha$/M] and [C/M] in both their model atmosphere and spectral synthesis components. Existing opacity distribution functions from the APOGEE project were used to create the model atmospheres, and the radiative transfer was carried out using the ASS$\epsilon$T code in one dimension, assuming LTE. Kurucz atomic and molecular transitions were included as described in Section~\ref{sec:ModelSpectra}; however, to reduce computation time, TiO was excluded for spectra with T$_{\textrm{eff}}>6000${\small K}, because it has a negligible effect on stars at these higher temperatures. The resulting theoretical spectra cover a wavelength range from 1680 to $9000\,${\AA}, with linear sampling of $0.05\,${\AA} per pixel, and are publicly available (see Data Availability Section). Comparisons of our new theoretical library with published theoretical spectra from \cite{Allende18} showed good agreement, with small residuals mainly at $\lambda<3000\,${\AA}. Comparisons with PHOENIX models (\citealt{Husser2013}) showed values of Lick indices that generally agreed within typical observational uncertainties on their measurements ($\sim \pm0.1$\,{\AA}), apart from the C$_2$4668 and Mg$_\textrm{b}$ indices. We note here that both our models and those of PHOENIX predict a negative change in C$_2$4668 and a positive change in Mg$_\textrm{b}$ for $\alpha$ enhancements (see top panel of Figure~\ref{fig:ATK_vs_PHOENIX_Lick_Giant}), and therefore both sets of models produce improvements over not considering [$\alpha$/Fe] differential corrections in $\alpha$-enhanced population models. Potential reasons for differences in model predictions for these indices lie in the geometry of the underlying atmospheres, as discussed in Section~\ref{ATK_vs_PHOENIX}. Comparing our theoretical spectra directly with MILES empirical spectra highlighted their absolute differences, particularly at bluer wavelengths. Differences are known to be significant between theoretical and empirical star spectra, which is why we have created a library of semi-empirical stellar spectra. Limitations of theoretical models can be explored in future with these new grids. A differential approach was taken to create a library of semi-empirical stellar spectra covering a range in [$\alpha$/Fe]. Differential corrections were derived from the theoretical grid and applied to empirical star spectra from the MILES library. The resulting grid of semi-empirical (sMILES) model spectra is at the MILES sampling, resolution and wavelength coverage. This library consists of 5 families of 801 semi-empirical star spectra, for [$\alpha$/Fe] abundances from $-$0.2 to +0.6 in steps of 0.2 dex.
Figure~\ref{sMILES_Library} illustrates the output parameter sampling and coverage, extending the abundance ratios to regions that can be used to model integrated populations from dSphs to giant elliptical galaxies. Tests of our new theoretical library against empirical stars from the new MaStar library showed good overall agreement when comparing continuum divided spectra. We tested our predicted differential corrections for [$\alpha$/Fe] variations against ratios of selected cool stars in the MaStar library and found that abundance ratio effects were partially reflected in both, but that cool dwarfs showed a larger range of spectral shapes around the Mg molecular band features. Such tests of our predicted differential corrections are currently limited by the small range in [$\alpha$/Fe] at each metallicity for observed stars in our Galaxy, and by the heterogeneous nature of the MaStar characterisations. Therefore, improved testing awaits better characterisation of MaStar [$\alpha$/Fe] abundances. Versions of the theoretical and sMILES libraries will be made available on the MILES website for public use. \section*{Acknowledgements} The authors would like to thank the STFC for providing ATK with the studentship for his PhD studies as well as the IAC for providing the support and funds that allowed ATK to visit the institute on two occasions. AES and AV acknowledge travel support from grant AYA2016-77237-C3-1-P from the Spanish Ministry of Economy and Competitiveness (MINECO). AV acknowledges support from grant PID2019-107427GB-C32 from The Spanish Ministry of Science and Innovation. CAP thanks MICINN for grant AYA2017-86389-P. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. We also thank the anonymous referee for their comments and suggestions that have greatly improved the clarity and content of this work. 
\section*{Data Availability} The new theoretical stellar spectral library, at fixed spectral sampling, presented in this article is publicly available on the UCLanData repository at {\url{https://uclandata.uclan.ac.uk/178/}}. The new semi-empirical stellar spectral library will be made publicly available on the MILES website at {\url{http://miles.iac.es/}}. \bibliographystyle{mnras}
\section{\label{sec:intro}Introduction} Fast and accurate transmission of quantum states through communication or computation networks is a critical objective for quantum technologies~\cite{bennett_quantum_2000,divincenzo_physical_2000,kimble_quantum_2008}. Proposed schemes to achieve this goal consider engineered couplings between the network sites~\cite{bose_quantum_2003,christandl_perfect_2004,albanese_mirror_2004,wojcik_unmodulated_2005,di_franco_perfect_2008,mohseni_environment-assisted_2008,apollaro_99-fidelity_2012,sinayskiy_decoherence-assisted_2012,korzekwa_quantum-state_2014,estarellas_robust_2017}, external fields~\cite{shi_quantum-state_2005,hartmann_excitation_2006,banchi_optimal_2010,shan_controlled_2018}, weak measurements~\cite{he_robust_2013,man_controllable_2014}, or transport in noisy environments of biological or synthetic systems~\cite{zwick_optimized_2014,li_one-way_2019,matsuzaki_one-way_2020,vieira_almost_2020,PhysRevResearch.2.013369}. Such methods are challenging to implement in practice for quantum entanglement transfer, due to quantum decoherence and disorder~\cite{de_chiara_perfect_2005,ashhab_quantum_2015}. Pretty-good state transfer (PGST) can be achieved on dual-spin chains~\cite{DualRail}, on spin chains with weakly coupled endpoints \cite{wojcik_unmodulated_2005,PhysRevA.76.052328,Banchi_2011,PhysRevA.78.022325,Giampaolo_2010}, and with projective measurements~\cite{measurementAssist} in quantum walk (QW) schemes. The continuous-time quantum walk (CTQW) is a paradigmatic model of quantum transport~\cite{MULKEN201137,alg}. Both discrete-~\cite{zhan_perfect_2014,stefanak_perfect_2016} and continuous-time~\cite{kendon_perfect_2011,large_perfect_2015,Cameron2014,Connelly2017} quantum walks have been discussed for PGST. A CTQW can be made one-way by taking complex-valued couplings, which is called a chiral quantum walk (CQW)~\cite{Zimbors2013,PhysRevA.93.042302,Wong2015}. Chirality emerges due to the breaking of time-reversal symmetry (TRS)~\cite{liu_quantum_2015}, and it provides a significant boost to transport speed~\cite{Zimbors2013}. High-dimensional entanglement (entanglement in high-dimensional degrees of freedom, such as spatial path modes) is advantageous in quantum communication~\cite{Cozzolino2019} and quantum superdense coding~\cite{6Liu2002,7Grudka2002,8Hu2018}. Perfect state transfer (PST) in spin chains paves the way for the creation of the entangled states and logic gate structures required for quantum computation and quantum information~\cite{kay_review, Kay2005}. High-dimensional entangled states can be produced by repeatedly generating entanglement in a low-dimensional system and transferring it to a higher dimensional one~\cite{Giordani2021}. Our first goal is to explore whether CQW can be used to transmit two-dimensional quantum entangled states, and what possible advantages it may offer. In addition, we ask if, and to what extent, CTQW can be used in place of CQW with the same chiral properties. CTQW can be easier to implement than CQW. To that end, we identify the underlying physics of the chiral nature of QW in terms of quantum path interference, which can be controlled either via the phase in the initial state for CTQW, or via the phase in the complex hopping coefficients in CQW. We specifically consider CQW on a linear spin chain of equilateral triangles, as shown in Fig.~\ref{fig:triChain}, which is the simplest graph that allows for so-called probability time symmetry (PTS) breaking~\cite{PhysRevA.93.042302}.
A walker can transfer from one node to any neighboring site on the triangular plaquette by passing through either one or two edges. We consider a uniform complex coupling between the nearest-neighbor sites. Due to the path-length difference between routes with odd and even numbers of edges traveled, and the phases of the complex couplings, interference can enhance the transfer rate. Path interference in the context of quantum walks means that the relative phase between the different trajectories a particle can traverse from one site to another can give constructive or destructive interference effects in the site-to-site probability transfer. By using special graph topologies, complex-valued site-to-site couplings, or specific initial states, one can use path interference to break the PTS. CTQW can only utilize the latter (initial states with specific phases) to exploit quantum interference, while CQW can use both the freedom to choose the initial state and complex hopping coefficients to break PTS. Spatial entanglement can be defined in the site basis for quantum walks \cite{spatialEntanglementQW, singleParticleEnt}. We assume a particle (we refer to a spin excitation as a particle) in a Bell-like spatially entangled state of two sites is injected into the chain from the left. The quality of the transfer is examined by explicitly calculating the density matrix, concurrence~\cite{PhysRevLett.80.2245}, fidelity, and Bures distance~\cite{Jozsa1994,Hbner1992,Hbner1993}. We have also numerically confirmed that the entanglement transfer time scales linearly with the chain size \cite{tony1, tony2, tony3}. The triangular chain lattice can be realized in superconducting circuits~\cite{Vepslinen2020,Ma2020}, trapped ions~\cite{trappedIonChain}, NMR systems~\cite{PhysRevA.93.042302}, photonic and spin waveguides~\cite{PhysRevA.93.062104}, and in optical lattices~\cite{PhysRevLett.93.056402}. In the case of optical lattices, complex edge weights could be introduced with the help of artificial gauge fields~\cite{bloch_many-body_2008,aidelsburger_artificial_2018}, nitrogen-vacancy centers in diamond~\cite{levi}, or plasmonic non-Hermitian coupled waveguides~\cite{Fu2020}. This paper is organized as follows. We introduce the CQW on a triangular chain model by presenting the adjacency matrix, and present the associated Hamiltonian model with complex hopping rates, in Sec.~\ref{Sec:Model}. Our results are given in Sec.~\ref{Sec:Results} in five subsections. PTS breaking and entanglement transfer in CTQW and CQW on a triangular chain are discussed in Sec.~\ref{Sec:PTSB-CTQW} and Sec.~\ref{Sec:PTSB-CQW}, respectively. We conclude in Sec.~\ref{Sec:Conclusion}. \section{CQW on a triangular chain}\label{Sec:Model} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{fig1.png} \caption{\label{fig:triChain} Graph of a linear chain with $N$ vertices arranged in triangular plaquettes. Initially, the pair of sites $1$ and $2$ at the left end of the chain are entangled. The entangled state is transported to the rightmost pair of vertices ($N-1$ and $N$) after a chiral quantum walk. Arrows indicate the directed edges with complex weight factors, taken to be $+i$. In the opposite direction, the weight factors change phase and become $-i$.
We have also examined transferring three-site entanglement from the leftmost plaquette (sites $1,2$ and $3$) to the right end of the chain.} \end{figure} Typical quantum walks exhibit time-reversal symmetry (TRS) in the transfer probabilities between sites $n$ and $m$ in forward ($t$) and backward ($-t$) times, such that $P_{nm}(t) = P_{nm}(-t)$. CQWs break the TRS and allow for so-called ``directionally biased'' transport, $P_{nm}(t) \neq P_{mn}(t)$, in certain graph structures~\cite{PhysRevA.93.042302}. We consider CQW on a triangular chain of $N$ vertices, as shown in Fig.~\ref{fig:triChain}, which is a minimal configuration with PTS breaking for a quantum walk with a directional bias~\cite{PhysRevA.93.042302}. We will use the site basis $\{|i\rangle\}$, with $i=1,...,\text{N}$ indicating which site is occupied, such that $|i\rangle:=|0_1,0_2,..,1_i,..0_\text{N}\rangle$. The set of coupled sites in a graph determines the edges $e=(i,j)$, which can be described by the so-called adjacency matrix $A$~\cite{Kempe2003,Sett2019, PhysRevA.58.915,Childs2002}. For a triangular chain of $N=5$ sites, $A$ is given by \begin{equation}\label{eq:A} A= \begin{bmatrix} 0&1&1&0&0\\ 1&0&1&1&0\\ 1&1&0&1&1\\ 0&1&1&0&1\\ 0&0&1&1&0\\ \end{bmatrix}. \end{equation} Together with the degree matrix $D$ for the self-edges $(i,i)$ (for definitions of some graph theory terms, see Appendix~\ref{sec:footNotes}), $A$ determines the graph Laplacian $L=D-A$. The Hamiltonian of the walk is given by the Hadamard product mentioned in Appendix~\ref{sec:footNotes}, $H=J\circ L$, where $J$ is the matrix of hopping rates (edge weights). We neglect the self-energies; therefore, we will take $D=0$ and write the Hamiltonian as \begin{equation}\label{eq:H} H=\sum_{nm}(J_{nm}A_{nm}|n\rangle\langle m|+J_{mn} A_{mn}|m\rangle\langle n|). \end{equation} In contrast to CTQW, where every $J_{nm}$ is real-valued, CQW allows for complex edge weights, subject to $J_{mn}=J_{nm}^\ast$, so that the support graph of the walk becomes a directed one (cf.~Fig.~\ref{fig:triChain}). Specifically, we take \begin{equation} \label{eq:Hcqw} H= \begin{bmatrix} 0&-\text{i}&-\text{i}&0&0\\ \text{i}&0&-\text{i}&-\text{i}&0\\ \text{i}&\text{i}&0&-\text{i}&-\text{i}\\ 0&\text{i}&\text{i}&0&-\text{i}\\ 0&0&\text{i}&\text{i}&0\\ \end{bmatrix}. \end{equation} The choice of the phase $\theta_{nm} =\theta=\pi / 2$ of the complex hopping weights $J_{nm}=|J_{nm}|\exp(\text{i}\theta_{nm})$ (with $n>m$) is based upon the general investigations of CQW for a triangular plaquette~\cite{Zimbors2013}. It was found that the maximum bias in time asymmetry is obtained at $\theta =\pi / 2$~\cite{Zimbors2013}. Remarkably, the spectrum of the Hamiltonian with a phase of $\pi/2$ has an anti-symmetric structure \begin{equation}\label{eq:specT} \begin{split} \Lambda_{1,2,3,4,5}&= (-\sqrt{\frac{1}{2} (7 + \sqrt{37})}, -\sqrt{\frac{1}{2}(7 - \sqrt{37})},0,\\ &\sqrt{ \frac{1}{2} (7 - \sqrt{37})}, \sqrt{\frac{1}{2}(7 + \sqrt{37})}). \end{split} \end{equation} We intuitively assume that a similar choice should yield efficient entanglement transfer along a linear chain of equilateral triangles, too. We numerically examined different choices and verified that our intuition is correct (some typical results will be given in Sec.~\ref{Sec:Results}). The eigenstates corresponding to $\theta=\pi / 2$ are given in Appendix~\ref{sec:eigenstates}. The evolution of the initial state of the system, $\rho (0)$, is given by $\rho(t)=U \rho(0)U^\dag$, where $U:= \exp(-\text{i}Ht)$.
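For concreteness, a minimal numerical sketch of the $N=5$ chiral Hamiltonian of Eq.~(\ref{eq:Hcqw}) and the corresponding unitary evolution is given below; printing the eigenvalues provides a direct numerical check against the anti-symmetric spectrum of Eq.~(\ref{eq:specT}):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Adjacency matrix A of the N = 5 triangular chain
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]])

# Chiral Hamiltonian: weight exp(i*theta) on directed edges n > m
# and its complex conjugate in the opposite direction (theta = pi/2).
theta = np.pi / 2
H = np.exp(1j*theta)*np.tril(A) + np.exp(-1j*theta)*np.triu(A)

print(np.linalg.eigvalsh(H))  # check of the anti-symmetric spectrum

def evolve(rho0, t):
    # rho(t) = U rho(0) U^dagger, with U = exp(-i H t)
    U = expm(-1j * H * t)
    return U @ rho0 @ U.conj().T
\end{verbatim}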
We denote the site states of the chain by $\ket{1},\dots,\ket{N}$, labeled by the site numbers. As the initial state, we consider a spatially entangled state $\ket{\psi_\text{spatial}}=(\ket{1}-\text{exp}(i\phi)\ket{2})/\sqrt{2}$ of the leftmost sites of the chain, with relative phase $\phi$, and we aim to transfer this state to the right end of the chain. Remarkably, even initial superposition states on our linear triangular chain cannot yield perfect state transfer (PST) (cf.~Fig.~\ref{fig:TRS-CTQW}). Graphs that support PST are required to be Hermitian, circulant (Appendix~\ref{sec:footNotes}), and to have a non-degenerate spectrum together with a flat eigenbasis~\cite{Cameron2014}. Definitions of the graph theory terms we use are given in Appendix~\ref{sec:footNotes}. Alternatively, PST can still be achieved with a non-circulant graph that contains non-zero values on certain off-diagonal elements of its adjacency matrix~\cite{Connelly2017}. From a practical point of view, implementing graph structures that have PST is challenging because of the sophisticated and usually numerous special connectivities of these graphs. Therefore, creating a simpler graph with pretty good state transfer (PGST) can be more feasible in practice than using a graph with PST. Since our proposed adjacency matrix is neither circulant nor has the required non-zero off-diagonal elements, we do not expect any PST. A fundamental difference between CQW and CTQW regarding the directional bias lies in how the transport bias is introduced. In CQW, the directional bias emerges from differences in transition probabilities determined by the Hamiltonian, regardless of the initial state to be transported. In CTQW, directional symmetry breaking is sensitive to the phase differences in the initial particle state in the (spatial) site basis. Intuitively, there is an interplay of path interference and the initial phase in CTQW in breaking PTS. The significance of the difference between CQW and CTQW in such directionally biased entanglement transfer is the ability of CQW to break PTS for any initial condition, while directionally biased entanglement transfer in CTQW happens only for certain initial states. In this paper, we consider two time ranges to investigate the entanglement transfer dynamics. The first one, which we call the short-time regime, is used to probe the first maximum of the entanglement measure (concurrence) or of the success fidelity of the state transfer. The second case is called the long-time regime, allowing multiple scatterings of the particle at the ends of the chain. The latter case is used to probe whether more successful entanglement transfer is possible, at the cost of longer transfer times. \section{Results and Discussion} \label{Sec:Results} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{fig2.pdf} \caption{\label{fig:TRS-CTQW} Time dependence of the occupation probabilities \(P_{1,5}(t)\) of sites $1$ and $5$ of a triangular chain with $5$ sites, where a particle makes CTQW. For an initially localized particle with the initial state \(\ket{1}\), $P_1(t)$ and $P_5(t)$ are indicated by the solid black and dotted red curves, respectively. For a particle initially in the superposition state $\ket{\psi(0)}= (\ket{1}-\exp(\text{i}\phi)\ket{2})/\sqrt{2}$ with \(\phi=3\pi \slash 4\), $P_1(t)$ and $P_5(t)$ are shown as the dashed blue and dot-dashed orange curves, respectively.} \end{figure} We start with an examination of PTS breaking in CTQW on the triangular chain. We do not need complex edge weights to break PTS in general. 
The essential physical mechanism behind PTS breaking is quantum path interference, for which the required phase difference, or quantum coherence, can be injected into the initial state instead of the edges. Specifically, we consider a particle that is spatially entangled between two sites in the site basis, \begin{eqnarray}\label{eq:ctqw-phaseInitialState} \ket{\psi_{\text{spatial}}(0)} = \frac{1}{\sqrt{2}}(\ket{1}-\text{e}^{\text{i}\phi}\ket{2}) \end{eqnarray} on a regular triangular chain with real-valued hopping weights (we take $J_{nm}=1$ for simplicity). We want to transfer the input state to the end sites of the structure. Ideally, we would like to achieve the target state $\ket{\psi_{\text{target}}}=(\ket{4}-\text{exp}(\text{i}\phi)\ket{5})/\sqrt{2}$. To characterize the performance of the actual process, in addition to the fidelity of state transfer $|\bra{\psi_{\text{target}}}\ket{\psi (t)}|^2$, we quantify the entanglement transferred to the end of the chain (sites $4$ and $5$). As we have only a single excitation, the initial state in Eq.~(\ref{eq:ctqw-phaseInitialState}) evolves into a state of the form \begin{equation} \begin{split} \ket{\psi(t)}&=\left(A_1\ket{100}_{123}+A_2\ket{010}_{123}+A_3\ket{001}_{123}\right)\ket{00}_{45}\\ &+\ket{000}_{123}\left(A_4\ket{10}_{45}+A_5\ket{01}_{45}\right),\\ \end{split} \end{equation} where $A_i$ are time-dependent coefficients determined by the eigenvalues and eigenstates of the Hamiltonian of the QW. Tracing out the states of the sites $1$, $2$, and $3$ in the density matrix $\rho(t)=\ket{\psi(t)}\bra{\psi(t)}$, we find the reduced density matrix in the computational basis $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$ of the sites $4$ and $5$ in the form \begin{equation} \label{eq:anDenMat} \rho_{4,5}= \begin{bmatrix} 1-a_{44}-a_{55}&0&0&0\\ 0&a_{44}&a_{45}&0\\ 0&a^*_{45}&a_{55}&0\\ 0&0&0&0\\ \end{bmatrix}, \end{equation} where $a_{ij}=A_i A^*_j$. The distribution of the zero elements, and hence the sparsity of the matrix, remains the same for any chain length. We can quantify the pairwise entanglement using concurrence~\cite{PhysRevLett.80.2245}. For a state $\rho$, the concurrence is defined as \begin{equation} \label{eq:conc} C(\rho)=\text{max}(0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}), \end{equation} where $\tilde{\rho}$ is the spin-flipped state, and $\{\lambda_i\}$ is the set of eigenvalues of $R=(\rho^{1/2} \tilde{\rho}\rho^{1/2})^{1/2}$ arranged in non-increasing order. With this at hand, the concurrence between sites $4$ and $5$ is found to take the form \begin{equation} \label{eq:concAnalytic} C_{4,5}=2\text{Max}\left(0,\sqrt{a_{44}a_{55}},|a_{45}|\right). \end{equation} Owing to the definition of $a_{ij}$, we have $\sqrt{a_{44}a_{55}}=|a_{45}|$, so that the concurrence simplifies to $C_{4,5}=2|a_{45}|$. Note that this result extends to chains of arbitrary length $N$, giving $C_{N-1,N}=2|a_{N-1,N}|$. Using the Hamiltonian matrix $H$ in Eq.~(\ref{eq:Hcqw}) and the initial state $\rho(0)=\ket{\psi_{\text{spatial}}(0)}\bra{\psi_{\text{spatial}}(0)}$ in Eq.~(\ref{eq:ctqw-phaseInitialState}), with any $\phi$, the dynamics of the entanglement between pairs of sites of the system can be calculated numerically for any value of $t$. We will also investigate the transport of the initial entangled Bell site-state from sites $1$ and $2$ to sites $4$ and $5$ in CQW. We perform the calculations for the CQW and CTQW cases separately; in the case of CTQW we use the initial state in Eq.~(\ref{eq:ctqw-phaseInitialState}). 
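As a concrete illustration, the concurrence dynamics follow from a few lines of numerics. The sketch below (our own minimal implementation, with illustrative names) evolves a single-excitation state and evaluates $C_{N-1,N}=2|a_{N-1,N}|$; with $\phi=3\pi/4$ and real hoppings it probes the CTQW concurrence peak discussed in Sec.~\ref{Sec:PTSB-CTQW}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def end_pair_concurrence(h, psi0, t):
    """C_{N-1,N} = 2 |A_{N-1} A_N^*| for a single-excitation walk.

    In the one-excitation sector the site amplitudes A_i = <i|psi(t)>
    fully determine the reduced state of the last two sites.
    """
    psi_t = expm(-1j * h * t) @ psi0
    return 2 * abs(psi_t[-2] * np.conj(psi_t[-1]))

# CTQW (real hoppings, J = 1) on N = 5 with the entangled initial state.
n, phi = 5, 3 * np.pi / 4
h = np.zeros((n, n))
for i in range(n):
    for j in (i + 1, i + 2):
        if j < n:
            h[i, j] = h[j, i] = 1.0
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1 / np.sqrt(2)
psi0[1] = -np.exp(1j * phi) / np.sqrt(2)
print(max(end_pair_concurrence(h, psi0, t) for t in np.linspace(0, 3, 301)))
\end{verbatim}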
\subsection{PTS breaking and entanglement transfer in CTQW on a triangular chain} \label{Sec:PTSB-CTQW} To appreciate the role of the initial phase in Eq.~(\ref{eq:ctqw-phaseInitialState}) in the state transfer and PTS breaking of CTQW on the triangular chain, let us start with the initial state \(\ket{\psi(0)}=\ket{1}\). The occupation probabilities $P_{i}= \langle i | \rho(t) | i\rangle$ of the sites \(i=1\) (solid, black) and \(i=5\) (dotted, red) are shown in Fig.~\ref{fig:TRS-CTQW}, where the probability distribution is mirror symmetric with respect to time. Transfer from the initially occupied site $\ket{1}$ to the rightmost site $\ket{5}$ is found to be weak (less than $45\%$ at any time). If we instead use the initial state given in Eq.~(\ref{eq:ctqw-phaseInitialState}) with the adjacency matrix in Eq.~(\ref{eq:A}), then, in addition to gaining control over the path interference, PTS can be broken depending on the initial phase $\phi$. We have numerically compared $P_5(t)$ for different $\phi$ and found that $\phi=3\pi/4$ gives the largest occupation of site $|5\rangle$. Fig.~\ref{fig:TRS-CTQW} shows $P_1(t)$ (dashed, blue) and $P_5(t)$ (dot-dashed, orange) for $\phi=3\pi/4$, where time-reversal asymmetry, $P(t)\neq P(-t)$, emerges. Population transfer is significantly enhanced by using such an initial superposition state. We conclude that transferring a particle from the left end of the chain to a site at the right end is more successful when the particle is injected simultaneously at two sites with a certain quantum coherence than when it starts well localized at a single site. Let us now explore whether similar advantages can be found in the entanglement transfer. \begin{figure}[!t] \centering {\includegraphics[width=\linewidth]{fig3.eps}} \caption{\label{fig:CONC-CTQW} Time dependence of the concurrence \(C_{4,5}(t)\) in the state of sites \(\ket{4}\) and \(\ket{5}\) for the initial state in Eq.~(\ref{eq:ctqw-phaseInitialState}) of a particle that makes CTQW on a triangular chain of $N=5$ sites. Different curves stand for the initial states $\ket{\psi(0)}= (\ket{1}-\exp(\text{i}\phi)\ket{2})/\sqrt{2}$ with \(\phi = \pi \slash 4\) (solid, black), \(\phi = \pm \pi \slash 3\) (dashed, blue), \(\phi = \pm \pi \slash 2\) (dot-dashed, red), and \(\phi = \pm 3 \pi \slash 4\) (dotted, orange). } \end{figure} Fig.~\ref{fig:CONC-CTQW} shows that the concurrence is optimum for \(\phi=\pm3\pi\slash4\), with a value \(C_{4,5}(1.12)\sim0.8\). Therefore, for pairwise entanglement transfer, \(\phi=3 \pi \slash 4\) gives the most advantageous initial state. A natural question to ask is whether there is a fundamental connection between the critical phase \(\phi=3 \pi \slash 4\) and PTS breaking in CTQW. We can quantify the bias between the forward and backward time evolutions using the Bures distance between the states \(\rho(t)\) and \(\rho(-t)\). The Bures distance is defined by~\cite{Jozsa1994,Hbner1992,Hbner1993} \begin{equation}\label{bures} D_{B}(\rho,\sigma)^{2}=2(1-\sqrt{F(\rho,\sigma)}), \end{equation} where \begin{equation} \label{eq:fidelity} F(\rho,\sigma)=[\rm{Tr}(\sqrt{\sqrt{\rho} \sigma \sqrt{\rho}})]^{2} \end{equation} is the fidelity \cite{NC}. In Fig.~\ref{fig:Bures-CTQW}, we only use the diagonal elements of $\rho(t)$ and $\rho(-t)$ while calculating the Bures distance \(D_B(t)\) for different values of \(\phi\). 
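Numerically, Eqs.~(\ref{bures}) and (\ref{eq:fidelity}) translate directly into code; the following sketch (our own helper functions) evaluates $D_B(t)$ between the diagonal (population) parts of the forward- and backward-evolved states, as used in Fig.~\ref{fig:Bures-CTQW}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, sqrtm

def fidelity(rho, sigma):
    """Uhlmann fidelity F = [Tr sqrt(sqrt(rho) sigma sqrt(rho))]^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def bures_distance_diagonal(h, rho0, t):
    """Bures distance between the diagonal parts of rho(t), rho(-t)."""
    u = expm(-1j * h * t)
    rho_fwd = np.diag(np.diag(u @ rho0 @ u.conj().T).real)
    rho_bwd = np.diag(np.diag(u.conj().T @ rho0 @ u).real)
    return np.sqrt(2 * (1 - np.sqrt(fidelity(rho_fwd, rho_bwd))))
\end{verbatim}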
As the probability information is contained in the diagonal elements of the density matrices, the off-diagonal elements are discarded to demonstrate the broken-PTS conditions more clearly. For the phases \(0\) and \(\pm \pi\), PTS is not broken and the Bures distance is zero. The largest bias between forward and backward time evolution is found for \(\phi=\pi \slash 2\), which is different from the critical phase \(\phi=3 \pi \slash 4\) for optimum population and entanglement transfer in CTQW over a triangular chain. We conclude that CTQW exploits path interference for efficient state transfer and that, with a suitable phase difference in the initial superposition state, a quantum walk with broken PTS is possible, similar to CQW. While the chiral character of CTQW is limited to certain initial states, this can still be practically significant when the implementation of CQW is challenging and chiral transfer of arbitrary entangled states is not required. \begin{figure}[!t] \centering {\includegraphics[width=\linewidth]{fig4.eps}} \caption{\label{fig:Bures-CTQW} Time dependence of the Bures distance between the diagonal elements of the forward and backward evolved density matrices $\rho(t)$ and $\rho(-t)$, for a CTQW on a triangular chain. The curves are for the phases $\phi$ in the initial state $\ket{\psi(0)}= (\ket{1}-\exp(\text{i}\phi)\ket{2})/\sqrt{2}$ with \(\phi = \pm \pi \slash 3 \) (solid, black), \(\phi = \pm \pi \slash 2 \) (dotted, red) and \(\phi = \pm 3 \pi \slash 4 \) (dashed, blue). The Bures distance for the phases \(\phi = 0 \) and \(\phi=\pm \pi\) is zero.} \end{figure} \subsection{PTS breaking and entanglement transfer in CQW on a triangular chain and comparison with CTQW} \label{Sec:PTSB-CQW} \begin{figure} \centering {\includegraphics[width=\linewidth]{fig5.eps}} \caption{\label{fig:CONC-CQW} Time dependence of the concurrence \(C_{4,5}\) measuring the pairwise entanglement between the sites \(\ket{4}\) and \(\ket{5}\) of a triangular chain of $N=5$ sites over which an initial maximally entangled Bell state \((\ket{1}+\ket{2})\slash \sqrt{2}\) of the sites $1$ and $2$ undergoes CQW. Different curves are for different phases of the complex hopping coefficients of the CQW: \(\theta = \pi \slash 4\) (solid, black), \(\theta = \pm \pi \slash 3\) (dashed, blue), \(\theta = \pm \pi \slash 2\) (dot-dashed-dashed, red), and \(\theta = \pm 3 \pi \slash 4\) (dot-dashed, orange; identical to the \(\theta = \pi \slash 4\) curve).} \end{figure} For CQW, we plot the concurrence in Fig.~\ref{fig:CONC-CQW} by using the initial Bell state corresponding to $\phi=\pi$ in Eq.~(\ref{eq:ctqw-phaseInitialState}) and different phases \(\theta\) of the complex hopping coefficients. We see that the concurrence is largest for \(\theta=\pi\slash 2\), with a value \(C_{4,5}(1.02)\sim0.9\), indicating that CQW has a slight time advantage (\(\Delta t\sim0.1\)) along with a significantly higher-quality transfer of entanglement compared to CTQW (cf.~Fig.~\ref{fig:CONC-CTQW}). Without plotting, we state here that a similar conclusion applies to the occupation probabilities, too. We found that CQW with \(\theta=\pi \slash 2\) yields near-perfect ($P_5\sim 0.95$) state transfer $\ket{1}\rightarrow\ket{5}$ at $t\sim 1.64$. We also calculated the concurrences for \(\phi=\pi \slash 4\), \(\phi=\pi \slash 3\), \(\phi=\pi \slash 2\), and \(\phi=3 \pi \slash 4\) and looked for the optimum \(\theta\) values. We found that for the initial Bell state (\(\phi=\pi\)), \(\theta=\pi \slash 2\) gives the optimum (maximum) concurrence \(C_{4,5}(1.02)\sim0.9\). 
We plot the Bures distance \(D_B(t)\) for the diagonal elements of $\rho(t)$ and $\rho(-t)$ in Fig.~\ref{fig:Bures-CQW}, which shows that \(D_B(t)\) is maximum for \(\theta=\pm \pi \slash 2\) (dotted, red). Remarkably, the maximum PTS breaking in the CTQW is also found for $\phi=\pi/2$. This suggests that \(\phi=\theta=\pm \pi \slash 2\) is an optimal choice for the broken-PTS condition both for CTQW and CQW over a triangular chain. The critical angle of maximum time-reversal asymmetry, however, coincides with a critical angle of optimum state transfer only for the CQW. When the numerical values of \(D_B(t)\) for CTQW in Fig.~\ref{fig:Bures-CTQW} and CQW in Fig.~\ref{fig:Bures-CQW} are compared, \(D_B(t)\) for CQW is larger than for CTQW, suggesting stronger PTS breaking. \begin{figure}[!t] \centering {\includegraphics[width=\linewidth]{fig6.pdf}} \caption{\label{fig:Bures-CQW} Time dependence of the Bures distance \(D_B(t)\) between the diagonal elements of the forward and backward evolved density matrices $\rho(t)$ and $\rho(-t)$, respectively, for a CQW on a triangular chain of $N=5$ sites. Initially the quantum walker is in a maximally entangled Bell state \((\ket{1}+\ket{2})\slash \sqrt{2}\). Different curves are for chains with different complex hopping coefficients of phases \(\theta = \pm \pi \slash 4 \) (dashed, blue), \(\theta = \pm \pi \slash 3 \) (solid, black) and \(\theta = \pm \pi \slash 2 \) (dotted, red). The Bures distance for the phases \(\theta= 0 \) and \(\pm \pi\) is zero; and for \(\theta=\pi \slash 4\), \(D_B(t)\) is the same as that of \(\theta=3\pi \slash 4\).} \end{figure} In Fig.~\ref{fig:concCQWCTQW}, the concurrences for CQW and CTQW with an initial state \((\ket{1}+\ket{2})\slash \sqrt{2}\) are plotted. The solid red curve represents the CQW case and the dashed blue curve the CTQW case. This plot also demonstrates the broken PTS in the CQW case. Here, one can notice the relatively higher entanglement transfer quality of CQW. In addition, the transfer time is shorter in the case of CQW by $\Delta t \sim 0.4$. To demonstrate the entanglement transfer in the short-time regime, we plot the dynamics of the concurrences $C_{i,j}$ that measure the entanglement between every pair of sites $(i,j)$ of the triangular chain in Fig.~\ref{fig:walkEntangled}. One can see the transfer of entanglement from the sites \((\ket{1},\ket{2})\) to \((\ket{4},\ket{5})\). Although a spread over the sites is present, entanglement propagates mainly as \((\ket{1},\ket{2})\rightarrow(\ket{2},\ket{3})\rightarrow(\ket{2},\ket{4})\rightarrow(\ket{3},\ket{4})\rightarrow(\ket{4},\ket{5})\). If the success fidelity or the concurrence is sufficient, the entanglement can be collected at the end of the chain in this short-time regime. On the other hand, after multiple scatterings between the ends of the chain, the entanglement transfer can be enhanced at the cost of a longer transfer time. \begin{figure} \centering \includegraphics[width=\linewidth]{fig7.eps} \caption{\label{fig:concCQWCTQW} Time dependence of the concurrence \(C_{4,5}(t)\) measuring the entanglement between the sites \(\ket{4}\) and \(\ket{5}\) for an initially maximally entangled Bell state \((\ket{1}+\ket{2})\slash \sqrt{2}\) of the sites \(\ket{1}\) and \(\ket{2}\) of a particle that makes CTQW (dashed, blue) and CQW (solid, red) on a triangular chain of $N=5$ sites. 
For CQW we take the phases of the complex hopping coefficients as \(\theta=\pi \slash 2\), while for CTQW the hopping coefficients are real (\(\theta=0\)).} \end{figure} \begin{figure}[t!] \centering \includegraphics[scale=0.6 ,fbox]{fig8.eps} \qquad \caption{\label{fig:walkEntangled}The time evolution of the concurrence $C_{i,j}$, measuring the pairwise entanglement between the sites $|i\rangle$ and $|j\rangle$, shown as a matrix with $i,j=1,\dots,5$. Each square in these matrices stands for the value of the concurrence $C_{i,j}$. Colors scale from light to dark with concurrence values from $1$ to $0$, respectively. Initially (at $t=0$), the quantum walker is injected in the maximally entangled Bell state of the sites $1$ and $2$ with $C_{1,2}=1$ (upper left panel) to undergo CQW with complex hopping coefficients of phase $\theta=\pi/2$. As time progresses, one can notice the unidirectional transfer of entanglement (light colored pair of squares) to the rightmost sites $4$ and $5$. The panels are for $t=0$ (upper left), $t=0.2$ (upper right), $t=0.4$ (middle left), $t=0.6$ (middle right), $t=0.8$ (bottom left), and $t=1$ (bottom right).} \end{figure} In Fig.~\ref{fig:figBackScatter}, we plot the long-time behavior of the process for both CTQW and CQW. CQW demonstrates a higher concurrence peak in the short-time regime for the initial Bell state ($\phi=0$). When the fidelities are considered, the long-time entanglement transfer fidelities are higher than those of the short-time regime for both CQW and CTQW. Both allow for PGST of the entanglement, with $C_{4,5} = 0.999$ at $t = 28.1$ (CQW) and $C_{4,5} = 0.971$ at $t = 25.7$ (CTQW). These observations depend on the initial state and the chain size. Though not shown here, we numerically verified that CTQW gives a higher concurrence than CQW for certain initial conditions in the short-time regime (e.g., for $\phi=\pi/2$). Hence, we conclude that breaking PTS, either by CQW for any initial condition or by CTQW for certain initial conditions, gives comparable and high entanglement transfer performance in the short-time regime, which can be further enhanced to PGST in the long-time regime. Successful entanglement transfer (with a concurrence of more than $0.9$) is limited to chains shorter than $N\sim 9$ sites, as discussed in Sec.~\ref{Sec:ChainSize}. \begin{figure}[h] \centering \includegraphics[scale=0.8]{fig9.pdf} \qquad \caption{\label{fig:figBackScatter}Time dependence of the concurrence $C_{4,5}(t)$ of CQW (solid blue curve) and CTQW (solid red curve) for the long-time evolution ($t \in [0,100]$). In this plot, we have considered the initial state $\ket{\psi_\text{spatial}(0)}=(\ket{1}+\ket{2})/\sqrt{2}$.} \end{figure} \subsection{Transfer of mixed Werner states on the triangular chain} \label{Sec:WernerState} Having demonstrated the role of pure entangled states under the CQW scheme, it is natural to investigate the behavior of Werner-type mixed states under CQW with phase \(\theta=\pi \slash 2\)~\cite{werner}. We introduce the Werner-like state \begin{equation}\label{eq:wernerState} \rho_{\text{Werner}}(b) = b \rho(0) + (1-b)\rho_{\text{mixed}}, \end{equation} where $\rho_{\text{mixed}}$ is the maximally mixed state within the manifold of the injection site states $\ket{1}$ and $\ket{2}$: \begin{equation}\label{eq:rhoMixed} \rho_{\text{mixed}}=\frac{1}{2}\sum_{i=1}^{2}\ket{i}\bra{i}. 
\end{equation} We use the maximally entangled state within the manifold of the injection site states, \(\ket{\psi_\text{spatial}(0)}=(\ket{1}+\ket{2})/\sqrt{2}\), to define the initial density matrix \(\rho(0)=\ket{\psi_\text{spatial}(0)}\bra{\psi_\text{spatial}(0)}\). To investigate the behaviour of the entanglement transfer with respect to time, we calculate the fidelity $F(\rho(t),\rho_{\text{target}})$ as in Eq.~(\ref{eq:fidelity}). Here, $\rho_{\rm{target}}$ represents the desired, ideally transferred state \begin{equation}\rho_{\text{target}}=\frac{1}{2} \begin{bmatrix} 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&1&b\\ 0&0&0&b&1\\ \end{bmatrix}. \end{equation} In Fig.~\ref{fig:wernerFig}, we plot the time behavior of the fidelity $F(\rho(t),\rho_{\text{target}})$ for different values of \(b\). Clearly, the pure maximally entangled state \(\rho_{\text{Werner}}(b=1)=\rho(t=0)\) yields the best entanglement transfer. Conversely, fidelities close to zero are observed for the maximally mixed state \(\rho_{\text{Werner}}(b=0)=\rho_{\text{mixed}}\). \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{fig10.eps} \caption{\label{fig:wernerFig}Time dependence of the fidelity $F(\rho(t),\rho_{\text{target}})$ for an initial Werner state $\rho_{\text{Werner}}(b)$ under the scheme of CQW on a triangular chain of $N=5$ sites with complex edge weights of phase $\theta=\pi/2$: \(b=-1\slash 4\) (solid, black), \(b=0\) (dashed, blue), \(b=1\slash 2\) (thick dashed, red), and the maximally entangled Bell state \(b=1\) (dot-dashed, orange).} \end{figure} \subsection{Scaling of entanglement transfer quality and time with respect to the chain size} \label{Sec:ChainSize} Until now, entanglement transfer on a chain of $N=5$ sites has been discussed. In this subsection, we first investigate the entanglement transfer using CQW in the short-time regime by calculating the time $T_\text{max}$ and value $C_{N-1,N}$ of the first peak of the concurrence for chains of up to $N=71$ sites. Fig.~\ref{fig:tmax} shows that $T_\text{max}$ (which we refer to as the transfer time) scales linearly with $N$, consistent with previous works~\cite{tony1,tony2, tony3}. Fig.~\ref{fig:tmaxconc} shows that the entanglement transfer quality decreases severely in chains longer than $N\sim 9$ sites. The figures include two different chiral phase values $\theta$ and indicate similar behavior. Next, we explore the long-time entanglement transfer dynamics by assuming a waiting time of $t=500$. In this case, we determine and fix an ideal value of $\theta$ for a given $N$ and initial state to find the maximum concurrence, which is not necessarily the first peak. The optimum $\theta$ for the initial state with $\phi=\pi$ is found to be $\pm \pi/2$, whose sign depends on $N$. The results are given in Table~\ref{tab:cqw}, which shows the maximum concurrence and when it occurs for a given $N$, together with the corresponding optimum $\theta$. For the same initial condition, the results for CTQW are presented in Table~\ref{tab:ctqw}. While the tables report the results for chains with up to $N=33$ sites, the concurrence reduces to very low values beyond $N\sim 9$ sites. We can see that both the CQW and CTQW methods to break PTS yield highly successful transfer of entangled states for relatively small graphs ($N<9$). 
For such graphs ($N<9$), CQW is faster than CTQW in transferring the entanglement, and its success is slightly higher (cf.~also Fig.~\ref{fig:concCQWCTQW}, where the same conclusions are found for another initial condition ($\phi=0$) in the short-time regime). \begin{figure}[!t] \centering \subfloat[\label{fig:tmax}]{\includegraphics[scale=0.50] {fig11a.pdf}} \qquad \subfloat[\label{fig:tmaxconc}]{\includegraphics[scale=0.50] {fig11b.pdf}} \caption{\label{fig:tmaxc}\textbf{(a)} Scaling of the state transfer time with respect to the chain size. Red dots represent $\theta=0$ and blue dots $\theta=\pi/2$. \textbf{(b)} Concurrence value at the state transfer time. Red dots represent $\theta=0$ and blue dots $\theta=\pi/2$.} \end{figure} \begin{table} \centering \begin{tabular}{||c | c c c || } \hline Chain Size $N$ & Time $(t)$ & Concurrence & Optimum Chiral Phase $(\theta)$ \\ [0.5ex] \hline\hline 5 & 55.4 & 0.999 & $-\pi/ 2$ \\ \hline 7 & 85.1 & 0.992 & $\pi/ 2$ \\ \hline 9 & 2.9 & 0.947 & $-\pi/ 2$ \\ \hline 11 & 321.3 & 0.900 & $-\pi/ 2$ \\ \hline 13 & 397.6 & 0.885 & $-\pi/ 2$ \\ \hline 15 & 4.5 & 0.874 & $-\pi/ 2$ \\ \hline 17 & 136.2 & 0.700 & $-\pi/ 2$ \\ \hline 19 & 68.6 & 0.714 & $-\pi/ 2$ \\ \hline 21 & 6.1 & 0.814 & $-\pi/ 2$ \\ \hline 23 & 416 & 0.635 & $-\pi/ 2$ \\ \hline 25 & 88.5 & 0.711 & $-\pi/ 2$ \\ \hline 27 & 7.7 & 0.764 & $-\pi/ 2$ \\ \hline 29 & 125.8 & 0.593 & $\pi/ 2$ \\ \hline 31 & 376.5 & 0.736 & $\pi/ 2$ \\ \hline 33 & 9.3 & 0.718 & $-\pi/ 2$ \\ [1ex] \hline \end{tabular} \caption{\label{tab:cqw} Maximum concurrence and transfer time for the long-time CQW scenario ($t=500$) with initial state phase $\phi=\pi$, along with the optimum chiral phase for each chain size.} \end{table} \begin{table} \centering \begin{tabular}{||c | c c ||} \hline Chain Size $N$ & Time $(t)$ & Concurrence\\ [0.5ex] \hline\hline 5 & 193.9 & 0.993 \\ \hline 7 & 342.8 & 0.979 \\ \hline 9 & 410.6 & 0.900\\ \hline 11 & 482.7 & 0.805\\ \hline 13 & 498.3 & 0.749\\ \hline 15 & 288.2 & 0.748 \\ \hline 17 & 82.5 & 0.697 \\ \hline 19 & 4.1 & 0.661 \\ \hline 21 & 4.5 & 0.631 \\ \hline 23 & 4.9 & 0.608 \\ \hline 25 & 5.3 & 0.594 \\ \hline 27 & 5.7 & 0.581 \\ \hline 29 & 6.1 & 0.567 \\ \hline 31 & 6.4 & 0.552 \\ \hline 33 & 6.8 & 0.540 \\ [1ex] \hline \end{tabular} \caption{\label{tab:ctqw} Maximum concurrence and transfer time for the long-time CTQW scenario ($t=500$) with initial state phase $\phi=\pi$.} \end{table} \begin{figure}[!t] \centering \subfloat[\label{fig:5cycleA}]{\includegraphics[scale=0.50] {fig12a.png}} \qquad \subfloat[\label{fig:5starB}]{\includegraphics[scale=0.50] {fig12b.png}} \caption{\label{fig:5CycleStar}\textbf{(a)} Graph of $5$ vertices arranged as an odd-number cycle.~\textbf{(b)} Graph of $5$ vertices arranged as a pentagram with five-pointed star-like diagonal connections. 
} \end{figure} \begin{figure}[!t] \centering {\includegraphics[width=\linewidth]{fig13.eps}} \caption{\label{appFig1} Time dependence of the concurrence \(C_{4,5}(t)\) measuring the entanglement between the sites \(\ket{4}\) and \(\ket{5}\) on a circulant graph of $5$ vertices with only nearest-neighbour interactions, for an initially maximally entangled Bell state \((\ket{1}+\ket{2})\slash \sqrt{2}\) of the sites \(\ket{1}\) and \(\ket{2}\) of a particle that makes CQW.} \end{figure} \begin{figure}[!t] \centering {\includegraphics[width=\linewidth]{fig14.pdf}} \caption{\label{appFig2} Time dependence of the concurrence \(C_{4,5}(t)\) measuring the entanglement between the sites \(\ket{4}\) and \(\ket{5}\) on a complete pentagram graph, for an initially maximally entangled Bell state \((\ket{1}+\ket{2})\slash \sqrt{2}\) of the sites \(\ket{1}\) and \(\ket{2}\) of a particle that makes CQW.} \end{figure} \section{Conclusion} \label{Sec:Conclusion} We explored the transfer of spatial entanglement of a single spin excitation (which we call a particle) undergoing either CQW or CTQW on a triangular chain. We found that particle transfer to the end of the chain is more successful if the particle is injected simultaneously from the leftmost pair of sites in a specific Bell-type superposition state. The success, measured by the rightmost site's occupation probability, depends on the relative phase $\phi$ between the site states in the initial quantum superposition. Using the Bures distance between the forward and backward time-evolved states, we examined the dynamics of PTS breaking at different $\phi$. We conclude that PTS breaking and the success of entangled state transfer via CTQW vary with $\phi$. We explained the physical mechanism in terms of the role the relative phase $\phi$ of the initial state plays in the path interference on the triangular chain, which eventually determines the quality of state transfer via CTQW. The success and PTS-breaking character of CTQW are limited to certain initial states with strict initial phase values. The chiral phase angle in CQW brings additional flexibility and generality to transfer arbitrary entangled states, which is not possible with CTQW. The chiral phase can be used to optimize the transfer success. Even for those entangled states that can be transferred by CTQW, using optimum chiral phases, entanglement transfer with CQW is found to be faster and more successful for small graphs with fewer than $9$ sites. In our examinations, we also considered long chains (about $70$ sites). When longer triangular chains are considered, the entanglement transfer success is severely reduced for both CQW and CTQW. We examined both the short-time and long-time dynamics of entanglement transfer. In the short-time regime, the first peak of the concurrence is used to probe the entanglement transfer. The time when the first peak emerges (the entanglement transfer time) scales linearly with the chain size, as expected from earlier works~\cite{tony1, tony2, tony3}. The longer-time regime is used to look for a global maximum in the entanglement dynamics, and hence it can give higher entanglement transfer success at the cost of longer waiting times. The speed and success advantages of CQW over CTQW for certain initial states remain in the longer-time regime as well. 
Breaking PTS strongly, either by CQW for any initial condition or by CTQW for certain initial conditions, gives comparable and high entanglement transfer performance in the short-time regime, which can be further enhanced in the long-time regime; the short-time regime, however, can be more practical for real applications subject to environmental quantum decoherence effects. In summary, if CTQW is capable of transferring entanglement with PTS breaking for a certain initial state, then the performance of the transfer is comparable to that of CQW. Hence, if implementing CQW is challenging and transfer of arbitrary entangled states is not required, we conclude that CTQW can be preferred over CQW. On the other hand, if optimum transfer of arbitrary entangled states with PTS-breaking character is required, then it is necessary to implement CQW. Our main conclusion is foundational in nature, based upon the physical mechanism of PTS breaking in terms of the path interference and the phases in the initial state and hopping coefficients, and hence it is independent of any physical embodiment. In addition, we explored the behavior of various mixed Werner states under our CQW scheme. We found that the pure maximally entangled state yields the best state transfer. Our results can help to understand the interplay of PTS breaking and entanglement transfer and, practically, to design optimum chiral lattices for the transfer of entangled states in physical platforms such as plasmonic non-Hermitian coupled waveguides~\cite{Fu2020}, ultracold atomic optical lattices~\cite{PhysRevLett.93.056402}, photonic-spin waveguides~\cite{PhysRevA.93.062104}, or quantum superconducting circuits~\cite{Vepslinen2020,Ma2020}. \section{Acknowledgements} The authors thank Tony John George Apollaro and Deniz N. Bozkurt for fruitful discussions.\\ \begin{appendices} \section{Perfect state and entanglement transfer on circulant graphs}\label{sec:PSTonCirculantGraphs} We present perfect state and entanglement transfer on some circulant graphs in this appendix. Such graphs occupy a relatively larger space than linear triangular chains to transfer a state over the same distance and require more qubits to implement. For example, we take the circulant graph of $5$ vertices shown in Fig.~\ref{fig:5cycleA}, with only nearest-neighbor interactions. The (complex-weighted) adjacency matrix for such a graph is \begin{equation} A=\begin{bmatrix} 0&-\text{i}&0&0&-\text{i}\\ \text{i}&0&-\text{i}&0&0\\ 0&\text{i}&0&-\text{i}&0\\ 0&0&\text{i}&0&-\text{i}\\ \text{i}&0&0&\text{i}&0\\ \end{bmatrix}. \end{equation} \noindent Fig.~\ref{appFig1} shows that the entanglement transfer on such a graph is nearly perfect, with a concurrence of \(C\sim 0.93\) at \(t\sim4.5\). Another example is the pentagram graph sketched in Fig.~\ref{fig:5starB}. This graph contains three triangular plaquettes, but being circulant comes with the cost of more edges. The adjacency matrix reads \begin{equation} A = \begin{bmatrix} 0&-\text{i}&-\text{i}&-\text{i}&-\text{i}\\ \text{i}&0&-\text{i}&-\text{i}&-\text{i}\\ \text{i}&\text{i}&0&-\text{i}&-\text{i}\\ \text{i}&\text{i}&\text{i}&0&-\text{i}\\ \text{i}&\text{i}&\text{i}&\text{i}&0\\ \end{bmatrix}. \end{equation} \noindent Fig.~\ref{appFig2} presents the possibility of nearly perfect entanglement transfer, with a concurrence of \(C_{4,5}\sim1\) at \(t\sim3.7\). 
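The same numerical machinery as in the sketches of Sec.~\ref{Sec:Results} applies to these graphs. As an illustration (our own code; the peak printed below should be compared with the $C\sim 0.93$ at $t\sim 4.5$ reported in Fig.~\ref{appFig1}), the chiral $5$-cycle can be probed as follows:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Chiral 5-cycle: +i below the diagonal on all cycle edges,
# including the wrap-around edge, matching the matrix above.
n = 5
h = np.zeros((n, n), dtype=complex)
for i in range(n):
    j = (i + 1) % n
    h[max(i, j), min(i, j)] = 1j
    h[min(i, j), max(i, j)] = -1j

psi0 = np.zeros(n, dtype=complex)
psi0[0] = psi0[1] = 1 / np.sqrt(2)        # (|1> + |2>)/sqrt(2)

ts = np.linspace(0, 10, 2001)
conc = []
for t in ts:
    psi = expm(-1j * h * t) @ psi0
    conc.append(2 * abs(psi[3] * np.conj(psi[4])))   # C_{4,5}
print(ts[int(np.argmax(conc))], max(conc))           # peak time and height
\end{verbatim}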
\section{Eigenvalues and Eigenstates of the Hamiltonian}\label{sec:eigenstates} The eigenstates corresponding to the eigenvalues in Eq.~(\ref{eq:specT}) are listed in the same order: \begin{equation} \begin{split} \Lambda_1= \begin{bmatrix} -0.65 - 0.76 \text{i}\\ -0.65 + 0.76 \text{i}\\ 0.22 + 0.98 \text{i}\\ 0.22 - 0.98 \text{i}\\ 0 \\ \end{bmatrix}, \Lambda_2= \begin{bmatrix} 0.65 + 1.15 \text{i}\\ 0.65 - 1.15 \text{i}\\ -0.22 + 0.50 \text{i}\\ -0.22 - 0.50 \text{i}\\ 1\\ \end{bmatrix},&\\ \Lambda_3=\begin{bmatrix} 1\\ 1\\ 1\\ 1\\ 0 \\ \end{bmatrix}, \Lambda_4=\begin{bmatrix} -0.65 + 1.40 \text{i}\\ -0.65 - 1.40 \text{i}\\ 0.22 + 0.18 \text{i}\\ 0.22 - 0.18 \text{i}\\ -1\text{i}\\ \end{bmatrix}, \Lambda_5=\begin{bmatrix} -1.30 + 0.25 \text{i}\\ -1.30 - 0.25 \text{i}\\ 0.44 - 0.33 \text{i}\\ 0.44 + 0.33 \text{i}\\ 1\\ \end{bmatrix}. \end{split} \end{equation} \section{Definitions of some mathematical and graph theory terms used in the manuscript}\label{sec:footNotes} A graph is a set of vertices and edges connecting them. Here, we present definitions of some mathematical terms from graph theory and linear algebra used in the main text. \noindent \emph{Hadamard Product}: The Hadamard product is the element-wise product of two matrices of the same dimensions.\\ \emph{Circulant Graph}: Undirected graphs contain only bidirectional edges. Circulant graphs are undirected graphs possessing a cyclic symmetry that maps any vertex to any other vertex; equivalently, their adjacency matrix is circulant, i.e., each row is a cyclic shift of the row above.\\ \emph{Flat Eigenbasis}: When each eigenvector of a basis has entries of the same magnitude, that eigenbasis is called a flat eigenbasis~\cite{Cameron2014}.\\ \emph{Adjacency Matrix and Graph Laplacian Matrix}: The Laplacian matrix $L=D-A$ describes a graph, where $A$ is the adjacency matrix and $D$ is the degree matrix. The degree matrix is a diagonal matrix whose elements indicate the number of edges attached to each vertex of the graph. The adjacency matrix represents the connections of the graph; its elements corresponding to adjacent (connected by an edge) vertices are $1$~\cite{footDegreeM}. \end{appendices} \clearpage \section{References} \bibliographystyle{unsrt}
\section{Introduction} Entity linking (EL) fulfils a key role in grounded language understanding: Given an ungrounded entity mention in text, the task is to identify the entity’s corresponding entry in a Knowledge Base (KB). In particular, EL provides grounding for applications like Question Answering \citep{fevry2020entities} (also via Semantic Parsing \citep{shaw2019generating}) and Text Generation \citep{puduppully2019data}; it is also an essential component in knowledge base population \citep{shen2014entity}. Entities have played a growing role in representation learning. For example, entity mention masking led to greatly improved fact retention in large language models \citep{guu2020realm,roberts2020much}. But to date, the primary formulation of EL outside of the standard monolingual setting has been \emph{cross-lingual}: link mentions expressed in one language to a KB expressed in another \citep{mcnamee-etal-2011-cross,tsai-roth-2016-cross,sil2018neural}. The accompanying motivation is that KBs may only ever exist in some well-resourced languages, but that text in many different languages need to be linked. Recent work in this direction features progress on low-resource languages \cite{zhou_tacl2020}, zero-shot transfer \cite{sil-florian-2016-one,rijhwani2019zero,zhou2019towards} and scaling to many languages \cite{pan-etal-2017-cross}, but commonly assumes a single primary KB language and a limited KB, typically English Wikipedia. We contend that this popular formulation limits the scope of EL in ways that are artificial and inequitable. First, it artificially simplifies the task by restricting the set of viable entities and reducing the variety of mention ambiguities. Limiting the focus to entities that have English Wikipedia pages understates the real-world diversity of entities. Even within the Wikipedia ecosystem, many entities only have pages in languages other than English. These are often associated with locales that are already underrepresented on the global stage. By ignoring these entities and their mentions, most current modeling and evaluation work tend to side-step under-appreciated challenges faced in practical industrial applications, which often involve KBs much larger than English Wikipedia, with a much more significant zero- or few-shot inference problem. Second, it entrenches an English bias in EL research that is out of step with the encouraging shift toward \emph{inherently multilingual} approaches in natural language processing, enabled by advances in representation learning \cite{johnson2017google,pires-etal-2019-multilingual,conneau-etal-2020-unsupervised}. Third, much recent EL work has focused on models that rerank entity candidates retrieved by an alias table \citep{fevry2020empirical}, an approach that works well for English entities with many linked mentions, but less so for the long tail of entities and languages. To overcome these shortcomings, this work makes the following key contributions: \begin{itemize} \item Reformulate entity linking as inherently multilingual: link mentions in 104 languages to entities in WikiData, a language-agnostic KB. \item Advance prior dual encoder retrieval work with improved mention and entity encoder architecture and improved negative mining targeting. \item Establish new state-of-the-art performance relative to prior cross-lingual linking systems, with one model capable of linking 104 languages against 20 million WikiData entities. 
\item Introduce \textbf{Mewsli-9}, a large dataset with nearly 300,000 mentions across 9 diverse languages with links to WikiData. The dataset features many entities that lack English Wikipedia pages and which are thus inaccessible to many prior cross-lingual systems. \item Present frequency-bucketed evaluation that highlights zero- and few-shot challenges with clear headroom, implicitly including low-resource languages without enumerating results over a hundred languages. \end{itemize} \section{Task Definition} \emph{Multilingual Entity Linking} (MEL) is the task of linking an entity mention $m$ in some context language $l^{\textrm{c}}$ to the corresponding entity $e\in V$ in a \emph{language-agnostic KB}. That is, while the KB may include textual information (names, descriptions, etc.) about each entity in one or more languages, we make no prior assumption about the relationship between these KB languages $L^{\textrm{kb}}=\{l_1, \dots, l_k\}$ and the mention-side language: $l^{\textrm{c}}$ may or may not be in $L^{\textrm{kb}}$. This is a generalization of \emph{cross-lingual EL} (XEL), which is concerned with the case where $L^{\textrm{kb}}=\{l'\}$ and $l^{\textrm{c}} \neq l'$. Commonly, $l'$ is English, and $V$ is moreover limited to the set of entities that express features in $l'$. \subsection{MEL with WikiData and Wikipedia} As a concrete realization of the proposed task, we use WikiData \cite{vrandevcic2014wikidata} as our KB: it covers a large set of diverse entities, is broadly accessible and actively maintained, and it provides access to entity features in many languages. WikiData itself contains names and short descriptions, but through its close integration with all Wikipedia editions, it also connects entities to rich descriptions (and other features) drawn from the corresponding language-specific Wikipedia pages. Basing entity representations on features of their Wikipedia pages has been a common approach in EL \cite[e.g.][]{sil-florian-2016-one,francis-landau-etal-2016-capturing,gillick-etal-2019-learning,wu2019zeroshot}, but we will need to generalize this to include multiple Wikipedia pages with possibly redundant features in many languages. \subsubsection{WikiData Entity Example}\label{sec:description_example} Consider the WikiData Entity \ientity{Sí Ràdio}{Q3511500}, a now defunct Valencian radio station. Its KB entry references Wikipedia pages in three languages, which contain the following descriptions:\footnote{We refer to the first sentence of a Wikipedia page as a description because it follows a standardized format.} \begin{itemize} \item (Catalan) \emph{\textbf{Sí Ràdio} fou una emissora de ràdio musical, la segona de Radio Autonomía Valenciana, S.A. pertanyent al grup Radiotelevisió Valenciana.} \item (Spanish) \emph{\textbf{Nou Si Ràdio} (anteriormente conocido como Sí Ràdio) fue una cadena de radio de la Comunidad Valenciana y emisora hermana de Nou Ràdio perteneciente al grupo RTVV.} \item (French) \emph{\textbf{Sí Ràdio} est une station de radio publique espagnole appartenant au groupe Ràdio Televisió Valenciana, entreprise de radio-télévision dépendant de la Generalitat valencienne.} \end{itemize} Note that these Wikipedia descriptions are not direct translations, and contain some name variations. We emphasize that this particular entity would have been completely out of scope in the standard cross-lingual task \citep{tsai-roth-2016-cross}, because it does not have an English Wikipedia page. 
In our analysis, there are millions of WikiData entities with this property, meaning the standard setting skips over the substantial challenges of modeling these (often rarer) entities, and disambiguating them in different language contexts. Our formulation seeks to address this. \subsection{Knowledge Base Scope} Our modeling focus is on using \emph{unstructured textual} information for entity linking, leaving other modalities or structured information as areas for future work. Accordingly, we narrow our KB to the subset of entities that have descriptive text available: We define our entity vocabulary $V$ as all WikiData items that have an associated Wikipedia page in \emph{at least one language}, independent of the languages we actually model.\footnote{More details in Appendix~\ref{appsec:dataprep}.} This gives 19,666,787 entities, \emph{substantially more than in any other task setting we have found}: the KB accompanying the entrenched TAC-KBP 2010 benchmark \citep{ji2010overview} has less than a million entities, and although English Wikipedia continues to grow, recent work using it as a KB still only contends with roughly 6~million entities \citep{fevry2020empirical,zhou_tacl2020}. Further, by employing a simple rule to determine the set of viable entities, we avoid potential selection bias based on our desired test sets or the language coverage of a specific pretrained model. \subsection{Supervision} We extract a supervision signal for MEL by exploiting the hyperlinks that editors place on Wikipedia pages, taking the anchor text as a linked mention of the target entity. This follows a long line of work in exploiting hyperlinks for EL supervision \cite{bunescu-pasca-2006-using,singh12:wiki-links,logan-etal-2019-baracks}, which we extend here by applying the idea to extract a large-scale dataset of 684 million mentions in 104 languages, linked to WikiData entities. This is at least six times larger than datasets used in prior English-only linking work \citep{gillick-etal-2019-learning}. Such large-scale supervision is beneficial for probing the quality attainable with current-day high-capacity neural models. \section{Mewsli-9 Dataset} We facilitate evaluation on the proposed multilingual EL task by releasing a matching dataset that covers a diverse set of languages and entities. \textbf{Mewsli-9} (\emph{\textbf{M}ultilingual Entities in N\textbf{ews}, \textbf{li}nked)} contains 289,087 entity mentions appearing in 58,717 originally written news articles from WikiNews, linked to WikiData.% \footnote{\url{www.wikinews.org}, using the 2019-01-01 snapshot from \url{archive.org}} The corpus includes documents in nine languages, representing five language families and six orthographies.\footnote{Mewsli-9 languages \textcolor{gray}{(code, family, script)}: Japanese \textcolor{gray}{(`ja', Japonic, ideograms)}; German \textcolor{gray}{(`de', Indo-European (IE), Latin)}; Spanish \textcolor{gray}{(`es', IE, Latin)}; Arabic \textcolor{gray}{(`ar', Afro-Asiatic, Arabic)}; Serbian \textcolor{gray}{(`sr', IE, Latin \& Cyrillic)}; Turkish \textcolor{gray}{(`tr', Turkic, Latin)}; Persian \textcolor{gray}{(`fa', IE, Perso-Arabic)}; Tamil \textcolor{gray}{(`ta', Dravidian, Brahmic)}; English \textcolor{gray}{(`en', IE, Latin)}.} Per-language statistics appear in \autoref{tab:wn_corpus}. 
Crucially, 11\% of the 82,162 distinct target entities in \mbox{Mewsli-9} \emph{do not have English Wikipedia pages}, thereby setting a restrictive upper bound on performance attainable by a standard XEL system focused on English Wikipedia entities.\footnote{As of 2019-10-03.} Even some English documents may contain such mentions, such as the Romanian reality TV show, \ientity{Noră pentru mama}{Q12736895}. WikiNews articles constitute a somewhat different text genre from our Wikipedia training data: The articles do not begin with a formulaic entity description, for example, and anchor link conventions are likely different. We treat the full dataset as a test set, avoiding any fine-tuning or hyperparameter tuning, thus allowing us to evaluate our model's robustness to domain drift. \mbox{Mewsli-9} is a drastically expanded version of the English-only WikiNews-2018 dataset by \newcite{gillick-etal-2019-learning}. Our automatic extraction technique trades annotation quality for scale and diversity, in contrast to the MEANTIME corpus based on WikiNews \cite{minard-etal-2016-meantime}. \mbox{Mewsli-9} intentionally stretches the KB definition beyond English Wikipedia, unlike VoxEL \cite{rosales2018voxel}. Both MEANTIME and VoxEL are limited to a handful of European languages. \begin{table} \small \centering \begin{tabular}{rrrrr} \toprule & & & \multicolumn{2}{c}{\textbf{Entities}} \\ \cmidrule(lr){4-5} \textbf{Lang.} & \textbf{Docs} & \textbf{Mentions} & Distinct & $\notin$ EnWiki \\ \midrule ja & 3,410 & 34,463 & 13,663 & 3,384 \\ de & 13,703 & 65,592 & 23,086 & 3,054 \\ es & 10,284 & 56,716 & 22,077 & 1,805 \\ ar & 1,468 & 7,367 & 2,232 & 141 \\ sr & 15,011 & 35,669 & 4,332 & 269 \\ tr & 997 & 5,811 & 2,630 & 157 \\ fa & 165 & 535 & 385 & 12 \\ ta & 1,000 & 2,692 & 1,041 & 20 \\ en & 12,679 & 80,242 & 38,697 & 14 \\ \cmidrule(lr){2-5} & 58,717 & 289,087 & 82,162 & 8,807 \\ \midrule en$'$ & 1,801 & 2,263 & 1,799 & 0 \\ \bottomrule \end{tabular} \caption{Corpus statistics for Mewsli-9, an evaluation set we introduce for multilingual entity linking against WikiData. Line en$'$ shows statistics for English WikiNews-2018, by \newcite{gillick-etal-2019-learning}. \label{tab:wn_corpus}} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{data/diagram_with_example.pdf} \caption{ \small Dual Encoder \textbf{Model F} diagram. The input to the \textit{Mention Encoder} is a sequence of WordPiece tokens that includes the document title ($T_i$), context immediately left of the mention ($L_i$), the mention span ($M_i$) demarcated by [E] and [/E] markers, and context immediately right of the mention ($R_i$). Segment labels ($SEG_i$) are also used to distinguish the input segments. The input to the (Model F) \textit{Entity Encoder} is simply the WordPiece tokens in the entity description ($D_i$). As usual, embeddings passed to the first transformer layer are the sum of positional embeddings (not pictured here), the segment embeddings, and the WordPiece embeddings. The example shows a Turkish mention of \ientity{Augustus}{Q211804} paired with its Italian description. \label{fig:architecture}} \end{figure*} \section{Model} Prior work showed that a dual encoder architecture can encode entities and contextual mentions in a dense vector space to facilitate efficient entity retrieval via nearest-neighbors search \citep{gillick-etal-2019-learning,wu2019zeroshot}. We take the same approach. 
The dual encoder maps a mention-entity pair $(m,e)$ to a score: \begin{equation}\label{eq:cosine} s(m,e) = \frac{\phi(m)^T \psi(e)}{\|\phi(m)\| \|\psi(e)\|}, \end{equation} where $\phi$ and $\psi$ are learned neural network encoders that encode their arguments as $d$-dimensional vectors ($d=300$, matching prior work). Our encoders are BERT-based Transformer networks \cite{vaswani2017attention,devlin-etal-2019-bert}, which we initialize from a pretrained multilingual BERT checkpoint.% \footnote{\url{github.com/google-research/bert} \texttt{multi\_cased\_L-12\_H-768\_A-12}} For efficiency, we only use the first 4 layers, which results in a negligible drop in performance relative to the full 12-layer stack. The WordPiece vocabulary contains 119,547 symbols covering the top 104 Wikipedia languages by frequency---this is the language set we use in our experiments. \subsection{Mention Encoder} The mention encoder $\phi$ uses an input representation that is a combination of \emph{local context} (mention span with surrounding words, ignoring sentence boundaries) and simple \emph{global context} (document title). The document title, context, and mention span are marked with special separator tokens as well as identifying token type labels (see \autoref{fig:architecture} for details). Both the mention span markers and document title have been employed in related work \citep{agarwal2020entity,fevry2020empirical}. We use a maximum sequence length of 64 tokens, similar to prior work \citep{fevry2020empirical}, up to a quarter of which are used for the document title. The CLS token encoding from the final layer is projected to the encoding dimension to form the final mention encoding. \subsection{Entity Encoders} We experiment with two entity encoder architectures. The first, called \textbf{Model F}, is a featurized entity encoder that uses a fixed-length text description (64 tokens) to represent each entity (see \autoref{fig:architecture}). The same 4-layer Transformer architecture is used---without parameter sharing between mention and entity encoders---and again the CLS token vector is projected down to the encoding dimension. Variants of this entity architecture were employed by \newcite{wu2019zeroshot} and \newcite{logeswaran-etal-2019-zero}. The second architecture, called \textbf{Model E}, is simply a QID-based embedding lookup as in \newcite{fevry2020empirical}. This latter model is intended as a baseline. \emph{A priori}, we expect Model E to work well for common entities, less well for rarer entities, and not at all for zero-shot retrieval. We expect Model F to provide more parameter-efficient storage of entity information and possibly improve on zero- and few-shot retrieval. \subsubsection{Entity Description Choice}\label{sec:description_heuristic} There are many conceivable ways to make use of entity descriptions from multiple languages. We limit the scope to using one primary description per entity, thus obtaining a single coherent text fragment to feed into the Model F encoder. We use a simple data-driven selection heuristic that is based on observed entity usage: Given an entity $e$, let $n_e(l)$ denote the number of mentions of $e$ in documents of language $l$, and $n(l)$ the global number of mentions in language $l$ across all entities. 
From a given source of descriptions---first Wikipedia and then WikiData---we order the candidate descriptions $(t_e^{l_1},t_e^{l_2},\dots )$ for $e$ first by the per-entity distribution $n_e(l)$ and then by the global distribution $n(l)$.% \footnote{The candidate descriptions (but not $V$) are limited to the 104 languages covered by our model vocabulary---in general, both Wikipedia and WikiData cover more than 300 languages.} For the example entity in Section~\ref{sec:description_example}, this heuristic selects the Catalan description because $9/16$ training examples link to the Catalan Wikipedia page. \subsection{Training Process} In all our experiments, we use an 8k batch size with in-batch sampled softmax \citep{gillick2018end}. Models are trained with TensorFlow \citep{abadi2016tensorflow} using the Adam optimizer \cite{kingma2015adam,Loshchilov2019DecoupledWD}. All BERT-based encoders are initialized from a pretrained checkpoint, but the Model E embeddings are initialized randomly. We doubled the batch size until no further held-out set gains were evident and chose the number of training steps to keep the training time of each phase under one day on a TPU. Further training would likely yield small improvements. See Appendix~\ref{appsec:training} for more detail. \section{Experiments} We conduct a series of experiments to gain insight into the behavior of the dual encoder retrieval models under the proposed MEL setting, asking: \begin{itemize} \itemsep-0.2em \item What are the relative merits of the two types of entity representations used in Model E and Model F (embeddings vs. encodings of textual descriptions)? \item Can we adapt the training task and hard-negative mining to improve results across the entity frequency distribution? \item Can a single model achieve reasonable performance on over 100 languages while retrieving from a 20 million entity candidate set? \end{itemize} \subsection{Evaluation Data} We follow \newcite{upadhyay-etal-2018-joint} and evaluate on the ``hard'' subset of the Wikipedia-derived test set introduced by \newcite{tsai-roth-2016-cross} for cross-lingual EL against English Wikipedia, \textbf{TR2016\textsuperscript{hard}}. This subset comprises mentions for which the correct entity did not appear as the top-ranked item in their alias table, thus stress-testing a model's ability to generalize beyond mention surface forms. Unifying this dataset with our task formulation and data version requires mapping its gold entities from the provided, older Wikipedia titles to newer WikiData entity identifiers (and following intermediate Wikipedia redirection links). This succeeded for all but 233/42,073 queries in TR2016\textsuperscript{hard}{}---our model receives no credit on the missing ones. To be compatible with the pre-existing train/test split, we excluded from our training set all mentions appearing on Wikipedia pages in the full TR2016 test set. This was done for all 104 languages, to avoid cross-lingual overlap between train and test sets. This aggressive scheme holds out 33,460,824 instances, leaving our final training set with 650,975,498 mention-entity pairs. \autoref{fig:heldout_104} provides a breakdown by language. \subsection{Evaluating Design Choices}\label{sec:eval_design_choices} \subsubsection{Setup and Metrics} In this first phase of experiments we evaluate design choices by reporting the \emph{differences} in Recall@100 between two models at a time, for conciseness. 
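For concreteness, the in-batch sampled softmax objective used during training can be sketched in a few lines. The code below is our own illustrative NumPy version, not the production TensorFlow implementation; it treats every other entity in the batch as a sampled negative and scores pairs with the cosine of Eq.~(\ref{eq:cosine}).
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp

def in_batch_softmax_loss(mention_enc, entity_enc):
    """In-batch sampled softmax over cosine scores.

    mention_enc, entity_enc: [batch, d] arrays of aligned (m, e)
    encodings; entity j != i serves as a negative for mention i.
    """
    m = mention_enc / np.linalg.norm(mention_enc, axis=1, keepdims=True)
    e = entity_enc / np.linalg.norm(entity_enc, axis=1, keepdims=True)
    scores = m @ e.T                       # [batch, batch] cosine scores
    log_p = scores - logsumexp(scores, axis=1, keepdims=True)
    return -np.mean(np.diag(log_p))        # gold entities on the diagonal
\end{verbatim}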
Note that for final system comparisons, it is standard to use Accuracy of the top retrieved entity (R@1), but to evaluate a dual encoder retrieval model, we prefer R@100 as this is better matched to its likely use case as a candidate generator. Here we use the TR2016\textsuperscript{hard}{} dataset, as well as a portion of the 104-language set held out from our training data, sampled to have 1,000 test mentions per language. (We reserve the new \mbox{Mewsli-9} dataset for testing the final model in \autoref{sec:eval_wikinews9}.) Reporting results for 104 languages is a challenge. To break down evaluation results by entity frequency bins, we partition a test set according to the frequency of its gold entities as observed in the training set. This is in line with recent recommendations for finer-grained evaluation in EL \citep{waitelonis-gerbil2016,ilievski-etal-2018-systematic}. We calculate metrics within each bin, and report macro-average over bins. This is a stricter form of the label-based macro-averaging sometimes used, but better highlights the zero-shot and few-shot cases. We also report micro-average metrics, computed over the entire dataset, without binning. \begin{table*} \centering \begin{tabular}{l rr rr rr} \toprule & \multicolumn{2}{c}{\bf (a)} & \multicolumn{2}{c}{\bf (b)} & \multicolumn{2}{c}{\bf (c)} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \textbf{Bin} & holdout & TR2016\textsuperscript{hard}{} & holdout & TR2016\textsuperscript{hard}{} & holdout & TR2016\textsuperscript{hard}{} \\ \midrule $[0, 1)$ & +0.842 & +0.380 & +0.009 & +0.093 & +0.044 & +0.144 \\ $[1, 10)$ & +0.857 & +0.814 & +0.018 & +0.037 & +0.051 & +0.031 \\ $[10, 100)$ & +0.211 & +0.191 & +0.012 & +0.024 & +0.006 & -0.019 \\ $[100, 1k)$ & -0.010 & -0.031 & +0.007 & +0.019 & -0.005 & -0.015 \\ $[1k, 10k)$ & -0.018 & -0.051 & +0.008 & +0.011 & -0.003 & -0.007 \\ $[10k, +)$ & -0.009 & -0.089 & +0.004 & +0.003 & -0.002 & -0.013 \\ \midrule micro-avg & +0.018 & +0.008 & +0.006 & +0.017 & -0.001 & -0.006 \\ macro-avg & +0.312 & +0.202 & +0.010 & +0.031 & +0.015 & +0.020 \\ \bottomrule \end{tabular} \caption{R@100 differences between pairs of models: (a) model F (featurized inputs for entities) relative to model E (dedicated embedding for each entity); (b) add cross-lingual entity-entity task on top of the mention-entity task for model F; (c) control label balance per-entity during negative mining (versus not). \label{tab:three_pairwise}} \end{table*} \subsubsection{Entity Encoder Comparison} We first consider the choice of entity encoder, comparing Model F with respect to Model E. \autoref{tab:three_pairwise}(a) shows that using the entity descriptions as inputs leads to dramatically better performance on rare and unseen entities, in exchange for small losses on entities appearing more than 100 times, and overall improvements in both macro and micro recall. Note that, as expected, the embedding Model~E gives 0\% recall in zero-shot cases, as these embeddings are randomly initialized and never get updated in the absence of any training examples. The embedding table of Model E has 6 billion parameters, but there is no sharing across entities. Model F has approximately 50 times fewer parameters, but can distribute information in its shared, compact WordPiece vocabulary and Transformer layer parameters. 
We can think of these dual encoder models as classifiers over 20 million classes where the softmax layer is either parameterized by an ID embedding (Model E) or an encoding of a description of the class itself (Model F). Remarkably, using a Transformer for the latter approach effectively compresses (nearly) all the information in the traditional embedding model into a compact and far more generalizable model. This result highlights the value of analyzing model behavior in terms of entity frequency. When looking at the micro-averaged metric in isolation, one might conclude that the two models perform similarly; but the macro-average is sensitive to the large differences in the low-frequency bins. \subsubsection{Auxiliary Cross-Lingual Task} In seeking to improve the performance of Model~F on tail entities, we return to the (partly redundant) entity descriptions in multiple languages. By choosing just one language as the input, we are ignoring potentially valuable information in the remaining descriptions. Here we add an auxiliary task: cross-lingual entity description retrieval. This reuses the entity encoder $\psi$ of Model F to map two descriptions of an entity $e$ to a score, $s(t_e^l,t_e^{l'})\propto \psi(t_e^l)^T\psi(t_e^{l'})$, where $t_e^{l'}$ is the description selected by the earlier heuristic, and $t_e^l$ is sampled from the other available descriptions for the entity. We sample up to 5 such cross-lingual pairs per entity to construct the training set for this auxiliary task. This makes richer use of the available multilingual descriptions, and exposes the model to 39 million additional high-quality training examples whose distribution is decoupled from that of the mention-entity pairs in the primary task. The multi-task training computes an overall loss by averaging the in-batch sampled softmax loss for a batch of $(m,e)$ pairs and for a batch of $(e,e)$ pairs. \autoref{tab:three_pairwise}(b) confirms that this brings consistent quality gains across all frequency bins, and more so for uncommon entities. Again, reliance on the micro-average metric alone understates the benefit of this data augmentation for rarer entities. \begin{figure*} \centering \includegraphics[width=\textwidth]{data/heldout_104_stacked_color.pdf} \vspace{-2.8em} \caption{Accuracy of Model F\textsuperscript{+} on the 104 languages in our balanced Wikipedia heldout set, overlaid on alias table accuracy and Wikipedia training set size. (See \autoref{fig:heldout_104_larger} in the Appendix for a larger view.) \label{fig:heldout_104}} \end{figure*} \subsubsection{Hard-Negative Mining} Training with hard negatives is highly effective in monolingual entity retrieval \citep{gillick-etal-2019-learning}, and we apply the technique they detail to our multilingual setting. In its standard form, a certain number of negatives are mined for each mention in the training set by collecting top-ranked but incorrect entities retrieved by a prior model. However, this process can lead to a form of the class imbalance problem as uncommon entities become over-represented as negatives in the resulting data set. For example, an entity appearing just once in the original training set could appear hundreds or thousands of times as a negative example. Instead, we control the ratio of positives to negatives on a per-entity basis, mining up to 10 negatives per positive.
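A minimal sketch of the per-entity balancing appears below. The paper does not specify the exact bookkeeping, so the function and variable names here are assumptions; the sketch only illustrates the stated rule that an entity may be kept as a negative at most 10 times per positive training example it has.

\begin{verbatim}
from collections import Counter

def mine_balanced_negatives(retrievals, positive_counts, ratio=10):
    """retrievals: (mention_id, ranked_candidate_ids, gold_id) triples
    produced by a prior model; positive_counts: entity id -> #positives."""
    used = Counter()          # times each entity has been kept as a negative
    pairs = []
    for mention, ranked, gold in retrievals:
        for entity in ranked:
            if entity == gold:
                continue      # the correct entity is not a negative
            if used[entity] < ratio * positive_counts[entity]:
                used[entity] += 1
                pairs.append((mention, entity))
    return pairs

# Entity "rare" has a single positive, so it is kept at most 10 times.
counts = Counter({"rare": 1, "common": 1000})
demo = [(m, ["rare", "common"], "gold") for m in range(50)]
print(Counter(e for _, e in mine_balanced_negatives(demo, counts)))
# Counter({'common': 50, 'rare': 10})
\end{verbatim}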
\autoref{tab:three_pairwise}(c) confirms that our strategy effectively addresses the imbalance issue for rare entities with only small degradation for more common entities. We use this model to perform a second, final round of the adapted negative mining followed by further training, which further improves the macro-average by +0.05 (holdout) and +0.08 (TR2016\textsuperscript{hard}{}). The model we use in the remainder of the experiments combines all these findings. We use Model F with the entity-entity auxiliary task and hard negative mining with per-entity label balancing, referred to as \textbf{Model F\textsuperscript{+}}. \subsection{Linking in 100 Languages}\label{sec:linking_in_100} Breaking down the model's performance by language (R@1 on our heldout set) reveals relatively strong performance across all languages, despite greatly varying training sizes (\autoref{fig:heldout_104}). It also shows improvement over an alias table baseline on all languages. While this does not capture the relative difficulty of the EL task in each language, it does strongly suggest effective cross-lingual transfer in our model: even the most data-poor languages have reasonable results. This validates our massively multilingual approach. \subsection{Comparison to Prior Work}\label{sec:eval_compare_to_prior_work} We evaluate the performance of our final retrieval model relative to previous work on two existing datasets, noting that direct comparison is impossible because our task setting is novel. \subsubsection{Cross-Lingual Wikification Setting}\label{sec:eval_xel} We compare to two previously reported results on TR2016\textsuperscript{hard}: the \textsc{WikiME} model of \newcite{tsai-roth-2016-cross} that accompanied the dataset, and the \textsc{Xelms-multi} model by \newcite{upadhyay-etal-2018-joint}. Both models depend at their core on multilingual word embeddings, which are obtained by applying (bilingual) alignment or projection techniques to pretrained monolingual word embeddings. \begin{table} \small \centering \begin{tabular}{r ccc} \toprule & \textbf{Tsai+} & \textbf{Upad.+} & \textbf{Model F\textsuperscript{+}}\\ \midrule \textbf{Languages} & 13 & 5 & 104 \\ \textbf{$|V|$} & 5m & 5m & 20m \\ \textbf{Candidates} & 20 & 20 & 20m \\ \midrule de & 0.53 & 0.55 & \textbf{0.62} \\ es & 0.54 & 0.57 & \textbf{0.58} \\ fr & 0.48 & 0.51 & \textbf{0.54} \\ it & 0.48 & 0.52 & \textbf{0.56} \\ \midrule \textbf{Average} & 0.51 & 0.54 & \textbf{0.57} \\ \bottomrule \end{tabular} \caption{Our best model outperforms previous related non-monolingual models that relied on alias tables and disambiguated among a much smaller set of entities. \emph{Bottom half:} linking accuracy on the TR2016\textsuperscript{hard}{} test set. \emph{Top half:} language coverage; entity vocabulary size; and entities disambiguated among at inference time. \emph{Middle columns:} \citep{tsai-roth-2016-cross} and \citep{upadhyay-etal-2018-joint}. \label{tab:sota}} \end{table} As reported in \autoref{tab:sota}, our multilingual dual encoder outperforms the other two by a significant margin. To the best of our knowledge, this is the highest accuracy to date on this challenging evaluation set. (Our comparison is limited to the four languages on which \newcite{upadhyay-etal-2018-joint} evaluated their multilingual model.) This is a strong validation of the proposed approach because the experimental setting is heavily skewed toward the prior models: Both are \emph{rerankers}, and require a first-stage candidate generation step.
They therefore only disambiguate among the resulting $\leq$20 candidate entities (only from English Wikipedia), whereas our model performs retrieval against all 20 million entities. \begin{table} \centering \begin{tabular}{r cc} \toprule & \textbf{DEER} & \textbf{Model F\textsuperscript{+}} \\ \midrule \textbf{Languages} & 1 & 104 \\ \textbf{Candidates = $|V|$} & 5.7m & 20m \\ \midrule R@1 & 0.92 & 0.92 \\ R@100 & 0.98 & \textbf{0.99} \\ \bottomrule \end{tabular} \caption{Comparison to the DEER model \citep{gillick-etal-2019-learning} on their English WikiNews-2018 dataset. \label{tab:flare_wikinews}} \end{table} \subsubsection{Out-of-Domain English Evaluation}\label{sec:eval_english} We now turn to the question of how well the proposed multilingual model can maintain competitive performance in English and generalize to a domain other than Wikipedia. \newcite{gillick-etal-2019-learning} provide a suitable comparison point. Their DEER model is closely related to our approach, but used a lighter-weight dual encoder architecture (bags-of-embeddings and feed-forward layers, without attention) and was evaluated on English EL only. On the English WikiNews-2018 dataset they introduced, our Transformer-based multilingual dual encoder matches their monolingual model's performance at R@1 and improves R@100 by 0.01 (reaching 0.99). Our model thus retains strong English performance despite covering many languages and linking against a larger KB. See \autoref{tab:flare_wikinews}. \subsection{Evaluation on Mewsli-9}\label{sec:eval_wikinews9} \autoref{tab:wn9_results_main} shows the performance of our model on our new Mewsli-9 dataset compared with an alias table baseline that retrieves entities based on the prior probability of an entity given the observed mention string. \autoref{tab:wn9_results_bin} shows the usual frequency-binned evaluation. While overall (micro-average) performance is strong, there is plenty of headroom in zero- and few-shot retrieval. \begin{table}[t] \centering \begin{tabular}{r rrrr} \toprule & \multicolumn{2}{c}{\bf Alias Table} & \multicolumn{2}{c}{\bf Model F\textsuperscript{+}} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \textbf{Language} & R@1 & R@10 & R@1 & R@10 \\ \midrule ar & 0.89 & 0.93 & 0.92 & 0.98 \\ de & 0.86 & 0.91 & 0.92 & 0.97 \\ en & 0.79 & 0.86 & 0.87 & 0.94 \\ es & 0.82 & 0.90 & 0.89 & 0.97 \\ fa & 0.87 & 0.92 & 0.92 & 0.97 \\ ja & 0.82 & 0.90 & 0.88 & 0.96 \\ sr & 0.87 & 0.92 & 0.93 & 0.98 \\ ta & 0.79 & 0.85 & 0.88 & 0.97 \\ tr & 0.80 & 0.88 & 0.88 & 0.97 \\ \midrule micro-avg & 0.83 & 0.89 & 0.89 & 0.96 \\ macro-avg & 0.83 & 0.89 & 0.90 & 0.97 \\ \bottomrule \end{tabular} \caption{Results of our main dual encoder Model~F\textsuperscript{+} on the new Mewsli-9 dataset. Consistent performance across languages in a different domain from the training set points to good generalization.
\label{tab:wn9_results_main}} \end{table} \begin{table}[t] \small \centering \begin{tabular}{l rrr r} \toprule & & \multicolumn{2}{c}{\bf Model F\textsuperscript{+}} & \multicolumn{1}{c}{\bf +CA} \\ \cmidrule(lr){3-4} \cmidrule(lr){5-5} \textbf{Bin} & \textbf{Queries} & \emph{R@1} & R@10 & \emph{R@1} \\ \midrule $[0, 1)$ & 3,198 & 0.08 & 0.34 & 0.07 \\ $[1, 10)$ & 6,564 & 0.58 & 0.81 & 0.60 \\ $[10, 100)$ & 32,371 & 0.80 & 0.93 & 0.82 \\ $[100, 1k)$ & 66,232 & 0.90 & 0.97 & 0.90 \\ $[1k, 10k)$ & 78,519 & 0.93 & 0.98 & 0.93 \\ $[10k, +)$ & 102,203 & 0.94 & 0.99 & 0.96 \\ \midrule micro-avg & 289,087 & 0.89 & 0.96 & 0.91 \\ macro-avg & & 0.70 & 0.84 & 0.71 \\ \bottomrule \end{tabular} \caption{Results on the new Mewsli-9 dataset, by entity frequency, attained by our main dual encoder Model~F\textsuperscript{+}, plus reranking its predictions with a Cross-Attention scoring model (CA). \label{tab:wn9_results_bin}} \end{table} \begin{table*} \small \centering \begin{tabular}{r L{.86\textwidth}} \toprule \textbf{Context 1} & \ldots Bei den neuen Bahnen handelt es sich um das Model \textbf{Tramino} von der polnischen Firma Solaris Bus \& Coach\ldots \\ \cmidrule(lr){2-2} \textbf{Prediction} & \ientity{Solaris Tramino}{Q780281}: Solaris Tramino -- rodzina tramwajów, które są produkowane przez firmę Solaris Bus \& Coach z Bolechowa koło Poznania\ldots \\ \cmidrule(lr){2-2} \textbf{Outcome} & \emph{\textbf{Correct}: A family of trams originally manufactured in Poland, mentioned here in German, linked to its Polish description.} \\ \midrule \textbf{Context 2} & \ldots sobre una tecnología que permitiría fabricar chocolate a partir de los \textbf{zumos de fruta}, agua con vitamina C o gaseosa dietética\ldots \\ \cmidrule(lr){2-2} \textbf{Prediction} & \ientity{fruit juice}{Q20932605}: Fruchtsaft , spezieller auch Obstsaft , ist ein aus Früchten einer oder mehrerer Fruchtarten gewonnenes flüssiges Erzeugnis\ldots \\ \cmidrule(lr){2-2} \textbf{Outcome} & \emph{\textbf{Correct}: A Spanish mention of ``fruit juice'' linked to its German description---only ``juice'' has a dedicated English Wikipedia page.} \\ \midrule \textbf{Context 3} & \foreignlanguage{russian}{\ldots Душан Ивковић рекао је да је његов тим имао императив победе над ( Италијом ) на Европском првенству\ldots } \\ \cmidrule(lr){2-2} \textbf{Prediction} & \ientity{It.\ men's water polo team}{Q261190}: La nazionale di pallanuoto maschile dell' Italia\ldots \\ \cmidrule(lr){2-2} \textbf{Expected} & \ientity{It.\ nat.\ basketball team}{Q261190}: La nazionale di pallacanestro italiana è la selezione dei migliori giocatori di nazionalità italiana\ldots \\ \cmidrule(lr){2-2} \textbf{Outcome} & \emph{\textbf{Wrong}: A legitimately ambiguous mention of ``Italy'' in Serbian (sports context), for which the model retrieved the water polo and football teams, followed by the expected basketball team entity, all featurized in Italian.} \\\midrule \textbf{Context 4} & \ldots In July 2009 , action by the Federal Bureau of Reclamation to protect threatened fish stopped \textbf{irrigation pumping} to parts of the California Central Valley\ldots \\ \cmidrule(lr){2-2} \textbf{Prediction} & \ientity{irrigation sprinkler}{Q998539}: \begin{CJK}{UTF8}{min}スプリンクラー は 、 水 に 高圧 を かけ 飛 沫 に し て ノズル から 散布 する 装置\end{CJK} \\ \cmidrule(lr){2-2} \textbf{Outcome} & \emph{\textbf{Wrong}: Metonymous mention of \ientity{Central Valley Project}{Q2944429} in English, but the model retrieved the more literal match, featurized in Japanese.
Metonymy is a known challenging case for EL \citep{ling2015design}.} \\ \bottomrule \end{tabular} \caption{Correct and mistaken examples observed in error analysis of dual encoder model F\textsuperscript{+} on Mewsli-9.\label{fig:examples}} \end{table*} \subsubsection{Example Outputs} We sampled the model's correct predictions on Mewsli-9, focusing on cross-lingual examples where entities do not have an English Wikipedia page (\autoref{fig:examples}). These examples demonstrate that the model effectively learns cross-lingual entity representations. Based on a random sample of the model's errors, we also show examples that summarize notable error categories. \subsubsection{Reranking Experiment} Finally, we report a preliminary experiment applying a cross-attention scoring model (CA) to rerank entity candidates retrieved by the main dual encoder (DE), using the same architecture as \newcite{logeswaran-etal-2019-zero}. We feed the concatenated mention text and entity description into a 12-layer Transformer model, initialized from the same multilingual BERT checkpoint referenced earlier. The CA model's CLS token encoding is used to classify mention-entity coherence. We train the model with a binary cross-entropy loss, using positives from our Wikipedia training data, taking for each positive the top-4 DE-retrieved candidates plus 4 random candidates (sampled in proportion to the positive distribution). We use the trained CA model to rerank the \mbox{top-5} DE candidates for Mewsli-9 (\autoref{tab:wn9_results_bin}). We observed improvements on most frequency buckets compared to DE R@1, which suggests that the model's few-shot capability can be improved by a cross-lingual reading-comprehension step. This also offers an initial multilingual validation of a similar two-step BERT-based approach recently introduced in a monolingual setting by \newcite{wu2019zeroshot}, and provides a strong baseline for future work. \section{Conclusion} We have proposed a new formulation for multilingual entity linking that seeks to expand the scope of entity linking to better reflect the real-world challenges of rare entities and/or low-resource languages. Operationalized through Wikipedia and WikiData, our experiments using enhanced dual encoder retrieval models and frequency-based evaluation provide compelling evidence that it is feasible to perform this task with a single model covering over 100 languages. Our automatically extracted Mewsli-9 dataset serves as a starting point for evaluating entity linking beyond the entrenched English benchmarks and under the expanded multilingual setting. Future work could investigate the use of non-expert human raters to improve the dataset quality further. In pursuit of improved entity representations, future work could explore the joint use of complementary multi-language descriptions per entity, methods to update representations in a light-weight fashion when descriptions change, and ways to incorporate relational information stored in the KB. \section*{Acknowledgments} Thanks to Sayali Kulkarni and the anonymous reviewers for their helpful feedback. \bibliographystyle{acl_natbib}
\section{Introduction} \subsection{Result and background} In the Steiner forest problem, we are given a set of $n$ pairs of {\em terminals} $\{(t_i,t_i')\}_{i = 1}^n$. The goal is to find a minimum-cost forest $F$ such that every pair of terminals is connected by a path in $F$. We consider the problem where the terminals are points in the Euclidean plane. The solution is a set of line segments of the plane; non-terminal points with more than two line segments adjacent to them in the solution are called {\em Steiner points}. The cost of $F$ is the sum of the lengths in $\ell_2$ of the line segments comprising it. Our main result is: \begin{theorem}\label{thm:main} There is a randomized $O(n \mathop{\mathrm{polylog}} n)$-time approximation scheme for the Steiner forest problem in the Euclidean plane. \end{theorem} An approximation scheme is guaranteed, for a fixed $\epsilon$, to find a solution whose total length is at most $1+\epsilon$ times the length of a minimum solution. There is a vast literature on algorithms for problems in the Euclidean plane. This work builds on the approximation scheme for geometric problems, such as Traveling Salesman and Steiner tree, due to Arora~\cite{Arora98}. (See~\cite{ApproximationAlgorithms} for a digest.) Similar techniques were suggested by Mitchell~\cite{Mitchell99} and improved by Rao and Smith for the Steiner tree and TSP problems~\cite{RS98}. Concerning approximation schemes, in addition to the work of Arora and Mitchell, others have built on similar ideas (e.g.~\cite{ARR98,KR07}). The Steiner forest problem, a generalization of the Steiner tree problem, is NP-hard~\cite{Karp75} and max-SNP complete~\cite{BP89,Thimm01} in general graphs and high-dimensional Euclidean space~\cite{Trevisan01}. Therefore, no PTAS exists for these problems unless $\mathrm{P} = \mathrm{NP}$. The 2-approximation algorithm due to Agrawal, Klein and Ravi~\cite{AKR95} can be adapted to Euclidean problems by restricting the Steiner points to lie on a sufficiently fine grid and converting the problem into a graph problem. We have formulated the connectivity requirements in terms of {\em pairs} of terminals. One can equivalently formulate these in terms of {\em sets} of terminals: the goal is then to find a forest in which each set of terminals is connected. Arora states~\cite{Arora2003} that his approach yields an approximation scheme whose running time is exponential in the number of sets of terminals, and this is the only previous work to take advantage of the Euclidean plane to get a better approximation ratio than that of Agrawal et al.~\cite{AKR95}. \subsection{Recursive dissection}\label{subsection:dissection} In Arora's paradigm, the feasible space is recursively decomposed by {\em dissection squares} using a randomized variant of the quadtree (Figure~\ref{fig:dissections}). The dissection is a 4-ary tree whose root is a square box enclosing the input terminals; the width $L$ of the root box is twice the width of the smallest square box enclosing the terminals, and its lower left-hand corner is translated from the lower left-hand corner of the bounding box by $(-a,-b)$, where $a$ and $b$ are chosen uniformly at random from the range $[0,L/2)$. Each node in the tree corresponds to a {\em dissection square}. Each square is dissected into four parts of equal area by one vertical and one horizontal {\em dissection line}, each spanning the breadth of the root box. This process continues until each square contains at most one terminal (or multiple terminals having the same coordinates).
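To make the dissection concrete, here is a small Python sketch of the shifted quadtree; it is an illustration under simplifying assumptions (integer terminal coordinates, $L$ an even integer, empty quadrants skipped), not the implementation whose running time is analyzed later.

\begin{verbatim}
import random

def dissect(points, x0, y0, width, squares):
    """Split a square until it holds at most one distinct point."""
    squares.append((x0, y0, width))
    if len(set(points)) <= 1:
        return
    half = width / 2
    for dx in (0, half):
        for dy in (0, half):
            quad = [(x, y) for (x, y) in points
                    if x0 + dx <= x < x0 + dx + half
                    and y0 + dy <= y < y0 + dy + half]
            if quad:
                dissect(quad, x0 + dx, y0 + dy, half, squares)

def random_dissection(points, L):
    """Root box of width L, shifted by a random (-a, -b)."""
    a, b = random.randrange(L // 2), random.randrange(L // 2)
    xmin = min(x for x, _ in points)
    ymin = min(y for _, y in points)
    squares = []
    dissect(points, xmin - a, ymin - b, L, squares)
    return squares

print(random_dissection([(0, 0), (3, 1), (3, 2)], L=8))
\end{verbatim}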
\begin{figure}[h] \centering \includegraphics[scale = 2]{recursive-dissection-2.pdf} \caption{The shifted quad-tree dissection. The shaded box is the bounding box of the terminals.} \label{fig:dissections} \end{figure} Feasible solutions are restricted to using a small number of {\em portals}, designated points on each dissection line. A Structure Theorem states that there is a near-optimal solution that obeys these restrictions. The final solution is found by a dynamic program guided by the recursive decomposition. In the problems considered by Arora, the solutions are connected. However, the solution to a Steiner forest problem is in general disconnected, since only paired terminals are required to be connected. It is not known {\em a priori} how the connected components partition the terminal pairs. For that reason, maintaining feasibility in the dynamic program requires a table that is exponential in the number of terminal pairs. In fact, Arora states~\cite{Arora2003} that his approach yields an approximation scheme whose running time is exponential in the number of sets of terminals. Nevertheless, here we use Arora's approach to get an approximation scheme whose running time is polynomial in the number of sets of terminals. The main technical challenge is in maintaining feasibility with a small dynamic programming table. \subsection{Small dynamic programming table} \label{sec:overview} We will use Arora's approach of a random recursive dissection. Arora shows (e.g.~for Steiner tree) that the optimal solution can be perturbed (while increasing the length only slightly) so that, for each box of the recursive dissection, the solution within the box interacts weakly and in a controlled way with the solution outside the box. In particular, the perturbed solution crosses the boundary of the box only a constant number of times, and only at an $O(1)$-sized subset of $O(\log n)$ selected points, called {\em portals}. The optimal solution that has this property can be found using dynamic programming. Unfortunately, for Steiner forest those restrictions are not sufficient: maintaining the feasibility constraints cannot be done with a polynomially-sized dynamic program. To see why, suppose the solution uses only 2 portals between adjacent dissection squares $R_E$ and $R_W$. In order to combine the solutions in $R_W$ and $R_E$ in the dynamic program into a feasible solution in $R_W\cup R_E$, we need to know, for each pair $(t,t')$ of terminals with $t \in R_W$ and $t' \in R_E$, which portal connects $t$ and $t'$ (Figure~\ref{fig:prop5}(a)). This requires $2^n$ configurations in the dynamic programming table. \begin{figure}[h] \centering \includegraphics[scale=2]{simple-connections.pdf} \caption{Maintaining feasibility is not trivially polynomial-sized.} \label{fig:prop5} \end{figure} To circumvent the problem in this example, the idea is to decompose $R_W$ and $R_E$ into a constant number of smaller dissection squares called {\em cells}. All terminals in a common cell that go to the boundary use a common portal. Thus, instead of keeping track of each terminal's choice of portal individually, the dynamic program can simply memoize each cell's choice of portal. The dynamic program also uses a specification of how portals must be connected {\em outside} the dissection squares. This information is sufficient to check feasibility when combining solutions of the subproblems for $R_W$ and for $R_E$.
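The following toy computation (illustrative numbers only, not bounds used later in the paper) contrasts the two bookkeeping schemes: tracking each terminal pair's portal choice yields a table that grows exponentially with $n$, while tracking one portal choice per cell yields a constant once $B$ and the number of portals are fixed.

\begin{verbatim}
def states_per_terminal_pair(n_pairs):
    # one independent portal choice per terminal pair (as in the figure)
    return 2 ** n_pairs

def states_per_cell(num_portals, B):
    # each of the B*B cells commits to one portal, or to none
    return (num_portals + 1) ** (B * B)

print(states_per_terminal_pair(50))          # 2^50: grows with the input
print(states_per_cell(num_portals=8, B=2))   # 9^4 = 6561, independent of n
\end{verbatim}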
To show near-optimality, we show that a constant number of cells per square is sufficient for finding a nearly-optimal solution. \paragraph{Basic notation and definitions} For two dissection squares $A$ and $B$, if $A$ encloses $B$, we say that $B$ is a {\em descendant} of $A$ and $A$ is an {\em ancestor} of $B$. If no other dissection square is enclosed by $A$ and encloses $B$, we say that $A$ is the {\em parent} of $B$ and $B$ is the {\em child} of $A$. We will extend these definitions to describe relationships between cells. The {\em depth} of a square $S$ is given by its depth in the dissection tree ($0$ for the root). The depth of a dissection line is the minimum depth of squares it separates. Note that a square at depth $i$ is bounded by two perpendicular depth-$i$ lines and two lines of depth less than $i$. For a line segment $s$ (open or closed), we use $\text{length}(s)$ to denote the $\ell_2$ distance between $s$'s endpoints. For a set of line segments $S = \{s_1, s_2, \ldots\}$, $\text{length}(S) = \sum_i \text{length}(s_i)$. For a subset $X$ of the Euclidean plane, a component of $X$ is a maximal subset $Y$ of $X$ such that every pair of points in $Y$ is path-wise connected in $X$. We use $|X|$ to denote the number of components of $X$. \newcommand{\diam}{\text{diam}} The diameter of a connected subset $C$ of the Euclidean plane, $\diam(C)$, is the maximum $\ell_2$ distance between any pair of points in $C$. We use $\text{OPT}$ to denote both the line segments forming an optimal solution and the length of those line segments. \section{The algorithm} The algorithm starts by finding a rough {\em partition} of the terminals which is a coarsening of the connectivity requirements (subsection~\ref{subsection:step1}). We solve each part of this partition independently. We next {\em discretize} the problem by moving the terminals to integer coordinates of a sufficiently fine grid (subsection~\ref{subsection:step2}). We will also require that the Steiner points lie at integer coordinates. We next perform a recursive dissection (subsection~\ref{subsection:step3}) and assign points on the dissection lines as portals (subsection~\ref{subsection:step4}) as introduced in Section~\ref{subsection:dissection}. We then break each dissection square into a small number of cells. We find the best feasible solution $F$ to the discretized problem that only crosses between dissection squares at portals and such that for each cell $C$ of dissection square $R$, $F \cap R$ has only one component that connects $C$ to the boundary of $R$ (subsection~\ref{subsection:step5}). We will show that the expected length of $F$ is at most a $\frac{4}{10}\epsilon$ fraction longer than $\text{OPT}$. By Markov's inequality, with probability at least one-half, $\text{length}(F)\leq (1+\frac{8}{10}\epsilon) \text{OPT}$. We show that moving the terminals back to their original positions (from their nearest integer coordinates) increases the length by at most $\frac{\epsilon}{40}\text{OPT}$. Therefore, the output solution has length at most $(1+\epsilon)\text{OPT}$ with probability at least one-half. We now describe each of these steps in detail. \subsection{Partition}\label{subsection:step1} We first partition the set of terminal pairs, creating subproblems that can be solved independently of each other without loss of optimality. The purpose of this partition is to bound the size of the bounding box for each problem in terms of $\text{OPT}$.
This bound is required for the next step, the result of which allows us to treat this geometric problem as a combinatorial problem. This discretization was also key to Arora's scheme, but the bound on the size of the bounding box for the problems he considers is trivially achieved. This is not the case for the Steiner forest problem. The size of the bounding box of all the terminals in an instance may be unrelated to the length of $\text{OPT}$. \newcommand{\db}{\text{dist}} Let $Q$ be the set of $m$ pairs $\{(t_i,t'_i)\}_{i=1}^m$ of $n$ terminals. Consider the Euclidean graph whose vertices are the terminals and whose edges are the line segments connecting terminal pairs in $Q$, and let $C_1, C_2, \ldots$ be the components of this graph. Let $\db(Q)=\max_i \diam(C_i)$; this is the maximum distance between any pair of terminals that must be connected. \begin{theorem}\label{thm:modify2} There exists a partition of $Q$ into independent instances $Q_1, Q_2, \ldots$ such that the optimal solution for $Q$ is the disjoint union of optimal solutions for each $Q_i$ and such that the diameter of $Q_i$ is at most $n_i^2\, \db(Q_i)$, where $n_i$ is the number of terminals in $Q_i$. Further, this partition can be found in $O(n\log n)$ time. \end{theorem} \noindent We will show that the following algorithm, {\sc Partition}$(Q,T)$, produces such a partition. Let $T$ be the minimum spanning tree of the terminals in $Q$. \begin{tabbing} {\sc Partition}$(Q,T)$\\ \qquad \= Let $e$ be the longest edge of $T$. \\ \> If $\text{length}(e) > n\,\db(Q)$, \\ \> \qquad \= remove $e$ from $T$ and let $T_1$ and $T_2$ be the resulting components.\\ \> \> For $i = 1, 2$, let $Q_i$ be the subset of terminal pairs connected by $T_i$.\\ \> \> $T:= \text{\sc Partition} (Q_1,T_1) \cup \text{\sc Partition} (Q_2,T_2)$.\\ \> Return the partition defined by the components of $T$. \end{tabbing} \begin{proof}[Proof of Theorem~\ref{thm:modify2}] First observe that by the cut property of minimum spanning trees, the distance between every terminal in $T_1$ and every terminal in $T_2$ is at least as long as the edge that is removed. Since a feasible solution is given by the union of minimum spanning trees of the sets of the requirement partition, and each edge in these trees has length at most $\db(Q)$, $\text{OPT} < n\, \db(Q)$. $\text{OPT}$ cannot afford to connect a terminal of $T_1$ to a terminal of $T_2$, because the distance between any terminal in $T_1$ and any terminal in $T_2$ is greater than $n\, \db(Q)$, which is an upper bound on $\text{OPT}$. (By definition of $\db$, there cannot have been a requirement to connect a terminal of $T_1$ to a terminal of $T_2$.) Therefore, $\text{OPT}$ must be the union of two solutions, one for the terminals contained by $T_1$ and one for the terminals contained by $T_2$. Inductively, the optimal solution for $Q$ is the union of optimal solutions for each set in {\sc Partition}$(Q,T)$, giving the first part of the theorem. The stopping condition of {\sc Partition} guarantees that there is a spanning tree of the terminals in the current subset $Q_i$ of terminals whose edges each have length at most $n_i\,\db(Q_i)$. Therefore, there is a path between each pair of terminals of length at most $n_i^2\,\db(Q_i)$, giving the second part of the theorem. Finally, we show that {\sc Partition} can be implemented to run in $O(n \log n)$ time.
The diameter of a set of points in the Euclidean plane can be computed by first finding a convex hull, and this can be done in $O(n\log n)$ time by, for example, Graham's algorithm~\cite{Graham72}. Therefore, $\db(C_i)$ can be computed in $O(n \log n)$ time. The terminal-pair sets $Q_1$ and $Q_2$ for the subproblems need not be computed explicitly, as the required information is given by $T_1$ and $T_2$. By representing $T$ with a top-tree data structure, we can find $n_i$ and $\db(Q_i)$ by way of a cut operation and a sum and maximum query, respectively, in $O(\log n)$ time~\cite{GGT91}. Since there are $O(n)$ recursive calls, the total time for the top-tree operations is $O(n \log n)$. \end{proof} Our PTAS finds an approximately optimal solution to each subproblem $Q_i$ (as defined by Theorem~\ref{thm:modify2}) and combines the solutions. For the remainder of our description of the algorithm, we focus on how the algorithm addresses one such subproblem $Q_i$. In order to avoid carrying over subscripts and arguments $Q_i$, $\db(Q_i)$, $n_i$ throughout the paper, from now on we will consider an instance given by $Q$, $\db(Q)$, and $n$, and assume it has the property that the maximum distance between terminals, whether belonging to a requirement pair or not, is at most $n^2 \db(Q)$. $\text{OPT}$ will refer to the length of the optimal solution for this subproblem. \subsection{Discretize}\label{subsection:step2} We would like to treat the terminals as discrete combinatorial objects. In order to do so, we assume that the coordinates of the terminals lie on an integer grid. We can do so by {\em scaling} the instance, but this may result in coordinates of unreasonable size. Instead, we scale by a smaller factor and {\em round} the positions of the terminals to their nearest half-integer coordinates. \subsubsection*{Scale} We scale by a factor of \[\frac{40\sqrt 2 n}{\epsilon\, \db(Q)}.\] Before scaling, $\text{OPT} \ge \db(Q)$, the distance between the furthest pair of terminals that must be connected. After scaling we get the following lower bound: \begin{equation} \label{eq:OPT-lb} \text{OPT} \geq \frac {40 \sqrt 2 n} {\epsilon} \end{equation} Before scaling, $\diam(Q) \le n^2\,\db(Q)$ by Theorem~\ref{thm:modify2}. After scaling we get the following upper bound on the diameter of the terminals: \begin{equation} \label{eq:diam-ub} \diam(Q) \leq \frac {40 \sqrt 2 n^3} {\epsilon} \end{equation} Herein, $\text{OPT}$ refers to distances in the scaled version. \subsubsection*{Round} We round the position of each terminal to the nearest grid center. Additionally, we will search for a solution that only uses Steiner points that are grid centers. We call this constrained problem the {\em rounded problem}. The rounded problem may merge terminals (and thus, their requirements). \begin{lemma}\label{lemma:rounding} A solution to the Steiner forest problem can be derived from an optimal solution to the rounded problem at an additional cost of at most $\frac \epsilon {40} \text{OPT}$. \end{lemma} \begin{proof} Let $F$ be an optimal solution to the rounded problem. From this we build a solution to the original problem by connecting the original terminals to their rounded counterparts with line segments of length at most $1/ \sqrt 2$, i.e.~half the length of the diagonal of a unit square. There are $n$ terminals, so the additional length is at most $n/\sqrt 2$, which is at most ${\epsilon \over 40} \text{OPT}$ by Equation~\eqref{eq:OPT-lb}. \end{proof} Let $F$ be an optimal solution to the rounded problem.
We relate the number of intersections of $F$ with grid lines to $\text{length}(F)$. We will bound the cost of our restrictions to portals and cells with this relationship. \begin{lemma} \label{lem:sum-of-crossings} There is a solution $F$ to the rounded problem of length at most $(1+\frac{1}{10}\epsilon)\text{OPT}$ that satisfies \begin{equation} \label{eq:crossings-vs-length} \sum_{\text{grid lines }\ell} |F \cap \ell| \leq 3 \text{OPT}. \end{equation} \end{lemma} \begin{proof} We build a solution $F$ to the rounded problem from $\text{OPT}$ by replacing each line segment $e$ of $\text{OPT}$ with a line segment $e'$ that connects the half-integer coordinates that are nearest $e$'s endpoints (breaking ties arbitrarily but consistently). The additional length needed for this transformation is at most twice (once for each endpoint of $e$) the distance from a point to the nearest half-integer coordinate: \[ \text{length}(e') \le \text{length}(e)+\sqrt 2 \] Since $\text{OPT}$ has at most $n$ leaves, $\text{OPT}$ has fewer than $n$ Steiner points and so has fewer than $4n$ edges. The additional length is therefore no greater than $4\sqrt 2 n$. Combining with Equation~\eqref{eq:OPT-lb}, this is at most ${1 \over 10}\epsilon \text{OPT}$. $F$ is composed of line segments whose endpoints are half-integer coordinates. Such a segment $S$ of length $s$ can cross at most $s$ horizontal grid lines and at most $s$ vertical grid lines. Therefore \[ \sum_{\text{grid lines }\ell} |S \cap \ell| \leq 2s \] and summing over all segments of $F$ gives \[ \sum_{\text{grid lines }\ell} |F \cap \ell| \leq 2\,\text{length}(F) \leq 2(1+{1 \over 10}\epsilon)\text{OPT} < 3\text{OPT} \] where the last inequality follows from $\epsilon < 1$. \end{proof} From here on, our goal is to find the solution that is guaranteed by Lemma~\ref{lem:sum-of-crossings}. We will not be able to find this solution optimally, but we will be able to find a solution within our error bound of $\epsilon\, \text{OPT}$. \subsection{Dissect}\label{subsection:step3} The recursive dissection starts with an $L \times L$ box that encloses the terminals, where $L$ is at least twice as large as needed. This allows some choice in where to center the enclosing box. We make this choice randomly. This random choice is used in bounding the incurred cost, in expectation, of structural assumptions (Section~\ref{sec:cell-props}) that help to reduce the size of the dynamic programming table. Formally, let $L$ be the smallest power of $2$ greater than $2 \cdot \diam(Q)$. In combination with Equation~\eqref{eq:diam-ub}, we get the following upper bound on $L$: \begin{equation} L \leq { 160 \sqrt 2 \over \epsilon} n^3 \label{eq:L} \end{equation} The $x$-coordinate (and likewise the $y$-coordinate) of the lower left corner of the enclosing box is chosen uniformly at random from the $L/2$ integer coordinates that still result in an enclosing box. We will refer to this as the {\em random shift}. As described in Section~\ref{subsection:dissection}, we perform a recursive dissection of this enclosing box. This can be done in $O(n \log n)$ time~\cite{BET93}. By our choice of $L$ and the random shift, this dissection only uses the grid lines. Since the recursive dissection stops with unit dissection squares, the quad-tree has depth $\log L$. Consider a vertical grid line $\ell$.
Since there are $L/2$ values of the horizontal shift, and $2^{i-1}$ of these values will result in $\ell$ being a depth-$i$ dissection line, we get \begin{equation}\label{eq:prob} \text{Prob}[\text{depth}(\ell)=i] = 2^i/L \end{equation} \subsection{Designate portals}\label{subsection:step4} We designate a subset of the points on each dissection line as {\em portals}. We will restrict our search to feasible solutions that cross dissection lines at portals only. We use the portal constant $A$, where \begin{equation} \label{eq:inter-portal-distance} A \text{ is the smallest power of two greater than }30\epsilon^{-1}\log L. \end{equation} Formally, for each vertical (resp.\ horizontal) dissection line $\ell$, we designate as portals of $\ell$ the points on $\ell$ with $y$-coordinates (resp.\ $x$-coordinates) that are integral multiples of \[\frac{L}{A 2^{\text{depth}(\ell)}}.\] There are no portals on the sides of the root dissection square, the bounding box. Since a square at depth $i$ has sidelength $L/2^i$ and is bounded by 4 dissection lines of depth at most $i$, we get: \begin{lemma} \label{lem:n-square-portals} A dissection square has at most $4A$ portals on its boundary. \end{lemma} Consider perpendicular dissection lines $\ell$ and $\ell'$. A portal $p$ of $\ell$ may happen to be a point of $\ell'$ (namely, the intersection point), but $p$ may not be a portal of $\ell'$; that is, it may not be one of the points of $\ell'$ that were designated according to the above definition. The following lemma will be useful in Subsection~\ref{subsection:establishing-the-portal-property} for technical reasons. \begin{lemma} \label{lem:corner-portals} For every dissection square $R$, the corners of $R$ are portals (except for the points that are corners of the bounding box). \end{lemma} \begin{proof} Consider a square $R$ at depth $i$. Consider the two dissection lines $\ell$ and $\ell'$ that divide $R$ into four squares. The depth of these lines is $i+1$. These lines restricted to $R$, namely $\ell_R = \ell \cap R$ and $\ell_R' = \ell' \cap R$, have length $L/2^i$, a power of 2. Portals are designated at integral multiples of $L/(2^{i+1} A)$, also a power of 2 and a $1/(2A)$ fraction of the length of $\ell_R$ and $\ell_R'$. It follows that the endpoints and intersection point of $\ell_R$ and $\ell_R'$ are portals of these lines. \end{proof} \subsection{Solve via dynamic programming}\label{subsection:step5}\label{section:dynamicprogram} In order to overcome the computational difficulty associated with maintaining feasibility (as illustrated in Figure~\ref{fig:prop5}), we divide each dissection square $R$ into a regular $B \times B$ grid of {\em cells}; $B$, which will be defined later, is $O(1/\epsilon)$ and is a power of 2. Each {\em cell} of the grid is either coincident with a dissection square or is smaller than the leaf dissection squares. Consider parent and child dissection squares $R_P$ and $R_C$; a cell $C$ of $R_P$ encloses four cells of $R_C$. The dynamic programming table for a dissection square $R$ will be indexed by two subpartitions (partitions of a subset) of the portals and cells of $R$; one subpartition will encode the connectivity achieved by a solution within $R$ and the other will encode the connectivity required by the solution outside $R$ in order to achieve feasibility. The details are given in the next section.
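As a small illustration of the designation rule (toy parameters; recall that the actual $A$ is the smallest power of two exceeding $30\epsilon^{-1}\log L$), the sketch below lists the portal coordinates along a dissection line.

\begin{verbatim}
def portal_coords(L, A, depth):
    """Portals on a depth-`depth` line sit at integral multiples of
    L / (A * 2**depth); L and A are powers of two, so (assuming
    A * 2**depth <= L) the spacing is an exact integer."""
    spacing = L // (A * 2 ** depth)
    return list(range(0, L + 1, spacing))

print(portal_coords(L=16, A=4, depth=0))  # [0, 4, 8, 12, 16]
print(portal_coords(L=16, A=4, depth=1))  # [0, 2, 4, 6, 8, 10, 12, 14, 16]
\end{verbatim}

Deeper lines receive more closely spaced portals; this spacing is what makes the corners of every dissection square land on portals of its dividing lines (Lemma~\ref{lem:corner-portals}).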
\section{The Dynamic Program} \label{sec:DP} \subsection{The dynamic programming algorithm} The dynamic program will only encode subsolutions that have low complexity and permit feasibility. We call such subsolutions {\em conforming}. We build a dynamic programming table for each dissection square. The table is indexed by {\em valid configurations} and each entry will be the best {\em compatible} conforming subsolution. \subsubsection*{Low complexity and feasible: conforming subsolutions} Let $R$ be a dissection square or a cell, and let $F$ be a finite set of line segments in $R$. We say that $F$ {\em conforms} to $R$ if it satisfies the following properties: \begin{itemize} \item ({\em boundary property}) $|F \cap \partial R| \le 4(D+1)$. \item ({\em portal property}) Every connected component of $F\cap \partial R$ contains a portal of $R$. \item ({\em cell property}) Each cell $C$ of $R$ intersects at most one connected component of $F$ that also intersects $\partial R$. \item ({\em terminal property}) If a terminal $t \in R$ is not connected to its mate by $F$ then it is connected to $\partial R$ by $F$. \end{itemize} The constant $D$ is defined in Equation~(\ref{eq:rho}) and is $O(1/\epsilon)$. Note that the first three properties are those that bound the complexity of the allowed solutions and the last guarantees feasibility. We say that a solution $F$ recursively conforms to $R$ if it conforms to all descendant dissection squares of $R$ (including $R$). We say that a solution $F$ is conforming if it recursively conforms to the root dissection square with every terminal connected to its mate. It is a trivial corollary of the last property that a conforming solution is a feasible solution to the Steiner forest problem. We will restate and prove the following in Section~\ref{sec:struct}; the remainder of this section will give a dynamic program that finds a conforming solution. \begin{theorem}[Structure Theorem]\label{thm:structure} There is a conforming solution that has, in expectation over the random shift of the bounding box, length at most $(1+\frac \epsilon 4)\text{OPT}$. \end{theorem} \subsubsection*{Indices of the dynamic programming table: valid configurations} The dynamic programming table $\text{DP}_R$ for a dissection square $R$ will be indexed by subpartitions of the portals and cells of $R$ that we call {\em configurations}. A {\em configuration of $R$} is a pair $(\pi^{\text{in}}, \pi^{\text{out}})$ with the following properties: $\pi^{\text{in}}$ is a subpartition of the cells and portals of $R$ such that each part contains at least one portal and at least one cell; $\pi^{\text{out}}$ is a coarsening of $\pi^{\text{in}}$. See Figure~\ref{fig:config}. $\pi^{\text{in}}$ will characterize the behaviour of the solution inside $R$ while $\pi^{\text{out}}$ will encode what connections remain to be made outside $R$ in order to make the solution feasible. For a terminal $t\in R$, we use $C_R[t]$ to denote the cell of $R$ that contains $t$. We say a configuration is {\em valid} if it has the following properties: \begin{itemize} \item ({\em compact}) $\pi^{\text{in}}$ has at most $4(D+1)$ parts and contains at most $4(D+1)$ portals. \item ({\em connecting}) For every terminal $t$ in $R$ whose mate is not in $R$, $C_R[t]$ is in a part of $\pi^{\text{in}}$. For every pair of mated terminals $t,t'$ in $R$, either $C_R[t]$ and $C_R[t']$ are in the same part of $\pi^{\text{out}}$ or neither $C_R[t]$ nor $C_R[t']$ is in $\pi^{\text{in}}$.
\end{itemize} The connecting property will allow us to encode and guarantee feasible solutions. Since a dissection square has at most $4A$ portals (Lemma~\ref{lem:n-square-portals}) and $B^2$ cells, the compactness property bounds the number of configurations: \begin{lemma}\label{lem:n-configs} There are at most $(4A+B^2)^{O(D)}$, that is, $(\epsilon^{-2}\log n)^{O(1/\epsilon)}$, compact configurations of a dissection square. \end{lemma} We will use the following notation to work with configurations: For a subpartition $\pi$ of $S$ and an element $x\in S$, we use $\pi[x]$ to denote the part of $\pi$ containing $x$ if there is one, and $\emptyset$ otherwise. For two subpartitions $\pi$ and $\pi'$ of a set $S$, we use $\pi \vee \pi'$ to denote the finest possible coarsening of the union of $\pi$ and $\pi'$. If we eliminate the elements that are in $\pi'$ but not in $\pi$, then $\pi \vee \pi'$ is a coarsening of $\pi$, and vice versa. \begin{figure}[h] \centering \resizebox{6cm}{!}{\input configuration.pdf_t} \caption{A dissection square and cells (grid), terminal pairs (triangles and pentagons) and unmated terminal (square), and subsolution (dark lines). The grey components give the parts of $\pi^{\text{in}}$ with portals (half-disks). To be a valid configuration, the two parts containing the pentagon terminals must be in the same part of $\pi^{\text{out}}$. The subsolution conforms to $R$ and is compatible with $(\pi^{\text{in}},\pi^{\text{out}})$. } \label{fig:config} \end{figure} \subsubsection*{Entries of the dynamic programming table: compatible subsolutions} The entries of the dynamic programming table for dissection square $R$ are (the lengths of) best compatible subsolutions, defined as follows. A subsolution $F$ and configuration $(\pi^{\text{in}}, \pi^{\text{out}})$ of $R$ are {\em compatible} if and only if $\pi^{\text{in}}$ has one part for every connected component of $F$ that intersects $\partial R$ and that part consists of the cells and portals of $R$ intersected by that connected component (Figure~\ref{fig:config}). Note that as a result, some valid configurations will not have a compatible subsolution: if a part of $\pi^{\text{in}}$ contains disconnected cells with terminals inside, then no set of line segments can connect these terminals and be contained by the cells of that part. The entries corresponding to such configurations will indicate this with $\infty$. \begin{observation}\label{obs:conform} If $F$ conforms to $R$ and is compatible with $(\pi^{\text{in}}, \pi^{\text{out}})$, then $(\pi^{\text{in}}, \pi^{\text{out}})$ is a valid configuration. \end{observation} As is customary, our dynamic program finds the {\em value} of the solution; it is straightforward to augment the program so that the solution itself can be obtained. Our procedure for filling the dynamic programming tables, {\sc populate}, will satisfy the following theorem: \begin{theorem} \label{thm:correct} {\sc populate}$(R)$ returns a table $\text{DP}_R$ such that, for each valid configuration $(\pi^{\text{in}},\pi^{\text{out}})$ of $R$, $\text{DP}_R[\pi^{\text{in}}, \pi^{\text{out}}]$ is the minimum length of a subsolution that recursively conforms to $R$ and that is compatible with $(\pi^{\text{in}}, \pi^{\text{out}})$. \end{theorem} We prove this theorem in Section~\ref{sec:dp-correct}. \subsubsection*{Consistent configurations} A key step of the dynamic program is to correctly match up the subsolutions of the child dissection squares $R_1, \ldots, R_4$ of $R_0$.
Consider valid configurations $(\pi^{\text{in}}_i, \pi^{\text{out}}_i)$ for $i=0,\ldots,4$ and let $\pi^\vee_0 = \bigvee_{i=1}^4 \pi^{\text{in}}_i$. We say that the configurations $(\pi^{\text{in}}_i, \pi^{\text{out}}_i)$ for $i=0,\ldots,4$ are {\em consistent} if they satisfy the following connectivity requirements: \begin{enumerate} \item ({\em internal}) $\pi^{\text{in}}_0$ is given by $\pi^\vee_0$ with portals of $R_i$ that are not portals of $R_0$ removed, parts that do not contain portals of $R_0$ removed, and each cell of $R_i$ replaced by the corresponding (parent) cell of $R_0$. (If non-disjoint parts result from replacing cells by their parents, then the result is not a partition and cannot be $\pi^{\text{in}}_0$.) \item ({\em external}) For two elements (cells and/or portals) $x,x'$ of $R_i$, $\pi^{\text{out}}_i[x] = \pi^{\text{out}}_i[x']$ if and only if $\pi^\vee_0[x] = \pi^\vee_0[x']$ or there are portals $p,p'$ such that $\pi^{\text{out}}_0[p] = \pi^{\text{out}}_0[p']$, $\pi^\vee_0[x] = \pi^\vee_0[p]$, and $\pi^\vee_0[x'] = \pi^\vee_0[p']$. \item ({\em terminal}) For mated terminals $t\in R_i$ and $t'\in R_j$ with $1 \le i < j \le 4$, either $\pi^\vee_0[C_i[t]] = \pi^\vee_0[C_j[t']]$ or $\pi^{\text{out}}_0[C_0[t]] = \pi^{\text{out}}_0[C_0[t']]$. \end{enumerate} \subsubsection*{Dynamic programming procedure} We now give the procedure {\sc populate} that fills the dynamic programming tables. The top dissection square $R$ has a single entry, the entry corresponding to the configuration $(\emptyset, \emptyset)$. The desired solution is therefore given by $\text{DP}_R[\emptyset, \emptyset]$ after filling the table $\text{DP}_R$ with {\sc populate}$(R)$. The corresponding solution is conforming. The following procedure is used to populate the entries of $\text{DP}_{R_0}$. The procedure is well defined when the tables are filled for dissection squares in bottom-up order. \begin{tabbing} {\sc populate}$(R_0)$\\ \qquad\=\qquad\=\qquad\=\qquad\=\qquad\=\qquad\=\hspace{5cm}\=\\ \> If $R_0$ contains at most one terminal, then \>\>\>\>\>\> {\em \% $R_0$ is a leaf dissection square}\\ \>\> For every valid configuration $(\pi^{\text{in}}, \pi^{\text{out}})$ of $R_0$,\\ \>\>\> $\text{DP}_{R_0}[\pi^{\text{in}},\pi^{\text{out}}] := 0$ \\ \>\>\> For every part $P$ of $\pi^{\text{in}}$,\\ \>\>\>\> if the cells of $P$ are connected and contain the portals (and terminal) of $P$, \\ \>\>\>\>\> $F_P := $ \=minimum-length set of line segments in the cells of $P$ that\\ \>\>\>\>\>\> connects the portals in $P$ (and terminal, if in $P$), \\ \>\>\>\> \>$\text{DP}_{R_0}[\pi^{\text{in}},\pi^{\text{out}}] := \text{DP}_{R_0}[\pi^{\text{in}},\pi^{\text{out}}] + \text{length}(F_P)$;\\ \>\>\>\> otherwise, $\text{DP}_{R_0}[\pi^{\text{in}},\pi^{\text{out}}] := \infty$. \>\>\> {\em \% no subsolution conforms to $\pi^{\text{in}},\pi^{\text{out}}$}\\ \\ \> Otherwise, \>\>\>\>\>\> {\em \% $R_0$ is a non-leaf dissection square}\\ \>\>let $R_1, R_2, R_3, R_4$ denote the children of $R_0$.\\ \>\>For every valid configuration $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$ of $R_0$, initialize $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]:= \infty$.
\\ \>\>For every quintuple of indices $\left\{(\pi^{\text{in}}_i,\pi^{\text{out}}_i) \right\}_{i=0}^4$ to $\{\text{DP}_{R_i} \}_{i=0}^4$, \\ \>\>\> if $\left\{(\pi^{\text{in}}_i,\pi^{\text{out}}_i) \right\}_{i=0}^4$ are consistent,\\ \>\>\>\> $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0] :=\min \left\{ \text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0] , \sum_{i=1}^4 \text{DP}_{R_i}[\pi^{\text{in}}_i,\pi^{\text{out}}_i]\right\}$.\\ \end{tabbing} \subsection{Running time}\label{sec:run-time} Since each part of $\pi^{\text{in}}$ contains $O(D)$ portals (since $\pi^{\text{in}}$ is compact), $F_P$ is a Steiner tree of $O(D)$ terminals (portals and possibly one terminal) among the cells of $\pi^{\text{in}}$. To avoid the cells that are not in $\pi^{\text{in}}$, we will require at most $O(B^2)$ Steiner points. $F_P$ can be computed by enumeration in time depending only on $B$ and $D$ (which are $O(1/\epsilon)$). Since the number of compact configurations is polylogarithmic and since there are $O(n\log n)$ dissection squares, the running time of the dynamic program is $O(n \log^\xi n)$, where $\xi$ is a constant depending on $\epsilon$. \subsection{Correctness (proof of Theorem~\ref{thm:correct})} \label{sec:dp-correct} We prove Theorem~\ref{thm:correct}, giving the correctness of our dynamic program, by bottom-up induction. In the following, we use the notation, definitions and conditions of {\sc populate}. The base cases of the induction correspond to dissection squares that contain at most one terminal. If any part $P$ of $\pi^{\text{in}}$ contains cells or portals that are disconnected, then there is no subsolution that is compatible with $\pi^{\text{in}}$, and $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0] = \infty$ represents this. Otherwise the subsolution $F_0$ that is given by the union of $\{F_P\ : \ \text{part $P$ of $\pi^{\text{in}}$}\}$ is compatible with $\pi^{\text{in}}$ by construction. Further, $F_0$ satisfies the terminal property of conformance with $R_0$ by construction, and the remaining properties since it is compatible with a valid configuration. When $R_0$ contains more than one terminal, for a valid configuration $(\pi^{\text{in}}_0, \pi^{\text{out}}_0)$ of $R_0$, we must prove: \begin{description} \item[Soundness] If $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]$ is finite then there is a subsolution $F_0$ that recursively conforms to $R_0$, is compatible with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$ and whose length is $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]$. \item[Completeness] Any minimal subsolution $F_0$ that recursively conforms to $R_0$ and is compatible with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$ has length at least $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]$. \end{description} The proof of Theorem~\ref{thm:correct} follows directly from these. We will use the following lemma: \begin{lemma} \label{lem:compatible} Let $\{(\pi^{\text{in}}_i,\pi^{\text{out}}_i)\}_{i=0}^4$ be consistent configurations for dissection square $R_0$ and child dissection squares $R_1,\ldots, R_4$. For $i = 1, \ldots, 4$, let $F_1,\ldots,F_4$ be subsolutions that recursively conform to $R_i$ and are compatible with $(\pi^{\text{in}}_i,\pi^{\text{out}}_i)$. Then $\cup_{i=1}^4 F_i$ recursively conforms to $R_0$ and is compatible with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$.
\end{lemma} \begin{proof} Recall that $F_0$ is compatible with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$ if $\pi^{\text{in}}_0$ has one part for every connected component of $F_0$ that intersects $\partial R_0$ and that part consists of the cells and portals intersected by that component. Consider a component $K$ of $F_0$ that intersects $\partial R_0$. There must be a child dissection square $R_i$ with a part of $\pi^{\text{in}}_i$ that consists of the cells and portals intersected by $K \cap R_i$. Consider all such parts $P_j$, $j = 1, \ldots$. (Note that there may be more than one such part from a given child dissection square.) These parts belong to a part $P$ of $\pi^\vee_0$. We argue that no other child configuration parts make up $P$. For a contradiction, suppose another part $P'$ is in the makeup of $P$. Since $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$ is consistent with the child configurations, $P'$ cannot share a cell with any of $P_j$, $j = 1, \ldots$, for otherwise $P$ would not survive the pruning given by the internal connectivity requirement of consistency. Therefore, $P'$ must share a portal with some $P_j$; the corresponding components $K'$ and $K_j$ would therefore also share this portal, implying that $K \cup K'$ is connected, a contradiction. Again, by the internal connectivity requirement of consistency, $P$ is obtained from $P_j$, $j = 1, \ldots$ by: \begin{itemize} \item Removing the portals that are not in $R_0$. The remaining portals are on $\partial R_0$, and $K$ connects them since $K_j$, $j = 1, \ldots$ connect them by the inductive hypothesis. \item Replacing each cell $C$ of $P_j$ by its parent cell, which entirely contains $C \cap K$. \end{itemize} Finally, $P$ is not removed altogether since $K$ intersects $\partial R_0$ and this intersection must contain a portal of $R_0$. Therefore, there is a part of $\pi^{\text{in}}_0$ obtained from $P$ that contains all the cells and portals intersected by $K$. \end{proof} \subsubsection*{Proof of soundness} If $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]$ is finite, then there must be entries $\text{DP}_{R_i}[\pi^{\text{in}}_i,\pi^{\text{out}}_i]$ that are finite for $i = 1, \ldots, 4$ and such that $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0] = \sum_{i=1}^4 \text{DP}_{R_i}[\pi^{\text{in}}_i,\pi^{\text{out}}_i]$. Then, by the inductive hypothesis, for $i = 1, \ldots, 4$, there is a subsolution $F_i$ that recursively conforms to $R_i$, has length $\text{DP}_{R_i}[\pi^{\text{in}}_i,\pi^{\text{out}}_i]$, and is compatible with $(\pi^{\text{in}}_i,\pi^{\text{out}}_i)$. We simply define $F_0 = \bigcup_{i=1}^4 F_i$; by definition, $F_0$ has the desired length. By Lemma~\ref{lem:compatible}, $F_0$ is compatible with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$. We show that $F_0$ conforms to $R_0$ by verifying the four properties of conformance. \paragraph{$\mathbf{F_0}$ satisfies the portal property} Let $K$ be a component of $F_0 \cap \partial R_0$. For some child $R_i$, the intersection of $K$ with $\partial R_i \cap \partial R_0$ is nonempty. Since $F_i$ satisfies the portal property, $K \cap \partial R_i \cap \partial R_0$ must also contain a portal; that portal is also a portal of $R_0$. \paragraph{$\mathbf{F_0}$ satisfies the cell property} Let $C$ be a cell of $R_0$ that is enclosed by child dissection square $R_i$. Suppose for a contradiction that two connected components $K_1$ and $K_2$ intersect both $C$ and $\partial R_0$.
Then $K_1 \cap R_i$ and $K_2 \cap R_i$ must be connected components of $F_i$ that intersect cells $C_1$ and $C_2$, respectively, and $\partial R_i$, where $C_1$ and $C_2$ are child dissection squares of $C$. Since $F_i$ satisfies the cell property w.r.t.\ $R_i$, $C_1 \neq C_2$ and these cells belong to parts $P_1 \ne P_2$ of $\pi^{\text{in}}_i$. By the internal connectivity requirement of consistency, these cells would both get replaced by $C$, implying that $\pi^{\text{in}}_0$ has two parts containing the same cell, a contradiction. \paragraph{$\mathbf{F_0}$ satisfies the terminal property} Consider a terminal $t$ in $R_i$ and $R_0$ such that $C_{R_i}[t]$ is in a part $P$ of $\pi^{\text{in}}_i$ (for otherwise, the terminal property follows from the inductive hypothesis). If $t$'s mate is not in $R_0$, then, by the connecting property of valid configurations, $C_{R_0}[t]$ is in a part of $\pi^{\text{in}}_0$ and the terminal property follows from compatibility. So suppose $t$'s mate, $t'$, is in $R_0$ (and child $R_j$). Since the configurations are valid, $C_{R_j}[t']$ is in a part $P'$ of $\pi^{\text{in}}_j$. If $\pi^{\text{out}}_0[C_{R_0}[t]] = \pi^{\text{out}}_0[C_{R_0}[t']]$, the terminal property follows from compatibility. If not, then by the terminal connecting property of configuration consistency, $\pi^\vee_0[C_{R_i}[t]] = \pi^\vee_0[C_{R_j}[t']]$. Since parts of child configurations cannot share cells, there must be a series of parts $P_1, \ldots, P_k$ where $P_1$ contains $C_{R_i}[t]$, $P_k$ contains $C_{R_j}[t']$ and parts $P_\ell$ and $P_{\ell+1}$ contain a common portal $p_\ell$ for $\ell = 1, \ldots, k-1$. Since $F_1, \ldots, F_4$ are compatible with $\pi^{\text{in}}_1, \ldots, \pi^{\text{in}}_4$, respectively, by the inductive hypothesis, there is a component $K_\ell$ in $\cup_{i=1}^4 F_i$ that connects $t$ and $p_1$ (for $\ell=1$), $p_{\ell-1}$ and $p_\ell$ (for $\ell= 2, \ldots, k-1$), and $p_{k-1}$ and $t'$ (for $\ell = k$). $\cup_{\ell = 1}^k K_\ell$ is connected, so $F_0$ has a component that connects $t$ and $t'$, giving the terminal property. \paragraph{$\mathbf F_0$ satisfies the boundary property} Since $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$ is a valid configuration, $\pi^{\text{in}}_0$ has at most $4(D+1)$ parts. By compatibility, $F_0$ has at most $4(D+1)$ components intersecting $\partial R_0$. This proves the boundary property of conformance. \subsubsection*{Proof of completeness} Let $\hat F_0$ be any minimal subsolution that recursively conforms to $R_0$ and is compatible with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$. We show that $\hat F_0$ has length at least $\text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]$, proving completeness. For $i = 1, \ldots, 4$, let $\hat F_i = \hat F_0 \cap R_i$. \newcommand{\hpiout}{{\hat \pi}^{\text{out}}} Since $\hat F_0$ recursively conforms to $R_0$, $\hat F_i$ recursively conforms to $R_i$. For $i = 1, \ldots, 4$, let $({\hat \pi}^{\text{in}}_i, \hpiout_i)$ be a configuration of $R_i$ that is compatible with $\hat F_i$. By Observation~\ref{obs:conform}, $({\hat \pi}^{\text{in}}_i, \hpiout_i)$ is a valid configuration. By the inductive hypothesis, $\text{length}(\hat F_i) \geq \text{DP}_{R_i}[{\hat \pi}^{\text{in}}_i, \hpiout_i]$. It follows that $\text{length}(\hat F_0) \geq \sum_{i=1}^4 \text{DP}_{R_i}[{\hat \pi}^{\text{in}}_i, \hpiout_i]$.
If the child configurations $\{({\hat \pi}^{\text{in}}_i, \hpiout_i)\}_{i=1}^4$ are consistent with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$, then $\sum_{i=1}^4 \text{DP}_{R_i}[{\hat \pi}^{\text{in}}_i, \hpiout_i]$ is an argument to the minimization in {\sc populate} and therefore $\text{length}(\hat F_0) \geq \text{DP}_{R_0}[\pi^{\text{in}}_0,\pi^{\text{out}}_0]$. It is therefore sufficient to show that the child configurations $\{({\hat \pi}^{\text{in}}_i, \hpiout_i)\}_{i=1}^4$ are consistent with $(\pi^{\text{in}}_0,\pi^{\text{out}}_0)$. This follows from Lemma~\ref{lem:compatible}: $\hat F_0$ is compatible with the configuration $({\hat \pi}^{\text{in}}_0,\hpiout_0)$ that is obtained from $\{({\hat \pi}^{\text{in}}_i, \hpiout_i)\}_{i=1}^4$ according to the connectivity requirements of consistency. \bigskip \noindent This completes the proof of Theorem~\ref{thm:correct}. \section{Proof of the Structure Theorem (Theorem~\ref{thm:structure})}\label{sec:struct} In this section we give a proof of the Structure Theorem (Theorem~\ref{thm:structure}). We restate and reword the theorem here for convenience. It is easy to see that the statement here is equivalent to the statement given in Section~\ref{sec:DP}; only the terminal property of conformance is missing, but that is encoded by feasibility. \begin{reptheorem}{thm:structure}[Structure Theorem] \nonumber There is a feasible solution $F$ to the rounded Steiner forest problem having, in expectation over the random shift of the bounding box, length at most $\frac{2}{5} \epsilon \text{OPT}$ more than $\text{OPT}$ such that each dissection square $R$ satisfies the following three properties: \vspace{2mm} \begin{minipage}[r]{0.95\linewidth} \begin{description} \item[Boundary Property] For each side $S$, $F \cap S$ has at most $D$ non-corner components, where \begin{equation} D=60\epsilon^{-1}\label{eq:rho} \end{equation} \item[Portal Property] Each component of $F \cap \partial R$ contains a portal. \item[Cell Property] For each cell $C$ of $R$, $F$ has at most one component that intersects both $\partial C$ and $\partial R$. \end{description} \end{minipage} \end{reptheorem} First, in a way similar to Arora, we establish the existence of a nearly-optimal solution that crosses the boundary of each dissection square a small number of times ({\em Boundary Property}) and does so at portals ({\em Portal Property}). To that end, starting with the solution $F_0$ as guaranteed by Lemma~\ref{lem:sum-of-crossings}, we augment $F_0$ to create a solution $F_1$ that satisfies the Boundary Property, then augment $F_1$ to a solution $F_2$ that also satisfies the Portal Property. The {\em Cell Property} is then achieved by carefully adding to $F_2$ boundaries of cells that violate the Cell Property. By Lemma~\ref{lem:sum-of-crossings}, $F_0$ is longer than $\text{OPT}$ by at most $\frac{\epsilon}{10}\text{OPT}$. We show that we incur an additional $\frac{\epsilon}{10}\text{OPT}$ in length in satisfying each of these three properties, for a total increase in length of $\frac{4}{10} \epsilon \text{OPT}$, giving the theorem. \subsection{The Boundary Property} We establish the Boundary Property constructively by starting with $F_1=F_0$ and adding closures of the intersection of $F_1$ with the sides of dissection squares. For a subset $X$ of a line, let $\text{closure}(X)$ denote the minimum connected subset of the line that spans $X$.
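For intuition, if the components of $X$ are represented by the intervals they span, $\text{closure}(X)$ is simply the segment between the extreme coordinates. The following minimal Python sketch (the representation and names are ours, not part of the algorithm) illustrates the operation used by {\sc SatisfyBoundary} below:
\begin{verbatim}
# A subset of a horizontal line is represented as a list of components,
# each component being an interval (start, end) with start <= end.
def closure(components):
    """Minimum connected subset of the line spanning all components."""
    assert components, "closure is only applied to a nonempty subset"
    return (min(s for s, _ in components), max(e for _, e in components))

# Three non-corner components are replaced by one spanning segment:
print(closure([(0.1, 0.2), (0.4, 0.45), (0.8, 0.9)]))  # (0.1, 0.9)
\end{verbatim}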
For a side $S$ of a dissection square $R$, a connected component of a subset of $S$ is a {\em non-corner component} if it does not include a corner of $R$. The construction is a simple greedy bottom-up procedure: \begin{tabbing} {\sc SatisfyBoundary}:\\ \quad \= For each $j$ decreasing from $\log L$ to 0,\\ \> \quad \= For each dissection line $\ell$ such that $\text{depth}(\ell)\leq j$,\\ \>\> \quad \= for each $j$-square with a side $S \subseteq \ell$, \\ \> \>\> \quad \= if $|\{ \text{non-corner components of }F_1 \cap S\}| > D$, \\ \> \> \>\> \quad add $\text{closure}( \text{non-corner components of }F_1 \cap S)$ to $F_1$. \end{tabbing} \subsubsection*{{\sc SatisfyBoundary} establishes the Boundary Property} Consider a dissection square $R$, a side $S$ of $R$, and the dissection line $\ell$ containing $S$. The iteration involving $\ell$ and $j=\text{depth}(\ell )$ ensures that, at the end of that iteration, there are at most $D$ components of $F_1\cap S$ not including the endpoints of $S$, which are corners of $R$. We need to show that later iterations do not change this property. Consider an iteration corresponding to $j'\leq j$, a line $\ell'$ with $j'\geq \text{depth}(\ell')$, and a side $S'\subseteq \ell'$ of a $j'$-square $R'$. By the nesting property and since $S'$ cannot be enclosed by $S$, $S\cap \ell'$ is either empty, a corner of $R$ or equal to $S$. In the first case, $S \cap F_1$ is not affected by adding a segment of $S'$. In the second case, no new non-corner component of $F_1\cap S$ appears. In the third case, adding a segment of $S'$ can only merge components of $F_1 \cap S$, possibly reducing their number to one. See Figure~\ref{fig:sat-bdy}. \begin{figure}[ht] \centering \input boundary-prop.pdf_t \caption{The second (right) and third (left) cases, showing that when a segment (thick line) of a dissection square side ($R\cap \ell$) is added to $F$ (not shown), {\sc SatisfyBoundary} can only decrease the number of components along the side of another dissection square or add a corner component.} \label{fig:sat-bdy} \end{figure} \subsubsection*{The increase in length due to {\sc SatisfyBoundary} is small}\label{sec:leng-sat-bound} For iteration $j$ of the outer loop and iteration $\ell$ such that $j\geq \text{depth}(\ell)$ of the second loop, let random variable $C_{\ell,j}$ denote the number of executions of the last step:\\ \centerline{ add $\text{closure}( \text{non-corner components of }F_1 \cap S)$ to $F_1$}\\ Note that, conditioning on $\text{depth}(\ell)\leq j$, $C_{\ell,j}$ is independent of $\text{depth}(\ell)$ (however $C_{\ell,j}$ does depend on the random shift in the direction perpendicular to $\ell$). Initially the number of non-corner components of $F_1 \cap \ell$ is at most the number of components, $|F_0 \cap \ell|$. As argued above: for every $j\geq \text{depth}(\ell)$, every $j$-square either is disjoint from $\ell$ or has a side on $\ell$, so dealing with a line $\ell'$ parallel to $\ell$ does not increase the number of components on $\ell$; for every $j<\text{depth}(\ell)$, dealing with a line $\ell'$ perpendicular to $\ell$ can only introduce a corner component on $\ell$. So, the total number of non-corner components on $\ell$ never increases. Since it decreases by at least $D$ at each of the $C_{\ell,j}$ closure operations, we have $$ \sum_{j = \text{depth}(\ell)}^{\log L} C_{\ell, j} \leq |F_0 \cap \ell|/D .$$ Since $\text{length}(S) = L/2^j$, the total increase in length resulting from these executions is at most $C_{\ell,j} (L/2^j) $.
Therefore, the expected increase in length along $\ell$ is \begin{eqnarray*} E(\text{length} (F_1\cap\ell)-\text{length} (F_0\cap \ell)) & \leq & \sum_i\text{Prob}[\text{depth}(\ell)=i]\sum_{j \geq i} E[C_{\ell,j}| \text{depth}(\ell)=i] \frac{L}{2^j}\\ & = & \sum_i \frac{2^i}{L} \sum_{j \geq i} E[C_{\ell,j}| \text{depth}(\ell)\leq j] \frac{L}{2^j} \\ & = & \sum_j E[C_{\ell,j}| \text{depth}(\ell)\leq j]\frac{1}{2^j} \sum_{i\leq j} 2^i\\ & \leq & 2 E[\sum_{j\geq \text{depth} (\ell)} C_{\ell,j}| \text{depth}(\ell)]\\ & \leq & 2 |F_0\cap \ell|/D. \end{eqnarray*} Summing over all dissection lines $\ell$, and using the bounds on $\sum_\ell |F_0 \cap \ell|$ and $D$ as given by Equations~\eqref{eq:crossings-vs-length} and~\eqref{eq:rho}, respectively, we infer that the length of $F_1$ is at most $\frac{\epsilon}{10}\text{OPT}$ more than the length of $F_0$. \subsection{The Portal Property}\label{subsection:establishing-the-portal-property} We establish the Portal Property constructively by starting with $F_2=F_1$ and extending $F_2$ along the boundaries of dissection squares to the nearest portals. We say a component is {\em portal-free} if it does not contain a portal. The following construction establishes the Portal Property: \begin{tabbing} {\sc SatisfyPortal}: \\ \quad \= For each $j$ decreasing from $\log L$ to 0,\\ \> \quad \= For each dissection line $\ell$ such that $\text{depth}(\ell)= j$,\\ \> \> \quad \= for each portal-free component $K$ of $F_2 \cap \ell$,\\ \> \> \> \quad\= extend $K$ to the nearest non-corner portal on $\ell$. \end{tabbing} \subsubsection*{{\sc SatisfyPortal} preserves the Boundary Property} Focus on dissection line $\ell$. Before the iteration corresponding to $\ell$, possible extensions along lines $\ell'$ that are perpendicular to $\ell$ and of depth greater than or equal to $\text{depth} (\ell)$ do not extend to $\ell$, because $\ell'\cap\ell$ is a corner on $\ell'$. After the iteration corresponding to $\ell$, for each possible extension along lines $\ell'$ that are perpendicular to $\ell$ and of depth strictly less than $\text{depth} (\ell)$, $\ell'\cap \ell$ is a corner of any dissection square $R$ with a side along $\ell$ containing $\ell\cap\ell'$, so the Boundary Property for $\ell$ is not violated. \subsubsection*{The increase in length due to {\sc SatisfyPortal} is small} Consider a dissection line $\ell$. When dealing with line $\ell$, {\sc SatisfyPortal} only merges components and, in doing so, does not increase the number of components of $F_1 \cap \ell$. When dealing with a dissection line $\ell'$ perpendicular to $\ell$, {\sc SatisfyPortal} might add the component $\ell \cap \ell'$ to $F_1 \cap \ell$. However, similar to the argument used above, in that case $\ell'\cap \ell$ is a corner of any dissection square $R$ with a side along $\ell$ containing $\ell\cap\ell'$. Since, by Lemma~\ref{lem:corner-portals}, corners are portals, no extension is made for this component. Therefore, each component of $F_1 \cap \ell$ that does not already contain a portal is an extension of what was originally already a component of $F_0 \cap \ell$, and so at most $|F_0 \cap \ell|$ extensions are made along $\ell$. Each of these extensions adds a length of at most $L/(A 2^{\text{depth}(\ell)})$ (the inter-portal distance for line $\ell$). Therefore, the total length added along dissection line $\ell$ is bounded by $|F_0 \cap \ell|\, L/(A 2^{\text{depth}(\ell)} )$.
Since $\text{Prob}[\text{depth}(\ell)=i] = 2^i/L$, the expected increase in length due to dissection line $\ell$ is \[ \sum_{i = 1}^{\log L} \frac{2^i}{L} |F_0\cap \ell| \frac{L}{2^i A} = \frac{|F_0\cap \ell| \log L}{A}. \] Summing over all dissection lines and using Equations~\eqref{eq:crossings-vs-length} and~\eqref{eq:inter-portal-distance}, we infer that the length of $F_2$ is at most $\frac{\epsilon}{10}\text{OPT}$ more than the length of $F_1$. \subsection{The Cell Property} \label{sec:cell-props} We establish the Cell Property constructively by starting with $F_3 = F_2$ and adding to $F_3$ boundaries of cells that violate the Cell Property. Let $C$ be a cell of a dissection square $R$. We say $C$ is {\em happy} with respect to the solution $F_3$ if there is at most one connected component of $F_3$ that touches both the interior of $C$ and $\partial R$. We cheer up an unhappy cell $C$ by adding to $F_3$ a subset $A$ of $\partial C$, as illustrated in Figure~\ref{fig:simple}: \begin{equation} \label{eq:A} A(C,F_3) = \partial C \setminus \{\text{sides $S$ of }C\ : \ \text{depth}(S)<\text{depth}(C)\text{ and } S\cap F_3 = \emptyset\}. \end{equation} Recall that each cell $C$ of $R$ is either coincident with a dissection square that is a descendant of $R$ or is smaller than and enclosed by a leaf dissection square that is a descendant of $R$. Definitions for the depth of a cell and its sides are inherited from the definitions of dissection-square depths and dissection-line depths. \begin{figure}[ht] \centering \includegraphics[scale=2]{augmentation.pdf} \caption{The three cases (up to symmetry) of augmenting $C$. The dotted lines are $F_3$, $C$ is the smaller square and $C$'s parent is the larger square (to illustrate the relative depth of $C$'s sides). In cases (a) and (b), the augmentation $A$ is not all of $\partial C$, so it is open at the ends. In (a), $F_3$ intersects neither of the sides of $C$ that have depth less than that of $C$, so the augmentation $A$ consists only of the two sides having depth equal to that of $C$. In (b), one of the low-depth sides intersects $F_3$, so it belongs to $A$. In (c), both low-depth sides intersect $F_3$, so $A$ is all of $\partial C$. } \label{fig:simple} \end{figure} \noindent Happiness of all cells, and therefore the Cell Property, is established by the following procedure: \begin{tabbing} {\sc SatisfyCellAbstract}:\\ \quad \= While there is an unhappy cell $C$,\\ \> \quad \= add $A(C,F_3)$ to $F_3$. \end{tabbing} Let $\cal C$ be the set of cells that we augment in the above procedure. We claim that there is a function $h$ from the cells $\cal C$ to the components of $F_0$ (the {\em original} forest that we started with prior to the {\sc Satisfy} procedures) that is injective and such that, for a cell $C$ of dissection square $R$, $h(C)$ is a component of $F_0$ that intersects $\partial R$. To define $h$, consider the following abstract directed forest $H$ whose vertices correspond to connected components of $F_0$ and whose edges correspond to augmentations made by {\sc SatisfyCell} (defined formally as follows). An augmentation for cell $C$ is triggered by the existence of at least two connected components $T,T'$ of the current $F_3$ that both touch the interior of $C$ and the boundary of its associated dissection square $R$.
Since the {\sc Satisfy} procedures augment the solution, $T$ and $T'$ each contain at least one connected component of $F_0$, say $T_0$ and $T_0'$ respectively -- it is the vertices corresponding to $T_0$ and $T_0'$ that are adjacent in $H$; we will show shortly that there exist such components that intersect $\partial R$. Arbitrarily root each tree of $H$ and direct each of its edges away from the root. For the augmentation of cell $C$, we then define $h(C)$ as the component of $F_0$ that corresponds to the head of the edge of $H$ associated with the augmentation of $C$. Since each vertex of $H$ has indegree at most 1, $h$ is injective. We show, by way of contradiction, that there is a component of $F_0$ contained by $T$ that intersects $\partial R$. Consider all the components $\cal T$ of $F_0$ that are contained by $T$ and suppose none of these intersect $\partial R$. Let $\ell$ be a dissection line bounding $R$ that $T$ intersects. Since no component in $\cal T$ intersects $\partial R$, $T$ must have been created from $\cal T$ by augmentations (by way of {\sc SatisfyBoundary} and {\sc SatisfyCell}), one of which added a subset $X$ of dissection line $\ell'$ such that $X$ intersects $\ell$. Since no component in $\cal T$ intersects $X$ and neither {\sc SatisfyBoundary} nor {\sc SatisfyCell} augment to the corner of a dissection line, $\ell$ and $\ell'$ must be perpendicular. Further, $X$ is a subset of a side $S'$ of square $R'$ and does not contain a corner of $R'$. In summary, $R$ and $R'$ are dissection squares bounded by perpendicular dissection lines $\ell$ and $\ell'$ but for which $\ell \cap \ell'$ is not a corner of $R'$ or $R$, contradicting that dissection squares nest. We are now ready to give an implementation of {\sc SatisfyCellAbstract}: \begin{tabbing} {\sc SatisfyCell}:\\ \quad \= For each dissection line $\ell$,\\ \>\quad \= for $j$ decreasing from $\log L$ to $\text{depth} (\ell )$,\\ \>\>\quad \= for each $j$-square $R$ with side $S \subseteq \ell$,\\ \>\>\>\quad \= while there is an unhappy cell $C$ such that $h(C)$ intersects $\ell$, \\ \>\>\>\>\quad \= add $A(C,F_3)$ to $F_3$. \end{tabbing} Since $h(C)$ intersects some side of some dissection square, this procedure makes each of the cells happy. \subsubsection*{The increase in length due to {\sc SatisfyCell} is small} Let the random variable $C_{\ell,j}$ denote the number of augmentations corresponding to dissection line $\ell$ and index $j$. Thanks to the injective mapping $h$, we have: $$\sum_j C_{\ell,j}\leq |F_0\cap \ell|.$$ Since a cell has boundary length shorter than that of its $j$-square by a factor of $B$, the total increase in length corresponding to these iterations is at most $C_{\ell,j} \text{length}(j\text{-square})/B$. Summing over $j$, the total length added by {\sc SatisfyCell} corresponding to dissection line $\ell$ is at most \[ \sum_{j \geq \text{depth}(\ell)} C_{\ell,j} \frac{4L}{B 2^j}. \] Since the probability that grid line $\ell$ is a dissection line of depth $k$ is $2^k/L$, the expected increase in length added by {\sc SatisfyCell} corresponding to dissection line $\ell$ is at most \[ \sum_k \frac{2^k}{L} \sum_{j \geq k} E[C_{\ell,j}| \text{depth} (\ell)=k] \frac{4L}{B 2^j}. \] As in Section~\ref{sec:leng-sat-bound}, we observe that $C_{\ell,j}$ conditioned on $\text{depth} (\ell)\leq j$ is independent of $\text{depth} (\ell)$.
By the same swapping of sums as before, this is then bounded by $$(8/B)\, E\Big[\sum_{j\geq \text{depth} (\ell )} C_{\ell, j}\,\Big|\, \text{depth} (\ell)\Big]\leq \frac{8}{B}|F_0 \cap \ell|.$$ Summing over all dissection lines, our bound on the expected additional length becomes \[ \frac{8}{B}\sum_\ell |F_0 \cap \ell| \leq \frac{24}{B} (1+\epsilon) \text{OPT}. \] For $B = 240/\epsilon$, this is at most $\frac{\epsilon}{10} \text{OPT}$ by Equation~\eqref{eq:OPT-lb}. \subsubsection*{{\sc SatisfyCell} maintains the Boundary and Portal Properties} We show that {\sc SatisfyCell} maintains the Boundary and Portal Properties by showing that, for any forest $F$ satisfying the Boundary and Portal Properties, any single {\sc SatisfyCell} augmentation of $F$ preserves these properties. Let $C$ be an unhappy cell and let $R$ be a dissection square with respect to which $F$ satisfies the Boundary and Portal Properties. Let $A$ be the augmentation that is used to cheer up $C$. If $A \cap \partial R$ contains a corner of $R$, then the Boundary Property is satisfied because $A\cap \partial R$ would be a corner component, and the Portal Property is satisfied because the corners of dissection squares are portals. So, suppose that $A\cap \partial R$ is not empty but does not contain a corner of $R$. Refer to Figure~\ref{fig:RandC} for relative positions of $R$ and $C$. Then $\partial C \cap \partial R$ cannot include an entire side of $R$, so it must be that $\text{depth}(C) > \text{depth}(R)$. Further, since $A\cap \partial R$ does not include a corner of $R$, $A \cap \partial R$ must be a subset of a single dissection line, $\ell$. \begin{figure}[ht] \centering \input rel-pos-R-C.pdf_t \caption{Relative positions of $R$ and $C$.} \label{fig:RandC} \end{figure} If $A \cap \ell \cap F$ is not empty, then $F \cap \ell$ is not empty. Since $F$ satisfies the Portal Property, $F \cap \ell$ also includes a portal. Since the addition of $A$ can only act to merge components, $|\ell \cap \partial R \cap (F \cup A)| \le |\ell \cap \partial R \cap F|$, and so $F \cup A$ still satisfies the Boundary Property. If $A \cap \partial R \cap F$ is empty, then, by Equation~\eqref{eq:A}, $\text{depth}(\ell) \geq \text{depth}(C)$. But $\text{depth}(C) > \text{depth}(R)$, so $\text{depth}(\ell) > \text{depth}(R)$. This is impossible because $\ell$ is a line bounding $R$. This completes the proof of Theorem~\ref{thm:structure}. \subsection{Proof of Theorem~\ref{thm:main}} Recall Theorem~\ref{thm:main}, stating that there is a randomized $O(n \mathop{\mathrm{polylog}} n)$-time approximation scheme for the Steiner forest problem in the Euclidean plane. The proof of this theorem is a corollary of Theorems~\ref{thm:structure},~\ref{thm:correct},~\ref{thm:modify2} and Lemma~\ref{lemma:rounding}, as follows. Theorem~\ref{thm:correct} guarantees that we can compute, using dynamic programming, a solution that satisfies Theorem~\ref{thm:structure}. Section~\ref{sec:run-time} argues that this DP takes $O(n \mathop{\mathrm{polylog}} n)$ time. Lemma~\ref{lemma:rounding} and Theorem~\ref{thm:modify2} show that we can convert the near-optimal solutions guaranteed by Theorem~\ref{thm:structure} into near-optimal solutions for the original problem, thus giving Theorem~\ref{thm:main}. \section{Conclusion} We have given a randomized $O(n \mathop{poly}\log n)$-time approximation scheme for the Steiner forest problem in the Euclidean plane.
Prior to this result, polynomial-time approximation schemes (PTASes) had been given for subset-TSP~\cite{Klein06} and Steiner tree~\cite{BKK07,BKM09} in planar graphs, using ideas inspired by their geometric counterparts. Since the conference version of this paper appeared, a PTAS has been given for Steiner forest in planar graphs by Bateni et~al.~\cite{BHM10}. Like our result here, Bateni et~al.\ first partition the problem and then face the same issue of maintaining feasibility that we presented in Section~\ref{sec:overview}, except in graphs of bounded treewidth. They overcome this by giving a PTAS for Steiner forest in graphs of bounded treewidth; they also show that this problem is NP-complete, even in graphs of treewidth 3. Recently we have seen this technique generalized to prize-collecting versions of the problem for both Euclidean and planar~\cite{BCEHKM11} instances. \bibliographystyle{plain}
\section{Applications} \begin{figure}[h] \centering \includegraphics[width=3.3in]{extrapolation_comparison.pdf} \caption{Comparison results of portrait extrapolation. We compare our full model with Yu et al. \cite{yu2018generative} and the model G. The results show that our method can generate plausible lower body parts and faces, while other methods fail.} \label{fig:extrapolation_result} \end{figure} \subsection{Portrait extrapolation} As a general photography rule, it is not recommended to cut off people's feet or foreheads in the composition. Nevertheless, amateur photographers often make such mistakes. It is desirable to extend the portrait image to recover the missing feet or faces to create a better composition. Fortunately, this problem can be directly solved by our framework, by treating it as a completion problem where the unknown region is at the bottom or top part of the extrapolated image. We re-train the entire framework by fixing the cropped rectangle in the input image. For the downward extrapolation, a region of size $64 \times 256$ is cropped from the bottom, and for the upward extrapolation, a region of size $32 \times 256$ is cropped from the top. The reason for the different heights is that normally the missing head region is shorter than the missing legs/feet region. We abandon the local discriminator because the hole is too large for it to be effective. We also add dropout layers into the completion network for more flexible results. Our human parsing network can analyze the posture and predict reasonable structures for the legs and shoes, as well as the hair and face. Then the completion network can generate the extrapolated image that contains consistent body parts and background. The face region is further enhanced by the final face network. For comparison, we also train the model of Yu et al. \cite{yu2018generative} in the same manner. Note that we also remove the local discriminator in their framework. To demonstrate the importance of the structural information in the portrait extrapolation task, we train the model G without the parsing guidance as a comparison, as shown in Fig.~\ref{fig:extrapolation_result}. These results suggest that by generating high-fidelity human body completion results, our method opens new possibilities for portrait image enhancement and editing. \subsection{Occlusion removal} Occlusion removal is a natural application for the task of image completion. Fig.~\ref{fig:occlusion_removal} shows that our approach can recover the full human body when removing the unwanted objects. Because our framework has a human parsing stage to generate correct structural information for the hole region, we can handle large occlusions, as shown in the bottom example in Fig.~\ref{fig:occlusion_removal}. Note that we preserve the geometric features along the arm and clothes boundaries after completion. \begin{figure} \centering \includegraphics[width=2.6in]{occlusion_removal.pdf} \caption{Examples of occlusion removal by our approach.} \label{fig:occlusion_removal} \end{figure} \section{Approach} To realistically synthesize missing human body parts, one needs to estimate a plausible region-level body structure as well as coherent textures in these regions. Training a network for simultaneously predicting both the structural configuration and appearance features is extremely difficult. We instead propose a deep learning framework which employs a two-stage approach to solve this problem.
In stage-I, from the incomplete human body image we predict a complete parsing map through a human parsing network. In stage-II, we use a completion network to generate the inpainting result with the guidance of the parsing map, and a face refinement network to further improve the face region. \begin{figure} \includegraphics[width=3.5in]{human_parsing_network.pdf} \caption{Overview of our human parsing network at stage-I. The input image is fed into both the parsing subnet and the pose subnet. The two subnets then produce a human parsing map and a pose heatmap, respectively. Finally, the refinement network refines the parsing map with the help of the pose heatmap.} \label{fig:parsing} \end{figure} \subsection{Stage-I: Human parsing} Human parsing aims to segment a human body image into different parts with per-pixel labels. Compared with pose estimation, human parsing not only extracts the human body structure, but also estimates the image region boundary of each body part, which is beneficial for image completion at the next stage. On the other hand, as suggested in JPPNet \cite{liang2018look}, pose estimation can help increase the accuracy of human parsing. Thus, we jointly train our human parsing network for both human body parsing and pose estimation. We then use the generated pose heatmap to refine the human parsing result within the unknown region. The input of the human parsing network is an image with a pre-defined fill-in region. We denote the input image as $\mathbf{x}$ and the human parsing network as $\mathbf{P}$. Following JPPNet \cite{liang2018look}, $\mathbf{P}$ consists of two subnets, a parsing subnet and a pose subnet. The two subnets share the first four stages in ResNet-101 \cite{he2016deep}. In the parsing subnet, atrous spatial pyramid pooling (ASPP) \cite{chen2018deeplab} is applied to the fifth stage of ResNet-101 after the shared layers, to robustly segment different body parts. The parsing subnet produces the initial parsing result $\mathbf{p_0}$. In the pose subnet, several convolution layers are applied after the shared layers. The pose subnet produces the pose heatmap $\mathbf{h}$. Then a refinement subnet utilizes $\mathbf{h}$ to refine the initial parsing result $\mathbf{p_0}$ and produces the final result $\mathbf{p}$. We train the overall network end-to-end. Fig.~\ref{fig:parsing} shows the overview of our human parsing network. For efficiency, we make several modifications to the network architecture of JPPNet \cite{liang2018look} in $\mathbf{P}$. Specifically, we remove the pose refinement network in JPPNet and just keep the parsing refinement network. We thus do not apply iterative refinement and only refine the parsing map once. We regard $\mathbf{p}$ as our final result instead of averaging all results over different iterations. After applying these simplifications, we have found that we can generate parsing results faster, with better visual quality in the unknown region. The typical loss function of human parsing is the mean of softmax losses at each pixel, formulated as: \begin{equation} L = \frac{1}{W \times H} \sum_{i}^W \sum_{j}^H L_S(p_{ij}, \hat p_{ij}). \end{equation} $L_S$ denotes the softmax loss function. $\mathbf{\hat p}$ denotes the ground-truth labels of the human parsing output. $W$ and $H$ are the width and height of the parsing map. Importantly, in our completion task we only need to generate image content inside the unknown hole region.
Thus, the parsing accuracy inside the unknown region is much more important than that of the known region. Therefore, we propose a spatially weighted loss function that gives more weight to the pixels inside the hole. It is defined as: \begin{equation} L = \frac{1}{W \times H} \sum_{i}^W \sum_{j}^H (\alpha m_{ij} + 1)L_S(p_{ij}, \hat p_{ij}), \end{equation} where $\mathbf{m}$ is a binary mask indicating where to complete (1 for unknown pixels) and $\alpha$ is a weighting parameter. We apply the spatially weighted loss to both the parsing subnet and the refinement subnet. We set $\alpha = 9$ in our experiments by default. \begin{figure*} \centering \includegraphics[width=5.5in]{completion_networks.pdf} \caption{Overview of our image completion networks at stage-II. The completion network generates the result image from the input image, parsing map and mask. It is trained with the perceptual loss and two adversarial losses. The perceptual loss measures feature maps from the VGG-19 network, and the adversarial losses are backpropagated from the discriminators. The result image with human parsing is fed into the discriminators, to tell whether it is realistic and in agreement with the structure. The input of the global discriminator is the entire image with parsing, while the input of the local discriminator is the local area surrounding the hole.} \label{fig:completion} \end{figure*} \subsection{Stage-II: Image completion} In stage-II, we use a completion network to synthesize missing regions with structure guidance. The input of the completion network consists of the input image $\mathbf{x}$, the human parsing map $\mathbf{p}$ and the binary mask $\mathbf{m}$ marking the unknown region. We denote the completion network as $\mathbf{G}$; its fully convolutional architecture is shown in Fig.~\ref{fig:completion}. We bring in residual blocks \cite{he2016deep} to enhance the representation ability, and employ dilated convolutions \cite{yu2015multi} to enlarge the spatial support. Instead of using a loss based on per-pixel color distance, we use a perceptual loss measured by feature maps from a pre-trained VGG-19 network \cite{simonyan2014very}. Furthermore, to ensure that the network generates fine details that are semantically consistent with the known parts of the image, we feed the output of $\mathbf{G}$ to local and global discriminators to measure adversarial losses. The details of the architecture and our training method are described in the following sections. \subsubsection{Completion network architecture} The completion network begins with a stride-1 convolutional layer and then uses two stride-2 convolutional layers to downsample the resolution to $\frac{1}{4}$ of the input size. Four residual blocks follow to extract features of both the input image and the human parsing map. Each residual block contains two stride-1 convolutional layers with a residual connection. Residual connections \cite{he2016deep} have demonstrated an excellent ability to resolve the vanishing gradient problem, which often occurs in deep neural networks. In our experiments, we found that residual blocks can significantly improve the quality of the synthesized results. Similar to Iizuka et al. \cite{iizuka2017globally}, we use dilated convolutions \cite{yu2015multi} in the middle part of the network. Dilated convolution delivers a large field of view without increasing computational cost, which is necessary for global consistency of the completed image.
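To make the structure described so far concrete, the following is a minimal PyTorch-style sketch of the front end of the completion network (the stride-1 convolution, the two downsampling convolutions, the first four residual blocks and the dilated convolutions). The channel widths, the 7-channel input (RGB image, parsing map repeated to three channels and a one-channel mask) and all names are our own assumptions for illustration, not the exact configuration used in the paper.
\begin{verbatim}
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two stride-1 convolutions with a residual (skip) connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

def conv_bn_relu(cin, cout, k, stride=1, dilation=1):
    pad = dilation * (k - 1) // 2
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, stride=stride, padding=pad,
                  dilation=dilation),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

# Front end: 7x7 stride-1 conv, two stride-2 convs (down to 1/4
# resolution), four residual blocks, then dilated convolutions that
# enlarge the spatial support without further downsampling.
front_end = nn.Sequential(
    conv_bn_relu(7, 64, 7),
    conv_bn_relu(64, 128, 3, stride=2),
    conv_bn_relu(128, 256, 3, stride=2),
    *[ResidualBlock(256) for _ in range(4)],
    conv_bn_relu(256, 256, 3, dilation=2),
    conv_bn_relu(256, 256, 3, dilation=4),
)

x = torch.randn(1, 7, 256, 256)  # image + parsing map + mask, stacked
print(front_end(x).shape)        # torch.Size([1, 256, 64, 64])
\end{verbatim}
The decoder half of the network, described next, mirrors this front end to restore the input resolution.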
After that, 4 residual blocks, 2 stride-2 deconvolutional layers and a stride-1 convolutional layer are applied sequentially. Kernel sizes of the first and the last convolutional layers are 7. Kernel sizes of the other convolutional layers are all 3. A batch normalization layer and a ReLU activation are applied after each convolutional layer except the last one. We use a tanh layer at the end to normalize the result into [-1, 1]. The completion network $\mathbf{G}$ is fully convolutional and can be adapted to input images of arbitrary size. \subsubsection{Perceptual loss} Most previous completion methods \cite{pathak2016context,iizuka2017globally,yu2018generative} use the $L_1$ or $L_2$ per-pixel distance as the loss function to force the output image to be close to the input image. Recent work \cite{chen2017photographic} has shown that the perceptual loss is more effective than the $L_1$ or $L_2$ loss for synthesis tasks due to its more advanced representation. The perceptual loss was first proposed by Gatys et al. \cite{gatys2016image} for image style transfer. Between a source and a target image, it measures the distance of their feature maps extracted from a pre-trained perception network. We use VGG-19 \cite{simonyan2014very} as the perception network, which is pre-trained for ImageNet classification \cite{deng2009imagenet}. Denote the pre-trained perception network as $\Phi$. Layers in $\Phi$ contain hierarchical features extracted from the input image. Shallower layers usually represent low-level features like colors and edges, while deeper layers represent high-level semantic features and more global statistics. Therefore, a collection of layers from different levels contains rich information for image content synthesis. We define our perceptual loss function as: \begin{equation}L_p=\sum_{i=1}^n \left\|\Phi_i(\hat x)-\Phi_i(G(x,p,m))\right\|^2_2,\end{equation} where $n$ is the number of selected layers and $\Phi_i$ is the $i$-th selected layer. In our experiments, the selected layers are the \textit{relu1\_2}, \textit{relu2\_2}, \textit{relu3\_2}, \textit{relu4\_2} and \textit{relu5\_2} layers of VGG-19 \cite{simonyan2014very}. \subsubsection{Discriminators} Using only the perceptual loss in training often leads to obvious artifacts in output images. Inspired by previous image completion methods \cite{pathak2016context,iizuka2017globally,yu2018generative}, we add an adversarial loss into the completion network to prevent generating unrealistic results. The adversarial loss is based on Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}. GANs consist of two networks, a generator and a discriminator. The discriminator learns to distinguish real images from generated ones. The generator tries to produce realistic images to fool the discriminator. The two networks compete with each other, and after convergence the generator can produce realistic output. In our task, the generated result (i.e. the filled holes) needs to be not only realistic, but also coherent with the structure guidance. Therefore, we employ conditional GANs \cite{mirza2014conditional} in our completion network, where the discriminator $\mathbf{D}$ also learns to determine whether the generated content conforms to the condition, i.e. the human parsing map $\mathbf{p}$. Hence we define the adversarial loss as: \begin{equation}L_{adv}=\min \limits_G \max \limits_D \mathbb{E}[\log D(p, \hat x) + \log (1-D(p, G(x,p,m)))].\end{equation}
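For illustration, a minimal PyTorch-style sketch of this generator objective (the perceptual term over the selected VGG-19 layers plus an adversarial term) is given below. The torchvision layer indices assumed for the five relu layers, the use of a logit-based binary cross-entropy term and all names are our own assumptions rather than the exact implementation; the weighting follows the hyper-parameters reported below.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed indices of relu1_2, relu2_2, relu3_2, relu4_2 and relu5_2
# inside torchvision's vgg19().features.
VGG_LAYERS = {3, 8, 13, 22, 31}

vgg = models.vgg19(weights=None).features.eval()  # pretrained in practice
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(fake, real):
    """Sum of squared L2 feature distances over the selected layers."""
    loss, x, y = 0.0, fake, real
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in VGG_LAYERS:
            loss = loss + F.mse_loss(x, y, reduction="sum")
    return loss

def generator_loss(d_fake_logits, fake, real, lam_p=100.0, lam_adv=1.0):
    """Perceptual term plus the generator side of the adversarial term."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return lam_p * perceptual_loss(fake, real) + lam_adv * adv
\end{verbatim}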
Inspired by Iizuka et al. \cite{iizuka2017globally}, we use both global and local discriminators in our completion network. The input to the global discriminator is a concatenation of the input image and the parsing map. The input to the local discriminator is the image region and its corresponding parsing patch centered around the hole to be completed. At the training stage, the size of the image is scaled to $256\times256$ and the size of the patch is fixed to $128\times128$. As in Iizuka et al. \cite{iizuka2017globally}, we only use $\mathbf{G}$ to complete a single hole at training time. Note that $\mathbf{G}$ can fill multiple holes at once at testing time. As noted by Pinheiro et al. \cite{pinheiro2016learning}, signals with unequal numbers of channels may cause an imbalance in which one signal dominates the others. Therefore, we repeat the one-channel parsing map three times to match the RGB channels. We adopt the network architecture proposed by DCGAN \cite{radford2015unsupervised} for the two discriminators. All convolutional layers in the discriminators have kernels of size 4 and stride 2, except the last stride-1 convolution. The global discriminator has 7 convolutional layers and the local discriminator has 6. They both end with a Sigmoid layer to produce a true or false label. We define the overall loss function for the completion network as \begin{equation}L_c=\lambda_p L_p + \lambda_g L_{adv-g} + \lambda_l L_{adv-l}.\end{equation} $L_{adv-g}$ and $L_{adv-l}$ represent the adversarial losses for the global and the local discriminators, respectively. $\lambda_p$, $\lambda_g$ and $\lambda_l$ are hyper-parameters to balance the different losses. We set $\lambda_p = 100$, $\lambda_g = \lambda_l = 1$ in our experiments. \subsection{Face refinement} The human face may only occupy a small area in the input image, but contains many delicate details that the human vision system is sensitive to. The completion network $\mathbf{G}$ can recover general missing regions well, but may have difficulties with human faces as it is not specifically trained on them. Similar to Chan et al. \cite{chan2018everybody}, we thus propose a dedicated face refinement network $\mathbf{F}$ to refine inpainted human faces. We crop the inpainting result $G(x,p,m)$ produced by the completion network to a window around the synthesized face, $G_{f}(x,p,m)$, then feed it along with the cropped parsing map $\mathbf{p_f}$ and the cropped mask $\mathbf{m_f}$ into the face network $\mathbf{F}$. We calculate the center of mass of the face region through the human parsing map and crop the region around the center with size $64\times64$. Then $\mathbf{F}$ produces the residual face image $R_{f}=F(G_{f}(x,p,m),p_{f},m_{f})$ of size $64\times64$. The final synthesized result $\mathbf{\hat R}$ is the addition of the residual face image $\mathbf{R_f}$ and the initial inpainting result $G(x,p,m)$. Fig.~\ref{fig:face} shows the pipeline of the face refinement. After the training of the completion network is finished, we train the face network with the parameters of the completion network fixed. To ensure the realism of the refined face image, we introduce a face discriminator $\mathbf{D_f}$ to distinguish generated faces from real ones. We also use the perceptual loss to ensure that the generated faces are perceptually indistinguishable from the ground-truth. The architecture of the face network $\mathbf{F}$ is similar to the completion network $\mathbf{G}$ except that the number of residual blocks is reduced to 4.
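Before turning to the training objective, the cropping and residual composition just described can be sketched as follows. This is a schematic NumPy version: the face label index, the single-channel label-map layout and the \texttt{face\_net} callable are assumed placeholders, and the cropped parsing map and mask that are also fed to $\mathbf{F}$ are omitted for brevity.
\begin{verbatim}
import numpy as np

FACE_LABEL = 11  # assumed index of the face category in the parsing map
CROP = 64

def face_crop_box(parsing):
    """Top-left corner of the CROP x CROP window centered on the face
    region's center of mass in an (H, W) integer label map."""
    ys, xs = np.nonzero(parsing == FACE_LABEL)
    assert ys.size, "no face pixels found in the parsing map"
    cy, cx = int(ys.mean()), int(xs.mean())
    h, w = parsing.shape
    top = min(max(cy - CROP // 2, 0), h - CROP)
    left = min(max(cx - CROP // 2, 0), w - CROP)
    return top, left

def refine_face(result, parsing, face_net):
    """Add the residual image predicted by the face network to the
    completion result inside the window around the face."""
    top, left = face_crop_box(parsing)
    window = result[top:top + CROP, left:left + CROP]
    out = result.copy()
    out[top:top + CROP, left:left + CROP] = window + face_net(window)
    return out
\end{verbatim}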
The face discriminator $\mathbf{D_f}$ also adopts the architecture proposed by DCGAN \cite{radford2015unsupervised}. The full objective is \begin{equation}L_f=\lambda \sum_{i=1}^n \left\|\Phi_i(\hat x_f)-\Phi_i(\hat R_f)\right\|^2_2 + \log (1-D_{f}(p_f, \hat R_f)).\end{equation} We set $\lambda = 10$ in our experiments. \begin{figure} \includegraphics[width=3.5in]{face_pipeline.pdf} \caption{Overview of our face refinement network. We crop the initial inpainting result of the face region and feed it into the face network. The final output is the sum of the produced residual image and the initial result.} \label{fig:face} \end{figure} \subsection{Implementation details} We train the networks of the two stages separately. At stage-I, we train the human parsing network by stochastic gradient descent with momentum. We set the learning rate to 0.0001 and the momentum to 0.9. At stage-II, we use the Adam solver \cite{kingma2014adam} with a batch size of 1 to train all the networks. We set the learning rate to 0.0002. We scale the input image to $256\times256$ at training time. We randomly crop a rectangular region in the input image. The edge of the rectangle is randomly set in the range [64, 128]. Since we are concerned about completion for the human body, we make sure that the rectangle overlaps the human body in the image. We set the pixels inside the cropped region to the mean pixel value of the datasets. To prevent overfitting, we apply several data augmentation methods for training, including scaling, cropping and left-right flipping. When training the face network, we randomly crop a rectangular region of size $32\times32$ in the face area. At testing time, the input image goes through the human parsing network, the completion network and the face network sequentially to generate the final result. Because all the networks are fully convolutional, our method can be adapted to images of arbitrary size. We use Poisson image blending \cite{perez2003poisson} to post-process the final result, as previous completion methods \cite{iizuka2017globally,yang2017high} do, for a more consistent boundary of the completion region. \section{Conclusion and Limitations} We propose a two-stage deep learning framework to solve the portrait image completion problem. We first employ a human parsing network to extract structural information from the input image. Then we employ a completion network to generate the unknown region with the guidance of the parsing result, followed by a face network to refine the face appearance. We have demonstrated that, being aware of the structure of the human body, we can produce more reasonable and more realistic results compared to other methods. We have also shown the capability of our method for applications like occlusion removal and portrait extrapolation. Besides humans, we have also experimented with our method on animals and achieved impressive completion results, which indicates that our framework can be extended to other kinds of images where the inherent semantic structures could be encoded and predicted. Our method may fail in some cases, as shown in Fig.~\ref{fig:failure}. Firstly, when most of the textured region or the logo is covered, our method may not yield satisfactory completion results due to the inadequate information, as shown in Fig.~\ref{fig:failure} (left). Secondly, our model will be confused when the hole region is connected to multiple persons, as shown in Fig.~\ref{fig:failure} (right).
That is because our current human parsing network is only trained on portrait images with a single person. We plan to extend our framework to deal with multiple persons in the future. \begin{figure} \centering \includegraphics[width=3.2in]{failure_cases.pdf} \caption{Some failure cases from the ATR dataset. Our method cannot produce satisfactory results when the textured region is heavily covered, and is confused by multiple persons.} \label{fig:failure} \end{figure} \section{Introduction} \IEEEPARstart{T}{here} are common mistakes that novice users often make when shooting a portrait photo. As a typical example, while photography rules suggest that cutting off hands, feet, and foreheads can ruin the visual flow, many portrait images are taken with such improper composition. The feet are often cut off as the photographer is focusing mostly on the face region when shooting the picture (see Fig.~\ref{fig:teaser2}). At other times, the accessories that the person is carrying (e.g. the bag in Fig.~\ref{fig:teaser1}), or the other objects that partially occlude the main subject (e.g. the dog in Fig.~\ref{fig:occlusion_removal}), could be distracting and would better be removed from the photo. To remove these imperfections, one could specify the unwanted objects to remove, or expand the border of the image to try to cover the whole human body, either way leaving holes or blank regions in the image to be filled in properly. Although image completion has been actively studied in the last twenty years, there is no existing approach that works well for portrait images, where holes on the human body need to be filled in. A successful completion method is required to recover not only the correct semantic structure in the missing region, but also the accurate appearance of the missing object parts. Both goals are challenging to achieve for portrait photos. In terms of semantic structure, although the human body has a very constrained 3D topology, the large variation of its pose configurations, coupled with the 3D to 2D projection, makes accurate structure estimation and recovery from a single 2D image difficult. In terms of appearance, despite the large variation in clothing, there are strong semantic constraints such as symmetry that the completion algorithm has to obey, e.g. two shoes usually have the same appearance regardless of how far they are separated in the image. As we will show later, without paying special attention to these constraints, general-purpose hole filling methods often fail to generate satisfactory results on portrait photos (see Fig.~\ref{fig:ATR_completion_result} and \ref{fig:LIP_completion_result}).
Extracting human body structures from images and videos has been well studied in previous human parsing \cite{liang2015human,Gong_2017_CVPR} and pose estimation approaches \cite{newell2016stacked,cao2017realtime}. Naturally, we want to rely on these methods to estimate the human body structure from an incomplete portrait photo first, and then use it to guide the image completion process. Following this idea, we propose a two-stage deep learning framework for portrait photo completion and expansion. In the first stage, we utilize a human parsing network to estimate both a human pose and a parsing map simultaneously from the input image. The human pose is then used to help refine the parsing map, especially inside the unknown region. In the second stage, we employ an image completion network to synthesize the missing region in the input image with the guidance of the parsing map, followed by a face refinement network for improving the generated face region. These two networks are trained sequentially with both the perceptual loss and the adversarial loss for improved realism. We show that our approach can be used in many portrait image editing tasks that are difficult or impossible for traditional methods to achieve, such as portrait extrapolation for recovering missing human body parts (e.g.
Fig.~\ref{fig:extrapolation_result}), and occlusion removal (e.g. Fig.~\ref{fig:occlusion_removal}). Furthermore, we demonstrate that the proposed learning framework is quite generic, and is applicable to other types of images, such as those of horses and cows (e.g. Fig.~\ref{fig:horse_cow_result}). To the best of our knowledge, we are the first to integrate deep human body analysis techniques into an image completion framework. The main contributions of this paper are summarized as follows: \begin{itemize} \item we propose a novel two-stage deep learning framework where the human body structure is explicitly recovered to guide human image completion with face refinement; \item we show that our framework enables new portrait photo editing capabilities such as re-composition by extrapolation. \end{itemize} \begin{figure*} \centering \subfloat[][occlusion removal]{\includegraphics[width=2.4in]{teaser1.pdf}\label{fig:teaser1}} \subfloat[][portrait extrapolation]{\includegraphics[width=3.7in]{teaser2.pdf}\label{fig:teaser2}} \caption{We address the problem of portrait image completion and extrapolation. (a) shows that our method can remove the unwanted object from the portrait image. (b) shows that our method can extrapolate the portrait image to recover the lower body or the forehead.} \label{fig:teaser} \end{figure*} \section*{Acknowledgment} The authors would like to thank all the reviewers. This work was supported by the National Natural Science Foundation of China (Project Numbers 61561146393 and 61521002). Fang-Lue Zhang was supported by a Research Establishment Grant of Victoria University of Wellington (Project No. 8-1620-216786-3744). Ariel Shamir was supported by the Israel Science Foundation (Project Number 2216/15). \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{Related Work} \subsection{Image completion} Traditional image completion methods can be categorized into diffusion-based and patch-synthesis-based ones. Diffusion-based methods, first proposed by Bertalmio et al. \cite{bertalmio2000image} and then extended by Ballester et al. \cite{ballester2001filling} and Bertalmio et al. \cite{bertalmio2003simultaneous}, propagate nearby image structures to fill in the holes by assuming local continuity. These techniques, however, can only handle small holes or narrow gaps. Patch-based methods are derived from texture synthesis algorithms \cite{efros1999texture,efros2001image,kwatra2003graphcut}. They extract patches from the known region of the image and use them to fill large holes. Criminisi et al. \cite{criminisi2004region} proposed a best-first algorithm that searches for the most similar patch. Simakov et al. \cite{simakov2008summarizing} presented a global optimization approach based on bidirectional similarity. These techniques were greatly accelerated by PatchMatch \cite{barnes2009patchmatch,barnes2010generalized}, a randomized nearest neighbor field algorithm. They were further improved by Darabi et al. \cite{darabi2012image}, who added gradients to the distance metric. These methods can handle larger holes than diffusion-based ones, but still need semantic guidance for structured scenes. Many inpainting approaches rely on additional guidance for semantically meaningful hole filling. Some use manually specified guidance, such as points of interest \cite{drori2003fragment}, lines \cite{sun2005image}, and perspective \cite{pavic2006interactive}.
Other methods estimate image structures automatically, by various means such as tensor voting \cite{jia2003image}, search space constraints \cite{kopf2012quality}, statistics of patch offsets \cite{he2012statistics} and regularity in planar projection \cite{huang2014image}. However, since they only depend on low-level visual features, such methods can only derive meaningful guidance in simple cases. Hays and Efros \cite{hays2007scene} presented the first data-driven method to use a large reference dataset for hole filling. Whyte et al. \cite{whyte2009get} extended this approach with geometric and photometric registration. Zhu et al. \cite{zhu2016faithful} presented a faithful completion method for famous landmarks using their images found on the Internet. Barnes et al. \cite{barnes2015patchtable} proposed a patch-based data structure for efficient patch queries from the image database. These techniques may fail if the input image has a unique scene that cannot be found in the dataset. Recently, deep learning has emerged as a powerful tool for image completion. However, initial approaches could only handle very small regions \cite{xie2012image,kohler2014mask,ren2015shepard}. Context encoders \cite{pathak2016context} introduced a generative adversarial loss \cite{goodfellow2014generative} into the inpainting network and combined this loss with an $L_2$ pixel-wise reconstruction loss. It can produce plausible results in a $128\times128$ image for a centered $64\times64$ hole. Yang et al. \cite{yang2017high} proposed to update the coarse result iteratively by searching for the nearest neural patches in a texture network, in order to handle high-resolution images. Yeh et al. \cite{yeh2017semantic} searched for the closest encoding in the latent space with an adversarial loss and a weighted content loss, and then decoded it to a new complete image. More recently, Iizuka et al. \cite{iizuka2017globally} proposed an end-to-end completion network that was trained with a local and a global discriminator. The local discriminator examines the small region centered around the hole for local realism, while the global discriminator examines the entire image for global consistency. Dilated convolution layers \cite{yu2015multi} were also used to enlarge its spatial support. Yu et al. \cite{yu2018generative} extended this method with a novel contextual attention layer, which utilizes features surrounding the hole. Li et al. \cite{Li_2017_CVPR} focused on face completion with the help of semantic regularization. Existing learning-based methods are able to produce realistic completion results for general scenes such as landscapes and buildings, or certain specific objects such as human faces. However, they cannot handle portrait images as they do not consider the high-level semantic structures of the human body. \subsection{Human parsing} Human parsing has been extensively studied in the literature. Liu et al. \cite{liu2015matching} combined a CNN with KNN to predict matching confidences and displacements for regions in the test image. Co-CNN \cite{liang2015human} integrates multi-level context into a unified network. Chen et al. \cite{chen2016attention} introduced an attention mechanism into the segmentation network, which learns to weight the importance of features at different scales and positions. Liang et al.
\cite{liang2016semantic,liang2016semantic2} employed Long Short-Term Memory (LSTM) units to exploit long-range relationships in portrait images. Without recovering the underlying human body structure, previous human parsing methods sometimes produce unreasonable results. In contrast, extracting human body structure has been extensively studied by pose estimation techniques \cite{wei2016convolutional,newell2016stacked,cao2017realtime}. Therefore, pose information has recently been integrated into human parsing frameworks to improve their performance. Gong et al. \cite{Gong_2017_CVPR} presented a self-supervised structure-sensitive learning approach. It generates joint heatmaps from the parsing result map and the ground-truth label, and then calculates the joint structure loss from their Euclidean distances. JPPNet \cite{liang2018look} builds a unified framework, which learns to predict human parsing maps and poses simultaneously. It then uses refinement networks to iteratively refine the parsing map and the pose. In our problem, pose estimation is helpful to guide the parsing prediction in the unknown region. Therefore, we adopt the basic architecture of JPPNet in our human parsing network, and specifically improve it for the following completion stage. \subsection{Portrait image editing} Previous work has explored editing portrait images in various ways, and body shape editing in particular has received special attention. Zhou et al. \cite{zhou2010parametric} integrated a 3D body morphable model into a single-image warping approach, which is capable of reshaping a human body according to different weights or heights. PoseShop \cite{chen2013poseshop} constructed a large segmented human image database, from which new human figures can be synthesized with given poses. There are also methods for generating temporally-coherent human motion sequences with required poses \cite{xu2011video} or shapes \cite{jain2010moviereshape}. These techniques are mostly based on geometric deformation or database retrieval, so their flexibility is limited. Recently, many studies have been devoted to human image synthesis based on deep generative models. Lassner et al. \cite{Lassner_2017_ICCV} used the Variational Auto-Encoder (VAE) \cite{kingma2013auto} to synthesize diverse clothes in human images. FashionGAN \cite{Zhu_2017_ICCV} can change the dress of the human figure in an input image according to a given text description. Zhao et al. \cite{zhao2017multi}, Ma et al. \cite{ma2017pose} and Balakrishnan et al. \cite{Balakrishnan_2018_CVPR} learned to generate human images with user-specified views or poses, while maintaining the character's identity and dress. Our goal is quite different from these methods, as we aim to recover only the missing parts of the human figure, not to generate an entirely new one. \section{Results} We evaluate our method on two human image datasets, the ATR dataset \cite{liang2015human} and the LIP dataset \cite{Gong_2017_CVPR}. The ATR dataset contains 17,700 human images with parsing annotations for 17 body part categories. We randomly select 1/10 of the images for testing and use the rest for training. The LIP dataset contains 30,462 training images and 10,000 validation images. Each image in the LIP dataset has per-pixel parsing annotations for 19 categories and annotations of 16 keypoints for pose estimation.
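Parsing quality on these datasets is later reported as mean intersection over union (IoU), both over the entire image and over the unknown region (see Table~\ref{Tab:parsing}). As a concrete reference, a minimal, generic implementation of this metric could look as follows; this sketch is ours, not the authors' code, and the \texttt{mask} argument is how we would restrict the metric to the hole region:
\begin{verbatim}
import numpy as np

def mean_iou(pred, gt, num_classes, mask=None):
    """Mean intersection-over-union between two integer parsing maps.

    pred, gt : (H, W) arrays of per-pixel category labels.
    mask     : optional boolean (H, W) array restricting the metric,
               e.g. to the unknown (hole) region only.
    """
    if mask is not None:
        pred, gt = pred[mask], gt[mask]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip categories absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
\end{verbatim}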
Because there is no pose annotation in the ATR dataset, we use JPPNet \cite{liang2018look} pre-trained on the LIP dataset to predict the body pose for each image in the ATR dataset, and use the result as the ground-truth label for training. We initialize the human parsing network with parameters loaded from the pre-trained JPPNet, and then train the network for 10 epochs on the two datasets. Meanwhile, we train the completion network for 200 epochs on the ATR dataset and 100 epochs on the LIP dataset. After that, we train the face network for 100 epochs on the ATR dataset and 50 epochs on the LIP dataset. It takes about 20 hours to train the human parsing network, 100 hours for the completion network and 50 hours for the face network. When testing on the GPU, our two-stage framework takes about 0.48\,s to produce the final result for an input image of size $256 \times 256$. We evaluate our method using an Intel(R) Core(TM) i7-4770 CPU @ 3.40\,GHz with 4 cores and an NVidia GTX 1080 Ti GPU. \begin{table}[tp] \centering \begin{threeparttable} \begin{tabular}{cccccc} \toprule Dataset&Method&$L_1$ error&PSNR&SSIM&FID \cr \midrule \multirow{6}{*}{ATR} & PatchMatch \cite{barnes2009patchmatch} & 43.0182 & 18.8905 & 0.8615 & 50.9721 \cr & Iizuka et al. \cite{iizuka2017globally} & 12.5181 & 23.7539 & 0.9532 & 21.4838 \cr & Yu et al. \cite{yu2018generative} & 13.7748 & 23.6954 & 0.9481 & 16.4192 \cr & G & 10.4527 & 25.2684 & 0.9591 & 11.8323 \cr & G+P & 9.3675 & 27.0852 & \bf{0.9736} & \bf{10.6247} \cr & G+P+F (ours) & \bf{8.4620} & \bf{27.1511} & 0.9730 & 10.6578 \cr \midrule \multirow{6}{*}{LIP} & PatchMatch \cite{barnes2009patchmatch} & 35.4193 & 20.6823 & 0.9063 & 10.9311 \cr & Iizuka et al. \cite{iizuka2017globally} & 12.8931 & 25.0089 & 0.9646 & 9.3290 \cr & Yu et al. \cite{yu2018generative} & 16.8708 & 22.9498 & 0.9437 & 5.3085 \cr & G & 12.9677 & 25.1877 & 0.9630 & 3.8510 \cr & G+P & 10.6944 & 26.0007 & \bf{0.9696} & 3.1399 \cr & G+P+F (ours) & \bf{10.3144} & \bf{26.0641} & 0.9693 & \bf{2.9604} \cr \bottomrule \end{tabular} \end{threeparttable} \caption{Comparison with existing completion methods in terms of $L_1$ error, PSNR, SSIM and FID.} \label{Tab:completion} \end{table} \begin{figure} \centering \includegraphics[width=3.5in]{parsing_comparison.pdf} \caption{Completion results with different human parsing methods. Our parsing method leads to the best completion performance.} \label{fig:parsing_comparison} \end{figure} \begin{figure} \centering \includegraphics[width=3.5in]{face_completion.pdf} \caption{Completion results before (middle) and after (right) the face refinement. The face network refines the facial structures and makes the completion result more realistic.} \label{fig:face_completion} \end{figure} \subsection{Comparisons with existing completion methods} We compare our approach with several existing completion methods, including Photoshop Content Aware Fill (PatchMatch) \cite{barnes2009patchmatch}, Iizuka et al. \cite{iizuka2017globally} and Yu et al. \cite{yu2018generative}. We train the model of Iizuka et al. \cite{iizuka2017globally} on the training sets of ATR and LIP for the same numbers of epochs as our method, initializing it with parameters pre-trained on the Places2 dataset \cite{zhou2017places}. We also train the model of Yu et al. \cite{yu2018generative} for the same numbers of epochs and initialize it with the model pre-trained on the ImageNet dataset. We test the models of Iizuka et al. \cite{iizuka2017globally}, Yu et al.
\cite{yu2018generative} and our method at the scale of $256\times256$. We then resize the results to the scale of the original images for a fair comparison. Fig.~\ref{fig:ATR_completion_result} shows some comparisons on the ATR test set. Fig.~\ref{fig:LIP_completion_result} shows some comparison results on the LIP validation set. These results suggest that our approach can generate plausible results for large holes, while the other methods fail to handle complicated human body structures. Fig.~\ref{fig:ATR_completion_m_result} also shows some completion results with multiple holes by our method. We also compare our approach with these methods quantitatively. We report the evaluation in terms of $L_1$ error, PSNR and SSIM \cite{wang2004image}. Table~\ref{Tab:completion} shows that our method outperforms the other methods under these measurements on the two public datasets. Note that these metrics are by no means ideal for evaluating completion quality, since many visually pleasing completion results other than the ground-truth image can be equally acceptable. Recently, FID \cite{heusel2017gans} was proposed to measure the perceptual quality of generative models; our method also achieves the best performance under this perceptual metric. \begin{figure*} \centering \includegraphics[width=6.5in]{ATR_result.pdf} \caption{Comparison results on the ATR testing set. We compare our method with PatchMatch \cite{barnes2009patchmatch}, Iizuka et al. \cite{iizuka2017globally} and Yu et al. \cite{yu2018generative}.} \label{fig:ATR_completion_result} \end{figure*} \begin{figure} \centering \includegraphics[width=3.5in]{lip_result.pdf} \caption{Comparison results on the LIP validation set. We compare our method with PatchMatch \cite{barnes2009patchmatch}, Iizuka et al. \cite{iizuka2017globally} and Yu et al. \cite{yu2018generative}.} \label{fig:LIP_completion_result} \end{figure} \begin{figure} \centering \includegraphics[width=3.5in]{ATR_completion_m.pdf} \caption{Completion results with multiple holes on the ATR testing set.} \label{fig:ATR_completion_m_result} \end{figure} \subsection{Comparisons on human parsing} We compare our human parsing network with JPPNet \cite{liang2018look} to show its effectiveness for completion. We compare against JPPNet trained on complete images, as well as a version retrained on incomplete images with the same hole patterns as our parsing network (JPPNet-retrained). We use the mean of intersection over union (IoU) as the performance metric, measured both over the entire image and over the unknown region, and report the results in Table~\ref{Tab:parsing}. As shown in the table, our method achieves the best parsing performance, especially inside the unknown region. Fig.~\ref{fig:parsing_comparison} also shows that our human parsing method leads to the best completion result. \begin{table}[tp] \centering \begin{threeparttable} \begin{tabular}{cccc} \toprule Dataset & Method & Entire image & Unknown region \cr \midrule \multirow{4}{*}{ATR} & JPPNet \cite{liang2018look} & 0.5193 & 0.1132 \cr & JPPNet-retrained & 0.5786 & 0.3656 \cr & Ours ($\alpha=0$) & \bf{0.5965} & 0.3735 \cr & Ours & 0.5937 & \bf{0.3971} \cr \midrule \multirow{4}{*}{LIP} & JPPNet \cite{liang2018look} & 0.4125 & 0.0796 \cr & JPPNet-retrained & 0.4649 & 0.3099 \cr & Ours ($\alpha=0$) & \bf{0.4817} & 0.3257 \cr & Ours & 0.4685 & \bf{0.3381} \cr \bottomrule \end{tabular} \end{threeparttable} \caption{Comparison of the human parsing methods in terms of mean IoU.
We measure the IoU both over the entire image and over the unknown region on the ATR and LIP datasets.} \label{Tab:parsing} \end{table} \subsection{Completion results for animals} Although we focus on human body completion in this paper, the proposed learning framework is generic and can be applied to other types of images with highly structured visual features, such as animal images. To demonstrate this, we train our model on the Horse-Cow dataset \cite{wang2015semantic}, which contains 295 images for training and 277 images for testing. We additionally label pose keypoints for this dataset, since it only includes parsing annotations. We re-train the parsing network for 300 epochs and the completion network for 1000 epochs on the training set. Notably, we train on horses and cows together, since these two kinds of animals have similar structures. Fig.~\ref{fig:horse_cow_result} shows some completion results on the testing images. We also compare our method with Iizuka et al. \cite{iizuka2017globally} and Yu et al. \cite{yu2018generative}, training the models of the two methods for the same numbers of epochs as ours. From the results, we can see that our two-stage framework generalizes to other image categories whose inherent semantic structure can be captured and predicted. \begin{figure} \centering \includegraphics[width=3.2in]{horse_cow_comparison1.pdf} \caption{Comparison results on the Horse-Cow dataset. We compare our method with Iizuka et al. \cite{iizuka2017globally} and Yu et al. \cite{yu2018generative}; our method is generic and also applicable to animals such as horses and cows.} \label{fig:horse_cow_result} \end{figure} \subsection{Ablation study} We perform an ablation study to validate the importance of our human body structure guidance and face refinement network. We train an identical completion network without the parsing map as input, keeping the training protocol unchanged; this model is denoted by G. We also train a completion network with human body parsing maps, denoted by G+P. Similarly, our full model with the face refinement network is denoted by G+P+F. The results in Table~\ref{Tab:completion} show that the parsing guidance and the face refinement both benefit the completion result. Fig.~\ref{fig:face_completion} compares the completion results before and after the face refinement, showing the effectiveness of our face network. We also evaluate the effectiveness of our spatial weighted loss for human parsing by comparing with the results when $\alpha = 0$. As shown in Table~\ref{Tab:parsing}, the spatial weighted loss increases the parsing accuracy inside the holes, which benefits the subsequent completion process.
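The exact form of the spatial weighted loss is not spelled out above, so the following PyTorch-style sketch should be read as one plausible reading rather than the authors' implementation: it simply up-weights the per-pixel parsing loss by a factor $1+\alpha$ inside the hole, so that $\alpha=0$ recovers the unweighted baseline in Table~\ref{Tab:parsing} (the function and tensor names are ours):
\begin{verbatim}
import torch
import torch.nn.functional as F

def spatial_weighted_parsing_loss(logits, target, hole_mask, alpha):
    """Cross-entropy over a parsing map, up-weighted inside the hole.

    logits    : (N, C, H, W) raw network outputs.
    target    : (N, H, W) long tensor of ground-truth category labels.
    hole_mask : (N, H, W) float tensor, 1 inside the hole, 0 outside.
    alpha     : extra weight for pixels inside the hole
                (alpha = 0 recovers the unweighted loss).
    """
    per_pixel = F.cross_entropy(logits, target, reduction='none')  # (N, H, W)
    weight = 1.0 + alpha * hole_mask
    # Normalize by the total weight so the loss scale is stable in alpha.
    return (weight * per_pixel).sum() / weight.sum()
\end{verbatim}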
\section{Introduction} Many results in Lie group theory have both an algebraic and a geometrical origin. The Weyl character formula~\cite{weyl.group} is no exception and among the many proofs of it some are purely algebraic in nature, others purely geometrical. This complementarity often extends to the case of infinite dimensional loop groups with central extension (for example, \cite{kac.book,pressley-segal.loopgroups,f-l-m.book}). In view of the current importance of loop groups and of the related affine Lie algebras in theoretical physics it is useful to have, as much as possible, independent proofs of important results based on physical methods. It is well known from past experience in quantum mechanics that the operator formalism is best suited to obtain algebraic insight while the path integral is better suited to reveal the geometrical foundations of a particular theory. We adopt this viewpoint to derive the central result of this paper which is the Weyl-K\v{a}c character formula. The motivation is to get a better geometrical understanding of some related aspects of loop groups and conformal field theory. To develop the necessary machinery in field theory it is always useful to have, as far as possible, a quantum mechanical analogy. The corresponding derivation of the classical Weyl character formula was previously published separately \cite{asw.weyl} and will be referred to as~I throughout the rest of this paper. The reader unfamiliar with the material of Section~\ref{review} is encouraged to consult I for a pedagogical introduction to the methods used here. We obtain the Weyl-K\v{a}c character formula by computing the index of a certain Dirac operator on an infinite dimensional manifold. The index obtained is different from previous elliptic genus computations \cite{akmw.irvine,akmw.cmp,p-s-w,witten.genera,witten.loop} which were associated with Dirac operators on $LM$, the loop space of a finite dimensional manifold $M$. The elliptic genus from the algebraic topology viewpoint is discussed in \cite{landweber.book}. Here we discuss the Dirac operator on $LG/T$, a homogeneous space naturally associated to a connected, simply connected, simple, compact Lie group $G$ with maximal torus $T$. The space $LG/T$ is not the loop space of any manifold. However, there is still an $S^1$ action on it which plays an important role since it is responsible for the affine grading of the character. The paper is organized as follows. In Section~\ref{review} we review the Borel-Weil construction of the representations of a loop group $LG$. The representations are obtained as holomorphic sections of line bundles over the coset space $LG/T$. We also explain the crucial role of $\widetilde{LG}$, the central extension of $LG$. In particular we discuss how the central extension may be seen as a $U(1)$ bundle over $LG$ \cite{pressley-segal.loopgroups}, an interpretation which is important for our purpose. We also discuss the generic features of the construction of Atiyah and Bott~\cite{a-b.1,a-b.2} which links group characters with fixed point formulas and character indices. This analysis permits us to identify the correct physical theory whose supersymmetric ground states will realize the representations of the loop group. We also introduce the partition function which will give the character formula for loop groups.
The explicit realization of these ideas in the framework of a very special field theory with chiral supersymmetry as well as the construction of its lagrangian is developed in Section~\ref{lagrangian}. There we introduce the coupling of ``matter'', \hbox{\it i.e.}\ the $T$ gauge couplings which correspond to the Borel-Weil bundles mentioned above. This entire construction requires a delicate extension of the concept of {\em horizontal supersymmetry\/} already introduced in~I. Finally the explicit computation of the Weyl-K\v{a}c character formula is detailed in Section~\ref{computation}. Most of our notational conventions are defined in Appendix~A and we will use them freely without further notice. Bernard~\cite{bernard.wzw} has derived the Weyl-K\v{a}c character formula by using a mixture of conformal field theory and mathematical results. He uses mixed Virasoro --- affine Lie algebra Ward identities in the WZW model~\cite{witten.wzw}, properties of the Macdonald identities derived without using affine algebras, and a variety of results related to the heat kernel on $G$. A field theoretic purely algebraic construction of the Weyl-K\v{a}c character formula was presented by Bouwknegt, McCarthy and Pilch~\cite{b-m-p.freefield}. These authors apply the Euler-Poincar\'e-Lefschetz principle to certain free field Fock spaces which are built with the aid of BRST operators associated with ``screening currents''. Warner~\cite{warner.kac} has employed supersymmetric index technology to give a proof of the Weyl-K\v{a}c character formula along more algebraic lines. \section{Borel-Weil Theory and Further Preliminaries} \label{review} We now briefly set the stage for our problem (a complete treatment in the spirit of this paper was given in I for the ordinary Lie group case). Firstly we will build the representations of the loop group following the method of Borel-Weil: \begin{itemize} \item to each irreducible representation we associate its infinitesimal character in the maximal torus of the group (essentially the highest weight of the representation); \item this character uniquely defines a line bundle over the complex manifold formed by the coset space of the group over its maximal torus; \item the holomorphic sections of the line bundle provide an explicit construction of the representation. \end{itemize} The group we have to consider is $\widetilde{LG}$, the central extension by $U(1)$ of the loop group $LG$ (which locally looks like $LG \times U(1)$). The multiplication of two elements (in local coordinates) $(g(x),u)$ and $(g'(x),v)$ of $\widetilde{LG}$ is given by \begin{equation} (g(x),u)(g'(x),v)=(g(x)g'(x),uv\Phi(g,g'))\;, \end{equation} where $\Phi$ denotes the cocycle associated with the $U(1)$ central extension of the loop group; associativity of this multiplication law requires $\Phi$ to satisfy the cocycle condition $\Phi(g,g')\,\Phi(gg',g'')=\Phi(g',g'')\,\Phi(g,g'g'')$. In this case the steps outlined above construct the line bundle ${\cal L}_{(\lambda,k)}$ over $\widetilde{LG}/(T\times U(1))$ associated with a character $(\lambda,k)$ of $T\times U(1)$. Three important remarks are in order here. Firstly, the space $\widetilde{LG}/(T\times U(1))$ is isomorphic to $LG/T$. Secondly, one can view the central extension $\widetilde{LG}$ as a special $U(1)$ bundle over $LG$\footnote{For more details on these questions we refer the reader to the excellent exposition given in the book of Pressley and Segal \cite{pressley-segal.loopgroups}. We remind the reader that the Lie group $G$ is connected, simply connected, simple and compact.}.
Thirdly, the special line bundle ${\cal L}$ over $LG/T$ arising from the basic central extension of $LG$ (as discussed in the Introduction) is isomorphic to ${\cal L}_{(0,1)}$. The transposition of these ideas to a physical context is {\it a priori\/} straightforward: we construct a quantum mechanical system whose configuration space is the coset manifold $LG/T$ and couple it to an external gauge field corresponding to the group $T\times U(1)$ via an ordinary minimal coupling $A_\mu\dot{x}^\mu$. The wave functions of this system will be sections of ${\cal L}_{(\lambda,k)}$. The subtlety lies in the proper identification of the $U(1)$ part of this coupling. To elucidate this point it is best to view a quantum mechanical system over $LG$ as a non-linear two dimensional $\sigma$-model with group $G$. It can then be shown that the central $U(1)$ coupling corresponds to the addition of the Wess-Zumino term (see below). We still have to quotient by $T\times U(1)$. This amounts to choosing an appropriate connection as we will see in Section~\ref{computation}. The second main ingredient we need is, {\it mutatis mutandis\/}, the Atiyah-Bott construction \cite{a-b.1,a-b.2}. Since the irreducible representation coincides with the holomorphic sections of ${\cal L}_{(\lambda,k)}$, it is clear that they belong to the kernel of $\bar{\partial}\otimes I_{{\cal L}_{(\lambda,k)}}$, the $\bar{\partial}$ operator over $LG/T$ twisted by the line bundle ${\cal L}_{(\lambda,k)}$. The computation of the index of this operator should in principle give us the dimension of the representation\footnote{Actually one should prove a vanishing theorem since the irreducible representation is given by the cohomology group $H^{(0,0)}(LG/T,{{\cal L}_{(\lambda,k)}})$ and the rest of the cohomology groups are required to vanish.} while to find the character of the irreducible representation we have to compute the character index. We can equivalently work with the Dirac operator $\hbox{$\partial$\kern-1.2ex\raise.2ex\hbox{$/$}}$ provided we compensate by an extra twist (see I) to make up for the difference between the Dirac operator and $\bar{\partial}$. The analogous statements in our physical setting are familiar: we first construct the supersymmetric extension of the model (the generator of supersymmetry is identified with the Dirac operator\footnote{The Dirac operator we consider is the naive Dirac operator plus Clifford multiplication by the natural $S^1$ vector field~\cite{witten.argonne} plus appropriate gauge couplings.} on the configuration space) and then compute $\mathop{\rm Tr}(-1)^Fg$ to obtain the character index formula~\cite{goodman-witten.index,goodman.index}. Here $(-1)^F$ is the fermion parity operator and $g\in T$. Again this construction carries over to the loop group case; one simply works with the Dirac-Ramond operator $\bar{G}_0$ with appropriate gauge couplings. The Dirac-Ramond operator is the generator of supersymmetry in our $\sigma$-model. Notice that the above discussion implies that one should have only one supersymmetry generator: our construction will be chiral in an intrinsic way. The general principles involved in the computation of the index of the Dirac-Ramond operator are by now standard from the work on elliptic genera~\cite{akmw.irvine,akmw.cmp,p-s-w,witten.genera,witten.loop}. The main task facing us is thus the construction of the lagrangian germane to the situation we have just analyzed.
Let us warn the reader that the description just given is very sketchy; in particular, we will see below and in the next section that the naive $\sigma$-model lagrangian is completely unacceptable before we add the Wess-Zumino term. The introduction of the central extension is forced by reasons of symmetry. All these details as well as the boundary conditions will be discussed at length below and in the next section. Next we note that there is a left action by the maximal torus of the group (here $T\times U(1)$) on the coset space. The fixed points of this action are given by the affine Weyl group $\waff$ defined in Section~\ref{computation}. In the loop group case this fixed point set is infinite; still we expect the index computation to reduce to a neighborhood of the fixed point set~\cite{pressley-segal.loopgroups}. The first step in implementing these ideas should be the construction of a lagrangian which admits the loop group $LG$ as a group of symmetries. As previously mentioned, let us see why the obvious attempt at constructing such a lagrangian fails. The simplest choice is the standard $(1+1)$-dimensional nonlinear $\sigma$-model defined by a map $g: \Sigma \to G$ where the world sheet $\Sigma$ will be taken to be a torus. The dynamics of the model defined by the classical action \begin{equation} \label{NLS} \int_\Sigma d^2z\; \mathop{\rm Tr} (g^{-1} \partial_a g) (g^{-1} \partial_a g) \end{equation} may be interpreted as the motion of a ``particle'' on the loop group $LG$. Unfortunately, (\ref{NLS}) does not have a large enough symmetry group for our purposes. The classical action is not invariant under the action by the loop group $LG$. In fact, the symmetry group of (\ref{NLS}) is $G \times G$ where the group action is defined by $g(x,y) \mapsto h_L g(x,y) h_R^{-1}$ for $(h_L,h_R) \in G\times G$. The hamiltonian defined by (\ref{NLS}) does not have the $LG$ symmetry we require. However, it is well known \cite{witten.wzw} that the addition of the Wess-Zumino term to the lagrangian (\ref{NLS}) extends the symmetry group to $LG\times LG$ (in Minkowski space). The Wess-Zumino-Witten (WZW) action~\cite{witten.wzw} reads \begin{eqnarray} I_{\rm WZW} &=& { k \over 6 \pi i K(h_\psi,h_\psi)} \left( \vphantom{\int_\Sigma \int_B}\right. - 6i \int_\Sigma d^2z\; \mathop{\rm Tr}\left\{ (g^{-1} \partial_z g)(g^{-1}\partial_{\bar{z}}g) - \gamma \partial_z \gamma + (g^{-1} \partial_z g)\gamma\gamma \right\} \nonumber\\ \label{WZW} &+&\left. \vphantom{\int_\Sigma \int_B} \left[ \int_B \mathop{\rm Tr}(g^{-1} dg)^3 + 6i \int_\Sigma d^2 z\;\mathop{\rm Tr}( g^{-1} \partial_z g)\gamma\gamma \right] \right)\;, \end{eqnarray} where $B$ is a three manifold such that $\partial B=g(\Sigma)$, $\mathop{\rm Tr}$ stands for $\displaystyle \mathop{\rm Tr}_{\rm ad}$, and $k$ is a positive integer; see Appendix~A for the notation. This model is conformally invariant and also formally admits $LG_{\Bbb{C}} \times LG_{\Bbb{C}}$ as a symmetry group, where $G_{\Bbb{C}}$ is the complexification of the Lie group $G$. To be more precise one has the following formal symmetry of the action: $g(z,\bar{z}) \mapsto h_L(z) g(z,\bar{z}) h_R(\bar{z})^{-1}$ where we think of the left action as being generated by locally holomorphic maps into $G_{\Bbb{C}}$ and the right action being generated by locally antiholomorphic maps into $G_{\Bbb{C}}$. For many practical purposes we may think of $LG_{\Bbb{C}}$ as analytic maps of an annulus into the group $G_{\Bbb{C}}$.
The relative normalization of the kinetic energy term of (\ref{WZW}) and the Wess-Zumino term was forced on us by demanding that we choose a lagrangian which admits an $LG\times LG$ symmetry\footnote{Throughout this article we will follow the physics convention of referring to this symmetry as $LG \times LG$. A further discussion of $LG$ versus $LG_{\Bbb{C}}$ will be given shortly.}. We now explain the phrase ``Wess-Zumino'' term which is liberally used throughout this article. A Wess-Zumino term is a special case of the following general set up. Assume $M$ is a connected, simply connected manifold with a line bundle with connection~$A$. The lagrangian describing the motion of a particle (on the base $M$) moving in the presence of the connection $A$ will generically have three types of terms: kinetic energy terms, potential energy terms and a gauge coupling term. We are interested in the gauge coupling term. Assume a path $\gamma$ begins at a point $u_0\in M$ and ends at $u_1$. The gauge coupling term contribution to the path integral is simply \begin{equation} \label{par-trans} \exp \int_\gamma A\;, \end{equation} \hbox{\it i.e.}\ parallel transport from $u_0$ to $u_1$ along $\gamma$. Note that this object transforms ``bi-locally'' under a gauge transformation. When $\gamma$ is a loop, the simply connected nature of $M$ tells us that $\gamma = \partial D$ where $D$ is a disk. In this case we see that \begin{equation} \exp \int_\gamma A = \exp \int_D F\; \end{equation} where $F= dA$ is the curvature. Thus for loops, the lagrangian may be formulated in terms of curvature. Note that $\exp \int_D F$ is independent of the choice of disk because the first Chern class of a line bundle $\int iF/2\pi$ is integral. In the case where $M=LG$ the term $\int_D F$ is called a Wess-Zumino term. Although we have to consider open paths, we will be able to formulate the problem in terms of curvature which simplifies calculations. Our path integral calculation is dominated by the critical points of the steepest descent approximation. Since the result is given exactly by the quadratic approximation all we have to do is understand what happens in a neighborhood of a critical point $u_c$. Pick a family $\{\Gamma(u)\}$ of fiducial paths connecting the origin $u_c$ to a point $u$ in the neighborhood. If $\gamma$ is a path connecting the initial point $u_0$ to the final point $u_1$ then the gauge coupling term may be written as \begin{equation} \label{WZ} \exp \int_\gamma A = \exp \left\{ \int_{D(\gamma)} F - \int_{\Gamma(u_0)} A + \int_{\Gamma(u_1)} A \right\}\;, \end{equation} where $D(\gamma)$ is a disk with boundary given by the loop $\Gamma(u_0)\circ\gamma\circ\Gamma(u_1)^{-1}$. In the steepest descent approximation to the path integral, we have to sum over all paths $\gamma$ in the neighborhood of the critical point $u_c$. As the path $\gamma$ varies, the only term in (\ref{WZ}) which changes is the curvature term. The line integrals along $\Gamma(u_0)$ and $\Gamma(u_1)$ are there to enforce the gauge transformation properties of parallel transport. Remember that the curvature term is gauge invariant. The situation is actually a bit better as we will see later in this section. Even though the correct coupling is (\ref{par-trans}) we will abuse notation and write it as a Wess-Zumino curvature term. The justification is twofold. First, we have the discussion of the previous paragraph.
Second, in the case of a loop group, it is very easy to write the curvature yet the expression for the connection is neither nice nor illuminating. From now on we will blur the distinction between a Wess-Zumino term and the correct gauge coupling. We interpret a Wess-Zumino term as parallel transport when necessary. The connection to Borel-Weil theory will require a supersymmetric model and in anticipation we have included a $(0,1/2)$ fermion $\gamma$ in (\ref{WZW}). The fermion $\gamma(z,\bar{z})$ is a {\it left invariant\/} element of the Lie algebra of $G$ and is the superpartner of $g$ (see Section~\ref{lagrangian}). Equation~(\ref{WZW}) is a chiral $(0,1)$ supersymmetric extension of the ordinary WZW action\footnote{A non-chiral $(1,1)$ supersymmetric WZW was first studied in~\cite{d-k-p-r.susywzw}. The model discussed here is quite different.}. Notice that a term of the form $(g^{-1}\partial_z g)\gamma\gamma$ does not appear in (\ref{WZW}) due to a cancellation between the contribution in the curly braces and the one in the square brackets. The curly braces expression and the square brackets expression are each independently supersymmetric. In fact, the term in curly braces is the generalization of equation (4.15) of I to field theory. The term in square brackets is the supersymmetric version of the Wess-Zumino term. It is of the form $A \dot{x} + F \psi\psi$ discussed in I (actually the $A \dot{x}$ term is written as a curvature term). Later in this section we will see that the curvature is given by (\ref{Lambda-def}) and thus $(g^{-1}\partial_z g)\gamma\gamma$ is of the $F\psi\psi$ type. The classical equations of motion are \begin{eqnarray} \partial_z\left( g^{-1} \partial_{\bar{z}} g \right) &=& 0\;,\\ \partial_z \gamma &=& 0\;. \end{eqnarray} Action (\ref{WZW}) is invariant under the supersymmetry transformations: \begin{eqnarray} g^{-1} \delta_s g &=& \varepsilon\gamma\;,\\ \delta_s \gamma &=& \varepsilon(g^{-1}\partial_{\bar{z}} g - \gamma\gamma)\;, \end{eqnarray} where $\varepsilon$ is an anticommuting parameter. The associated supercurrent has conformal weight $(0,3/2)$ and is defined by \begin{equation} {\cal S} \propto \mathop{\rm Tr}\left[ \left(g^{-1}\partial_{\bar{z}}g\right) \gamma + \gamma\gamma\gamma \right]\;. \end{equation} The two current algebras associated with $LG\times LG$ are given by \begin{eqnarray} \label{Jbar} J_{\bar{z}} &=&{2k\over K(h_\psi,h_\psi)} \left(g^{-1}\partial_{\bar{z}}g + \gamma\gamma\right) \;,\\ J_z &=&{-2k\over K(h_\psi,h_\psi)}\; \left(\partial_z g\right) g^{-1}\;. \end{eqnarray} Note that $J_{\bar{z}}$ generates the right group action and $J_z$ generates the left group action. To each element $X\in L{\euf g}$, the Lie algebra of $LG$, we associate the operator \begin{eqnarray} \label{currents} J_X&=&-\oint {dz\over 2\pi i} K(X(z),J(z))\\ &=&\oint {dz\over 2\pi i} X^a(z)J_a(z)\\ &=&2k\oint {dz\over 2\pi i} {K(\partial_zgg^{-1},X)\over K(h_\psi,h_\psi)}\; . \end{eqnarray} The affine algebra is given by the commutation relations: \begin{equation} \label{k-m} \left[ J_X,J_Y\right]=J_{[X,Y]}- 2k \;\oint {dz\over 2\pi i}\; {K\left(X(z),{dY(z)\over dz}\right) \over K(h_\psi,h_\psi)}\; , \end{equation} or in an orthonormal basis where $K(e_a,e_b)=-\delta_{ab}$ and $[e_a,e_b]=f_{ab}{}^c e_c$ define the structure constants, the associated operator product expansion is \begin{equation} J_a(z)J_b(w)\sim - {\delta_{ab}\over K(h_\psi,h_\psi)}{2k\over (z-w)^2} + {f_{ab}{}^c J_c(w)\over (z-w)}\; . 
\end{equation} We now return to the discussion of $LG$ and $LG_{\Bbb{C}}$. For simplicity we will temporarily assume that the worldsheet $\Sigma$ is either Minkowski or Euclidean space and only provide a ``local description''. The confusion in whether to write $LG$ or $LG_{\Bbb{C}}$ arises in the Wick rotation from Minkowski space to Euclidean space. The physical world sheet is Minkowski space. The analogues of complex coordinates are the light cone coordinates $x^{\pm} = x \pm t$ where $x$ is the spatial coordinate and $t$ is the temporal coordinate. In terms of these coordinates, the symmetry of the WZW model is $g(x,t) \mapsto h_L(x^-) g(x,t) h_R(x^+)$. We immediately see that $h_L$ and $h_R$ are functions of a single real variable. If we take the worldsheet to be a Minkowski cylinder then we will have a legitimate $LG\times LG$ symmetry. When we Wick rotate to Euclidean space $x^- \to z$ and $x^+ \to \bar{z}$, so $h_L(x^-)$ and $h_R(x^+)$ become functions of $z$ and $\bar{z}$ respectively; thus one has to complexify and look at analytic and anti-analytic maps into $G_{\Bbb{C}}$. The following observation illustrates the nature of $h_L(z)$. Consider the standard mode expansion of the current operators in conformal field theory \begin{equation} J_a(z) = \sum_{n=-\infty}^\infty {J_{a,n} \over z^{n+1}}\;. \end{equation} The hermiticity of the currents in Minkowski space translates into the operator relations $J_{a,n}^\dagger = J_{a,-n}$ in the conformal field theory. The operator \begin{equation} \exp J_X = \exp \oint {dz \over 2\pi i}\; X^a(z) J_a(z) \end{equation} can be a unitary operator on the Hilbert space if $X(z)$ is chosen appropriately. If in the mode expansion $X^a(z) = \sum_{-\infty}^\infty X^a_n/z^n$ we require $X^a_{-n} = - \overline{X^a_n}$ then $J_X$ is antihermitian and one formally gets a unitary operator $\exp J_X$ on the Hilbert space. It is in this sense that one has a map into $LG$, more precisely, a unitary representation of the centrally extended loop group. Such an $X(z)$, which in general is not an analytic function, may formally be considered as a map of the annulus into ${\euf g}$. Note that on $|z|=1$, $X^a(z)$ is pure imaginary and thus defines via exponentiation a map into $LG$. Often in physics one concentrates collectively on the basis $\{J_{a,n}\}$ and thus the distinction between working on $L{\euf g}$ or $L{\euf g}_{\Bbb C}$ is blurred. For our purposes we will need a different interpretation of the Wess-Zumino term in (\ref{WZW}). Notice that this term is first order in the time derivatives and thus is of the $A_\mu(x)\dot{x}^\mu$ form previously mentioned and also described in detail in I. Equivalently, the centrally extended loop group $\widetilde{LG}$ may be interpreted as a $U(1)$ bundle over $LG$, see for example~\cite{pressley-segal.loopgroups}. The WZW action describes the motion of a superparticle on $LG$ in the presence of a $U(1)$ gauge potential; therefore the quantum mechanical wavefunction for this system is a section of a line bundle over $LG$ with first Chern class $k$. In summary, we found a supersymmetric action $I_{\rm WZW}$ for a superparticle moving on $LG$ which admits $LG\times LG$ as a symmetry group --- this is still too large a group since the Hilbert space would decompose into representations of $LG \times LG$ \cite{gepner-witten.wzw}; we need only one $LG$ symmetry factor.
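Before proceeding, we record the small computation behind the unitarity claim made above. Performing the contour integral in modes gives \begin{equation} J_X = \oint {dz\over 2\pi i}\; X^a(z) J_a(z) = \sum_{n=-\infty}^{\infty} X^a_n\, J_{a,-n}\;, \end{equation} so that, using the hermiticity relation $J_{a,n}^\dagger = J_{a,-n}$, \begin{equation} J_X^\dagger = \sum_{n} \overline{X^a_n}\; J_{a,n} = -\sum_{n} X^a_{-n}\, J_{a,n} = -J_X \end{equation} precisely when $X^a_{-n} = -\overline{X^a_n}$; the formal unitarity of $\exp J_X$ follows.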
Right now we are at the same developmental stage as equation~(4.15) of I where we had a Lagrangian for a superparticle moving on $G$ with symmetry group $G\times G$. Now we can exploit the full machinery developed in that paper: in particular, we can use the horizontal supersymmetry construction to build a supersymmetric $\sigma$-model for a particle moving on $LG/T$. Our construction guarantees that the model remains supersymmetric and only admits a left action by the loop group $LG$ as a symmetry; in projecting down from $LG$ to $LG/T$ we lose the right action of $LG$ as a symmetry. If we write ${\cal L}_{(\lambda,k)}$ as ${\cal L}_{(0,k)} \otimes {\cal L}_{(\lambda,0)}$ then we are still missing the implementation of the line bundle ${\cal L}_{(\lambda,0)}$ over $LG/T$, a problem we address in Section~\ref{lagrangian}. The supersymmetric $LG/T$ model we schematically described above admits the following maximal set of commuting operators: \begin{itemize} \item $P_0$: the hamiltonian which generates time translations; \item $P_1$: the momentum which generates spatial translations; \item $(-1)^F$: the fermion parity operator; \item $\{H_i\}$: a basis for the Cartan subalgebra corresponding to the left $T$ action on $LG/T$. \end{itemize} It is important to notice that the holomorphic sector (right moving) and antiholomorphic sector (left moving) of the $\sigma$-model are not identical. Our $\sigma$-model has a $(0,1)$-supersymmetry which acts only on the left moving sector. Also, the Virasoro central charges $c$ and $\bar{c}$ of the two sectors do not coincide. It is convenient to introduce the operators $L_0$ and $\bar{L}_0$ defined by: \begin{eqnarray} P_0 &=& (L_0-{c\over 24})+(\bar{L}_0-{\bar{c}\over 24})\;,\\ P_1 &=& (L_0-{c\over 24})-(\bar{L}_0-{\bar{c}\over 24})\;. \end{eqnarray} The supersymmetry generator $\bar{G}_0$ is related to $\bar{L}_0$ by \begin{equation} \label{G} \bar{G}^2_0=\bar{L}_0-{\bar{c}\over 24}\;. \end{equation} Of fundamental importance in our work is the quantum mechanical partition function \begin{eqnarray} Z(\theta, \tau_1, \tau_2) &=& \mathop{\rm Tr} (-1)^F e^{i\theta} e^{2\pi i \tau_1 P_1} e^{-2\pi \tau_2 P_0} \nonumber\\ \label{partition} &=& \mathop{\rm Tr} (-1)^F e^{i\theta} q^{L_0 -c/24} \left(\bar{q}\right)^{\bar{L}_0-\bar{c}/24} \end{eqnarray} where $q= \exp(2\pi i \tau)$, $\theta= \sum_j \theta^j H_j \in {\euf t}$, and $\tau = \tau_1 + i \tau_2$. Using $\{(-1)^F, \bar{G}_0\}=0$ and the usual pairing of states argument (implied by (\ref{G})) --- every state $\Psi$ with $\bar{G}_0\Psi\neq 0$ is degenerate with $\bar{G}_0\Psi$, which has opposite fermion parity, so the pair cancels in the trace --- one concludes that the full trace reduces to a trace only over the kernel of $\bar{G}_0$. This kernel consists of precisely the supersymmetric states of the theory, namely those states $\Psi$ of the Hilbert space which satisfy $\bar{G}_0 \Psi = 0$. The partition function may be written as \begin{equation} \label{susy-part} Z(\theta,\tau_1,\tau_2) =\mathop{\rm Tr}_{\rm{SUSY}} (-1)^F e^{i\theta} q^{L_0-c/24} \;. \end{equation} In the above $\displaystyle \mathop{\rm Tr}_{{\rm SUSY}}$ means the trace only over the kernel of $\bar{G}_0$. Note that $Z(\theta,\tau_1,\tau_2)$ is an analytic function of $\tau$ and that it is the character index of $\bar{G}_0$. The analyticity of the partition function in $\tau$ plays a crucial role in our path integral computations. We will study the path integral in the $\tau_2\to 0$ limit. In this limit, the path integral is dominated by critical points and we show that the quadratic approximation near the critical points leads to an analytic function of $\tau$.
The corrections to the quadratic approximation are a power series in $\sqrt{\tau_2}$ and thus will not be analytic. Supersymmetry tells us that all these terms must vanish. Thus the path integral in the $\tau_2\to 0$ limit may be used to calculate the index. At the risk of repeating ourselves, perhaps a more mathematical synopsis of this paper would be useful. We learn from examining the elliptic genus that there are two ways of computing the $S^1$--index of the Dirac operator $\hbox{$\partial$\kern-1.2ex\raise.2ex\hbox{$/$}}$ on $LM$ (in the weak coupling limit~\cite{taubes.elliptic}). One can use a fixed point formula or one can use path integrals generalizing the supersymmetric quantum mechanics derivation of the index formula for the Dirac operator~\cite{ag.index,friedan-windey.index,witten.asindex}. In \cite{pressley-segal.loopgroups} one finds a heuristic sketch deriving the Weyl-K\v{a}c character formula via the fixed point method, extending the Atiyah-Bott treatment of the Weyl character formula. As a warm-up exercise, we derived the Weyl formula via path integrals in I. Here we ``complete the square'' by using path integrals to obtain the Weyl-K\v{a}c formula. One expects that the extension from $G$ to $LG$ should be routine but there are several obstacles. First, the standard supersymmetric non-linear sigma model Lagrangian (\ref{NLS}) is not invariant under left or right translation by elements of $LG$ (because of the derivative in the $S^1$ direction). Adding a Wess-Zumino term restores $LG$ invariance, and has a geometric interpretation as parallel transport for a line bundle with connection over $LG$. Now the Lagrangian for paths on $LG/T$ is simple: the usual kinetic term for the curve and its fermionic partner (a tangent vector field along the loop), potential energy terms associated with the natural vector field on $LG$, plus the Wess-Zumino term we have just described. Although the Lagrangian is conceptually simple, it is not amenable to computation. We need to lift curves in $LG/T$ to curves in $LG$ which are, of course, maps of a cylinder (or torus) into $G$. An essential step is the lifting of supercurves on $LG/T$ to superhorizontal curves on $LG$. For simplicity we discuss the nonsupersymmetric case (the reader can verify by using concepts developed in Section~\ref{lagrangian} that all the arguments we shall give go through in the supersymmetric case). We can then express the original lagrangian in terms of a lagrangian on the lifts. This is done locally by splitting $LG$ into $\widetilde{g}(U) \times T$ using a section $\widetilde{g}: U \subset LG/T \to LG$. One is finally in a position to compute the path integral by the steepest descent approximation at the fixed points of the action of $T$ on $LG/T$. We now present the geometric background in a little more detail (see \cite[Chapter 4]{pressley-segal.loopgroups}). The space $LG$ has a natural bi-invariant inner product which at the identity element is the inner product on the Lie algebra of $LG$: \begin{equation} \IP{X}{Y} = \int_0^1 dx \; K( X(x), Y(x))\;. \end{equation} Hence $LG/T$ has an inherited inner product, and $LG$ is a principal bundle with group $T$ and has a natural connection $\omega$ --- the orthogonal complement of $T$-orbits.
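Explicitly, identifying a tangent vector to $LG$ at the identity with an element $X$ of $L{\euf g}$, this connection is just pointwise orthogonal projection onto ${\euf t}$ integrated over the loop, \begin{equation} \omega(X) = \int_{S^1} dx\; X(x)_{{\euf t}}\;, \end{equation} a formula which will reappear in local coordinates in Section~\ref{lagrangian}.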
The evaluation map $e: S^1 \times LG \to G$ gives a closed left invariant 2--form $\Lambda$ on $LG$, given by the formula \begin{eqnarray} \Lambda &=& -2\pi i \int_{S^1} e^* \sigma \;,\\ \label{Lambda-def} &=& {1\over 2\pi i K(h_\psi,h_\psi)} \int_0^1 dx\; \mathop{\rm Tr}\left( g^{-1} {dg\over dx}\; g^{-1}\delta g \wedge g^{-1} \delta g \right)\;, \end{eqnarray} where $\sigma$ is the basic integral 3--form on $G$ generating $H^3(G,{\Bbb Z})$. Now $i\Lambda/2\pi$ is in $H^2(LG,{\Bbb Z})$ and so defines a line bundle ${\cal L}_\Lambda$ over $LG$ with connection whose curvature is $\Lambda$. But $\Lambda$ is not the pullback of a 2--form on $LG/T$. It appears as if the standard Wess-Zumino term on $LG$ cannot be used to describe motion on $LG/T$ since it does not descend. We will see that this is not so. We could instead have used the 2--form $\Omega$ on $LG$ with \begin{equation} \Omega(X,Y) = {i \over \pi K(h_\psi,h_\psi)}\; \IP{{dX \over dx}}{Y}\;. \end{equation} For conceptual use\footnote{Stone~\cite{stone.lg} has studied the WZW action using the form $\Omega$ from a geometric quantization viewpoint. Alekseev and Shatashvili~\cite{alek-shat.geom} have also discussed loop groups and their representations from the point of view of geometric quantization.} $\Omega$ is much better than $\Lambda$ because it is left invariant under $LG$. It is easy to see that $i\Omega/2\pi$ is the pullback of a closed 2--form $i\TT{\Omega}/2\pi$ on $LG/T$ which is integral so that $\cal L$ is the pullback of a line bundle $\TT{\cal L}$ with connection $\TT{B}$. We want to stay as close to the WZW model as possible for practical reasons, \hbox{\it i.e.}\ we would like to use $\Lambda$. But $\Lambda = \Omega + d\mu$ where $\mu$ is the 1--form on $LG$: \begin{equation} \mu(X) = {1\over 2\pi i K(h_\psi,h_\psi)} \IP{g^{-1}\,{dg\over dx}}{X} \end{equation} at $g(x)$. Although $\mu$ does not come from $LG/T$, it is right invariant under $T$. Split $\mu$ into $\mu_v + \mu_h$, its vertical and horizontal pieces, so that \begin{equation}\mu_v(W) = {1\over 2\pi i K(h_\psi,h_\psi)} \IP{g^{-1}\,{dg\over dx}}{\omega(W)}\;. \end{equation} Now $\mu_h$ is the pullback of $\TT{\mu}_h$ and we can modify the connection $\TT{B}$ to $\TT{B} + \TT{\mu}_h$, with curvature $\TT{\Omega} + d\TT{\mu}_h$. We use this connection in a Wess-Zumino term; when we lift to horizontal curves, we get the same Wess-Zumino term as using $\Lambda$ and its connection $A$. That is, $\Lambda =dA$, $\Omega = dB$ and $\Lambda -\Omega = d(A-B) = d\mu= d\mu_v + d\mu_h$. Hence $A-(B + \mu_h) = \mu_v + df$ for some function $f$ since $LG$ is simply connected. The function $f$ may be absorbed into the choice of $A$ by letting $A \to A -df$. This does not change the curvature $\Lambda$. On horizontal lifts $\mu_v$ is zero so \begin{equation} \label{A-con} \int_C A = \int_C (B+\mu_h) \;, \end{equation} where $C$ is the horizontal lift of the path $\gamma$ on $LG/T$ up to the bundle. Formula (\ref{A-con}) is very important from the practical viewpoint because it means that we can use the Wess-Zumino term $\int \Lambda$ on $LG$ in our calculations. The path integral for the motion of a particle on $LG/T$ which we have to evaluate to get the Weyl-K\v{a}c character formula is a supersymmetric variant of the following: \begin{equation} \label{schematic} \int_{{\cal P}(\ell)} \rho(u,\ell) \exp\left\{ \vphantom{\biggl(} -I_K[\gamma(u,\ell)] -I_V[\gamma(u,\ell)] - k I_P[\gamma(u,\ell)] -I_T[\gamma(u,\ell)]\right\}\;.
\end{equation} ${\cal P}(\ell)$ is the set of all paths $\gamma(u,\ell)$ with initial point $u\in LG/T$ and endpoint $\ell\cdot u$ being the translate of $u$ by the induced action of $\ell\in T$ on $LG/T$. The kinetic energy contribution to the action $I_K[\gamma(u,\ell)]$ is simply the square of the velocity integrated along the curve. The potential energy term $I_V[\gamma(u,\ell)]$ is the square of the natural $S^1$ vector field on $LG/T$ (induced from the natural $S^1$ action on $LG$) integrated along the curve. The parallel transport term, $k I_P[\gamma(u,\ell)]$, is parallel transport on the $k$-th power of ${\cal L}$ via the connection $k(\TT{B} + \TT{\mu}_h)$. Finally we need to select a $T$-character, and for this we use the induced natural $T$-connection $\omega$ on an associated homogeneous line bundle with infinitesimal $T$-character $\lambda$. $I_T[\gamma(u,\ell)]$ is parallel transport on this line bundle. Thus we see that we have a quantum mechanical system whose wave function is a section of a homogeneous line bundle (with connection) over $LG/T$ which we shall denote by ${\cal L}_{(\lambda,k)}$. Now $\ell\in T$ acts on this line bundle and maps the fiber over $u$ into the fiber over $\ell\cdot u$ via a map $\rho(u,\ell)$. Putting all this together we see that (\ref{schematic}) is gauge invariant. It is possible to write down the full supersymmetric action on $LG/T$, but it is cumbersome to do so. It is expressed most easily on $LG$. \section{The Lagrangian} \label{lagrangian} Let us summarize briefly what we have done so far. At the one loop level, the Wess-Zumino-Witten model can be seen either as a modified $\sigma$--model on a torus with target space $G$ or as the quantum mechanics of a particle moving on $LG$, the loop group of $G$. In the former approach one knows that the Wess-Zumino term renders the theory conformally invariant and that there exists an infinite number of conservation laws corresponding to the generators of an affine Lie algebra at level $k$ and the associated Virasoro algebra. In the latter approach, which better corresponds to the geometrical intuition we have tried to convey, the Wess-Zumino term corresponds exactly to a coupling of the particle to a $U(1)$ gauge field. This coupling, linear in the time derivative, is of the form $A_\mu\dot{x}^\mu$ and the gauge field comes from the $U(1)$ central extension $\widetilde{LG}$ of the loop group. It was explained previously why we have to build the operator $\bar{\partial} \otimes I_{{\cal L}_{(\lambda,k)}}$ and how it corresponds to the generator of a chiral $(0,1)$ supersymmetry. We then built the supersymmetric extension of this model but we still need its projection to the coset space $\widetilde{LG}/(T\times U(1)) = LG/T$. The trick to constructing a supersymmetric lagrangian on $LG/T$ is to exploit the discussion of the previous section on the supersymmetric WZW model. Let us temporarily forget about supersymmetry and review how one would construct a bosonic lagrangian on $LG/T$ given the lagrangian (\ref{NLS}) on $LG$ as a starting point. For pedagogical reasons we begin by discussing the example of I. The lagrangian for a particle moving on $G$ is $(g^{-1}(y) \partial_y g(y))^2$, where $g(y)$ is the curve on $G$. How does one construct the lagrangian for the motion of the particle on $G/T$? One notices that $G$ is a principal $T$-bundle over $G/T$ with a bi-invariant metric and a natural $T$-connection $(g^{-1} dg)_{{\euf t}}$.
Thus $G/T$ has a natural metric $\langle\cdot,\cdot\rangle$ induced by the horizontal spaces of the $T$-connection. A curve $u(y)$ on $G/T$ has a unique horizontal lift to $G$ (after specifying the starting point) which we will call $g_{h}(y)$. From the geometry it is clear that the natural lagrangian on $G/T$: $\langle \partial_y u(y), \partial_y u(y) \rangle$ is the same as \begin{equation} \label{G-lag} \mathop{\rm Tr}\left(g_{h}^{-1}(y) \partial_yg_{h}(y)\right)^2\;. \end{equation} We remark that the right invariance of (\ref{G-lag}) under the action of $T$ shows that (\ref{G-lag}) is independent of the starting point for the lift. This invariant description suffers at the practical level. Namely, $g_{h}(y)$ is a complicated solution to a differential equation and thus is not very useful in a path integral computation. The solution to our dilemma is to give a local reformulation of the invariant description by exploiting the principal $T$-bundle structure in such a way that everything will patch smoothly. Let $\widetilde{g}:G/T \to G$ be a local section. We can lift the curve $u(y)$ on $G/T$ to $G$ as $\widetilde{g}(u(y))$. We know that $g_{h}(y)$ and $\widetilde{g}(u(y))$ are related by an element of $T$: $g_{h}(y) = \widetilde{g}(u(y)) t^{-1}(y)$. By using the $T$-connection we see that locally \begin{equation} \mathop{\rm Tr}\left(g_{h}^{-1}(y) \partial_yg_{h}(y)\right)^2 = \mathop{\rm Tr}\left(\widetilde{g}^{-1}(u(y)) \partial_y \widetilde{g}(u(y))\right)^2_{{\euf m}}\;. \end{equation} We leave it as an exercise to the reader to verify that the local description patches together in a natural way. Thus a section can be used to locally describe the Lagrangian in a way which, as we shall see, is amenable to efficient path integral use. For example, a convenient section near the identity of $G$ is obtained by writing $\widetilde{g} = \exp \widetilde{\varphi}_{{\euf m}}$ where $\widetilde{\varphi}_{{\euf m}}$ has values in ${\euf m}$, and a convenient section near any other point is the left translate of $\exp \widetilde{\varphi}_{{\euf m}}$. Let us introduce some notation for discussing the $LG/T$ case. An element in $LG$ will be written $g(x)$, $x\in [0,1]$, and an element in $T$ will simply be written $t$. Curves on these spaces will also depend on the time variable $y\in [0,\tau_2]$. From a two dimensional viewpoint we will have fields $g(x,y)$ and $t(y)$ together with their respective supersymmetric partners\footnote{Please note that the modular parameter of the torus is denoted by $\tau$ while the supersymmetric partner of $t$ is denoted by $\widehat{\tau}$.} $\gamma(x,y)$ and $\widehat{\tau}(y)$. The variables $(x,y)$ parametrize the two dimensional torus. We also define for later use the complex variables $z=x+iy$ and $\bar{z}=x-iy$. The generator of supersymmetry is given by ${\bf Q}=\partial_\theta-\theta\partial_{\bar{z}}$, where $\theta$ is a Grassmann variable of weight $(0,-\frac{1}{2})$. We use $\delta$ to denote the differential on the infinite dimensional space of fields. To commence our discussion of the $LG/T$ case we forget about supersymmetry. The nonlinear sigma model (\ref{NLS}) describes the evolution of a curve $g(x,y)$ in $LG$.
This lagrangian has both a kinetic energy term $$\int dx \mathop{\rm Tr}( g^{-1}(x,y) \partial_y g(x,y))^2$$ and a potential energy term $$\int dx \mathop{\rm Tr}( g^{-1}(x,y) \partial_x g(x,y))^2 \; .$$ To construct a natural lagrangian on $LG/T$ induced from (\ref{NLS}) we exploit the fact that $LG$ is a principal $T$-bundle over $LG/T$ with a bi-invariant metric and a natural $T$-connection. The $T$-connection on the bundle is defined as follows. The connection $1$-form $\omega$ maps a tangent vector to $LG$ at $g(x)$ into an element of ${\euf t}$. The tangent vector translated to the identity in $LG$ is an element of the Lie algebra of $LG$, namely $L{\euf g}$, and denoted by $X(x)$. Project for each $x$ onto ${\euf t}$ and integrate over $S^1$: \begin{equation} \omega(X) = \int_{S^1}dx\;X(x)_{{\euf t}}\;. \end{equation} In terms of the left invariant differential forms on $LG$ this may be written as \begin{equation} \label{T-con} \omega = \int_0^1 dx\; (g^{-1}(x)\delta g(x))_{{\euf t}}\;. \end{equation} More geometrically, $\omega$ is orthogonal projection of the tangent space to $LG$ onto the tangent space to the orbit of $T$ relative to the bi-invariant metric on $LG$ which at the identity is $\int_{S^1} K(\cdot\, ,\cdot)$. It follows that $LG/T$ has a natural metric $\langle \cdot, \cdot \rangle$ induced by the horizontal spaces of the connection. A curve $u(y)$ on $LG/T$ has a unique horizontal lift (after specifying the initial point) to $LG$ which we will call $g_{h}(x,y)$. From the geometry it is clear that the natural kinetic energy term on $LG/T$: $\langle \partial_y u(y), \partial_y u(y)\rangle$ may be written as \begin{equation} \int_0^1 dx\; \mathop{\rm Tr}\left( g_{h}^{-1}(x,y) \partial_y g_{h}(x,y) \right)^2\;. \end{equation} Note that the potential energy term on $LG$ descends to a function $V[u]$ on $LG/T$ which is defined by \begin{equation} V[u] = \int_0^1 dx\; \mathop{\rm Tr} \left(g_{h}^{-1}(x,y) \partial_x g_{h}(x,y) \right)^2\;. \end{equation} We find ourselves in much the same situation as discussed in the $G/T$ case. Although we have an invariant formulation it turns out that working with $g_{h}$ is impractical. We give a local description which patches together nicely. Let $\widetilde{g}:LG/T \to LG$ be a local section. We can lift the curve $u(y)$ on $LG/T$ to $LG$ as $\widetilde{g}(u(y))$. We know that $g_{h}(x,y)$ and $\widetilde{g}(u(y))$ are related by an element of $T$: $g_{h}(x,y) = \widetilde{g}(u(y)) t^{-1}(y)$. By using the connection $\omega$ we see that the horizontal condition on $g_{h}(x,y)$ requires $t(y)$ to satisfy the differential equation \begin{equation} \label{hor0} \int_0^1 dx\; \left( \widetilde{g}^{-1}(u(y)) \partial_y \widetilde{g}(u(y))\right)_{{\euf t}} - \partial_y t(y)\; t^{-1}(y) = 0 \;. \end{equation} If we define Fourier modes \begin{equation} \left( \widetilde{g}^{-1}(u(y)) \partial_y \widetilde{g}(u(y))\right)_{{\euf t}} = \sum_{n=-\infty}^\infty {\cal H}_{y,n}(y) e^{2\pi i nx} \end{equation} then one can see that the kinetic energy term may be written as \begin{equation} \int_0^1 dx\; \mathop{\rm Tr} \left( \widetilde{g}^{-1}(x,y) \partial_y \widetilde{g}(x,y) \right)^2_{{\euf m}} + \sum_{n \neq 0} \mathop{\rm Tr} {\cal H}_{y,n}(y) {\cal H}_{y,-n}(y)\;. \end{equation} One can verify that the kinetic energy term above patches nicely. We leave the potential energy term as an exercise to the reader. We now return to the supersymmetric discussion associated with the $LG/T$ case.
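Before doing so, note that since $T$ is abelian the horizontality condition (\ref{hor0}) can be integrated explicitly: only the zero mode ${\cal H}_{y,0}$ defined above survives the $x$-integration, so \begin{equation} t(y) = \exp\left( \int_0^y dy'\; {\cal H}_{y,0}(y') \right) t(0)\;, \end{equation} and the horizontal lift is determined by the choice of the initial point $t(0)$ alone.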
We now return to the supersymmetric discussion associated with the $LG/T$ case. In what follows we will often suppress the coordinate dependence of the fields but it is important to remember that since $t$ and $\widehat{\tau}$ belong to $T$ and its tangent space and not to $LT$, they do not depend on the spatial coordinate $x$. We will now use the natural $T$-connection (\ref{T-con}) on $LG$ to induce supersymmetry on $LG/T$ from a naturally formulated supersymmetry on $LG$. This is precisely analogous to using a connection to define the horizontal tangent spaces on the bundle and relating these to tangent spaces on the base. The importance of our construction is that it allows us to express the supersymmetric model on $LG/T$ in terms of quantities defined on $LG$ that are suitable for path integral use. Firstly we must define supersymmetry on $LG$. Consider a supercurve which may be expressed in superfield notation as ${\bf G}(x,y) = g(x,y) e^{\theta\gamma(x,y)}$ (see~I). The supersymmetric variation of ${\bf G}$ is given by \begin{equation} \delta_s{\bf G}=\varepsilon{\bf Q}{\bf G}\;, \end{equation} where $\varepsilon$ is the anticommuting parameter of the transformation. In terms of components the supersymmetry transformations are given by \begin{eqnarray} \label{susy1} g^{-1}\delta_sg &=& \varepsilon\gamma\;,\\ \label{susy2} \delta_s\gamma &=& \varepsilon\left(g^{-1}\partial_{\bar{z}} g-\gamma \gamma\right)\;. \end{eqnarray} Note that the supersymmetry transformations are equivariant under the right $T$-action on $LG$. How do we lift a supercurve on $LG/T$ to a {\it superhorizontal\/} curve on $LG$? The condition that a curve in $LG$ is the horizontal lift of a curve in $LG/T$ is that the global 1-form $\omega=\int^1_0 dx\left(g^{-1}\delta g\right)_{{\euf t}}$ vanish when evaluated along the curve. We generalize this to the supersymmetric case by noticing that one can interpret (\ref{susy1}) as a tangent vector; thus it is natural to impose the vanishing of $\omega_s=\int^1_0 dx\left(g^{-1}\delta_s g\right)_{{\euf t}}$ as the first superhorizontal condition. Using (\ref{susy1}) we see that this condition is simply \begin{equation} \label{ghoriz1} \int_0^1 dx\; \gamma_{{\euf t}}(x,y) = 0\;. \end{equation} For consistency we must impose that the supersymmetric transform of (\ref{ghoriz1}) also vanish: \begin{equation} \label{ghoriz2} \int_0^1 dx\; \left(g^{-1}(x,y)\partial_{\bar{z}} g(x,y) -\gamma(x,y)\gamma(x,y)\right)_{{\euf t}} = 0\;. \end{equation} If one forgets about the fermions then the above is almost the condition that the lift be horizontal in the ordinary sense\footnote{It would be the standard condition if it was a derivative with respect to $y$, see (\ref{hor0}).}. The additional term is a Pauli-type coupling (see~I). Note that the formulation of superhorizontality has been given in a global way. The equivariance of the superhorizontality conditions tells us that arguments concerning the lagrangian we gave in the bosonic case will go through in the supersymmetric case. For example, the supersymmetric kinetic energy term on $LG/T$ may be formulated on $LG$ by the use of superhorizontal lifts. We now turn to the local parametrization of supersymmetry and superhorizontal lifts. We parametrize a loop $g(x)$ in $LG$ by a local section $\widetilde{g}$ and an element of $T$ as $g(x)=\widetilde{g} t^{-1}$.
Using the decomposition of the Lie algebra of $G$ into ${\euf g}={\euf t}\oplus{\euf m}$ leads to the equations \begin{eqnarray} \left(g^{-1}\delta g\right)_{{\euf t}} &=& \left(\widetilde{g}^{-1} \delta\widetilde{g}\right)_{{\euf t}} -dtt^{-1}\;,\\ \left(g^{-1}\delta g\right)_{{\euf m}} &=& t\left(\widetilde{g}^{-1} \delta\widetilde{g}\right)_{{\euf m}} t^{-1} \;. \end{eqnarray} To find the equivalent relations for $\gamma$ it is best to reintroduce a superfield notation ${\bf G}=ge^{\theta\gamma}$. We have the local parametrization ${\bf G}={\bf\widetilde{G}}{\bf T}^{-1}$ given by the local supersections ${\bf \widetilde{G}} = \widetilde{g} e^{\theta \widetilde{\gamma}}$ and superfiber variables ${\bf T}=te^{\theta\widehat{\tau}}$. This gives \begin{eqnarray} \label{paran1} \gamma_{{\euf t}} &=& (\widetilde{\gamma})_{{\euf t}}-\widehat{\tau} \;, \\ \label{paran2} \gamma_{{\euf m}} &=& (t\widetilde{\gamma}t^{-1})_{{\euf m}}\;. \end{eqnarray} In terms of the section, the $T$-connection in local coordinates may be written as $\omega=A-dtt^{-1}$ where \begin{equation} \label{T-connection} A=\int^1_0 dx\;\left(\widetilde{g}^{-1}(x)\delta \widetilde{g}(x)\right)_{{\euf t}}\;. \end{equation} This is the connection we will use to get local formulas. The supersymmetry transformations of the fiber $T$ are given by (defining $t = \exp f$) \begin{eqnarray} \label{susy3} \delta_stt^{-1} &=& \delta_s f =\varepsilon\widehat{\tau}\;,\\ \label{susy4} \delta_s\widehat{\tau} &=& \varepsilon\partial_{\bar{z}} f \;. \end{eqnarray} We now have enough information to formulate the supersymmetry transformations of the local sections: \begin{eqnarray} \left(\widetilde{g}^{-1}\delta_s\widetilde{g}\right)_{{\euf m}}(x,y)&=& \varepsilon\widetilde{\gamma}_{{\euf m}}(x,y)\;,\\ \left(\widetilde{g}^{-1}\delta_s\widetilde{g}\right)_{{\euf t}}(x,y)&=& \delta_s f (y)+\varepsilon\left(\widetilde{\gamma}_{{\euf t}}(x,y) -\widehat{\tau}(y)\right)\nonumber\\ &=& \varepsilon\widetilde{\gamma}_{{\euf t}}(x,y)\;. \end{eqnarray} In the last equation we have used (\ref{susy3}). Similar algebraic manipulations give the supersymmetric transformation of the fermionic partner of $\widetilde{g}$. Expressing $\gamma$ in terms of the section we have \begin{eqnarray} \delta_s\gamma &=& \delta_s\left(t\widetilde{\gamma}t^{-1}-\widehat{\tau} \right)\nonumber\\ &=& \delta_s t\widetilde{\gamma}t^{-1}+t\delta_s\widetilde{\gamma}t^{-1} -t\widetilde{\gamma}t^{-1}\delta_stt^{-1}-\delta_s\widehat{\tau}\nonumber\\ &=&\varepsilon\widehat{\tau} t\widetilde{\gamma}t^{-1} +t\delta_s\widetilde{\gamma} t^{-1}+\varepsilon t\widetilde{\gamma}t^{-1}\widehat{\tau}-\varepsilon \partial_{\bar{z}}tt^{-1}\;. \end{eqnarray} The same variation can be written by the use of (\ref{susy2}) which in terms of the section reads \begin{equation} \delta_s\gamma=\varepsilon t\left[\widetilde{g}^{-1}\partial_{\bar{z}} \widetilde{g}-t^{-1}\partial_{\bar{z}}t-\widetilde{\gamma}\widetilde {\gamma}+\widetilde{\gamma}\widehat{\tau}+\widehat{\tau}\widetilde{\gamma}\right]t^{-1} \;.
\end{equation} Comparing the two expressions we find \begin{equation} \delta_s\widetilde{\gamma}=\varepsilon\left(\widetilde{g}^{-1}\partial _{\bar{z}}\widetilde{g}-\widetilde{\gamma}\widetilde{\gamma}\right) \end{equation} which can be decomposed as \begin{eqnarray} \delta_s\widetilde{\gamma}_{{\euf m}} &=& \varepsilon\left((\widetilde{g}^{-1} \partial_{\bar{z}}\widetilde{g})_{{\euf m}}-(\widetilde{\gamma}\widetilde {\gamma})_{{\euf m}}\right)\;,\nonumber\\ \delta_s\widetilde{\gamma}_{{\euf t}} &=& \varepsilon\left((\widetilde{g}^{-1} \partial_{\bar{z}}\widetilde{g})_{{\euf t}}-(\widetilde{\gamma}_{{\euf m}}\widetilde {\gamma}_{{\euf m}})_{{\euf t}}\right)\;. \end{eqnarray} We are now ready to express the superhorizontality conditions in terms of the local section. We denote the superhorizontal lifts of the supercurve on $LG/T$ by $g_{h}$ and $\gamma_{h}$. {}From (\ref{paran1}) we find the first condition \begin{equation} \label{horiz1} 0=\int^1_0 dx\left(\widetilde{\gamma}_{{\euf t}}(x,y)-\widehat{\tau}(y)\right)\;. \end{equation} Applying a supersymmetry transformation to this equation we find the second horizontality condition \begin{eqnarray} 0 &=& \int^1_0 dx\left(\delta_s\widetilde{\gamma}_{{\euf t}}(x,y)-\delta_s\widehat{\tau} (y)\right) \nonumber \\ \label{horiz2} &=& \int^1_0 dx\left[(\widetilde{g}^{-1}\partial_{\bar{z}}\widetilde {g})_{{\euf t}}-(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}})_{{\euf t}}\right] -\partial_{\bar{z}}tt^{-1}\;. \end{eqnarray} Using the mode expansion \begin{equation} \label{modes} \widetilde{\gamma}_{{\euf t}}(x,y) =\sum^\infty_{n=-\infty}\widetilde{\gamma}_{{\euf t},n}(y) e^{2\pi inx}\;, \end{equation} the first condition (\ref{horiz1}) gives \begin{equation} \label{horiz1.mode} \widetilde{\gamma}_{{\euf t},0}-\widehat{\tau}=0 \;. \end{equation} {}From the expressions (\ref{paran1}) and (\ref{paran2}) we find $\gamma_{h}=t\widetilde{\gamma}_{{\euf m}} t^{-1}+\widetilde{\gamma}_{{\euf t}}-\widehat{\tau}$. Equivalently, using (\ref{modes}) and (\ref{horiz1.mode}), the final form of the superhorizontal lift is \begin{equation} \gamma_{h}(x,y)=t(y)\widetilde{\gamma}_{{\euf m}}(x,y)t^{-1}(y)+\sum_{n\neq 0} \widetilde{\gamma}_{{\euf t},n}(y)e^{2\pi inx}\; . \end{equation} Note the important fact that the absence of zero modes implies that all dependence on $\widehat{\tau}$ has disappeared. Similar algebraic manipulations and the mode expansion \begin{equation} \label{g-modes} \left(\widetilde{g}^{-1}\partial_a\widetilde{g}\right)_{{\euf t}}= \sum^\infty_{n=-\infty}{\cal H}_{a,n}(y)e^{2\pi inx} \qquad\hbox{for $a=z,\bar{z}$} \end{equation} give \begin{eqnarray} n\neq 0:\qquad \left(g_{h}^{-1}\partial_a g_{h}\right)_{{\euf t},n}&=&{\cal H}_{a,n}\;,\\ n=0:\qquad\left(g_{h}^{-1}\partial_a g_{h}\right)_{{\euf t},0} &=&{\cal H}_{a,0}-\partial_a f \;. \end{eqnarray} {}From (\ref{T-connection}) we see that the $T$-connection in local coordinates is given by \begin{equation} A(y) = \int_0^1 dx\; {\cal H}_y(x,y) = {\cal H}_{y,0}(y)\;. \end{equation} Note that ${\cal H}_{x,0}$ is gauge invariant with respect to $T$~gauge transformations.
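To illustrate this last remark (a numerical sketch of ours with a hypothetical sample loop; it plays no role in the derivation), one can check directly for $G=SU(2)$ that ${\cal H}_{x,0}$ is unchanged when the section is right-translated by a constant element of the diagonal torus:
\begin{verbatim}
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
axis = np.array([0.6, 0.0, 0.8])               # unit vector
A = sum(c * s for c, s in zip(axis, sig))      # n.sigma, squares to 1

def g(x):                                      # a closed loop in SU(2), period 1
    a = 2 * np.pi * x
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * A

def H_x0(loop, N=4000):
    # H_{x,0} = int_0^1 dx (g^{-1} d_x g)_t, with (.)_t = diagonal part
    xs, dx = np.arange(N) / N, 1.0 / N
    tot = np.zeros((2, 2), dtype=complex)
    for x in xs:
        M = np.linalg.inv(loop(x)) @ (loop(x + dx) - loop(x - dx)) / (2 * dx)
        tot += np.diag(np.diag(M)) * dx
    return tot

t_const = np.diag(np.exp([0.7j, -0.7j]))       # a constant T gauge twist
H1 = H_x0(g)
H2 = H_x0(lambda x: g(x) @ t_const)            # right-translated section
assert np.allclose(H1, H2)                     # T-gauge invariance of H_{x,0}
print(np.round(np.diag(H1), 6))
\end{verbatim}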
It is now a matter of algebra to project the kinetic part of the SUSY-WZW lagrangian on $LG$ to $LG/T$: \begin{eqnarray} \int d^2z \mathop{\rm Tr}\left(g_{h}^{-1}\partial_zg_{h} g_{h}^{-1}\partial_{\bar{z}}g_{h}\right) &+&\int d^2z\mathop{\rm Tr}\gamma_{h}\partial_z\gamma_{h}\nonumber\\ &=&\int d^2z\mathop{\rm Tr}\left(\widetilde{g}^{-1}\partial_z\widetilde{g}\right)_{{\euf m}} \left(\widetilde{g}^{-1}\partial_{\bar{z}}\widetilde{g}\right)_{{\euf m}} \nonumber\\ &+&\sum_{n\neq 0}\int dy\mathop{\rm Tr}{\cal H}_{z,n}{\cal H}_{\bar{z},-n}\nonumber\\ &+&\int dy\mathop{\rm Tr}({\cal H}_{z,0}-\partial_z f )({\cal H}_{\bar{z},0}-\partial_ {\bar{z}} f )\nonumber\\ &+&\int d^2z\;\mathop{\rm Tr}\widetilde{\gamma}_{{\euf m}} \left( \partial_z\widetilde{\gamma}_{{\euf m}} + [\partial_z f ,\widetilde{\gamma}_{{\euf m}}]\right) \nonumber\\ &+&\int d^2z\;\mathop{\rm Tr}\xi_{{\euf t}}\partial_z\xi_{{\euf t}} \;. \end{eqnarray} In the above, $\xi_{{\euf t}}$ is defined by \begin{equation} \xi_{{\euf t}}=\sum_{n\neq 0}\widetilde{\gamma}_{{\euf t},n}(y)e^{2\pi inx}\;. \end{equation} The Wess-Zumino term on $LG$ restricted to a superhorizontal lift becomes \begin{equation} \int \mathop{\rm Tr}\left(g_{h}^{-1}dg_{h}\right)^3 = \int \mathop{\rm Tr} \left(\widetilde{g}^{-1} d\widetilde{g}\right)^3 -6i\int dy\mathop{\rm Tr} \left({\cal H}_{z,0}\partial_{\bar{z}} f -{\cal H}_{\bar{z},0}\partial_z f \right)\;. \end{equation} For completeness we list below the expression of the superhorizontal lift in terms of the local section for each term in the WZW action. The bosonic kinetic energy term is given by \begin{eqnarray} \int d^2z\; \mathop{\rm Tr} \left(g_{h}^{-1}\partial_zg_{h} g_{h}^{-1}\partial_{\bar{z}}g_{h}\right) &=&\int d^2z\mathop{\rm Tr}\left(\widetilde{g}^{-1}\partial_z\widetilde{g}\right)_{{\euf m}} \left(\widetilde{g}^{-1}\partial_{\bar{z}}\widetilde{g}\right)_{{\euf m}} \nonumber\\ &+& \sum_{n\neq 0} \int dy\; \mathop{\rm Tr} {\cal H}_{z,n} {\cal H}_{\bar{z},-n} \nonumber\\ &+& \int dy\; \mathop{\rm Tr} {\cal H}_{x,0} \left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0} \nonumber\\ &-& \int dy\; \mathop{\rm Tr} \left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0}^2 \;. \end{eqnarray} The fermionic kinetic energy term is \begin{eqnarray} \int d^2 z\; \mathop{\rm Tr} \gamma_{h} \partial_z \gamma_{h} &=& \int d^2 z\; \mathop{\rm Tr} \xi_{{\euf t}} \partial_z \xi_{{\euf t}} + {1\over 2} \int d^2 z\; \mathop{\rm Tr} \widetilde{\gamma}_{{\euf m}} \partial_x \widetilde{\gamma}_{{\euf m}} \nonumber\\ &-& {i\over 2} \int d^2 z\; \mathop{\rm Tr} \widetilde{\gamma}_{{\euf m}} \left( \partial_y \widetilde{\gamma}_{{\euf m}} + \left[ A, \widetilde{\gamma}_{{\euf m}}\right] \right) \nonumber\\ &+& \int dy\; \mathop{\rm Tr} {\cal H}_{x,0} \left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0} -2 \int dy\;\mathop{\rm Tr} \left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0}^2\;. \end{eqnarray} The WZ term is given by \begin{eqnarray} \int \mathop{\rm Tr}(g_{h}^{-1}dg_{h})^3 &=& \left\{ \int \mathop{\rm Tr}(\widetilde{g}^{-1} d\widetilde{g})^3 + 3 \int dy \mathop{\rm Tr} A {\cal H}_{x,0} \right\} \nonumber\\ &-& 3i \int dy\; \mathop{\rm Tr} {\cal H}_{x,0}^2 + 6i \int dy\; \mathop{\rm Tr} {\cal H}_{x,0}\left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0}\;. \end{eqnarray} One can verify that the quantity in curly braces is $T$-gauge invariant.
Collating all the terms together we find the following action for the supersymmetric model on $LG/T$: \begin{eqnarray} {i \pi K(h_\psi,h_\psi) \over 2k}\; I_{LG/T} &=&- \frac{i}{2}\int d^2z\mathop{\rm Tr}(\widetilde{g}^{-1}\partial_z \widetilde{g})_{{\euf m}}(\widetilde{g}^{-1}\partial_{\bar{z}}\widetilde{g})_{{\euf m}} \nonumber \\ &+&\frac{i}{2}\int d^2z \mathop{\rm Tr}\widetilde{\gamma}_{{\euf m}}\left( \partial_z\widetilde{\gamma}_{{\euf m}} + \left[ {\cal H}_{z,0}, \widetilde{\gamma}_{{\euf m}} \right] \right) \nonumber\\ &+& \frac{1}{12}\left\{ \int\mathop{\rm Tr}(\widetilde{g}^{-1}d\widetilde{g})^3 + 3 \int dy\; \mathop{\rm Tr} A {\cal H}_{x,0} \right\} \nonumber\\ &-& \frac{i}{2}\sum_{n\neq 0}\int dy\mathop{\rm Tr}{\cal H}_{z,n}{\cal H}_{\bar{z},-n} +\frac{i}{2}\int d^2 z\;\mathop{\rm Tr} \xi_{{\euf t}}\partial_z\xi_{{\euf t}} \nonumber\\ &-& \frac{i}{4}\int dy\mathop{\rm Tr} {\cal H}_{x,0}{\cal H}_{x,0} \nonumber\\ &+& i \int dy\mathop{\rm Tr} {\cal H}_{x,0} \left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0} \nonumber\\ \label{lgt-action} &-&\frac{i}{2}\int dy\mathop{\rm Tr}\left(\widetilde{\gamma}_{{\euf m}} \widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0} \left(\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}}\right)_{{\euf t},0} \;. \end{eqnarray} It is not clear to us which is the best way of writing the above. The reason is that the $LG/T$ lagrangian is not Lorentz invariant and therefore there is no obvious way to group the terms. We decided on the above grouping because it makes certain features clear. The first two lines are the kinetic energy terms for bosons and fermions on $L(G/T)$. The third line is the Wess-Zumino term on $LG/T$. The fourth line is the kinetic energy terms for bosons and fermions on $LT/T$. The last three lines are respectively $T$~gauge invariant potential energy, Yukawa and curvature terms which are required by supersymmetry. It is important to observe that the lagrangian we just derived solves one of our main problems which was to find a good way of handling curves on $LG/T$. The main tool used in this respect was the implementation of horizontal supersymmetry which allows us to lift supercurves on $LG/T$ to superhorizontal curves on $LG$, a space which is more amenable to field theoretic methods. Our next task is to describe the appropriate modifications of this basic lagrangian which will give the coupling to the different irreducible representations of $LG$. In what follows we will often refer to it as the matter coupling, adopting the traditional field theoretic language. Firstly we construct the line bundles over $LG/T$ and study the $U(1)$ and $T$ action on them. We have explained above why these bundles play such a crucial role in the construction of the irreducible representations of $LG$. A line bundle ${\cal L}_{(\nu,0)}$ over $LG/T$ is determined by an appropriate one dimensional representation of the group $T$ with infinitesimal character $\nu$. A section of ${\cal L}_{(\nu,0)}$ is the same as a function on the entire principal bundle $\pi:LG\rightarrow LG/T$ satisfying \begin{equation} F(gt^{-1})=\rho(t)F(g)\ , \end{equation} where $\rho(t)$ is the irreducible representation of $T$ with infinitesimal character $\nu$. We will often indicate explicitly the dependence on the point $u$ in the base $LG/T$; for example $\widetilde{g}(u)$ will stand for a given local section of the bundle $LG$. It is important to keep in mind the difference between the right and the left action of $T$. 
The right action lets us move up and down the fiber and tells us that the function $F$ transforms under the representation given by $\rho$. A local section $\widetilde{F}$ of ${\cal L}_{(\nu,0)}$ must then be parametrized by the coordinates $u$ of $LG/T$ and is defined by \begin{equation} \widetilde{F}(u)=F(\widetilde{g}(u))\,. \end{equation} This determines the left action of $T$ on $\widetilde{F}$: \begin{equation} L_\ell\widetilde{F}(u)=F(\ell\widetilde{g}(u))\, , \end{equation} where $\ell\in T$. Note that $\ell\widetilde{g}(u)$ is a new element of $LG$ in the fiber above the point $\ell\cdot u=\pi(\ell\widetilde{g}(u))$ in the base. We can use right multiplication to relate $\ell\widetilde{g}(u)$ to the section by introducing $t(u,\ell)\in T$ as follows: $\ell\widetilde{g}(u) = \widetilde{g}(\ell\cdot u)\;t^{-1}(u,\ell)$. We can now rewrite the left $T$-action on a section $\widetilde{F}$ as follows: \begin{eqnarray} L_\ell\widetilde{F}(u) &=& F\left(\widetilde{g}(\ell\cdot u)t^{-1}(u,\ell)\right)\nonumber\\ &=& \rho(t(u,\ell))F\left(\widetilde{g}(\ell\cdot u)\right)\nonumber\\ \label{left2} &=& \rho(t(u,\ell))\widetilde{F}(\ell\cdot u)\;. \end{eqnarray} We see that the transformation law (\ref{left2}) for the sections has both an orbital part and a ``spin'' part. Because of the presence of the spin part there are some phases which will have to be accounted for in the path integral computation. We now have all the necessary ingredients to construct the matter coupling at the lagrangian level. This will be done by using a minimal coupling scheme, {\it i.e.}, by writing a term of the form $q\int dt\, A_\mu\dot{x}^\mu$. Notice that such an abelian coupling term in the lagrangian keeps track of the change in angle along the path between the initial and final point multiplied by the appropriate ``charge''. This fixes its normalization uniquely. In the case at hand we just have to compute the $T$-connection which is determined by $(\partial_{\bar{z}}t)t^{-1}$. Notice that $t=t(y)$ depends only on the ``time'' variable $y$. We write $t = \exp f$, and with the help of (\ref{g-modes}), rewrite the usual $T$-connection in Eq. (\ref{T-connection}) as $A(y)=\int_0^1 dx {\cal H}_y(x,y)$. Notice that $A$ does not depend on $x$ since this is a $T$-connection and not an $LT$-connection. Using this and the mode expansion introduced in (\ref{modes}), we rewrite the supersymmetric horizontality condition (\ref{horiz2}): \begin{equation} \partial_{y}f=A(y)-i\left[{\cal H}_{x,0} +2i \int_0^1 dx\, (\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}})_{{\euf t}}\right] \, . \end{equation} Notice that the last two terms are specifically supersymmetric contributions. This equation gives us the required matter action: \begin{equation} I_T=2i\int d^2z \; \nu\left({\cal H}_{\bar{z}}- (\widetilde{\gamma}_{{\euf m}}\widetilde{\gamma}_{{\euf m}})_{{\euf t}}\right) \;. \end{equation} This last equation exhibits the $(0,1)$ nature of the coupling as required by our chiral supersymmetry. This action gives the desired modification of the generator of supersymmetry. In other words it produces the appropriate twisting of the Dirac operator by the holomorphic $T$-bundle associated with the infinitesimal $T$-character $\nu$ as required by the Borel-Weil construction.
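A finite-dimensional toy version of this structure may help orient the reader (our illustration; the function $F$ below is a hypothetical choice and all loop-group subtleties are absent): for $G=SU(2)$ and $T$ its diagonal torus, $F(g)=(g_{11})^{\nu}$ satisfies the equivariance condition $F(gt^{-1})=\rho(t)F(g)$, and left translation by $\ell\in T$ at the identity coset, which is a fixed point, acts by a pure ``spin'' phase:
\begin{verbatim}
import numpy as np

nu = 3                                        # infinitesimal T-character
def t_elem(phi):                              # t = diag(e^{i phi}, e^{-i phi})
    return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])

def rho(phi):                                 # rho(t(phi)) for the F below
    return np.exp(-1j * nu * phi)

def F(g):                                     # a sample equivariant function
    return g[0, 0] ** nu

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
q, _ = np.linalg.qr(B)
g = q / np.sqrt(np.linalg.det(q))             # a random SU(2) element

phi = 0.37
# equivariance under the right T-action: F(g t^{-1}) = rho(t) F(g)
assert np.isclose(F(g @ np.linalg.inv(t_elem(phi))), rho(phi) * F(g))
# left translation by l in T at the identity coset: phase rho(l^{-1})
ell = t_elem(0.91)
assert np.isclose(F(ell @ np.eye(2)), rho(-0.91) * F(np.eye(2)))
print("equivariance and fixed-point spin phase verified")
\end{verbatim}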
\section{The Weyl-Ka\v{c} Character Formula} \label{computation} \subsection{The Quadratic Approximation Around the Fixed Points} The partition function (\ref{partition}) may be computed in the $\tau_2 \to 0$ limit as was mentioned at the end of Section~\ref{review}. We remind the reader that the partition function is actually an analytic function of $q = \exp(2\pi i \tau)$ and we can exploit this fact to compute it exactly. In this limit, any time-dependent\footnote{The reader is reminded that $y$ plays the role of time.} field configuration will lead to an action that behaves as $1/\tau_2$ and thus such configurations will be suppressed. The dominant contributions to the path integral will arise from static field configurations which satisfy the appropriate boundary conditions. Supersymmetry requires the fields to be periodic under $z\to z+1$. However, it is clear from (\ref{partition}) and the discussion in I that we must use twisted boundary conditions on the fields in the time direction. On the bosonic fields $\widetilde{g}(x,y)$, the appropriate twisted boundary conditions are given by \begin{equation} \label{BC} \widetilde{g}(x+\tau_1,\tau_2)T = \ell \widetilde{g}(x,0)T \end{equation} where $\ell = \exp(i\theta)$ is the element of $T$ in expression (\ref{partition}) and $\tau_1$ enters because of the rotation induced by the momentum operator $P_1$. We now use the fact that the dominant configurations must be static and conclude that the saddle points are described by \begin{equation} \widetilde{g}(x+\tau_1,0)T = \ell \widetilde{g}(x,0)T \;. \end{equation} We remark that in the above, two elements of $LG$ are identified if they differ by an element of $T$ and a translation in $x$. This equation has to be true for all $\ell\in T$ and thus the above may be rewritten in the equivalent form \begin{equation} \widetilde{g}(x+\tau_1,0)^{-1} \ell \widetilde{g}(x,0) \in T \;. \end{equation} The solution to the above is the {\it group\/} \begin{equation} {\cal N} = \left\{ n \exp\left(2\pi i \check{\mu} x\right) \mid n\in N(T:G),\; \check{\mu} \in \check{T}\right\}\;, \end{equation} where $N(T:G)$ is the normalizer of $T$ in $G$, and $\check{T}$ is the coroot lattice (see the Appendix for a definition). If we observe that the momentum operator $P_1$ generates a circle group $S^1$ of symmetries of the lagrangian by translating the loop parameter then we can recast $\cal N$ in a more group-theoretical setting. Consider the group $S^1 \times LG $ and note that the maximally commuting subgroup is $S^1 \times T$ where $S^1$ is the circle group associated with translating the loop parameter. Our collection ${\cal N}$ is actually the {\it group\/} $N(S^1 \times T:S^1 \times LG)$ and the quotient $\waff = {\cal N}/(S^1 \times T)$ is a group called the {\it affine Weyl group\/} of $LG$. From the definition we see that the affine Weyl group $\waff$ is the semidirect product of $W$, the ordinary Weyl group of $G$, and $\check{T}$, the coroot lattice. Its elements are \begin{equation} \ewaff=(w, e^{2\pi i\check{\mu} x})\qquad \mbox{with $w\in W$ and $\check{\mu}\in\check{T}$}\;. \end{equation} Each element of $\waff$ is associated with a fixed point of the $S^1 \times T$ action in $LG/T$. The evaluation of the path integral by steepest descent will require a sum over the infinite set of Weyl points described above. Since we only have to study the path integral in the $\tau_2 \to 0$ limit, it is clear from the above discussion that it will suffice to consider the fluctuations around the Weyl points.
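As a concrete aside (our toy model, not part of the original text), the semidirect product structure of $\waff$ is completely explicit for $G=SU(2)$: the Weyl group $W={\Bbb Z}_2$ acts by sign on the coroot lattice $\check{T}\cong{\Bbb Z}$, the multiplication law is read off from composing representatives $n\,e^{2\pi i\check{\mu}x}$, and the resulting group is the infinite dihedral group. A minimal Python check of the group axioms:
\begin{verbatim}
from itertools import product

# W_aff for SU(2): pairs (w, m), w in {+1,-1} (Weyl group), m in Z (coroots).
# From n1 e^{2 pi i m1 x} n2 e^{2 pi i m2 x} = n1 n2 e^{2 pi i (w2 m1 + m2) x}:
def mul(a, b):
    (w1, m1), (w2, m2) = a, b
    return (w1 * w2, w2 * m1 + m2)     # w2^{-1} = w2 acts on m1 by sign

def inv(a):
    w, m = a
    return (w, -w * m)

e = (1, 0)
sample = [(w, m) for w in (1, -1) for m in range(-3, 4)]
for a, b, c in product(sample, repeat=3):
    assert mul(mul(a, b), c) == mul(a, mul(b, c))      # associativity
for a in sample:
    assert mul(a, inv(a)) == e == mul(inv(a), a)       # inverses
print("W_aff(SU(2)) = Z_2 semidirect Z (infinite dihedral group)")
\end{verbatim}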
A supersection of $LG$ near the Weyl point represented by $ n \exp(2\pi i\check{\mu} x)$ may be written as \begin{equation} \widetilde{\bf G}(x,y) = n e^{2\pi i\check{\mu} x} e^{\widetilde{\varphi}(x,y)} e^{\widetilde{\gamma}(x,y)\theta}\;, \end{equation} where $\widetilde{\varphi}$ and $\widetilde{\gamma}$ parametrize the fluctuations. The superfield is periodic under $z\to z+1$ and under $z\to z+\tau$ it satisfies\footnote{A more detailed explanation can be found in Section~\ref{group-action}.} \begin{equation} \label{bounda1} \widetilde{\bf G}(x+\tau_1,\tau_2)T = \ell \widetilde{\bf G}(x,0)T\;. \end{equation} If we define \begin{equation} \kappa = e^{-2\pi i\check{\mu} \tau_1} (n^{-1}\ell n) \in T \end{equation} then the boundary conditions (\ref{bounda1}) may be formulated as \begin{eqnarray} \widetilde{\varphi}(x+\tau_1,\tau_2) &=& \kappa \widetilde{\varphi}(x,0) \kappa^{-1}\;,\\ \widetilde{\gamma}(x+\tau_1,\tau_2) &=& \kappa \widetilde{\gamma}(x,0) \kappa^{-1}\;. \end{eqnarray} It is important to remember that, since we are working on the coset space $LG/T$, there is no ``translationally invariant'' mode\footnote{Note that $\int_0^1 dx\; \widetilde{\varphi}_{{\euf t}}(x,y) =0$ and similarly for $\widetilde{\gamma}_{{\euf t}}$.} in $\widetilde{\varphi}_{{\euf t}}$ and $\widetilde{\gamma}_{{\euf t}}$. We also note that $\ell_w \equiv n^{-1}\ell n$ only depends on the choice of coset $w=nT$ (see I). We shall often, by abuse of notation, write $w^{-1} \ell w$ to remind the reader that it only depends on the choice of an element of the Weyl group of $G$. The above boundary conditions may be equivalently written as \begin{eqnarray} \widetilde{\varphi}_{{\euf m}}(x+\tau_1,\tau_2) &=& \kappa \widetilde{\varphi}_{{\euf m}}(x,0) \kappa^{-1}\;,\\ \widetilde{\gamma}_{{\euf m}}(x+\tau_1,\tau_2) &=& \kappa \widetilde{\gamma}_{{\euf m}}(x,0) \kappa^{-1}\;,\\ \widetilde{\varphi}_{{\euf t}}(x+\tau_1,\tau_2) &=& \widetilde{\varphi}_{{\euf t}}(x,0) \;,\\ \widetilde{\gamma}_{{\euf t}}(x+\tau_1,\tau_2) &=& \widetilde{\gamma}_{{\euf t}}(x,0) \;. \end{eqnarray} It is easy to verify that to quadratic order near a Weyl point we have: \begin{eqnarray} A(y) &=& -\;{1\over 2} \int_0^1 dx\; \left[\widetilde{\varphi}_{{\euf m}},\partial_y\widetilde{\varphi}_{{\euf m}}\right]_{{\euf t}} + O(\widetilde{\varphi}^3) \;, \\ {\cal H}_{x,0} &=& 2\pi i \check{\mu} + {1\over 2}\; \int_0^1 dx\; \left[ \partial_x \widetilde{\varphi}_{{\euf m}} + \left[ 2\pi i \check{\mu}, \widetilde{\varphi}_{{\euf m}}\right], \widetilde{\varphi}_{{\euf m}}\right]_{{\euf t}} + O(\widetilde{\varphi}^3)\; . \end{eqnarray} These expressions enter in the perturbative expansion of the full lagrangian near the fixed points. After a considerable amount of algebra one finds the following relatively simple form for the lagrangian to quadratic order near a fixed point: \begin{eqnarray} I_{\rm total}^{(2)} &=& {k\over 4\pi h_{{\euf g}}}\; \left\{ \; 2 \pi^2\tau_2 \mathop{\rm Tr}(\check{\mu}\cmu) -{8\pi^2 h_{{\euf g}}\over k}\;\tau_2 \mathop{\rm Tr}(\nu\check{\mu}) \right. 
\nonumber\\ &-& \int d^2 z\;\mathop{\rm Tr}\left(\partial_z \widetilde{\varphi}_{{\euf m}}- {1\over2}\; \left[ 2\pi i\check{\mu} - {8\pi i h_{{\euf g}}\over k}\;\nu, \widetilde{\varphi}_{{\euf m}} \right] \right) \left(\partial_{\bar{z}}\widetilde{\varphi}_{{\euf m}} + {1\over 2}\; \vphantom{8\pi i h_{{\euf g}}\over k} \left[ 2\pi i \check{\mu}, \widetilde{\varphi}_{{\euf m}}\right] \right)\nonumber\\ &+& \int d^2 z\; \mathop{\rm Tr} \widetilde{\gamma}_{{\euf m}} \left(\partial_z \widetilde{\gamma}_{{\euf m}}- {1\over2}\; \left[ 2\pi i\check{\mu} - {8\pi i h_{{\euf g}}\over k}\;\nu, \widetilde{\gamma}_{{\euf m}} \right] \right) \nonumber\\ &-& \int d^2 z\; \mathop{\rm Tr} \partial_z \widetilde{\varphi}_{{\euf t}} \partial_{\bar{z}} \widetilde{\varphi}_{{\euf t}}\nonumber\\ \label{quad-action} &+& \left. \int d^2 z\; \mathop{\rm Tr} \widetilde{\gamma}_{{\euf t}} \partial_z \widetilde{\gamma}_{{\euf t}} \; \vphantom{8\pi^2 h_{{\euf g}}\over k} \right\} \; . \end{eqnarray} Formula~(\ref{quad-action}) is the final form of the quadratic part of the lagrangian we will use; its derivation is nontrivial and involves subtle and delicate cancellations which reflect the underlying geometry. The bosonic degrees of freedom which appear in the above are a representation of the following geometrical fact. There is a local equivalence between $LG/T$ and the space \begin{equation} {LG \over LT} \times {LT \over T}\;. \end{equation} More precisely, $LG/T$ is a principal $LT/T$ bundle over $LG/LT$. Notice that $LG/LT$ is the configuration space for an ``ordinary'' sigma model since one can also show that $LG/LT = L(G/T)$. Locally, the fields in our model may be thought of as those of an ordinary sigma model on $G/T$, represented by $\widetilde{\varphi}_{{\euf m}}$, plus some extra abelian excitations $\widetilde{\varphi}_{{\euf t}}$ associated with $LT/T$. We have previously emphasized that the abelian excitations do not contain a constant mode. The gaussian integration of the above quadratic action yields \begin{eqnarray} &&\exp \left[ -2\pi k\tau_2 \; {K(\check{\mu},\check{\mu}) \over K(h_\psi,h_\psi)} \; + 2\pi\tau_2 \nu(\check{\mu}) \right] \nonumber\\ \label{quad-partition} &\times&\left[\prod_{\alpha\succ 0} \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right)\right]^{-1} \;\left[\vphantom{{1\over 2} \prod_{\alpha\succ 0}} \widehat{\det}(\partial_{\bar{z}})\right]^{-l/2}\, , \end{eqnarray} where $\widehat{\det}$ indicates the omission of the $x$ translationally invariant modes and $l$ is the rank of the group $G$. As was discussed after Eq. (\ref{left2}), we also have to take into account the prefactors coming from the ``spin'' part of the transformation law arising from both the circle action and the $T$ action on the wave functions. We will see that these prefactors are crucial in turning the above into an analytic function of $\tau$ as required by (\ref{susy-part}). \subsection{Group Action Around the Weyl Points} \label{group-action} In the partition function $Z(\theta, \tau_1, \tau_2)$ defined in Eq.~(\ref{partition}), the operators $e^{i\theta}$ and $e^{2\pi i \tau_1 P_1}$ act on the wave functions at the end of the paths, or more precisely, on the sections of the matter line bundles. We saw in Section~\ref{lagrangian} that this action induces both a ``spin'' and an orbital part in our discussion of the line bundle ${\cal L}_{(\nu,0)}$. As discussed at the beginning of Section~\ref{review} we actually need to work with the line bundle ${\cal L}_{(\lambda,k)}$.
By mimicking the derivation at the end of Section~\ref{lagrangian}, we will compute the action of $S^1\times T\times U(1)$ on a section of ${\cal L}_{(\lambda,k)}$ and find both a ``spin'' and an orbital part. The former will appear as a prefactor in the computation of the path integral while the latter also determines the proper boundary conditions to use in the evaluation of the determinants. The general form of the left action on the sections was given in Eq.~(\ref{left2}). Using the same notations we see that we have to determine the representation matrix $\rho(t)$ where $t$ is the solution of $\ell g(x)= g(x) (g^{-1}(x) \ell g(x)) = g(x) t^{-1}$ with $\ell\in T$ and $g(x)\in LG$. The prefactor will simply be given by the computation of $\rho(t)$ on the line bundle ${\cal L}_{(\lambda,k)}$. To perform this calculation we require the Lie algebra relation \begin{equation} \left[ (X,r),(Y,s) \right] =([X,Y],\phi(X,Y))\;, \end{equation} where $(X,r)\in {L{\euf g}} \oplus {\Bbb{R}}$ (the infinitesimal version of the $\widetilde{LG}$ multiplication law). The algebra cocycle is explicitly given by \begin{equation} \label{cocycle} \phi(X,Y)={i\over\pi}\int_0^1 dx {K(X(x),{d\over dx}Y(x))\over K(h_\psi,h_\psi)}\;. \end{equation} {}From the quantum field theory viewpoint it is useful to reexpress the above relations directly in terms of the currents $J_X$ which were defined in (\ref{currents}). Note that we view $J_X$ as an operator acting on a Hilbert space. In the same spirit we can often view the group element $(g,u)$ as an operator of the form $ue^{J_X}$ (where $g=e^{X}$) in the case $k=1$. We then recover the group law as a relation between operators: \begin{eqnarray} ue^{J_X}\,ve^{J_Y}&=&uve^{J_X+J_Y+{1\over 2}[J_X,J_Y]+\cdots}\\ &=&uve^{J_X+J_Y+{1\over 2}(J_{[X,Y]}+\phi(X,Y))+\cdots}\\ &=&uve^{{1\over 2}\phi(X,Y)+\cdots} e^{J_X+J_Y+{1\over 2}J_{[X,Y]} +\cdots}\; , \end{eqnarray} where we have used the current algebra (\ref{k-m}). In computing the prefactor, remember that the character index is localized on the fixed points of $S^1\times T$ in $LG/T$, so we only need to know the behavior of the sections around the elements $\ewaff$ of the affine Weyl group $\waff$. In a neighborhood of the fixed points we can parametrize an element of $LG/T$ by $\widetilde{g}_{w,\varphi}(x)= w \exp\left(2\pi i \check{\mu} x\right)e^{\widetilde{\varphi}(x)}$. It is just a matter of algebra to find \begin{eqnarray} e^{i\theta}e^{-2\pi i\tau_1 P_1}\widetilde{ g}_{w,\varphi}(x) &=&we^{2\pi i\check{\mu} x}e^{\widetilde{\varphi}'(x-\tau_1)} e^{i(\theta_w-2\pi\tau_1\check{\mu})}\nonumber\\ &\times &\exp{\left[-2i{K(\check{\mu},\theta_w)\over K(h_\psi,h_\psi)}\right]} \nonumber \\ &\times &\exp{\left[2\pi i\tau_1{K(\check{\mu},\check{\mu})\over K(h_\psi,h_\psi)}\right]} e^{-2\pi i\tau_1 P_1} \;, \end{eqnarray} where we have defined $\theta_w=w^{-1}\theta w$ and $\displaystyle \widetilde{\varphi}'=e^{-2\pi i\check{\mu} x}e^{i\theta_w}\widetilde{\varphi} e^{-i\theta_w} e^{2\pi i\check{\mu} x}$. From the above we deduce two results. Firstly we see that the boundary conditions on the fluctuations around the fixed points are given by \begin{equation} \label{bcxi} \widetilde{\varphi}(x+\tau_1,\tau_2)=\widetilde{\varphi}'(x,0)\, .
\end{equation} Secondly the prefactor (or spin part) for a bundle with weight $(\lambda,k)$ is: \begin{equation} \label{prefac} e^{i\lambda(\theta_w-2\pi\tau_1\check{\mu})} \exp{\left[k\left(2 i{K(\check{\mu},\theta_w)\over K(h_\psi,h_\psi)} -2\pi i\tau_1{K(\check{\mu},\check{\mu})\over K(h_\psi,h_\psi)}\right)\right]}\;. \end{equation} It is worth mentioning that this factor is independent of the scale of the scalar product. The inclusion of the superpartner does not modify the discussion above. In more mathematical terms, the line bundle ${\cal L}_{(\lambda,k)}$ is induced from the representation $(0,\lambda,k)$ on $S^1 \times T \times U(1)$ for the principal bundle $S^1 \times \widetilde{LG}$ over $(S^1 \times \widetilde{LG})\bigl/ (S^1\times T \times U(1)) = LG/T$. Left translation by $S^1\times T \times U(1)$ has an induced action on ${\cal L}_{(\lambda,k)}$. At a fixed point of the action of $S^1 \times T$ given by the affine Weyl coset $\ewaff T = w e^{2\pi i \check{\mu} x} T$, the induced action is multiplication by a complex number of modulus one on the line ${\cal L}_{(\lambda,k)}$ at the coset $\ewaff T$. That number in terms of $\lambda$, $k$ and $\ewaff$ is given by formula (\ref{prefac}). In other words, the spin part of the action is the lift of left translation to the line bundle. \subsection{Determinants} The evaluation of the determinants will follow the discussion given in \cite{akmw.cmp,akmw.irvine}. Because we are working on a torus one can associate eigenvalues with each of the first order differential operators which appear in (\ref{quad-action}). In any reasonable regularization scheme one will have a term by term cancellation between the fermionic modes and the ``anti-holomorphic'' bosonic modes. This is guaranteed by the existence of the supersymmetry. Thus only the ``holomorphic'' bosonic sector contributes in a non trivial way. We would like to regulate the determinants in such a way that holomorphicity is preserved. The determinant of $\partial_{\bar{z}}$ is easily evaluated with the result \begin{equation} \widehat{\det} \partial_{\bar{z}} = \eta(\tau)^2\;, \end{equation} where the Dedekind $\eta$-function is defined by \begin{equation} \eta(\tau) = q^{1/24}\; \prod_{n=1}^\infty \left(1-q^n\right)\;. \end{equation} Since our lagrangian describes particle motion on $LG/T$ we note that there are no ``pointlike particle'' modes\footnote{A pointlike particle mode would be the time evolution of a constant loop.} associated with $T$ ensuring the absence of any dependence on $\tau_2$ in the determinant $\widehat{\det} \partial_{\bar{z}}$. Had the pointlike particle modes associated with $T$ been present then we would have found $\det'\partial_{\bar{z}}= 2\tau_2 \eta(\tau)^2$ as the result of the gaussian integration\footnote{The prime means the omission only of the zero eigenvalue mode.}. We should carefully keep track of all such extraneous factors because equation (\ref{susy-part}) tells us that the final answer must be an analytic function of $\tau$. We will now carefully define \begin{equation} \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) \end{equation} in a way which preserves holomorphicity in $\tau$ and guarantees the correct periodicity in $T$. 
By using the boundary condition one easily determines that the eigenvalues are given by \begin{equation} {\pi\over \tau_2}\;\left( m + n\tau -\zeta \right) \end{equation} where $m$ and $n$ are integers, and \begin{equation} \label{det-zeta} \zeta = {\alpha(\theta_w) \over 2\pi} - \alpha(\check{\mu}) \tau \;. \end{equation} It is useful to study the following formal ratio of determinants \begin{equation} {\det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) \over \det'\left(\partial_{\bar{z}}\right)} = {\displaystyle \prod {\pi\over \tau_2}\;\left( m + n\tau -\zeta \right) \over \displaystyle \mathop{{\prod}'} {\pi\over \tau_2}\;\left( m + n\tau \right) } \end{equation} where the prime again means to eliminate the $m=0$, $n=0$ mode. Note that the right hand side is formally an odd function of $\zeta$. The above ratio may be written as \begin{equation} \label{W-sig1} {-2\pi \zeta \over 2\tau_2} \; \mathop{{\prod}'} \left( 1 - { \zeta \over m + n\tau}\right) \;, \end{equation} which is only formal since the product is divergent. We proceed in two different ways. Firstly we note that (\ref{W-sig1}) is essentially the definition of the Weierstrass $\sigma$-function. If we employ some unspecified cutoff then it may be rewritten as: \begin{eqnarray} \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) &=& - 2\pi \eta(\tau)^2 \sigma(\zeta;\tau) \nonumber\\ \label{W-sig2} &\times& \exp\left[ -\mathop{{\sum}'} {\zeta \over m+n\tau} -{1\over 2} \mathop{{\sum}'} \left( {\zeta \over m+ n\tau}\right)^2 \right]\;. \end{eqnarray} The entire issue revolves around how we handle the divergent sums. Expression~(\ref{W-sig2}) already tells us that the ambiguity in different regularizations is a quadratic polynomial in $\zeta$. Secondly, this may also be seen differently by noticing that formally \begin{equation} {\partial^3 \over \partial \zeta^3} \log \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) \end{equation} is finite without need for regularization. We can define the regularized determinant by the differential equation \begin{equation} \label{der3-det} {\partial^3 \over \partial \zeta^3} \log \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) = - \sum {2\over (m + n\tau - \zeta)^3}\;. \end{equation} The right hand side is an elliptic function, the derivative of the Weierstrass $\wp$ function, because the sum is uniformly convergent. Integrating~(\ref{der3-det}) leads to an expression which will be ambiguous by a quadratic polynomial in $\zeta$. In conclusion we have a holomorphic regularization scheme which leads to a polynomial ambiguity as in all renormalizable quantum field theories. For our purposes it is more convenient to express the answer in terms of $\vartheta$-functions\footnote{We use the $\vartheta$ function conventions of Mumford \cite{mumford.tataI}. The function $\vartheta_{11}(\zeta,\tau)$ is odd in $\zeta$. Definitions of the Weierstrass functions and the constant $\eta_1$ may be found in \cite{whittaker-watson}.}. The following identity \begin{equation} -2\pi \eta(\tau)^2 \sigma(\zeta;\tau) = { \vartheta_{11}(\zeta;\tau) \over \eta(\tau) } \; e^{\eta_1 \zeta^2} \end{equation} allows us to express the determinant as \begin{equation} \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) = { \vartheta_{11}(\zeta;\tau) \over \eta(\tau) }\; e^{{\cal P}(\zeta)}\;, \end{equation} where ${\cal P}(\zeta)$ is a quadratic polynomial in $\zeta$.
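As a numerical sanity check (ours; we assume Mumford's series convention $\vartheta_{11}(\zeta;\tau)=\sum_{n\in{\Bbb Z}}\exp[\pi i(n+{1\over 2})^2\tau+2\pi i(n+{1\over 2})(\zeta+{1\over 2})]$ together with the product definition of $\eta$ given above), the combination $-i\vartheta_{11}/\eta$ may be tested against the infinite-product representation used below, its oddness in $\zeta$, and the periodicity $\zeta\to\zeta+2$ relevant for $SU(2)$:
\begin{verbatim}
import cmath

PI = cmath.pi

def theta11(z, tau, nmax=30):        # Mumford's convention, assumed here
    return sum(cmath.exp(1j * PI * (n + 0.5) ** 2 * tau
                         + 2j * PI * (n + 0.5) * (z + 0.5))
               for n in range(-nmax, nmax + 1))

def eta(tau, nmax=400):              # q^{1/24} prod_{n>0} (1 - q^n)
    q = cmath.exp(2j * PI * tau)
    val = q ** (1.0 / 24)
    for n in range(1, nmax + 1):
        val *= 1 - q ** n
    return val

tau, z = 0.2 + 0.9j, 0.31 + 0.17j
q = cmath.exp(2j * PI * tau)

lhs = -1j * theta11(z, tau) / eta(tau)
rhs = q ** (1.0 / 12) * 2j * cmath.sin(PI * z)
for n in range(1, 400):
    rhs *= (1 - q ** n * cmath.exp(2j * PI * z)) \
         * (1 - q ** n * cmath.exp(-2j * PI * z))
assert abs(lhs - rhs) < 1e-10                              # product formula
assert abs(theta11(-z, tau) + theta11(z, tau)) < 1e-10     # odd in zeta
assert abs(theta11(z + 2, tau) - theta11(z, tau)) < 1e-10  # period 2
print("theta_11 checks passed")
\end{verbatim}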
It is easy to resolve the ambiguity in the definition of the determinant. We note that as we go around a cycle in the maximal torus $T$ the variable $\zeta$ shifts by an integer, see (\ref{det-zeta}). For example, in $SU(2)$ the change is $2$ as we go around the cycle in $T$. Since we are interested in studying integral representations of $\widetilde{LG}$, it follows that the determinant should be periodic under $\zeta\to \zeta+r$, where the integer $r$ is the greatest common factor of the shifts when all cycles of $T$ are considered. The function $\vartheta_{11}(\zeta;\tau)$ has the aforementioned periodicity; hence the polynomial must be of the form ${\cal P}(\zeta) = 2\pi i p \zeta/r + {\rm constant}$, where $p$ is an integer. Since it is natural to require the determinant to be odd under $\zeta \to -\zeta$, as was previously remarked, we conclude that $p=0$. The value of the constant may be fixed by requiring that the $n=0$ mode of the determinant reproduces the corresponding one for a point particle (up to the standard central charge correction). In summary, we can choose ${\cal P}(\zeta)$ to be an appropriate constant and all properties we require will be satisfied. The value of the determinant in question is then \begin{equation} \label{det-theta} \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu})\right) = { \displaystyle -i\vartheta_{11}\left({\alpha(\theta_w) \over 2\pi} - \alpha(\check{\mu}) \tau ;\tau\right) \over \eta(\tau) }\;. \end{equation} The mathematical interpretation of (\ref{det-theta}) is as follows. Let $J(\Sigma)$ be the Jacobian of the complex torus $\Sigma$. Each point of $J(\Sigma)$ gives an elliptic operator $\bar{\partial}_\chi$ where $\bar{\partial}: \Lambda^0(\Sigma) \to \Lambda^{(0,1)}(\Sigma)$ and $\bar{\partial}_\chi$ is $\bar{\partial}$ coupled to the flat line bundle determined by a character $\chi:\pi_1(\Sigma) \to \Bbb{C}\backslash\{0\}$. The operators $\bar{\partial}_\chi$ have index zero so we get a holomorphic map $\phi$ from $J(\Sigma)$ to ${\cal F}_0$, the space of Fredholm operators of index zero\footnote{Strictly speaking, ${\cal F}_0$ is the space of index zero Fredholm operators from a Sobolev space $H^1(\Sigma)$ to $L^2(\Sigma)$.}. But ${\cal F}_0$ is a complex manifold with a natural holomorphic line bundle, the determinant line bundle\footnote{The determinant line bundle of an appropriate space $S$ will be denoted by $\mathop{\rm DET}(S)$.} $\mathop{\rm DET}({\cal F}_0)$, and a natural section $s$ \cite{quillen.det,freed.sandiego}. Now $\mathop{\rm DET}(\bar{\partial}) = \phi^*(\mathop{\rm DET}({\cal F}_0))$ is a holomorphic line bundle over $J(\Sigma)$ with holomorphic section $\phi^*(s)$. If we take the simply connected covering of $J(\Sigma)$ with coordinates $(\zeta,\tau)$, then $\phi^*(s)$ pulled up to the covering is a section of a trivial bundle; it is the $\vartheta$-function \begin{equation} {-i\vartheta_{11}\left(\zeta;\tau\right)\over \eta(\tau)} \end{equation} with transformation law (\ref{theta-X}) below. We are restricting the family of $\bar{\partial}$ operators to those with $\zeta= \alpha(\theta_w)/2\pi -\alpha(\check{\mu})\tau$. Hence we get the {\it function\/} \begin{equation} {\displaystyle -i\vartheta_{11}\left({\alpha(\theta_w) \over 2\pi} - \alpha(\check{\mu}) \tau ;\tau\right) \over \eta(\tau) }\;.
\end{equation} \subsection{The Character Index Formula} We are now in a position to put the prefactors and the determinants together in a concise expression for the character index $I_{(\nu,k)}(\theta,\tau) = Z(\theta,\tau)$ of the Dirac-Ramond operator. The reader is reminded that one has to sum over $\waff$, the fixed points of the $S^1 \times T$ action, and that, as a set, $\waff = W \times \check{T}$. Collating all the information developed in this section we have \begin{eqnarray} I_{(\nu,k)}(\theta,\tau) &=& \sum_{w \in W} \sum_{\check{\mu}\in \check{T}} q^{\displaystyle k\,K(\check{\mu},\check{\mu})/K(h_\psi,h_\psi)} q^{\displaystyle -\nu(\check{\mu})} \nonumber\\ &\times& \exp\left( i\nu(\theta_w) -2ik {K(\check{\mu},\theta_w) \over K(h_\psi,h_\psi)} \right) \nonumber\\ \label{index0} &\times& \left[\prod_{\alpha\succ 0} \det\left(\partial_{\bar{z}} +{1\over 2}2\pi i\alpha(\check{\mu}) \right)\right]^{-1} \;\left[\vphantom{{1\over 2} \prod_{\alpha\succ 0}} \widehat{\det}(\partial_{\bar{z}})\right]^{-l/2}\;. \end{eqnarray} Before expressing (\ref{index0}) in terms of classical functions we make several remarks about the abstract structure. As we have noted, the troublesome determinant we discussed is not a function but a holomorphic section of a line bundle over the Jacobian variety of $\Sigma$. The nontriviality of this section is closely related to the necessity for regularization. We will presently see that the shift by the Coxeter number arises because we have a section and not a function. Had the determinant been finite (which of course it cannot be) the shift by the Coxeter number would not be there. Formula~(\ref{index0}) may be rewritten as, \begin{eqnarray} I_{(\nu,k)}(\theta,\tau) &=& \sum_{w \in W} \sum_{\check{\mu}\in \check{T}} q^{\displaystyle k\,K(\check{\mu},\check{\mu})/K(h_\psi,h_\psi)} q^{\displaystyle -\nu(\check{\mu})} \nonumber\\ &\times& \exp\left( i\nu(\theta_w) -2ik {K(\check{\mu},\theta_w) \over K(h_\psi,h_\psi)} \right) \nonumber\\ \label{index1} &\times& \left[\prod_{\alpha\succ 0} { \displaystyle -i\vartheta_{11}\left({\alpha(\theta_w)\over 2\pi} - \alpha(\check{\mu}) \tau ;\tau\right) \over \eta(\tau) }\right]^{-1} \; \eta(\tau)^{-l}\;, \end{eqnarray} and is the central result of this article. It is the natural form for the character index from the path integral viewpoint. All other forms are derived from this one by $\vartheta$-function identities and algebraic manipulation. There are several important remarks which should be made before proceeding. As was strongly advertised, (\ref{index1}) is an analytic function of $q$ which involves $\vartheta$-functions on the Jacobian of $\Sigma$ and not $\Theta$-functions on the torus $T$. As expected, the expression is independent of the choice of scale for the inner product. Formula~(\ref{index1}) is almost the Weyl-Ka\v{c} character formula; it is the index of the Dirac-Ramond operator on $LG/T$ instead of the $\bar{\partial}$ operator on $LG/T$. As explained in I, $\bar{\partial}$ and the Dirac operator are related by twisting. If we are interested in the character associated to a representation with highest $G$ weight $\lambda$ then we should choose $\lambda$ to differ from $\nu$ by the Weyl weight $\rho$.
To transform (\ref{index1}) into a more conventional form of the character formula and to see the shift by the Coxeter number we need the $\vartheta$-function identity \begin{equation} \label{theta-X} \vartheta_{11}(\zeta + m\tau;\tau) = (-1)^m q^{m^2/2} e^{-2\pi i m\zeta} \vartheta_{11}(\zeta; \tau) \end{equation} where $m$ is an integer and formula (\ref{coxeter}) for the Coxeter number~$ h_{{\euf g}}$. Thus the index may be rewritten as \begin{eqnarray} I_{(\nu,k)}(\theta,\tau) &=& \sum_{w\in W} \sum_{\check{\mu} \in \check{T}} q^{\displaystyle -\nu(\check{\mu})} q^{\displaystyle (k + h_{{\euf g}})K(\check{\mu},\check{\mu})/K(h_\psi,h_\psi)} \nonumber\\ &\times& \exp\left( i\nu(\theta_w) -2i(k+ h_{{\euf g}})\; {K(\check{\mu},\theta_w) \over K(h_\psi,h_\psi)} \right) \nonumber\\ \label{index2} &\times& \eta(\tau)^{-l}\; \left[\prod_{\alpha\succ 0}\; { \displaystyle -i\vartheta_{11}\left({\alpha(\theta_w)\over 2\pi} ;\tau\right) \over \eta(\tau) }\right]^{-1}\;. \end{eqnarray} We used the fact that $\rho(\check{\mu}) \in {\Bbb Z}$, since $\rho$ is a weight and $\check{\mu}$ is in the coroot lattice. Also, if we use the Killing form to identify ${\euf t}$ with ${\euf t}^*$, one can easily see that $2\check{\mu}/K(h_\psi,h_\psi)$ is a weight. To write (\ref{index2}) in a more recognizable form, one proceeds in two different ways. Either we do the $W$ sum first or we do the $\check{T}$ one. These two alternatives lead to very different-looking formulas. We recall from I that if $\nu$ is a weight of $G$ then the $T$-index of the Dirac operator coupled to the $\nu$-line bundle is \begin{eqnarray} I_\nu(\theta) &=& \sum_{w\in W} e^{i\nu(\theta_w)} \prod_{\alpha\succ 0} {1\over 2i \sin {1\over 2} \alpha(\theta_w)}\\ &=& \sum_{w\in W} (-1)^{\ell(w)} e^{i\nu(\theta_w)} \prod_{\alpha\succ 0} {1\over 2i \sin {1\over 2} \alpha(\theta)}\;, \end{eqnarray} where $\ell(w)$ is defined as the number of positive roots turned into negative roots by $w$. Note that the ordinary index has only a single subscript while the loop one has a double subscript. It is convenient to define the Weyl denominator by \begin{equation} D_W(\theta) = \prod_{\alpha\succ 0} 2i\sin\frac{1}{2}\alpha(\theta) \end{equation} and the Ka\v{c} denominator by \begin{equation} D_K(\theta) = \prod_{n>0} \left( 1- q^n\right)^l\; \prod_{\alpha\succ 0} \prod_{n>0} \left(1-q^n e^{i\alpha(\theta)}\right) \left(1-q^n e^{-i\alpha(\theta)}\right) \;. \end{equation} The Weyl and the Ka\v{c} denominators are closely related to our index formulas because of the identity \begin{equation} {-i \vartheta_{11}(\zeta;\tau) \over \eta(\tau)} = q^{1/12}\; 2i\sin\pi\zeta \; \prod_{n>0} \left(1 - q^n e^{2\pi i \zeta}\right) \left(1 - q^n e^{-2\pi i \zeta}\right) \;. \end{equation} It is now a matter of algebra to transform (\ref{index2}) into one of the standard forms for the Weyl-Ka\v{c} character formula. Define the sublattice $\check{T}^* = \{ 2\check{\mu}/K(h_\psi,h_\psi)\;|\;\check{\mu}\in\check{T} \}$ of the weight lattice, and the dilated-translated lattice $\check{T}^*(\nu,a) = \nu+ a\check{T}^*$. Now let us express (\ref{index2}) in a different way by first summing over the Weyl group $W$.
This organizes the elements of the expansion in terms of the ordinary Dirac index: \begin{eqnarray} I_{(\nu,k)}(\theta,\tau) &=& { q^{-(\dim{\euf g})/24} \over D_K(\theta)}\; q^{\displaystyle - (\nu,\nu)/[(k+ h_{{\euf g}})(\psi,\psi)]} \nonumber\\ &\times& \sum_{\omega\in\check{T}^*(\nu,k+ h_{{\euf g}})} I_{\omega}(\theta) q^{\displaystyle (\omega,\omega)/[(k+ h_{{\euf g}})(\psi,\psi)]}\;. \end{eqnarray} Next, we could have first summed over the coroot lattice generating a $\Theta$-function. Consider the lattice $\check{T}^*(\nu,a)$ and the associated $\Theta$-function \begin{equation} \Theta_{(\nu,a)}(z,\tau) = \sum_{\omega\in \check{T}^*(\nu,a)} \exp\left[ {2\pi i\tau\over a}\; {\left(\omega,\omega\right) \over (\psi,\psi)} + 2\pi i \omega(z) \right]\;. \end{equation} The index may be written as \begin{eqnarray} I_{(\nu,k)}(\theta,\tau) &=& {q^{-(\dim{\euf g})/24} \over D_W(\theta) D_K(\theta)}\; q^{\displaystyle - (\nu,\nu)/[(k+ h_{{\euf g}})(\psi,\psi)]}\nonumber\\ &\times& \sum_{w\in W} (-1)^{\ell(w)} \Theta_{(\nu,k+ h_{{\euf g}})}\left({\theta_w\over 2\pi},\tau\right)\;. \end{eqnarray} In order to incorporate the twist that turns the Dirac operator into $\bar{\partial}$ we remind you that in the Weyl character case the highest weight $\lambda$ is related to $\nu$ by $\nu = \lambda + \rho$ and that the character index and group character for $G$ are related by \begin{equation} I_\nu(\theta) = \chi_{\subchi{\lambda}}(\theta)\;. \end{equation} Thus we see that the Weyl character formula for the highest weight representation $\lambda$ of the group $G$ is \begin{eqnarray} \chi_{\subchi{\lambda}}(\theta) &=& \sum_{w\in W} e^{i(\lambda+\rho)(\theta_w)} \prod_{\alpha\succ 0} {1\over 2i \sin {1\over 2} \alpha(\theta_w)}\\ &=& \sum_{w\in W} (-1)^{\ell(w)} e^{i(\lambda+\rho)(\theta_w)} \prod_{\alpha\succ 0} {1\over 2i \sin {1\over 2} \alpha(\theta)}\;. \end{eqnarray} In what follows we will write $\chi_{\subchi{\lambda}}$ even if $\lambda$ is not a highest weight because every weight $\lambda$ is conjugate via an element of the Weyl group to a highest weight. In the same way, the loop index and the associated character for $\widetilde{LG}$ are related by \begin{equation} I_{(\nu,k)}(\theta,\tau) = \chi_{\subchi{(\lambda,k)}}(\theta,\tau)\;. \end{equation} A little algebra leads to the following two formulas for the character \begin{eqnarray} \chi_{\subchi{(\lambda,k)}}(\theta,\tau) &=& { q^{-(\dim{\euf g})/24} \over D_K(\theta)}\; q^{\displaystyle -[(\lambda+\rho,\lambda+\rho)-(\rho,\rho)]/[(k+ h_{{\euf g}})(\psi,\psi)]} \nonumber\\ \label{char1} &\times& \sum_{\omega\in\check{T}^*(\lambda,k+ h_{{\euf g}})} \chi_{\subchi{\omega}}(\theta)\; q^{\displaystyle [(\omega +\rho,\omega+\rho)-(\rho,\rho)] /[(k+ h_{{\euf g}})(\psi,\psi)]} \;,\\ &=& {q^{-(\dim{\euf g})/24} \over D_W(\theta) D_K(\theta)}\; q^{\displaystyle - (\lambda+\rho,\lambda+\rho)/[(k+ h_{{\euf g}})(\psi,\psi)]} \nonumber\\ \label{char2} &\times& \sum_{w\in W} (-1)^{\ell(w)} \Theta_{(\lambda+\rho,k+ h_{{\euf g}})}\left({\theta_w\over 2\pi},\tau\right)\;. \end{eqnarray} Equation (\ref{char1}) is the same as equation (14.3.10) of \cite{pressley-segal.loopgroups} with the proviso that we use $L_0-c/24$ in our trace while they use $L_0$. It is important to realize that in this context $c = \dim{\euf g}$, see~(\ref{quad-action}), and that $c$ is not the Sugawara value. Equation~(\ref{char2}) may be put in a more useful form by mimicking the following computation with the Weyl character formula. 
If one considers the trivial representation (highest weight $\lambda=0$ with $\chi_{\subchi{0}}(\theta)=1$) then one easily sees that the Weyl denominator may be written as \begin{equation} D_W(\theta) = \sum_{w\in W} (-1)^{\ell(w)} e^{i\rho(\theta_w)} \end{equation} and thus the Weyl character formula may be rewritten as \begin{equation} \chi_{\subchi{\lambda}}(\theta) = { \displaystyle \sum_{w\in W} (-1)^{\ell(w)} e^{i(\lambda+\rho)(\theta_w)} \over \displaystyle \sum_{w\in W} (-1)^{\ell(w)} e^{i\rho(\theta_w)} }\;. \end{equation} The analogous equation in the loop group case exploits the fact that the trivial representation has $\lambda=0$, $k=0$, and $\displaystyle \chi_{\subchi{(0,0)}}(\theta,\tau) =q^{-(\dim{\euf g})/24}$. Thus we conclude that the denominators satisfy \begin{equation} D_W(\theta) D_K(\theta) = q^{\displaystyle -(\rho,\rho)/[ h_{{\euf g}}(\psi,\psi)]} \sum_{w\in W} (-1)^{\ell(w)} \Theta_{(\rho, h_{{\euf g}})}\left({\theta_w\over 2\pi},\tau\right)\;, \end{equation} leading to the following form for the character formula \begin{eqnarray} \chi_{\subchi{(\lambda,k)}}(\theta,\tau) &=& q^{\displaystyle -(\lambda+\rho,\lambda+\rho)/[(k+ h_{{\euf g}})(\psi,\psi)]} \nonumber\\ &\times& {\displaystyle \sum_{w\in W} (-1)^{\ell(w)} \Theta_{(\lambda+\rho,k+ h_{{\euf g}})}\left({\theta_w\over 2\pi},\tau\right) \over \displaystyle \sum_{w\in W} (-1)^{\ell(w)} \Theta_{(\rho, h_{{\euf g}})}\left({\theta_w\over 2\pi},\tau\right) } \end{eqnarray} which may be explicitly obtained from the formulas in Chapter~12 of \cite{kac.book} as discussed in the vicinity of equation~(A.25) in reference~\cite{gepner-witten.wzw}. It is well known that the affine characters have modular transformation properties \cite{kac.book,pressley-segal.loopgroups}. The origin of these properties was originally considered very mysterious but the connection of affine Lie algebras to conformal field theory demystified the issue. In \cite{gepner-witten.wzw}, the authors discussed the modular invariance of the WZW model's partition function, a sum of the modulus squared of characters. We can use our results to discuss the origin of the modular properties of individual characters. The key observation is that the quadratic action (\ref{quad-action}) is a non-chiral conformal field theory. One should view the determinants in (\ref{index0}) as shorthand for the path integral over the quadratic action. The modular transformation properties of this conformal field theory explain the modular properties of the characters. \par\vskip .2in\noindent {\large\bf Acknowledgements} We would like to thank C.~Itzykson for insisting that the ``proof of the pudding is in the writing''. Each author would like to thank the home institutions of the other two authors for visits while the research was in progress.
\section{Introduction and preliminaries} \label{sec1} The study of $h$-vectors of simplicial complexes is an important topic in combinatorial commutative algebra, because it determines the coefficients of the numerator of the Hilbert series of a Stanley--Reisner ring associated to a simplicial complex. We refer the reader to Stanley's book \cite{s'} and the book of Herzog and Hibi \cite{hh} for an introduction to simplicial complexes and Stanley--Reisner rings. Let $\mathbb{K}$ be a field and $S=\mathbb{K}[x_1,\dots,x_n]$ be the polynomial ring in $n$ variables over $\mathbb{K}$. A finitely generated $S$-module $M$ is said to satisfy {\it Serre's condition} ($S_r$), if $${\rm depth}\ M_{\frak{p}}\geq \min\{r,\dim M_{\frak{p}}\},$$ for every $\frak{p}\in \rm{Spec}(S)$. We say that a simplicial complex $\Delta$ is an ($S_r$) {\it simplicial complex}, if its Stanley--Reisner ring satisfies Serre's condition ($S_r$). It is easy to see that every simplicial complex is ($S_1$). Therefore, we assume that $r\geq2$. We refer the reader to \cite{psty} for a survey about ($S_r$) simplicial complexes. The classical result of Stanley characterizes the $h$-vectors of Cohen--Macaulay simplicial complexes (see \cite[Theorem 3.3, Page 59]{s'}). Murai and Terai \cite{mt} studied the $h$-vectors of ($S_r$) simplicial complexes. They proved that if $\Delta$ is a $(d-1)$-dimensional ($S_r$) simplicial complex with $h(\Delta)=(h_0, \ldots, h_d)$, then $(h_0,h_1,\ldots,h_r)$ is an $M$-vector (i.e., it is the $h$-vector of a Cohen--Macaulay simplicial complex) and $h_r+h_{r+1}+ \cdots+h_d$ is nonnegative. In \cite{gpsy}, the authors extended the result of Murai and Terai by giving $r$ extra necessary conditions. Indeed, they proved that$${i\choose i}h_r+{i+1\choose i}h_{r+1}+\cdots+{i+d-r\choose i}h_d\geq 0,$$ for every integer $i$ with $1\leq i\leq r$. Notice that for $i=0$, the above inequality reduces to the inequality $h_r+ \ldots+ h_d\geq 0$, which was obtained by Murai and Terai. In \cite{gpsy}, the authors asked whether the above-mentioned conditions are also sufficient for a sequence of integers to be the $h$-vector of a ($S_r$) simplicial complex. In fact, they proposed the following question. \begin{ques}[\cite{gpsy}, Question 2.6] \label{quest} Let $d$ and $r$ be integers with $d\geq r\geq2$ and let $\mathbf{h}=(h_0, h_1,\ldots,h_d)$ be the $h$-vector of a simplicial complex such that the following conditions hold: \vspace{0.2cm} \begin{itemize} \item[(1)] $(h_0,h_1,\ldots,h_r)$ is an $M$-vector, and \vspace{0.3cm} \item[(2)] ${i\choose i}h_r+{i+1\choose i}h_{r+1}+\cdots+{i+d-r\choose i}h_d$ is nonnegative for every $i$ with $0\leq i\leq r$. \end{itemize} \vspace{0.2cm} \noindent Does there exist a $(d-1)$-dimensional ($S_r$) simplicial complex $\Delta$ with $h(\Delta)=\mathbf{h}$? \end{ques} In this paper, we give a negative answer to this question, by presenting a class of infinitely many sequences which satisfy the assumptions of Question \ref{quest} for $r=2$, but are not the $h$-vector of any ($S_2$) simplicial complex. It is still interesting to know whether Question \ref{quest} has a positive answer in the case $r\geq 3$. Another result obtained by Murai and Terai \cite[Theorem 1.2]{mt} states that if $\mathbf{h}=(h_0,h_1,\ldots,h_d)$ is the $h$-vector of a ($S_r$) simplicial complex and $h_i=0$ for some $i\leq r$, then $h_k=0$ for all $k\geq i$. This is in fact a necessary condition for a sequence of integers to be the $h$-vector of a ($S_r$) simplicial complex.
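For concreteness, the conditions appearing in Question \ref{quest} are easy to test by machine. The following Python sketch (our addition; the pseudopower routine encodes the standard Macaulay characterization of $M$-vectors) checks condition (1) for a truncation $(h_0,\ldots,h_r)$ and evaluates the sums of condition (2), here for the vector studied below:
\begin{verbatim}
from math import comb

def pseudo_power(h, i):
    # Macaulay i-th pseudopower h^<i>: greedy i-binomial expansion
    # h = C(a_i,i) + C(a_{i-1},i-1) + ..., then bump each term.
    rest, out, k = h, 0, i
    while rest > 0:
        a = k
        while comb(a + 1, k) <= rest:
            a += 1
        rest -= comb(a, k)
        out += comb(a + 1, k + 1)
        k -= 1
    return out

def is_M_vector(h):
    # h_0 = 1, all h_i >= 0, and h_{i+1} <= h_i^<i> for i >= 1
    if not h or h[0] != 1 or any(x < 0 for x in h):
        return False
    return all(h[i + 1] <= pseudo_power(h[i], i) for i in range(1, len(h) - 1))

def serre_sums(h, r):
    # the sums C(i+j,i) h_{r+j}, j = 0..d-r, for each 0 <= i <= r
    d = len(h) - 1
    return [sum(comb(i + j, i) * h[r + j] for j in range(d - r + 1))
            for i in range(r + 1)]

d = 7
h = [1, 2] + [1] * (d - 2) + [-1]
print(is_M_vector(h[:3]))        # condition (1) with r = 2: True
print(serre_sums(h, 2))          # condition (2): all entries nonnegative
\end{verbatim}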
Our example shows that if we add this necessary condition to the assumptions of Question \ref{quest}, then the answer would still be negative. Let $\mathbf{h}=(h_0, h_1,\ldots,h_d)$ be a sequence of integers. One may ask whether there exists an ($S_2$) simplicial complex $\Delta$ with $h(\Delta)=\mathbf{h}$, provided that $\mathbf{h}$ satisfies the conditions (1) and (2) of Question \ref{quest} and moreover, $\mathbf{h}$ is the $h$-vector of a ``pure'' simplicial complex. In fact, our example shows that the answer to this question is also negative (see Lemma \ref{pure}). More explicitly, we prove the following result. \begin{thm} \label{main} For every integer $d\geq 5$, there exists a vector $\mathbf{h}=(h_0, h_1,\ldots,h_d)$ of nonzero integers which is the $h$-vector of a pure simplicial complex and moreover, \vspace{0.2cm} \begin{itemize} \item[(1)] $(h_0,h_1,h_2)$ is an $M$-vector, and \vspace{0.3cm} \item[(2)] ${i\choose i}h_2+{i+1\choose i}h_3+\cdots+{i+d-2\choose i}h_d$ is nonnegative for every $i$ with $0\leq i\leq 2$. \end{itemize} \vspace{0.2cm} \noindent But there is no $(d-1)$-dimensional ($S_2$) simplicial complex $\Delta$ with $h(\Delta)=\mathbf{h}$. \end{thm} \section{Proof of Theorem \ref{main}} \label{sec2} Let $d\geq 5$ be an integer and set $\mathbf{h}=(1,2, \underbrace{1, \ldots, 1,}_{(d-2)\text{ times}} -1)$. In Lemma \ref{pure}, we prove that $\mathbf{h}$ is the $h$-vector of a pure simplicial complex. Before stating this lemma, we recall that for a graded $S$-module $M=\oplus_{i\geq 0}M_i$, the {\it Hilbert series} of $M$ is defined to be$${\rm Hilb}_M(t)=\sum_{i\geq 0}({\rm dim}_{\mathbb{K}}M_i)t^i.$$It is well-known that for a $(d-1)$-dimensional simplicial complex $\Delta$, with $h(\Delta)=(h_0, h_1,\ldots,h_d)$, we have$${\rm Hilb}_{\mathbb{K}[\Delta]}(t)=\frac{h_0+h_1t+h_2t^2+ \ldots +h_dt^d}{(1-t)^d}.$$ \begin{lem} \label{pure} Assume that $d\geq 5$ is an integer. Let $\Delta$ be the simplicial complex over $[d+2]$ with facets$$\mathcal{F}(\Delta)=\Big\{[d+2]\setminus\{1,j\}: 2\leq j \leq d\Big\}\bigcup\Big\{[d+2]\setminus\{d+1,d+2\}\Big\}.$$Then $h(\Delta)=(1,2, \underbrace{1, \ldots, 1,}_{(d-2)\text{ times}} -1)$. \end{lem} \begin{proof} Set $n=d+2$. By \cite[Lemma 1.5.4]{hh}, we have$$I_{\Delta}=\Big(\bigcap_{2\leq j\leq d}(x_1, x_j)\Big)\cap (x_{d+1}, x_{d+2}).$$Set $L=\bigcap_{2\leq j\leq d}(x_1, x_j)=(x_1, x_2x_3\ldots x_d)$ and $K=(x_{d+1}, x_{d+2})$. Then $I_{\Delta}=L\cap K$. Consider the following exact sequence of graded $S$-modules: \[ \begin{array}{rl} 0\longrightarrow S/I_{\Delta}\longrightarrow S/L\oplus S/K\longrightarrow S/(L+K) \longrightarrow 0. \end{array} \] It follows that$${\rm Hilb}_{S/I_{\Delta}}(t)={\rm Hilb}_{S/L}(t)+{\rm Hilb}_{S/K}(t)-{\rm Hilb}_{S/(L+K)}(t).$$Notice that$${\rm Hilb}_{S/L}(t)={\rm Hilb}_{\mathbb{K}[x_2, \ldots, x_n]/(x_2x_3 \ldots x_d)}(t)=\frac{1-t^{d-1}}{(1-t)^{d+1}},$$where the last equality follows from the fact that $x_2x_3 \ldots x_d$ is a regular element of $\mathbb{K}[x_2, \ldots, x_n]$ of degree $d-1$. Similarly,$${\rm Hilb}_{S/K}(t)=\frac{1}{(1-t)^d}$$and$${\rm Hilb}_{S/(L+K)}(t)=\frac{1-t^{d-1}}{(1-t)^{d-1}}.$$A simple computation, using the above equalities, shows that $${\rm Hilb}_{S/I_{\Delta}}(t)=\frac{1+2t+t^2+t^3+ \ldots +t^{d-1}-t^d}{(1-t)^d}.$$Hence, $h(\Delta)=(1,2, \underbrace{1, \ldots, 1,}_{(d-2)\text{ times}} -1)$. \end{proof}
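The computation of Lemma \ref{pure} can also be verified by brute force, enumerating all faces of $\Delta$ and converting the resulting $f$-vector into the $h$-vector by the standard formula $h_k=\sum_{i=0}^{k}(-1)^{k-i}{d-i\choose k-i}f_{i-1}$. The following Python sketch (not part of the proof; the function name is ours) performs this check for small values of $d$:
\begin{verbatim}
from itertools import combinations
from math import comb

def h_vector(facets, d):
    # enumerate all faces of the complex generated by the facets
    faces = set()
    for F in facets:
        for r in range(len(F) + 1):
            faces.update(combinations(sorted(F), r))
    # f[i] = number of faces with i vertices; f[0] = 1 (empty face)
    f = [0] * (d + 1)
    for face in faces:
        f[len(face)] += 1
    # standard f-to-h conversion for a (d-1)-dimensional complex
    return [sum((-1) ** (k - i) * comb(d - i, k - i) * f[i]
                for i in range(k + 1)) for k in range(d + 1)]

for d in range(5, 9):
    V = set(range(1, d + 3))
    facets = [V - {1, j} for j in range(2, d + 1)] + [V - {d + 1, d + 2}]
    assert h_vector(facets, d) == [1, 2] + [1] * (d - 2) + [-1]
\end{verbatim}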
It can be easily seen that $(1, 2, 1)$ is an $M$-vector. Indeed, it is the $h$-vector of the simplicial complex over $[4]$ with facets $\{1, 2\}, \{2, 3\}, \{3, 4\}$ and $\{4, 1\}$, which is Cohen--Macaulay. On the other hand, we are assuming that $d\geq 5$ and thus $$h_2+ h_3+ \ldots +h_d=d-3\geq 0,$$ $$h_2+2h_3+ \ldots + (d-1)h_d=1+2+ \ldots + (d-2)-(d-1)=\frac{(d-4)(d-1)}{2}\geq 0$$ and $$h_2+ 3h_3+ \ldots + {d \choose 2}h_d=1+3+ \ldots + {d-1 \choose 2}-{d \choose 2}={d\choose 3}-{d\choose 2}\geq 0,$$where the last equality follows from \cite[Page 368, Theorem 4]{r}. Thus, $\mathbf{h}$ satisfies the assumptions of Theorem \ref{main}. We show in Proposition \ref{s2} that $\mathbf{h}$ is not the $h$-vector of any ($S_2$) simplicial complex. We first recall some definitions and basic facts. Let $\Delta$ be a $(d-1)$-dimensional simplicial complex with vertex set $[n]$. Assume that $f(\Delta)=(f_0, \ldots, f_{d-1})$ and $h(\Delta)=(h_0, \ldots, h_d)$ are the $f$-vector and $h$-vector of $\Delta$, respectively. It is well-known (see e.g. \cite[Corollary 5.1.9]{bh}) that \[ \begin{array}{rl} h_1=n-d \ \ \ \ \ {\rm and} \ \ \ \ \ h_0+h_1+ \ldots +h_d=f_{d-1} \end{array} \tag{$\ast$} \label{ast} \] A simplicial complex $\Delta$ is called a {\it cone} if it has a vertex which belongs to every facet of $\Delta$. The proof of the following lemma is simple and is omitted. \begin{lem} \label{cone} Let $\Delta$ be a $(d-1)$-dimensional cone, with $h(\Delta)=(h_0, \ldots, h_d)$. Then $h_d=0$. \end{lem} Let $G$ be a graph with vertex set $V(G)=\big\{v_1, \ldots, v_n\big\}$ and edge set $E(G)$. The {\it complementary graph} $\overline{G}$ is a graph with $V(\overline{G})=V(G)$ and $E(\overline{G})$ consists of those $2$-element subsets $\{v_i,v_j\}$ of $V(G)$ for which $\{v_i,v_j\}\notin E(G)$. A subset $C$ of $V(G)$ is called a {\it vertex cover} of the graph $G$ if every edge of $G$ is incident to at least one vertex of $C$. The {\it cover ideal} of $G$, denoted by $J(G)$, is the squarefree monomial ideal which is generated by the set$$\big\{\prod_{v_i\in C}x_i : C \ {\rm is \ a \ vertex \ cover \ of} \ G\big\}.$$By \cite{v}, we know that for every graph $G$,$$J(G)=\bigcap_{\{v_i, v_j\}\in E(G)}(x_i, x_j).$$A monomial ideal is said to be {\it unmixed} if all its associated primes have the same height. The above equality shows that a squarefree monomial ideal is unmixed of height $2$ if and only if it is the cover ideal of a graph. We are now ready to complete the proof of Theorem \ref{main}. \begin{prop} \label{s2} Let $d\geq 5$ be an integer. Then there is no ($S_2$) simplicial complex $\Delta$ with $h(\Delta)=(1,2, \underbrace{1, \ldots, 1,}_{(d-2)\text{ times}} -1)$. In particular, the answer to Question \ref{quest} is in general negative. \end{prop} \begin{proof} Assume by contradiction that there exists an ($S_2$) simplicial complex $\Delta$ with $h(\Delta)=(1,2, \underbrace{1, \ldots, 1,}_{(d-2)\text{ times}} -1)$. Thus, ${\rm dim}(\Delta)=d-1$ and it follows from Lemma \ref{cone} that $\Delta$ is not a cone. By \cite[Lemma 2.6]{mt}, we know that $\Delta$ is a pure simplicial complex. Therefore, the equalities (\ref{ast}) imply that $\Delta$ has $d+2$ vertices and $d$ facets. It then follows from \cite[Lemma 1.5.4]{hh} that $I_{\Delta}$ is an unmixed ideal of height $2$. Hence, $I_{\Delta}$ is the cover ideal of a graph, say $G$. Since $\Delta$ is not a cone, $G$ has no isolated vertex. The number of edges of $G$ is equal to the number of facets of $\Delta$, which is $d$. On the other hand, $G$ has $d+2$ vertices. Thus, $G$ is not a connected graph.
Let $H_1$ and $H_2$ be two connected components of $G$. Assume that $\{u, v\}$ and $\{z, t\}$ are edges of $H_1$ and $H_2$, respectively. Then $u, z, v, t$ is an induced $4$-cycle in $\overline{G}$. On the other hand, $S/J(G)=S/I_{\Delta}$ satisfies Serre's condition ($S_2$) and it follows from \cite[Corollary 3.7]{y} and \cite[Theorem 2.1]{eghp} (see also \cite[Theorem 5.8]{psty}) that $\overline{G}$ cannot have any induced $4$-cycle, which is a contradiction. Therefore, $(1,2, \underbrace{1, \ldots, 1,}_{(d-2)\text{ times}} -1)$ is not the $h$-vector of any ($S_2$) simplicial complex. \end{proof} \section*{Acknowledgment} The author thanks Naoki Terai and Siamak Yassemi for reading an earlier version of the paper and for their helpful comments. He also thanks the referee for a careful reading of the paper and for valuable comments. This work was supported by a grant from Iran National Science Foundation: INSF (No. 95820482).
\section{Introduction} The dynamics of social and economic systems are necessarily based on individual behaviors, by which single subjects express, either consciously or unconsciously, a particular strategy, which is heterogeneously distributed. The latter is often based not only on their own individual purposes, but also on those they attribute to other agents. However, the sheer complexity of such systems often makes it difficult to ascertain the impact of personal decisions on the resulting collective dynamics. In particular, interactions among individuals need not have an additive, linear character. As a consequence, the global impact of a given number of entities (``field entities'') over a single one (``test entity'') cannot be assumed to merely consist in the linear superposition of any single field entity action. This nonlinear feature represents a serious conceptual difficulty for the derivation, and subsequent analysis, of mathematical models for that type of systems. In the last few years, a radical philosophical change has been undertaken in social and economic disciplines. An interplay among Economics, Psychology, and Sociology has taken place, thanks to a new cognitive approach no longer grounded on the traditional assumption of rational socio-economic behavior. Starting from the concept of bounded rationality \cite{simon1959tdm}, the idea of Economics as a subject highly affected by individual (rational or irrational) behaviors, reactions, and interactions has begun to take hold. In this framework, the contribution of mathematical methods to a deeper understanding of the relationships between individual behaviors and collective outcomes may be fundamental. All of these concepts are expressed in the PhD dissertation \cite{ajmone2009npm}, to which the interested reader is referred also for additional pertinent bibliography. More generally, in fields ranging from Economics to Sociology and Ecology, the last decades have witnessed an increasing interest in the introduction of quantitative mathematical methods that could account for individual, not necessarily rational, behaviors. Terms such as game theory, bounded rationality, and evolutionary dynamics are often used in that context and clearly illustrate the continuous search for techniques able to provide mathematical models that can describe, and predict, living behaviors, see \cite{camerer2003bgt,camerer2003abe,simon1982mbr-2,simon1982mbr-1,simon1997mbr-3}, as also documented in the bibliography on evolutionary game theory cited in the following. As a result, a picture of social and biological sciences as evolutionary complex systems is unfolding \cite{arlotti2012cid,bellouquid2012mvt}. A key experimental feature of such systems is that interaction among heterogeneous individuals often produces unexpected outcomes, which were absent at the individual level, and are commonly termed emergent behaviors. The new point of view promoted the image of Economics as an evolving complex system, where interactions among heterogeneous individuals produce unpredictable emerging outcomes \cite{arthur1997eec,kirman2000lbl}. In this context, setting up a mathematical description able to capture the evolving features of socio-economic systems is a challenging, though difficult, task, which calls for a proper interaction between mathematics and social sciences. In this paper, a preliminary step in this direction is attempted. A mathematical framework is outlined, suitable to incorporate some of the main complexity features of socio-economic systems.
Out of it, specific mathematical models are derived, focusing in particular on the prediction of the so-called \emph{Black Swan}. The latter is defined to be a rare event, showing up as an irrational collective trend generated by possibly rational individual behaviors \cite{taleb2007bsi,taleb2010ffr}. To achieve our goal, we will use mathematical tools based on a development of the kinetic theory for active particles, see e.g., \cite{arlotti2002gkb,bellomo2010mhl,bellomo2011mtl}, suitable to include nonlinear interactions and learning phenomena. The hallmarks of the approach can be summarized as follows: the system is partitioned into \emph{functional subsystems}, whose entities, called \emph{active particles}, are characterized by an individual state termed \emph{activity}; the state of each functional subsystem is defined by a probability distribution over the activity variable; interactions among active particles, generally nonlocal and nonlinearly additive, are treated as \emph{stochastic games}, meaning that the pre-interaction states of the particles and the post-interaction ones can be known only in probability; finally, the evolution of the probability distribution is obtained by a balance of particles within elementary volumes of the space of microscopic states, the inflow and outflow of particles being related to the aforementioned interactions. A general theory for linearly additive interactions, along with various applications, is reported in \cite{bellomo2008mcl}, whereas a first extension to non-additive and nonlocal interactions, modeled by methods of stochastic game theory, is included in \cite{bellomo2012mts}. This mathematical approach has been applied to various fields of Life Sciences, such as social systems \cite{bertotti2008cla} and opinion formation \cite{bertotti2008dgk}, and has been revisited in \cite{ajmone2009msc,ajmone2008mtc} with reference to Behavioral Sciences including Politics and Economics. Moreover, it has also been applied in fields different from the above-mentioned ones, for instance the propagation of epidemics under virus mutations \cite{delillo2009mev,delitala2011mmk} and the theory of evolution \cite{bellomo2011mtl}. In all of these applications, the heterogeneous behavior of individuals and random mutations are important features characterizing the systems under consideration. The conceptual link between methods of statistical mechanics and game theory was also introduced by Helbing \cite{helbing2010qsd}; on the other hand, methods of mean-field kinetic theory have also been used to model socio-economic systems, see e.g., \cite{during2009bfp,toscani2006kmo}. The specialized literature offers a great variety of different approaches, such as population dynamics with structure \cite{webb1985tna} or super-macroscopic dynamical systems \cite{nuno2011mmc}. In all cases, the challenging goal to be met consists in capturing the relevant features of living complex systems. After the above general overview, the plan of the paper can now be illustrated in more detail. The contents are distributed into four more sections. Section~\ref{sect:compl.asp} analyzes the complexity aspects of socio-economic systems; in particular, five key features are selected to be retained in the modeling approach. Section~\ref{sect:compl.red} introduces the mathematical structures of the kinetic theory for active particles, which offer the basis for the derivation of specific models.
Section~\ref{sect:case.studies} opens with two illustrative applications focused on social conflicts: the first one shows that a social competition, if not properly controlled, may induce an unbalanced distribution of wealth with a clustering of the population in two extreme classes (a large class of poor people and a small oligarchic class of wealthy ones); the second one exemplifies, in connection with the aforesaid dynamics, how such a clustering can lead to a growing opposition against a government. Subsequently, the investigation moves on to the identification of premonitory signals, which can provide preliminary insights into the emergence of a Black Swan viewed as a large deviation from some heuristically expected trend. Section~\ref{sect:discussion} finally proposes a critical analysis and focuses on research perspectives. \section{Complexity aspects of socio-economic systems} \label{sect:compl.asp} In this section the complexity features of socio-economic systems are analyzed, with the aim of extracting some hallmarks to be included in mathematical models. Socio-economic systems can be described as ensembles of several living entities, viz. active particles, whose individual behaviors simultaneously affect and are affected by the behaviors of a certain number of other particles. These actions depend, in most cases, on the number of interacting particles, their localization, and their state. Generally, individual actions are rational, focused on a well-defined goal, and aimed at individual benefit. On the other hand, some particular situations may give rise to behaviors in contrast with that primary goal, like e.g., in case of panic. A further aspect to be considered is that the system is generally composed of parts, which are interconnected and interdependent. Namely, every system is formed by nested subsystems, so that interactions occur both within and among subsystems. As a matter of fact, all living systems exhibit some common features, whereas others can vary depending on the type of system under consideration. We will assume that all subsystems of a given system are characterized by the same features, possibly expressed with larger or smaller intensity according to their specificity. Bearing all of the above in mind, in the following some specific aspects of socio-economic systems, understood as living complex systems, are identified and commented on. The selection is limited to five features, in order to avoid an over-proliferation of concepts, considering that mathematical equations cannot include the whole variety of complexity issues. Thus, the list below does not claim to be exhaustive; rather, it is generated by the authors' personal experience and bias. \begin{enumerate} \item \textbf{Emerging collective behaviors}. Starting from basic individual choices, interaction dynamics produce the spontaneous emergence of collective behaviors, which, in most cases, are completely different or apparently not contained in those of the single active particles. Ultimately, \emph{the whole can be much more than the sum of its parts}. This is possible because active particles typically operate out of equilibrium. \item \textbf{Strategy, heterogeneity, and stochastic games}. Active particles have the ability to develop specific strategies, which depend also on those expressed by the other particles. Normally, such strategies are generated by rational principles but are heterogeneously distributed among the particles. Furthermore, irrational behaviors cannot be excluded.
Accordingly, the representation of the system needs random variables, and interactions have to be modeled in terms of stochastic games because it is not possible to identify an average homogeneous rational attitude. \item \textbf{Nonlinear and nonlocal interactions}. Interactions among active particles are generally nonlinear and nonlocal, because they depend on the global distribution of some close and/or far neighbors. The latter have to be identified in terms of a suitable distance among the microscopic states of the particles. Active particles play a game at each interaction: the outcome, which depends nonlinearly on the states of all interacting particles, modifies their state in a stochastic manner. \item \textbf{Learning and evolution}. Individuals in socio-economic systems are able to learn from their experience. This implies that the expression of the strategy evolves in time, and consequently that interaction dynamics undergo modifications. In some cases, special situations (e.g., onset of panic) can even induce quick modifications. Moreover, adaptation to environmental conditions and the search for one's own benefit may induce mutations and evolution. \item \textbf{Large number of components}. Living systems are often constituted by a very large number of mutually diverse components, so that a detailed description focused on single active particles would actually be infeasible. Therefore, a complexity reduction, by means of suitable mathematical strategies, is necessary for handling them at a practical level. \end{enumerate} \section{Complexity reduction and mathematical tools} \label{sect:compl.red} This section provides the conceptual lines leading to the methods of the kinetic theory for active particles, which has been selected as the mathematical framework for deriving specific models here. The presentation is followed by a critical analysis aimed at checking the consistency of the mathematical approach with the issues discussed in Section~\ref{sect:compl.asp}, as well as its efficiency in reducing the complexity of the real system. Modeling is concerned with systems of interacting individuals belonging to different groups. Their number is supposed to be constant in time, namely birth and death processes or inlets from an outer environment are not taken into account. It is worth stressing that the approach used in the various papers cited in the Introduction was based on linearly additive interactions with parameters constant in time. Here, on the contrary, both nonlinearly additive interactions and time-evolving parameters, due to the conditioning by the collective state of the system, are considered. We recall that interactions are said to be \emph{linearly additive} when the outcome of each of them is not influenced by the presence of particles other than the interacting ones, so that a superposition principle holds true: the action on a particle is the sum of all actions applied individually by the other particles. For instance, this is the case of mean field theories. Otherwise, interactions are said to be \emph{nonlinearly additive}. We point out that the progress from linear to nonlinear interactions is crucial in the attempt to insert into mathematical equations individual behaviors that may change quickly, possibly under the influence of the outer environment. Nonlinearities can be generated in several ways.
In particular, the particles which play a role in the interactions may be selected by means of rational actions, whereas the interaction output can be obtained from individual strategies and the interpretation of actions exerted by other particles. These concepts are well understood in the interpretation of swarming phenomena from the point of view of both Physics \cite{ballerini2008ira} and Mathematics \cite{bellomo2012mts,bellouquid2012mvt,cristiani2011eai}. \subsection{Active particles, heterogeneity, functional subsystems, and representation issues} As already stated, the living entities of the system at hand will be regarded as particles able to actively express a certain social and/or economic strategy based on their socio-economic state. Such a strategy will be called \emph{activity}. In general, the system might be constituted by different types of active particles, each of them featuring a different strategy. However, aiming at a (necessary) complexity reduction, the system can be decomposed into \emph{functional subsystems} constituted by active particles that individually express the same strategy. In other words, whenever the strategy of the active particles of a system is heterogeneous, the modeling approach should identify an appropriate decomposition into functional subsystems, within each of which the strategy is instead homogeneous across the member active particles. In each functional subsystem, the microscopic activity, denoted by $u$, can be taken as a scalar variable belonging to a domain $D_u\subseteq\mathbb{R}$. In some cases, it may be convenient to assume that $D_u$ coincides with the whole $\mathbb{R}$. Let us consider a decomposition of the original system into $m\geq 1$ functional subsystems labeled by an index $p=1,\,\dots,\,m$. According to the kinetic theory for active particles, each of them is described by a time-evolving distribution function over the microscopic activity $u$: $$ f^p=f^p(t,\,u):[0,\,T_\textup{max}]\times D_u\to\mathbb{R}_+, $$ $T_\textup{max}>0$ being a certain final time (possibly $+\infty$), such that the quantity $f^p(t,\,u)\,du$ is the (infinitesimal) number of active particles of the $p$-th subsystem having at time $t$ an activity comprised in the (infinitesimal) interval $[u,\,u+du]$. Under suitable integrability conditions, the number of active particles in the $p$-th functional subsystem at time $t$ is $$ N^p(t):=\int\limits_{D_u}f^p(t,\,u)\,du. $$ More generally, it is possible to define (weighted) moments of any order $l$ of the distribution functions\footnote{Notice that, in this formula, $l$ is a true exponent whereas $p$ is a superscript.}: $$ \mathbb{E}^p_l(t):=\int\limits_{D_u}u^lf^p(t,\,u)w(u)\,du, $$ where $w:D_u\to\mathbb{R}_+$ is an appropriate weight function with unit integral on $D_u$. $\mathbb{E}^p_l(t)$ can be either finite or infinite, according to the integrability properties of $f^p(t,\,\cdot)$. If the number of particles within each subsystem is constant in time, so that no particle transition occurs among subsystems, then each $f^p$ can be normalized with respect to $N^p(0)$ and understood as a probability density. Alternatively, it is possible to normalize with respect to the total number of particles of the system $\sum_{p=1}^{m}N^p(0)$, which entails $$ \sum_{p=1}^{m}\int\limits_{D_u}f^p(t,\,u)\,du=1, \quad \forall\,t\in[0,\,T_\textup{max}].
$$ It has been shown in \cite{bertotti2008dgk}, see also the references cited therein, that for various applications it is convenient to assume that the activity is a discrete variable, especially when \emph{activity classes} can be more readily identified in the real system. A lattice $I_u=\{u_1,\,\dots,\,u_i,\,\dots,\,u_n\}$ is thus introduced in the domain $D_u$, admitting that $u\in I_u$. The representation of the $p$-th functional subsystem is now provided by a set of $n\geq 1$ distribution functions $$ f^p_i=f^p_i(t):[0,\,T_\textup{max}]\to\mathbb{R}_+, \quad i=1,\,\dots,\,n $$ such that $f^p_i(t)$ is the (possibly normalized) number of active particles in the $i$-th activity class of the $p$-th subsystem at time $t$. Formally, we have $f^p_i(t)=f^p(t,\,u_i)$ or, in distributional sense, $$ f^p(t,\,u)=\sum_{i=1}^{n}f^p_i(t)\delta_{u_i}(u), $$ where $\delta_{u_i}$ is the Dirac distribution centered at $u_i$. The formulas given above for the moments of the distribution remain valid, provided integrals on $D_u$ are correctly understood as discrete sums over $i$: \begin{equation} N^p(t)=\sum_{i=1}^{n}f^p_i(t), \qquad \mathbb{E}^p_l(t)=\sum_{i=1}^{n}u_i^lf^p_i(t)w(u_i). \label{eq:moments.discr} \end{equation} \subsection{Interactions, stochastic games, and collective dynamics} \label{sect:int.stochgam.colldyn} In general, interactions involve particles of both the same and different functional subsystems. In some cases, interactions between active particles and the outer environment have also to be taken into account. The outer environment is typically assumed to have a known state, which is not modified by interactions with the system at hand. In particular, in this paper we consider the simple case of systems which do not interact with the outer environment, besides possibly an action applied by the latter that modifies some interaction rules. The description of the interactions can essentially be of the following two types: \emph{deterministic} if the output is univocally identified given the states of the interacting entities (generally related to standard rational behavior of the particles, when large deviations are not expected); \emph{stochastic} if the output can be known only in probability, due for instance to a variability in the reactions of the particles to similar conditions. In the latter case, interactions are understood as \emph{stochastic games}. In the present context, our interest is mainly in stochastic games because of possible irrational behaviors, as outlined in Section~\ref{sect:compl.asp}. When describing interactions among active particles, it is useful to distinguish three main actors named test, candidate, and field particles. This terminology is indeed standard in the kinetic theory for active particles. \begin{itemize} \item The \textbf{test} particle, with activity $u$, is a generic representative entity of the functional subsystem under consideration. Studying interactions within and among subsystems means studying how the test particle can lose its state or other particles can gain it. \item \textbf{Candidate} particles, with activity $u_\ast$, are the particles which can gain the test state $u$ in consequence of the interactions. \item \textbf{Field} particles, with activity $u^\ast$, are the particles whose presence triggers the interactions of the candidate particles. \end{itemize} The modeling of the interactions is based on the derivation of two terms: the \emph{interaction rate} and the \emph{transition probabilities}.
Let us consider, separately, some preliminary guidelines for their construction. \paragraph*{Interaction rate} This term, denoted by $\eta^{pq}$, models the frequency of the interactions between candidate and field particles belonging to the $p$-th and $q$-th functional subsystems, respectively. \paragraph*{Transition probabilities} The general rule to be followed in modeling stochastic games is that candidate particles can acquire, in probability, the state of the test particle after an interaction with field particles, while the test particle can lose, in probability, its own. Such dynamics are described by the transition probabilities $\mathcal{B}^{pq}$, which express the probability that a candidate particle of the $p$-th subsystem ends up in the state of the test particle (of the same subsystem) after interacting with a field particle of the $q$-th subsystem. In case of linear interactions, $\mathcal{B}^{pq}$ is conditioned only by the states of the interacting particles for each pair of functional subsystems: $\mathcal{B}^{pq}=\mathcal{B}^{pq}(u_\ast\to u\vert u_\ast,\,u^\ast)$. In addition, it satisfies the following condition: \begin{equation} \int\limits_{D_u}\mathcal{B}^{pq}(u_\ast\to u\vert u_\ast,\,u^\ast)\,du=1, \quad \forall\,u_\ast,\,u^\ast\in D_u, \quad \forall\,p,\,q=1,\,\dots,\,m, \label{eq:sum.prob} \end{equation} which, in case of discrete activity, becomes \begin{equation} \sum_{i=1}^{n}\mathcal{B}^{pq}_{hk}(i)=1, \quad \forall\,h,\,k=1,\,\dots,\,n, \quad \forall\,p,\,q=1,\,\dots,\,m, \label{eq:sum.prob.discr} \end{equation} where we have denoted $\mathcal{B}^{pq}_{hk}(i):=\mathcal{B}^{pq}(u_h\to u_i\vert u_h,\,u_k)$. Nonlinear interactions imply, instead, that particles are not simply subject to the superposition of binary actions but are also affected by the global current state of the system. Consequently, $\mathcal{B}^{pq}$ may be conditioned by the moments of the distribution functions. Denoting by $\mathcal{E}^p_L=\{\mathbb{E}^p_l\}_{l=1}^{L}$ the set of all moments of the distribution function $f^p$ up to some order $L\geq 0$, the formal expression of the transition probabilities is now $\mathcal{B}^{pq}=\mathcal{B}^{pq}(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L)$, along with a condition analogous to that expressed by Eq.~\eqref{eq:sum.prob}. In case of discrete activity, rather than introducing a new notation we simply redefine $\mathcal{B}^{pq}_{hk}(i):=\mathcal{B}^{pq}(u_h\to u_i\vert u_h,\,u_k;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L)$, so as to avoid an over-proliferation of symbols. \medskip The above models of interactions lead straightforwardly to the derivation of a system of evolution equations for the set of distribution functions $\{f^p\}_{p=1}^m$, obtained from a balance of incoming and outgoing fluxes in the elementary volume $[u,\,u+du]$ of the space of microscopic states.
The resulting mathematical structure, to be used as a paradigm for the derivation of specific models, is as follows: \begin{align} & \frac{\partial f^p}{\partial t}(t,\,u)= \sum_{q=1}^{m}\iint\limits_{D_u^2}\eta^{pq}(t,\,u_\ast,\,u^\ast) \mathcal{B}^{pq}(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L) f^p(t,\,u_\ast)f^q(t,\,u^\ast)\,du_\ast\,du^\ast \nonumber \\ & \qquad - f^p(t,\,u)\sum_{q=1}^{m}\int\limits_{D_u}\eta^{pq}(t,\,u,\,u^\ast)f^q(t,\,u^\ast)\,du^\ast, \qquad p=1,\,\dots,\,m, \label{eq:evol.cont} \end{align} which, in case of discrete activity, formally modifies as \begin{equation} \frac{df^p_i}{dt}(t) = \sum_{q=1}^{m}\sum_{k=1}^{n}\sum_{h=1}^{n}\eta^{pq}_{hk}(t) \mathcal{B}^{pq}_{hk}(i)f^p_h(t)f^q_k(t)-f^p_i(t)\sum_{q=1}^{m}\sum_{k=1}^{n}\eta^{pq}_{ik}(t)f^q_k(t) \label{eq:evol.disc} \end{equation} for $i=1,\,\dots,\,n$ and $p=1,\,\dots,\,m$. \subsection{Interactions with transitions across functional subsystems} \label{sect:transitions} The mathematical framework presented in the preceding section does not account for changes of functional subsystem by the active particles. However, transitions across subsystems may be relevant in modeling concomitant social and economic dynamics, particularly if the various subsystems can be related to different aspects of the microscopic state of the active particles. To be more specific, and to anticipate the application that we will be concerned with in the next sections, consider the case of a vector activity variable $\mathbf{u}=(u,\,v)\in D_u\times D_v\subseteq\mathbb{R}^2$, with $u$ representing the economic state of the active particles of a certain country and $v$ their level of support/opposition to the government policy. In order to reduce the complexity of the system, we will assume that the component $v$ of the microscopic state is discrete: $v\in I_v=\{v_1,\,\dots,\,v_p,\,\dots,\,v_m\}\subset D_v$, and we will use the lattice $I_v$ as a criterion for identifying the functional subsystems. In practice, each subsystem gathers individuals expressing a common opinion on the government's doings. It is plain that, in order to obtain an accurate picture of the interconnected socio-economic dynamics, besides economic interactions within and among subsystems, transitions of active particles across the latter have also to be considered. To this end, the mathematical structures previously derived need to be duly generalized. Specifically, the transition probabilities now read $$ \mathcal{B}^{pq}(r)=\mathcal{B}^{pq}(r)(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L,\,\mathcal{E}^r_L), \quad u,\,u_\ast,\,u^\ast\in D_u, \quad p,\,q,\,r=1,\,\dots,\,m $$ expressing the probability that a candidate particle of the $p$-th subsystem with activity $u_\ast$ ends up in the $r$-th subsystem with activity $u$ after an interaction with a field particle of the $q$-th subsystem with activity $u^\ast$. Notice that, in case of nonlinearly additive interactions, these probabilities are generally conditioned also by the moments of the distribution functions of the output subsystem. The new transition probabilities satisfy the normalization condition: $$ \sum_{r=1}^{m}\int\limits_{D_u}\mathcal{B}^{pq}(r)(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L,\,\mathcal{E}^r_L)\,du=1, \quad \forall\,u_\ast,\,u^\ast\in D_u,\,p,\,q=1,\,\dots,\,m.
$$ Following the same guidelines that led to the derivation of the mathematical structures of Section~\ref{sect:int.stochgam.colldyn}, we obtain the new equations with transitions across subsystems as \begin{align} \frac{\partial f^r}{\partial t}(t,\,u)&=\sum_{p=1}^{m}\sum_{q=1}^{m}\iint\limits_{D_u^2}\eta^{pq}(t,\,u_\ast,\,u^\ast) \mathcal{B}^{pq}(r)(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L,\,\mathcal{E}^r_L) \nonumber \\ & \phantom{=} \qquad\qquad\quad\times f^p(t,\,u_\ast)f^q(t,\,u^\ast)\,du_\ast\,du^\ast \nonumber \\ & \phantom{=} -f^r(t,\,u)\sum_{q=1}^{m}\int\limits_{D_u}\eta^{rq}(t,\,u,\,u^\ast)f^q(t,\,u^\ast)\,du^\ast, \qquad r=1,\,\dots,\,m. \label{eq:evol.cont.2} \end{align} It is worth noticing that by putting $$ \mathcal{B}^{pq}(r)(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L,\,\mathcal{E}^r_L)= \mathcal{B}^{pq}(u_\ast\to u\vert u_\ast,\,u^\ast;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L)\delta_{pr}, $$ where $\delta_{pr}=1$ if $p=r$, $\delta_{pr}=0$ otherwise, one recovers from Eq.~\eqref{eq:evol.cont.2} the particular case of interactions without transitions across subsystems described by Eq.~\eqref{eq:evol.cont}. Models relying on Eq.~\eqref{eq:evol.cont.2} are \emph{hybrid}, because the economic state $u$ is treated as a continuous variable whereas the decomposition into functional subsystems, linked to socio-political beliefs, is of a discrete nature. Correspondingly, the space of microscopic states is $D_u\times I_v$. If also the variable $u$ is discrete within each subsystem then the space of microscopic states is the full lattice $I_u\times I_v$ and Eq.~\eqref{eq:evol.cont.2} reads \begin{align} \frac{df^r_i}{dt}(t) &= \sum_{p=1}^{m}\sum_{q=1}^{m}\sum_{k=1}^{n}\sum_{h=1}^{n}\eta^{pq}_{hk}(t) \mathcal{B}^{pq}_{hk}(i,\,r)f^p_h(t)f^q_k(t) \nonumber \\ & \phantom{=} -f^r_i(t)\sum_{q=1}^{m}\sum_{k=1}^{n}\eta^{rq}_{ik}(t)f^q_k(t), \quad i=1,\,\dots,\,n, \quad r=1,\,\dots,\,m, \label{eq:evol.disc.2} \end{align} where $\mathcal{B}^{pq}_{hk}(i,\,r):=\mathcal{B}^{pq}(r)(u_h\to u_i\vert u_h,\,u_k;\,\mathcal{E}^p_L,\,\mathcal{E}^q_L,\,\mathcal{E}^r_L)$ fulfills $$ \sum_{r=1}^{m}\sum_{i=1}^{n}\mathcal{B}^{pq}_{hk}(i,\,r)=1, \quad \forall\,h,\,k=1,\,\dots,\,n, \quad \forall\,p,\,q=1,\,\dots,\,m. $$ Equation~\eqref{eq:evol.disc.2} is a generalization of Eq.~\eqref{eq:evol.disc}, which is recovered as a particular case by letting $\mathcal{B}^{pq}_{hk}(i,\,r)=\mathcal{B}^{pq}_{hk}(i)\delta_{pr}$. \section{On the interplay between socio-economic dynamics and political conflicts} \label{sect:case.studies} The theory developed in the previous sections provides a background for tackling some illustrative applications concerned with social competition problems. Particularly, our main interest here lies in phenomena such as unbalanced wealth distributions possibly leading to popular rebellion against governments, which can be classified as Black Swans. The envisaged scenario shares some analogies with the events recently observed in North African countries, though in a simplified context. The mathematical models are indeed going to be minimal exploratory ones, with a small number of functional subsystems and parameters for describing interactions at the microscopic scale. We focus on closed systems, such as a country with no interactions with other countries featuring similar political and/or religious organizations.
A natural goal to pursue is then understanding which kind of interactions among different socio-economic classes of the same country can produce the aforesaid Black Swans. It is worth mentioning that, when interactions with other countries are considered, the investigation can also address propagation by a domino effect. The contents of the following subsections organize the previous ideas through three steps. The first one refers to welfare dynamics in terms of cooperation and competition among economic classes. The second one focuses instead on dynamics of support and opposition to a certain regime triggered by the welfare distribution. Finally, the third one proposes a preliminary approach to the identification of premonitory signals possibly implying the onset of a Black Swan, here understood as an exceptional growth of opposition to the regime fostered by the synergy with socio-economic dynamics. In order to model the above-mentioned cooperation/competition interactions, we adopt the following qualitative paradigm of consensus/dissensus dynamics: \begin{itemize} \item \emph{Consensus} -- The candidate particle sees its state either increased, by profiting from a field particle with a higher state, or decreased, by pandering to a field particle with a lower state. After mutual interaction, the states of the particles become closer than before the interaction. \item \emph{Dissensus} -- The candidate particle sees its state either further decreased, by facing a field particle with a higher state, or further increased, by facing a field particle with a lower state. After mutual interaction, the states of the particles become farther apart than before the interaction. \end{itemize} Once formalized at a quantitative level, this paradigm can act as a basis for constructing the transition probabilities introduced in Section~\ref{sect:compl.red}. \begin{remark} Although the modeling of the transition probabilities relates to the microscopic interactions among active particles, it is worth mentioning that recent contributions to game theory, especially the approach to evolutionary games presented in \cite{gintis2009gte,helbing2010qsd,nowak2006ede,nowak2004edb,santos2006eds,santos2012edc}, address interactions at the macroscopic scale. In such a context, learning abilities and evolution are essential features of the modeling strategy, while changes in the external environment can induce modifications of rational behaviors up to irrational ones. \end{remark} \subsection{Modeling socio-economic competition} \label{sect:welfare} In this section we consider the modeling of socio-economic interactions based on the previously discussed consensus/dissensus dynamics. This problem was first addressed in \cite{bertotti2008cla} for a large community of individuals divided into different social classes. The model proposed by those authors introduces a critical distance, which triggers either cooperation or competition among the classes. In more detail, if the actual distance between the interacting classes is lower than the critical one then a competition takes place, which causes a further enrichment of the wealthier class and a further impoverishment of the poorer one. Conversely, if the actual distance is greater than the critical one then the social organization forces cooperation, namely the richer class has to contribute to the wealth of the poorer one.
\begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{coop-comp_dyn} \caption{Dynamics of competition (top) and cooperation (bottom) between pairs of candidate (index $h$) and field (index $k$) active particles. The critical distance $\gamma$, which triggers either behavior depending on the actual distance between the interacting classes, may evolve in time according to the global evolution of the system.} \label{fig:coop-comp_dyn} \end{figure} In the above-cited paper, linearly additive interactions are used along with a constant critical distance. Such an approach is here revisited by introducing nonlinearly additive interactions and a critical distance which evolves in time depending on the global wealth distribution. In more detail, the characteristics of the present framework are summarized as follows. \begin{itemize} \item \emph{Functional subsystems}. A single functional subsystem ($m=1$) is considered, constituted by the population of a country or of a regional area. For the sake of convenience, in this case we drop any superscript referring to functional subsystems ($p,\,q,\,\dots$). \item \emph{Activity}. The activity variable $u$ identifies the wealth status of the active particles. \item \emph{Encounter rate}. Two different rates of interactions are considered, corresponding to competitive and cooperative interactions, respectively. \item \emph{Strategy leading to the transition probabilities}. When interacting with other particles, each active particle plays a game with stochastic output. If the difference of wealth class between the interacting particles is lower than a critical distance $\gamma$ then the particles compete in such a way that those with higher wealth increase their state against those with lower wealth. Conversely, if the difference of wealth class is higher than $\gamma$ then the opposite occurs (see Fig.~\ref{fig:coop-comp_dyn}). The critical distance evolves in time according to the global wealth distribution over wealthy and poor particles. It may be influenced, at least partially, by the social policy of the government, to be regarded in the present application as an external action. \end{itemize} This modeling approach can be developed for both a continuous and a discrete activity variable. The specific model proposed here is derived assuming that the activity is a discrete variable, which, as observed in \cite{bertotti2008elc}, allows one to identify the microscopic states of the population by ranges, namely by a finite number $n$ of classes $u_1,\,\dots,\,u_n$. This is not only practical from the technical point of view, but also more realistic for the description, in mathematical terms, of the real-world system at hand. The reference mathematical structure is therefore Eq.~\eqref{eq:evol.disc}, which we rewrite adapting it to the present context: \begin{equation} \frac{df_i}{dt}(t)=\sum_{k=1}^{n}\sum_{h=1}^{n}\eta_{hk}(t)\mathcal{B}_{hk}(i)f_h(t)f_k(t) -f_i(t)\sum_{k=1}^{n}\eta_{ik}(t)f_k(t), \quad i=1,\,\dots,\,n. \label{eq:evol.disc.onepop} \end{equation} Among the possible choices, we select a uniformly spaced wealth grid in the interval $D_u=[-1,\,1]$ with odd $n$: \begin{equation} \begin{array}{c} I_u=\{u_1=-1,\,\dots,\,u_{\frac{n+1}{2}}=0,\,\dots,\,u_n=1\}, \\[3mm] u_i=\dfrac{2}{n-1}i-\dfrac{n+1}{n-1}, \quad i=1,\,\dots,\,n, \end{array} \label{eq:wealth.grid} \end{equation} agreeing that $u_i<0$ identifies a poor class whereas $u_i>0$ a wealthy one.
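Before specifying the interaction terms, it is worth observing that Eq.~\eqref{eq:evol.disc.onepop} is simply a closed system of $n$ ordinary differential equations, which can be integrated by any standard scheme once $\eta_{hk}$ and $\mathcal{B}_{hk}(i)$ are prescribed. A minimal forward-Euler sketch in Python (our own illustration, not the code used for the simulations below; the function name and the callable \texttt{tables} are ours) could read:
\begin{verbatim}
import numpy as np

def evolve(f0, tables, dt=1e-3, steps=100_000):
    """Forward-Euler integration of Eq. (evol.disc.onepop).

    f0     : initial distribution over the n wealth classes;
    tables : callable returning the pair (eta, B) for the current f,
             eta with shape (n, n) and B with shape (n, n, n), so that
             state-dependent (nonlinear) interactions are allowed."""
    f = np.asarray(f0, dtype=float).copy()
    for _ in range(steps):
        eta, B = tables(f)                               # recomputed if gamma = gamma(S)
        gain = np.einsum('hk,hki,h,k->i', eta, B, f, f)  # incoming flux into class i
        loss = f * (eta @ f)                             # outgoing flux from class i
        f += dt * (gain - loss)
    return f
\end{verbatim}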
We next assume that the \textbf{encounter rate} $\eta_{hk}\geq 0$ is piecewise constant over the wealth classes: \begin{equation} \eta_{hk}= \begin{cases} \eta_0 & \text{if\ } \abs{k-h}\leq\gamma \ \text{(competition)}, \\ \mu\eta_0 & \text{if\ } \abs{k-h}>\gamma \ \text{(cooperation)}, \end{cases} \label{eq:enc.rate} \end{equation} where $\eta_0>0$ is a constant to be hidden in the time scale and $0<\mu\leq 1$. The \textbf{transition probabilities} $\mathcal{B}_{hk}(i)\in [0,\,1]$ are required to satisfy condition~\eqref{eq:sum.prob.discr}, which implies the conservation in time of the total number of active particles: $$ N(t)=\sum_{i=1}^{n}f_i(t)=\text{constant}, \quad\forall\,t\geq 0, $$ plus an additional condition ensuring the conservation of the average wealth status of the population: \begin{equation} \sum_{i=1}^{n}u_if_i(t)=\text{constant}, \quad\forall\,t\geq 0. \label{eq:const.wealth} \end{equation} This means that the interaction dynamics cause globally neither production nor loss of wealth, but simply its redistribution among the classes. We will denote by $U_0$ the average wealth status as fixed at the initial time: $$ U_0:=\sum_{i=1}^{n}u_if_i(0). $$ By direct computation on Eq.~\eqref{eq:evol.disc.onepop}, it turns out that sufficient conditions for the fulfillment of \eqref{eq:const.wealth} are: \begin{itemize} \item symmetric encounter rate, i.e., $\eta_{hk}=\eta_{kh}$, $\forall\,h,\,k=1,\,\dots,\,n$; \item transition probabilities such that \begin{equation} \sum_{i=1}^{n}u_i\mathcal{B}_{hk}(i)=u_h+\sigma_{hk}, \qquad \forall\,h,\,k=1,\,\dots,\,n, \label{eq:trans.prob.cons.U0} \end{equation} where $\sigma_{hk}$ is an antisymmetric tensor, i.e., $\sigma_{hk}=-\sigma_{kh}$, $\forall\,h,\,k=1,\,\dots,\,n$. \end{itemize} Notice that the encounter rate \eqref{eq:enc.rate} is indeed symmetric. In order to explain condition~\eqref{eq:trans.prob.cons.U0}, let us consider preliminarily the particular case $\sigma_{hk}=0$ for all $h,\,k$. Then \eqref{eq:trans.prob.cons.U0} reduces to $\sum_{i=1}^{n}u_i\mathcal{B}_{hk}(i)=u_h$, which says that the expected wealth class of a candidate particle after an interaction coincides with its class before the interaction. Namely, interactions do not cause, on average, either enrichment or impoverishment, pretty much like a fair game. In the general case, Eq.~\eqref{eq:trans.prob.cons.U0} allows for fluctuations of the expected post-interaction wealth class around the pre-interaction one, provided that they globally balance: $\sum_{h,k=1}^{n}\sigma_{hk}=0$.
A possible set of transition probabilities describing cooperation/competition dynamics according to the distance between the interacting classes is, with minor modifications, that proposed in \cite{bertotti2008cla}: \begin{eqnarray} && h=k \begin{cases} \mathcal{B}_{hh}(h)=1 \\ \mathcal{B}_{hh}(i)=0\ \forall\,i\ne h \end{cases} \nonumber \\ && h\ne k \begin{cases} \begin{minipage}[c]{2.2cm} $\abs{k-h}\leq\gamma$ \\ (competition) \end{minipage} \begin{cases} h=1,\,n \begin{cases} \mathcal{B}_{hk}(h)=1 \\ \mathcal{B}_{hk}(i)=0\ \forall\,i\ne h \end{cases} \\ h\ne 1,\,n \begin{cases} h<k \begin{cases} k\ne n \begin{cases} \mathcal{B}_{hk}(h-1)=\alpha_{hk} \\ \mathcal{B}_{hk}(h)=1-\alpha_{hk} \\ \mathcal{B}_{hk}(i)=0\ \forall\,i\ne h-1,\,h \end{cases} \\ k=n \begin{cases} \mathcal{B}_{hn}(h)=1 \\ \mathcal{B}_{hn}(i)=0\ \forall\,i\ne h \end{cases} \end{cases} \\ h>k \begin{cases} k\ne 1 \begin{cases} \mathcal{B}_{hk}(h)=1-\alpha_{hk} \\ \mathcal{B}_{hk}(h+1)=\alpha_{hk} \\ \mathcal{B}_{hk}(i)=0\ \forall\,i\ne h,\,h+1 \end{cases} \\ k=1 \begin{cases} \mathcal{B}_{h1}(h)=1 \\ \mathcal{B}_{h1}(i)=0\ \forall\,i\ne h \end{cases} \end{cases} \end{cases} \end{cases} \\[7mm] \begin{minipage}[c]{2.2cm} $\abs{k-h}>\gamma$ \\ (cooperation) \end{minipage} \begin{cases} h<k \begin{cases} \mathcal{B}_{hk}(h)=1-\alpha_{hk} \\ \mathcal{B}_{hk}(h+1)=\alpha_{hk} \\ \mathcal{B}_{hk}(i)=0\ \forall\,i\ne h,\,h+1 \end{cases} \\ h>k \begin{cases} \mathcal{B}_{hk}(h-1)=\alpha_{hk} \\ \mathcal{B}_{hk}(h)=1-\alpha_{hk} \\ \mathcal{B}_{hk}(i)=0\ \forall\,i\ne h-1,\,h, \end{cases} \end{cases} \end{cases} \label{eq:table.games.1} \end{eqnarray} where it is assumed that interactions within the same class produce no effect. The parameter $\alpha_{hk}\in[0,\,1]$ appearing in Eq.~\eqref{eq:table.games.1} has the following meaning: \begin{itemize} \item in case of competition, it is the probability that the candidate particle further increases or decreases its wealth if it is, respectively, richer or poorer than the field particle; \item in case of cooperation, it is the probability that the candidate particle gains or transfers part of its wealth if it is, respectively, poorer or richer than the field particle. \end{itemize} This probability may be constant, like in the already cited work \cite{bertotti2008cla}, or, as we will assume throughout the remaining part of this paper, may depend on the wealth classes, e.g., \begin{equation} \alpha_{hk}=\frac{\abs{k-h}}{n-1}, \label{eq:alpha} \end{equation} in such a way that the larger the distance between the interacting classes the more pronounced the effect of cooperation or competition. Any proportionality constant can be transferred into a scaling of the time variable. It can be checked, using Eq.~\eqref{eq:table.games.1}, that $$\sum_{i=1}^{n}u_i\mathcal{B}_{hk}(i)=u_h+\epsilon_{hk}\alpha_{hk}\Delta{u}, $$ where $\Delta{u}=\frac{2}{n-1}$ is the constant step of the grid \eqref{eq:wealth.grid} while $\epsilon_{hk}$ may be either $-1$, or $0$, or $1$ (depending on $h,\,k$) and is antisymmetric. Since $\alpha_{hk}$ given by Eq.~\eqref{eq:alpha} is instead symmetric, the previous relation turns out to be precisely condition~\eqref{eq:trans.prob.cons.U0} with $\sigma_{hk}=\epsilon_{hk}\alpha_{hk}\Delta{u}$, which guarantees that this model preserves the average wealth status of the system. \begin{remark} For wealth conservation purposes, the transition probabilities \eqref{eq:table.games.1} are such that the extreme classes never take part in nor trigger social competition. Namely, a candidate particle in the class $h=1$ or $h=n$ can only stay in the same class after any interaction with whatever field particle. Correspondingly, a field particle in the class $k=1$ or $k=n$ can only cause a candidate particle to remain in its pre-interaction class, no matter what the latter is. \end{remark}
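As a consistency check, the table \eqref{eq:table.games.1} with the choice \eqref{eq:alpha} can be transcribed numerically; the following Python sketch (our own transcription, with $0$-based indices so that class $h$ in the code is class $h+1$ in the text) verifies both the normalization \eqref{eq:sum.prob.discr} and the antisymmetry of $\sigma_{hk}$ required by condition~\eqref{eq:trans.prob.cons.U0}:
\begin{verbatim}
import numpy as np

def transition_probabilities(n, gamma):
    """Tensor B[h, k, i] of Eq. (table.games.1), alpha_hk = |k-h|/(n-1)."""
    B = np.zeros((n, n, n))
    for h in range(n):
        for k in range(n):
            alpha = abs(k - h) / (n - 1)
            if h == k:
                B[h, k, h] = 1.0
            elif abs(k - h) <= gamma:                    # competition
                if h in (0, n - 1) or (h < k and k == n - 1) \
                        or (h > k and k == 0):           # extreme classes: no effect
                    B[h, k, h] = 1.0
                elif h < k:                              # candidate poorer: may lose
                    B[h, k, h - 1], B[h, k, h] = alpha, 1.0 - alpha
                else:                                    # candidate richer: may gain
                    B[h, k, h + 1], B[h, k, h] = alpha, 1.0 - alpha
            else:                                        # cooperation
                if h < k:                                # candidate poorer: may gain
                    B[h, k, h + 1], B[h, k, h] = alpha, 1.0 - alpha
                else:                                    # candidate richer: may lose
                    B[h, k, h - 1], B[h, k, h] = alpha, 1.0 - alpha
    return B

n, gamma = 9, 3
u = np.linspace(-1.0, 1.0, n)                            # wealth grid (wealth.grid)
B = transition_probabilities(n, gamma)
assert np.allclose(B.sum(axis=2), 1.0)                   # Eq. (sum.prob.discr)
sigma = np.einsum('hki,i->hk', B, u) - u[:, None]        # sigma_hk
assert np.allclose(sigma, -sigma.T)                      # antisymmetry
\end{verbatim}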
The \textbf{critical distance} $\gamma$, taken constant in \cite{bertotti2008cla}, is here assumed to depend on the instantaneous distribution of the active particles over the wealth classes, in order to account for nonlinearly additive interactions. In more detail, the time evolution of $\gamma$ should translate the following phenomenology of (uncontrolled) social competition: \begin{itemize} \item in general, $\gamma$ grows with the number of poor active particles, thus causing larger and larger gaps of social competition. The few wealthy active particles insist on maintaining, and possibly improving, their benefits; \item in a population constituted almost exclusively by poor active particles $\gamma$ attains a value such that cooperation is inhibited, for individuals tend to be involved in a ``battle of the have-nots''; \item conversely, in a population constituted almost exclusively by wealthy active particles $\gamma$ attains a value such that competition is inhibited, because individuals tend preferentially to cooperate for preserving their common benefits. \end{itemize} Bearing these ideas in mind, we introduce the number of poor and wealthy active particles at time $t$: $$ N^{-}(t)=\sum_{i=1}^{\frac{n-1}{2}}f_i(t), \qquad N^{+}(t)=\sum_{i=\frac{n+3}{2}}^{n}f_i(t). $$ Notice that, by excluding the middle class $u_\frac{n+1}{2}=0$ from both $N^{-}$ and $N^{+}$, we implicitly regard it as economically ``neutral''. Up to normalization over the total number of active particles, we have $0\leq N^\pm\leq 1$ with also $N^{-}+N^{+}\leq 1$, hence the quantity $$ S:=N^{-}-N^{+}, $$ which provides a macroscopic measure of the \emph{social gap} in the population, is bounded between $-1$ and $1$. Given the above, we now look for a quadratic polynomial dependence of $\gamma$ on $S$ taking into account the following conditions, which bring to a quantitative level the previous qualitative arguments: \begin{itemize} \item $S=S_0\Rightarrow\gamma=\gamma_0$, where $S_0$, $\gamma_0$ are a reference social gap and the corresponding reference critical distance, respectively; \item $S=1\Rightarrow\gamma=n$, which implies that when the population is composed of poor particles only ($N^{-}=1$, $N^{+}=0$) the socio-economic dynamics are of full competition; \item $S=-1\Rightarrow\gamma=0$, which implies that, conversely, when the population is composed of wealthy particles only ($N^{-}=0$, $N^{+}=1$) the socio-economic dynamics are of full cooperation. \end{itemize} Considering further that only integer values of $\gamma$ are meaningful, for so are the distances between pairs of wealth classes, the resulting analytical expression of $\gamma$ turns out to be \begin{equation} \gamma=\floor{\frac{2\gamma_0(S^2-1)-n(S_0+1)(S^2-S_0)}{2(S_0^2-1)}+\frac{n}{2}S}, \label{eq:gamma} \end{equation} where $\floor{\cdot}$ denotes the integer part (floor). In particular, if the reference social gap is taken to be $S_0=0$ (i.e., when $N^{-}=N^{+}$) then the expression of $\gamma$ specializes as (see Fig.~\ref{fig:blackswan1_gamma_var}) \begin{equation} \gamma=\floor{\frac{n-2\gamma_0}{2}S^2+\frac{n}{2}S+\gamma_0}.
\label{eq:gamma_S0_0} \end{equation} \begin{remark} Both $N^{-}$ and $N^{+}$ can be read, according to Eq.~\eqref{eq:moments.discr}, as zeroth-order weighted moments of the set of distribution functions $\{f_i\}_{i=1}^{n}$, with respective weights $$ w(u)= \begin{cases} 1 & \text{for\ } u\in [-1,\,0), \\ 0 & \text{for\ } u\in [0,\,1], \end{cases} \qquad \text{and} \qquad w(u)= \begin{cases} 0 & \text{for\ } u\in [-1,\,0], \\ 1 & \text{for\ } u\in (0,\,1]. \end{cases} $$ Therefore, the dependence of $\gamma$ on $S$ introduces nonlinearly additive interactions in the transition probabilities $\mathcal{B}_{hk}(i)$. \end{remark} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth,clip]{blackswan1_gamma_var} \caption{The critical distance $\gamma$ vs. the macroscopic social gap $S$ as in Eq.~\eqref{eq:gamma_S0_0} for the two cases $\gamma_0=3,\,7$ and $n=9$ social classes. Empty bullets indicate the reference value $\gamma_0$ corresponding to the reference social gap $S_0=0$. Filled bullets indicate instead the actual initial critical distance $\gamma(t=0)$ corresponding to the actual initial social gap $S(t=0)$ for the case study addressed in Fig.~\ref{fig:blackswan1_U0-04}. Dotted lines, drawing the parabolic profile of function \eqref{eq:gamma_S0_0} without integer part, are plotted for visual reference.} \label{fig:blackswan1_gamma_var} \end{figure} The evolution of the system predicted by the model depends essentially on the four parameters $n$ (the number of wealth classes), $\mu$ (the relative encounter rate for cooperation, cf. Eq.~\eqref{eq:enc.rate}), $U_0$ (the average wealth of the population), and $\gamma_0$ (the reference critical distance). The next simulations aim at exploring some aspects of the role that they play on the asymptotic configurations of the system. In more detail: \begin{itemize} \item $n=9$ and $\mu=0.3$ are selected; \item two case studies for $U_0$ are addressed, namely $U_0=-0.4<0$ and $U_0=0$, in order to compare, respectively, the economic dynamics of a society in which poor classes dominate with those of a society in which the initial distribution of active particles encompasses uniformly poor and rich classes; \item in addition, in each of the case studies above the asymptotic configurations for both constant and variable $\gamma$ are investigated, assuming, for a fair comparison, that in the former case the critical distance coincides with $\gamma_0$. Notice that a constant critical distance can be interpreted as an external control, for instance exerted by a Government, in order to supervise and regulate the wealth redistribution. The specific value of $\gamma$ can be related to more or less precautionary policies, depending on the allowed level of socio-economic competition. Particularly, $\gamma_0=3$, corresponding to a mainly cooperative attitude, and $\gamma_0=7$, corresponding instead to a strongly competitive attitude, are chosen. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=\textwidth,clip]{blackswan1_U0-04} \caption{Asymptotic distributions of active particles over wealth classes for $U_0=-0.4$.} \label{fig:blackswan1_U0-04} \end{figure}
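For reference, the critical-distance law \eqref{eq:gamma_S0_0} adopted in the uncontrolled runs admits a direct transcription; the sketch below (our own naming; $f$ is assumed normalized to unit total mass over an odd number $n$ of classes) also checks the two limit behaviors $S=\pm 1$ discussed above:
\begin{verbatim}
import math

def critical_distance(f, gamma0):
    """Critical distance of Eq. (gamma_S0_0), reference social gap S0 = 0."""
    n = len(f)
    mid = (n - 1) // 2                    # 0-based index of the neutral class u = 0
    S = sum(f[:mid]) - sum(f[mid + 1:])   # social gap S = N^- - N^+
    return math.floor((n - 2 * gamma0) / 2 * S**2 + n / 2 * S + gamma0)

assert critical_distance([1.0] + [0.0] * 8, 3) == 9  # all poor: full competition
assert critical_distance([0.0] * 8 + [1.0], 3) == 0  # all wealthy: full cooperation
\end{verbatim}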
The evolution of the system predicted by the model depends essentially on the four parameters $n$ (the number of wealth classes), $\mu$ (the relative encounter rate for cooperation, cf. Eq.~\eqref{eq:enc.rate}), $U_0$ (the average wealth of the population), and $\gamma_0$ (the reference critical distance). The next simulations aim at exploring some aspects of the role that they play in the asymptotic configurations of the system. In more detail: \begin{itemize} \item $n=9$ and $\mu=0.3$ are selected; \item two case studies for $U_0$ are addressed, namely $U_0=-0.4<0$ and $U_0=0$, in order to compare, respectively, the economic dynamics of a society in which poor classes dominate with those of a society in which the initial distribution of active particles encompasses uniformly poor and rich classes; \item in addition, in each of the case studies above the asymptotic configurations for both constant and variable $\gamma$ are investigated, assuming, for a fair comparison, that in the former the critical distance coincides with $\gamma_0$. Notice that a constant critical distance can be interpreted as an external control, for instance exerted by a Government, in order to supervise and regulate the wealth redistribution. The specific value of $\gamma$ can be related to more or less precautionary policies, depending on the allowed level of socio-economic competition. Particularly, $\gamma_0=3$, corresponding to a mainly cooperative attitude, and $\gamma_0=7$, corresponding instead to a strongly competitive attitude, are chosen. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=\textwidth,clip]{blackswan1_U0-04} \caption{Asymptotic distributions of active particles over wealth classes for $U_0=-0.4$.} \label{fig:blackswan1_U0-04} \end{figure} Figure \ref{fig:blackswan1_U0-04} illustrates the asymptotic configurations in the case of a negative average wealth status $U_0$ (poor society). The model predicts, in general, a consolidation of the poorest classes. Nevertheless, in a basically cooperative framework ($\gamma_0=3$) a certain redistribution of part of the wealth is observed, which for controlled (viz. constant) $\gamma$ involves the moderately poor and moderately rich classes, whereas for uncontrolled (viz. variable) $\gamma$ it further stresses the difference between the poorest and the wealthiest classes. In fact, imposing a constant critical distance coinciding with the reference value $\gamma_0$ corresponds to forcing the society to behave as if the social gap were $S\equiv S_0=0$. On the other hand, the spontaneous attitude of the modeled society, in which the actual initial social gap computed from the given initial condition is $S(t=0)=\frac{8}{15}\approx 0.53>0$, is much more competitive than that implied by $\gamma_0=3$, as Fig.~\ref{fig:blackswan1_gamma_var} demonstrates. Analogous considerations can be repeated in a competitive framework ($\gamma_0=7$). Now the tendency is a strong concentration in the extreme classes, which in particular results in the consolidation of oligarchic rich classes that were nearly absent at the beginning. \begin{figure}[!t] \centering \includegraphics[width=\textwidth,clip]{blackswan1_U00} \caption{Asymptotic distributions of active particles over wealth classes for $U_0=0$.} \label{fig:blackswan1_U00} \end{figure} Figure \ref{fig:blackswan1_U00} illustrates instead the asymptotic trend in the case of a null average wealth status (economically ``neutral'' society). In this case there is no difference between the asymptotic configurations reached under constant and variable $\gamma$. Indeed, the initial symmetry of the distribution about the intermediate class $u_5=0$, which ensures $U_0=0$ and is preserved during the subsequent evolution, forces $S=0$, hence $\gamma=\gamma_0$, at all later times. In other words, controlled and spontaneous behaviors of the population coincide. The stationary configurations show a quite intuitive progressive clustering of the population in the extreme classes as the level of competition increases from $\gamma_0=3$ to $\gamma_0=7$. As a general concluding remark, we notice that, even in a scenario of spontaneous/uncontrolled socio-economic dynamics (variable critical distance), none of the asymptotic configurations of the system seems to be properly identifiable as a Black Swan. On the other hand, the simple case studies addressed in this subsection are preliminary to the contents of the next subsection, which will focus on the joint effect of socio-economic and political dynamics. It is from the complex interplay between these two social aspects that Black Swans are mostly expected to arise. \subsection{Modeling support/opposition to a Government} \label{sect:support-opposition} In this subsection we investigate how the welfare dynamics considered in Section~\ref{sect:welfare} can induce changes of personal opinion in terms of support/opposition to a certain political regime. In doing so, we will keep in mind recent results in the Social Sciences literature; see for instance \cite{acemoglu2006eod,acemoglu2010tmd,acemoglu2011epi,acemoglu2010pcw,alesina2005wps}. The mathematical structures to be used are those presented in Section~\ref{sect:transitions}. In particular, the additional discrete microscopic variable $v$, which partitions the population into $m$ functional subsystems, represents the attitude of the individuals toward the government.
It is customary to use also for $v$ a uniformly spaced lattice, \begin{gather*} I_v=\{v_1=-1,\,\dots,\,v_{\frac{m+1}{2}}=0,\,\dots,\,v_m=1\}, \\[3mm] v_p=\dfrac{2}{m-1}p-\dfrac{m+1}{m-1}, \quad p=1,\,\dots,\,m, \end{gather*} agreeing that $v_1=-1$ corresponds to the strongest opposition whereas $v_m=1$ corresponds to the maximum support. Mathematical models based on Eq.~\eqref{eq:evol.disc.2} are obtained by prescribing the encounter rate $\eta_{hk}^{pq}$ and the transition probabilities $\mathcal{B}_{hk}^{pq}(i,\,r)$. A very simple approach is proposed here, deferring to the next section a discussion of possible improvements. For the \textbf{encounter rate} the same model given by Eq.~\eqref{eq:enc.rate} is assumed, according to the idea that encounters among active particles are mainly driven by the wealth state rather than by differences of political opinion. Thus $\eta_{hk}^{pq}$ is independent of the functional subsystems that candidate and field particles belong to, $\eta_{hk}^{pq}=\eta_{hk}$. Notice that this amounts to disregarding political persuasion dynamics. The model could be made more precise, for instance, by allowing the encounter rate to depend on the proximity of the political points of view of the interacting particles. For the \textbf{transition probabilities} the following factorization on the output test state $(u_i,\,v_r)$ is proposed, relying simply on intuition: $$ \mathcal{B}_{hk}^{pq}(i,\,r)=\bar{\mathcal{B}}_{hk}^{pq}(i)\hat{\mathcal{B}}_{hk}^{pq}(r), $$ where: \begin{itemize} \item $\bar{\mathcal{B}}_{hk}^{pq}(i)$ encodes the transitions of wealth class, which are further supposed to be independent of the political feelings of the interacting pairs: $\bar{\mathcal{B}}_{hk}^{pq}(i)=\bar{\mathcal{B}}_{hk}(i)$. For this term the structure given by Eq.~\eqref{eq:table.games.1} is used; \item $\hat{\mathcal{B}}_{hk}^{pq}(r)$ encodes the changes of political opinion resulting from interactions. Consistently with the observation made above that political persuasion is neglected, so that political feelings originate in the individuals as a consequence of their own wealth condition, this term is assumed to depend on the economic and political state of the candidate particle only: $\hat{\mathcal{B}}_{hk}^{pq}(r)=\hat{\mathcal{B}}_h^p(r)$. \end{itemize} In view of the special structure $$ \mathcal{B}_{hk}^{pq}(i,\,r)=\bar{\mathcal{B}}_{hk}(i)\hat{\mathcal{B}}_h^p(r), $$ it turns out that sufficient conditions ensuring the conservation in time of both the total number of active particles and the average wealth status of the system are: $$ \begin{cases} \displaystyle\sum_{i=1}^{n}\bar{\mathcal{B}}_{hk}(i)=1, & \forall\,h,\,k=1,\,\dots,\,n \\[4mm] \displaystyle\sum_{i=1}^{n}u_i\bar{\mathcal{B}}_{hk}(i)=u_h+\sigma_{hk}, & \forall\,h,\,k=1,\,\dots,\,n,\ \sigma_{hk}\ \text{antisymmetric} \\[4mm] \displaystyle\sum_{r=1}^{m}\hat{\mathcal{B}}_h^p(r)=1, & \forall\,h=1,\,\dots,\,n,\ \forall\,p=1,\,\dots,\,m; \end{cases} $$ in particular, the first two statements are directly borrowed from Eq.~\eqref{eq:table.games.1}.
As far as the modeling of $\hat{\mathcal{B}}_h^p(r)$ is concerned, the following set of transition probabilities is proposed: \begin{eqnarray} && U_0<0,\ u_h<0 \begin{cases} p=1 \begin{cases} \hat{\mathcal{B}}_h^1(1)=1 \\ \hat{\mathcal{B}}_h^1(r)=0\ \forall\,r\ne 1 \end{cases} \\ p>1 \begin{cases} \hat{\mathcal{B}}_h^p(p-1)=2\beta \\ \hat{\mathcal{B}}_h^p(p)=1-2\beta \\ \hat{\mathcal{B}}_h^p(r)=0\ \forall\,r\ne p-1,\,p \end{cases} \end{cases} \nonumber \\ && \begin{minipage}[c]{23.5mm} \centering $U_0<0$, $u_h\geq 0$ \\ or \\ $U_0\geq 0$, $u_h<0$ \end{minipage} \begin{cases} p=1 \begin{cases} \hat{\mathcal{B}}_h^1(1)=1-\beta \\ \hat{\mathcal{B}}_h^1(2)=\beta \\ \hat{\mathcal{B}}_h^1(r)=0\ \forall\,r\ne 1,\,2 \end{cases} \\ 1<p<m \begin{cases} \hat{\mathcal{B}}_h^p(p-1)=\beta \\ \hat{\mathcal{B}}_h^p(p)=1-2\beta \\ \hat{\mathcal{B}}_h^p(p+1)=\beta \\ \hat{\mathcal{B}}_h^p(r)=0\ \forall\,r\ne p-1,\,p,\,p+1 \end{cases} \\ p=m \begin{cases} \hat{\mathcal{B}}_h^m(m-1)=\beta \\ \hat{\mathcal{B}}_h^m(m)=1-\beta \\ \hat{\mathcal{B}}_h^m(r)=0\ \forall\,r\ne m-1,\,m \end{cases} \end{cases} \label{eq:table.games.2} \\ && U_0\geq 0,\ u_h\geq 0 \begin{cases} p<m \begin{cases} \hat{\mathcal{B}}_h^p(p)=1-2\beta \\ \hat{\mathcal{B}}_h^p(p+1)=2\beta \\ \hat{\mathcal{B}}_h^p(r)=0\ \forall\,r\ne p,\,p+1 \end{cases} \\ p=m \begin{cases} \hat{\mathcal{B}}_h^m(m)=1 \\ \hat{\mathcal{B}}_h^m(r)=0\ \forall\,r\ne m \end{cases} \end{cases} \nonumber \end{eqnarray} where $\beta\in[0,\,\frac{1}{2}]$ is a parameter expressing the basic probability of changing political opinion. According to Eq.~\eqref{eq:table.games.2}, transitions across functional subsystems are triggered jointly by the individual wealth status of the candidate particle and the average collective one of the population, in such a way that: \begin{itemize} \item poor individuals in a poor society ($U_0<0$, $u_h<0$) tend to markedly distrust the government policy, in the limit sticking to the strongest opposition; \item wealthy individuals in a poor society ($U_0<0$, $u_h\geq 0$) and poor individuals in a wealthy society ($U_0\geq 0$, $u_h<0$) exhibit, in general, the most random behavior. In fact, they may trust the government policy either because of their own wealthiness, regardless of the possibly poor general condition, or because of the collective affluence, in spite of their own poor economic status. On the other hand, they may also distrust the government policy either because of the poor general condition, in spite of their individual wealthiness, or because of their own poor economic status, regardless of the collective affluence; \item wealthy individuals in a wealthy society ($U_0\geq 0$, $u_h\geq 0$) tend instead to earnestly trust the government policy, in the limit sticking to the maximum support. \end{itemize} In all cases, transitions involve at most one functional subsystem at a time, i.e., the output state of the candidate particle lies at most in the nearest higher or lower subsystem. Leaving aside a number of possible refinements of the model, some preliminary numerical simulations can be developed toward the main target of this paper. Specifically, we consider again the two cases corresponding to an economically neutral ($U_0=0$) and a poor ($U_0=-0.4<0$) society, assuming that the political feelings are initially uniformly distributed within the various wealth classes. The relevant parameters related to welfare dynamics are set as in Section~\ref{sect:welfare}; additionally, the basic probability of changing political opinion is set to $\beta=0.4$, and $m=9$ functional subsystems are selected, corresponding to as many levels of political support/opposition.
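For concreteness, the table above can be encoded in a few lines of Python. The sketch below (an illustrative encoding of ours; the model itself is fully specified by Eq.~\eqref{eq:table.games.2}) returns the nonzero transition probabilities and verifies the normalization condition $\sum_{r=1}^{m}\hat{\mathcal{B}}_h^p(r)=1$ required above.

\begin{verbatim}
def political_transition(p, u_h, U0, beta, m):
    """Nonzero entries of B^p_h(r) from Eq. (table.games.2), as {r: prob}."""
    if U0 < 0 and u_h < 0:            # poor individuals in a poor society
        if p == 1:
            return {1: 1.0}
        return {p - 1: 2 * beta, p: 1 - 2 * beta}
    if U0 >= 0 and u_h >= 0:          # wealthy individuals in a wealthy society
        if p == m:
            return {m: 1.0}
        return {p: 1 - 2 * beta, p + 1: 2 * beta}
    # mixed cases: unbiased random walk over the political lattice
    if p == 1:
        return {1: 1 - beta, 2: beta}
    if p == m:
        return {m - 1: beta, m: 1 - beta}
    return {p - 1: beta, p: 1 - 2 * beta, p + 1: beta}

# Each row of the table sums to one (third conservation condition above).
assert all(
    abs(sum(political_transition(p, u, U, 0.4, 9).values()) - 1.0) < 1e-12
    for p in range(1, 10) for u in (-0.5, 0.5) for U in (-0.4, 0.0)
)
\end{verbatim}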
\begin{figure}[!t] \centering \includegraphics[width=\textwidth,clip]{blackswan2_U00} \caption{Asymptotic distributions of active particles over wealth classes and political orientation for $U_0=0$.} \label{fig:blackswan2_U00} \end{figure} Figure~\ref{fig:blackswan2_U00}, referring to the case $U_0=0$, shows that in an economically neutral society with uniform wealth distribution, such that controlled and uncontrolled welfare dynamics coincide, not only do wealthy classes stick to an earnest support of the Government policy, but poor ones also do not completely distrust it, especially in a context of prevalent cooperation among the classes ($\gamma_0=3$). Therefore, this example does not suggest the development of significant polarization in that society. On the other hand, Figure~\ref{fig:blackswan2_U0-04}, corresponding to the case $U_0=-0.4$, clearly shows a strong radicalization of the opposition. The model indeed predicts that, in such a poor society, poor classes stick asymptotically to the strongest opposition, whereas wealthy classes spread over the whole range of political orientations, however with a mild tendency toward opposition for the moderately rich ones (say, $u_5=0$, $u_6=0.25$, and $u_7=0.5$). The growth of political aversion is especially emphasized under uncontrolled welfare dynamics (i.e., variable critical distance $\gamma$), when the marked clustering of the population in the lowest wealth classes, due to a more competitive spontaneous attitude, entails in turn a clustering in the highest distrust of the regime. \begin{figure}[!t] \centering \includegraphics[width=\textwidth,clip]{blackswan2_U0-04} \caption{Asymptotic distributions of active particles over wealth classes and political orientation for $U_0=-0.4$.} \label{fig:blackswan2_U0-04} \end{figure} \begin{remark} The case studies discussed above indicate that an effective interpretation of the social phenomena under consideration requires a careful examination of the probability distribution over the microscopic states. Indeed, Figs.~\ref{fig:blackswan2_U00},~\ref{fig:blackswan2_U0-04} show entirely different scenarios that might not be completely captured by average macroscopic quantities alone. \end{remark} \subsection{Looking for early signals of the Black Swan} \label{sect:black.swan} The simulations presented in the preceding sections have shown that an unfair policy of welfare distribution can cause a radical surge of opposition to the regime. If this happens, intuitive consequences are, for example, strong social conflicts possibly degenerating into revolutions. Therefore, it is of some practical interest to look for early signals that may precede the occurrence of this situation. To begin with, it is worth detailing a little further the expression \emph{Black Swan}, introduced in the specialized literature to indicate unpredictable events, far from those generally observed by repeated empirical evidence.
In \cite{taleb2007bsi}, a Black Swan is specifically characterized as follows: \begin{quotation} {\it ``A Black Swan is a highly improbable event with three principal characteristics: It is unpredictable; it carries a massive impact; and, after the fact, we concoct an explanation that makes it appear less random, and more predictable, than it was.''} \end{quotation} and a critical analysis is developed about the failure of the existing mathematical approaches to address such situations. In the author's opinion, this is due to the fact that mathematical models usually rely on what is already known, thus failing to predict what is instead unknown. It is worth observing that \cite{taleb2007bsi} is a rare example of research moving against the mainstream of the traditional approaches, generally focused on well-predictable events. The book \cite{taleb2007bsi} had an important impact on the search for new research perspectives: for instance, it motivated applied mathematicians and other scholars to propose formal approaches to study the Black Swan, in an attempt to forecast conditions for its onset. In this context, the following remarks are in order. \begin{itemize} \item Mathematical models can serve either \emph{predictive} or \emph{exploratory} purposes. In the first case, they predict the evolution in time of the system for fixed initial conditions and parameters; in other words, they are used to simulate specific real-world situations of interest. In the second case, instead, they focus on the influence of initial conditions and free parameters on the overall evolution; namely, they are used to investigate the conditions under which desired or undesired behaviors may come up. \item A successful modeling approach will eventually provide analytical methods for identifying the Black Swan, which in turn will have to be carefully defined in mathematical terms. \item Individual behavioral rules and strategies are not, in most cases, constant in time, due to the evolutionary characteristics of living complex systems. In particular, some parameters of the models, related to the interactions among the individuals, can change in time depending on the global state of the system. Such a variability may generate unpredictable events. \item The qualitative behavior of social phenomena cannot be fully understood simply through average quantities. As already mentioned, the proper detail of the mathematical description has to be retained over the microscopic states of the interacting subjects. Statistical distributions can serve such a purpose, while not forcing a one-by-one characterization of the agents. \end{itemize} It is plain that the mathematical search for the Black Swan can hardly rely on a purely macroscopic viewpoint. On the other hand, early signals of upcoming extreme events can be profitably sought at a macroscopic level, in order for them to be observable, hence recognizable, in practice. Bearing in mind the previous remarks, we now provide some suggestions for the possible detection of a Black Swan within our current mathematical framework. The arguments that follow refer to closed systems in the absence of migrations, so that, up to normalization, the distribution functions can be regarded as probability densities.
Let us assume that a specific model, derived from the mathematical structures presented in Section~\ref{sect:compl.red}, tends to an asymptotic configuration described by stationary distribution functions $\{f_\text{asympt}^p\}_{p=1}^{m}$: \begin{equation} \lim_{t\to +\infty}\norm{f_\text{asympt}^p-f^p(t,\,\cdot)}=0, \quad p=1,\,\dots,\,m, \label{eq:asympt} \end{equation} where $\norm{\cdot}$ is a suitable norm over the activity $u$, for instance \begin{equation} \norm{g}_{1,w}=\int\limits_{D_u}\abs{g(u)}w(u)\,du \quad \text{for\ } g\in L^1_{w}(D_u), \label{eq:norm.one} \end{equation} and $w:D_u\to\mathbb{R}_+$ is a weight function which takes into account the critical ranges of the activity variable. Equation~\eqref{eq:norm.one} is written for a continuous activity variable; its counterpart in the discrete setting is $$ \norm{g}_{1,w}=\sum_{i=1}^{n}\abs{g(u_i)}w(u_i), $$ now valid for $g,\,w\in C^0(D_u)$. Alternative metrics can also be introduced depending on the phenomenology of the system at hand, which may call, for instance, for either uniform or averaged ways of measuring the distance between different configurations. In addition, let us assume that the modeled system is expected to exhibit a stationary trend described by some phenomenologically guessed distribution functions $\{\tilde{f}_\text{asympt}^p\}_{p=1}^{m}$. In principle, such expected distributions have to be determined heuristically for each specific case study, as we will see in the following. Inspired by Eq.~\eqref{eq:asympt}, we define the following time-evolving distance $d_\textup{BS}$ (the subscript ``BS'' standing for Black Swan): \begin{equation} d_\textup{BS}(t):=\max_{p=1,\,\dots,\,m}\norm{\tilde{f}_\text{asympt}^p-f^p(t,\,\cdot)}, \label{eq:dbs} \end{equation} which, however, will generally not approach zero as time goes by, for the heuristic asymptotic distribution does not reflect the actual trend of the system. Using the terminology introduced in \cite{scheffer2009ews}, this function can possibly be regarded as one of the \emph{early-warning signals} for the emergence of critical transitions to rare events, because it may highlight the onset of strong deviations from expectations. \begin{remark} Specific applications may suggest other distances different from \eqref{eq:dbs}. For instance, linear or quadratic moments might be taken into account. We consider that extreme events are likely to be generated by the interplay of different types of dynamics, a fact that should be reflected in the choice of appropriate metrics. \end{remark} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth,clip]{dBS} \caption{The mapping $t\mapsto d_\textup{BS}(t)$ computed in the case studies with variable $\gamma$ illustrated in Fig.~\ref{fig:blackswan2_U0-04}, taking as phenomenological guess the corresponding asymptotic distributions obtained with constant $\gamma$.} \label{fig:dBS} \end{figure} It is interesting to examine the time evolution of the distance $d_\textup{BS}$ with reference to the case study $U_0=-0.4$ with variable critical distance addressed in Section~\ref{sect:support-opposition}. A meaningful choice of the expected asymptotic distribution is, for both $\gamma_0=3$ and $\gamma_0=7$, the one resulting from the corresponding dynamics with constant critical distance. The reference is to a situation in which a government underestimates the role played by free interaction rules, presuming that the actual dynamics do not differ substantially from those observed under imposed rules.
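A minimal numerical sketch of this early-warning signal is reported below (Python with NumPy; the array layout and the uniform weight are illustrative choices of ours). It evaluates the discrete weighted norm introduced above and the distance of Eq.~\eqref{eq:dbs} between the current state and the phenomenologically guessed asymptotic one.

\begin{verbatim}
import numpy as np

def weighted_norm(g, w):
    """Discrete weighted 1-norm: sum_i |g(u_i)| w(u_i)."""
    return np.sum(np.abs(g) * w)

def d_bs(f_t, f_guess, w):
    """d_BS(t) = max_p || f_guess^p - f^p(t) ||_{1,w}.

    f_t, f_guess: arrays of shape (m, n), one distribution over the
    n wealth classes for each of the m functional subsystems.
    """
    return max(weighted_norm(f_guess[p] - f_t[p], w)
               for p in range(f_t.shape[0]))

# A monitoring loop would evaluate d_bs along the numerical solution of
# the kinetic equations (e.g., with uniform weight w = 1) and watch for
# a turnaround of the resulting time series.
\end{verbatim}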
Figure~\ref{fig:dBS} shows the qualitative trends of the mapping $t\mapsto d_\textup{BS}(t)$: an initial decrease of the distance, which may suggest a convergence to the guessed distribution, hence apparently a confirmation of the government's conjecture, is then followed by a sudden increase (notice the singular point in the graph of $d_\textup{BS}$) toward a nonzero steady value, which ultimately indicates a deviation from the expected outcome. Such a turnaround is possibly a macroscopic signal that a Black Swan is about to appear. However, in order to get a complete picture, the average gross information delivered by $d_\textup{BS}(t)$ has to be supplemented by the detailed knowledge of the probability distribution over the microscopic states, which is the only one able to properly distinguish between lower ($\gamma_0=3$) and higher ($\gamma_0=7$) radicalization of political feelings when welfare dynamics are left to individual selfishness \cite{acemoglu2006eod}. \section{Critical analysis} \label{sect:discussion} In this paper we have considered the problem of modeling complex systems of interacting individuals, focusing in particular on the ability of the models to predict the onset of rare events that cannot generally be foreseen on the basis of past empirical evidence. The results presented in the preceding sections are encouraging, yet a critical analysis is necessary in order to understand how far we still are from the challenging goal of devising suitable mathematical tools for studying the emergence of highly improbable events. We feel confident that a first step in that direction has been made in this paper. On the other hand, we do not naively claim that the ultimate target has been met. With the aim of contributing to further improvements, we propose in the following some considerations about specific problems selected according to our scientific bias. Hopefully, this selection addresses key issues of the theory. \paragraph*{Mathematical tools for complex systems} The leading idea of the present paper is that the modeling approach to socio-economic and political systems, where individual behaviors can play a relevant role in the collective dynamics, needs to consider the latter as living complex systems. This implies characterizing them in terms of the qualitative complexity issues proper of the Social Sciences (cf. Section~\ref{sect:compl.asp}), which then have to be translated into the mathematical language. The mathematical tools presented in the preceding sections are potentially able to capture such issues, taking advantage of a procedure of complexity reduction in modeling heterogeneous behaviors and expression of strategies. Yet, no matter how promising this approach may appear, we are not claiming that it is sufficient ``as-is'' for chasing the Black Swan. Instead, models such as those reported in Section~\ref{sect:case.studies} can provide a detailed analysis of events whose broad dynamics are rather well understood. Furthermore, simulations contribute to highlighting the role of some key parameters and can indicate how to devise external actions in order to eventually obtain a specific behavior of the society under consideration. \paragraph*{Modeling interplays toward the Black Swan} Based on the preliminary results that we have obtained, we believe that rare events can only be generated by several concomitant causes.
In this paper we have addressed the interplay between welfare dynamics and the level of consent/dissent toward the policies of a government, with the aim of detecting the onset of the opposition to a certain regime. Numerical simulations have indicated that a strong opposition can result from specific conditions, such as a poor average wealth status administered under a welfare policy that leaves the rules of cooperation and competition freely to the market, without any action by the central government. On the other hand, the social dissent is attenuated if the government has some control over the welfare dynamics, for instance if it is able to keep an acceptable level of cooperation within the population in spite of the spontaneous competitive behavior induced by the poor collective condition. Of course, the investigation can be further refined by taking into account additional causes. According to the methodological approach proposed in this paper, these imply partitioning the population into additional functional subsystems. \paragraph*{A naive interpretation of the recent events in North Africa} The contents and findings of the present work inspire some considerations, no matter how naive they may appear, about the recent conflicts in North African countries. First of all, we notice that the latter feature all the issues which, according to Taleb's definition \cite{taleb2007bsi}, characterize a Black Swan. Such events were indeed not expected, but their social impact has been definitely important. Furthermore, once they happened, it actually seemed that they could have been foreseen; for instance, it was argued that a wiser political management of welfare would have limited, or even avoided, their occurrence. Coming to the conceivable predictive ability of the mathematical approach presented in this paper, we remark that the dynamics just recalled are indeed accounted for by the models proposed in Section~\ref{sect:case.studies}. At the same time, we are well aware that the events we are reasoning upon were generated by numerous concomitant causes other than simply welfare dynamics. Therefore, while not claiming to have exhaustively tackled them, we hope to have provided a significant contribution to the problem of detecting early signals of such critical events. \paragraph*{Further generalizations of the model} The modeling approach can be generalized, for instance, by considering the case of open systems, in which external actions can significantly modify both individual and collective system dynamics. Analytical properties of formal mathematical structures, which may be profitably employed to address such issues, have been studied in \cite{arlotti2012cid}; however, models specifically targeted at real-world problems are not yet available. Other interesting applications concern the case of several interacting societies, for example the study of how and when a domino effect, like the one recently observed in the aforesaid North African countries, can arise. Even more challenging appears to be the generalization of the model to large social networks \cite{barabasi1999mft,vegaredondo2007csn}. Recent studies, among others \cite{bastolla2009amn,rand2011dsn}, indicate that the role and structure of the networks can act as additional inputs for determining the predominance of either cooperation or competition. However, exploring this issue requires a substantial development of the mathematical structures presented in this paper.
\paragraph*{Analytical problems} The qualitative analysis of models of the kind presented in this paper generates interesting analytical problems. As a matter of fact, showing the existence and uniqueness of solutions to the initial value problem is not a difficult task, because one can exploit the conservation of the total number of individuals and of their average wealth status. For linearly additive interactions the proof can be obtained by a simple application of fixed point theorems in a suitable Banach space, see \cite{arlotti1996snc}; the generalization to nonlinearly additive interactions has recently been proposed in the already cited paper \cite{arlotti2012cid}. Far more challenging is the analysis, for arbitrary numbers of wealth classes and functional subsystems, of the existence and stability of asymptotic configurations, which are at the core of the practical implications of the model. Simulations suggest that, for a given initial condition, the system reaches a unique asymptotic configuration in quite a broad range of parameters, but the existing literature still lacks precise analytical results able to confirm or reject such a conjecture. Some preliminary insights, however confined to linearly additive interactions, can be found in \cite{arlotti1996snc}, which hopefully may serve as a starting point for more general proofs. \bibliographystyle{plain}
\section*{Introduction} When a high electric voltage is suddenly applied to ionisable matter like air, streamers occur as rapidly growing fingers of ionised matter that, due to their shape and conductivity, enhance the electric field at their heads. This allows them to penetrate into regions where the background field was below the breakdown value before their arrival. On their path, streamers are frequently seen to branch. The streamers form the primary path of a discharge that can later heat up and transform into a lightning leader~\cite{Bazelyan2000,Williams2006a} or a spark~\cite{Bazelyan1998,Gallimberti2002}. Streamers are also the main ingredient of huge sprite discharges in the thin air high above thunderstorms~\cite{Pasko2007,Ebert2010}. Streamers also have important applications in initiating gas chemistry in so-called corona reactors, where the later heating phase is avoided by limiting the duration of the voltage pulse~\cite{Veldhuizen2000,Fridman2005}. The streamer head consists of an ionisation wave that moves with velocities ranging from comparable to the local electron drift velocity to orders of magnitude faster. On these time scales, the energy is in the electrons and then in the excited and ionised atoms and molecules in the gas, while the background gas initially stays cold. This is the reason why this process is used for very energy efficient gas chemistry, with applications such as gas and water cleaning~\cite{clements1989,Veldhuizen2000,Grabowski2006,Winands2006a}, ozone generation~\cite{Veldhuizen2000}, particle charging~\cite{Veldhuizen2000,Kogelschatz2004} and flow control~\cite{Moreau2007,Starikovskii2008}. An important factor for the gas treatment is which volume fraction of the gas is treated by the discharge, and this fraction is clearly determined by the branching behaviour. Another question concerns the similarity between streamers at normal pressure and sprite discharges at air pressures in the range from mbar to $\upmu$bar at 40 to 90~km altitude in the atmosphere~\cite{Ebert2010}. Recently Kanmae \emph{et~al.}~\cite{Kanmae2012} stated, citing a private communication with Ebert in 2010, that in contrast to sprite discharges, laboratory streamers typically split into two branches only. Indeed, many streamers form out of the primary inception cloud around a needle electrode~\cite{Briels2008c}, but a propagating streamer in the lab typically splits into only two branches. There are only occasional reports of splitting into three branches~\cite{BrielsThesis_on3branch}, but these events could be a misinterpretation of images that show only a two-dimensional projection of the actual three-dimensional branching event. Theory cannot follow the full branching dynamics either. The present state of understanding is that the streamer can run into an unstable state that occurs when the streamer radius becomes much larger than the thickness of the space charge layer around the streamer head. This state is susceptible to a Laplacian instability~\cite[and references therein]{Luque2011a}. While this instability can develop into streamer branching in a fully deterministic manner, electron density fluctuations in the leading edge of the ionisation front accelerate the branching process. However, present simulations can only determine the time and conditions of branching, but not the evolution of the branching structure after the instability.
Studies of full electrical discharge trees are based on dielectric breakdown models~\cite{Niemeyer1984,Pasko2000,Akyuz2003}. In these studies, a fractal-like structure is assumed for the discharge tree, and the appearance of branchings is included in a phenomenological manner. The development of these models would greatly benefit from thorough knowledge of the occurrence of streamer branching. The present paper is therefore devoted to a systematic investigation of branching into three new channels in air under laboratory conditions. In the remainder of this paper, this event will be referred to as a three-branch. As positive streamers are much easier to generate and much more frequently seen, the investigation is limited to positive streamers. \section*{Stereo photography} Most previous experimental streamer investigations are based on two-dimensional images of streamer discharges. In reality, however, streamers are a three-dimensional phenomenon. In imaging a 3D phenomenon in 2D, part of the information is lost. Some details may not be visible at all, because the line of sight of the camera is obscured. For the present study, it is important to take this effect into account. When, from the point of view of the camera, two streamers are located behind each other, they cannot be individually imaged. Instead, they will overlap on the camera image. If one of these streamers splits into two branches, these two branches combined with the other, continuing streamer will, in a 2D projection, look like a three-branch. A stereo photography setup was introduced by Nijdam \emph{et al.}~\cite{Nijdam2008}. This allowed simultaneous measurement of a streamer discharge from two viewing angles, differing by roughly $10^\circ$. Using this, it was possible to make 3D reconstructions of a streamer discharge and measure the real (3D) branching angles. This has also been used to study the reconnection and merging of streamers~\cite{Nijdam2009}. A different setup is used by Ichiki \emph{et~al.}~\cite{Ichiki2011,Ichiki2012} to measure branching angles in atmospheric and underwater streamers. They image each discharge from three angles. For the reconstruction, two images from $0^\circ$ and $90^\circ$ are used. This large angle allows for better depth resolution compared to the $10^\circ$ angle used by Nijdam \emph{et al.} Identifying the same streamer in both views is, however, much more difficult. Therefore Ichiki \emph{et~al.} used an additional image from a $225^\circ$ angle to facilitate the identification of the same streamer in the two views. In the present study, identification of the streamers is more important than an accurate depth coordinate. Therefore a stereographic setup based on the setup employed by Nijdam \emph{et~al.} will be used. Below, we will show that imaging at a third angle is necessary for unambiguous identification. Therefore the setup is extended with an additional camera. It should be noted that, as far as the authors are aware, no 3D reconstructions of sprite streamers are available. Producing these would be very difficult, as it would require two telescopic cameras (such as, for example, the one used by Kanmae \emph{et~al.}~\cite{Kanmae2012}), both aimed at the right (beforehand unknown) sprite location. \section*{Setup} The point-plane discharge setup used here is extensively described by Nijdam \emph{et al.}~\cite{Nijdam2010}. A positive voltage pulse of, in our case, 10~kV with a rise time of about 60~ns is applied to a sharp tip to initiate streamers.
The streamers propagate toward a grounded plate 160~mm below the tip. The high voltage is created with the so-called C-supply, as described extensively by Briels \emph{et al.}~\cite{Briels2006}. In this setup a charged capacitor is discharged through a spark gap switch. This creates a positive voltage pulse on the tip. The vessel is filled with 100~mbar of artificial air. This is a pre-mixed gas mixture consisting of 20\% oxygen and 80\% nitrogen, both with less than 1~ppm contamination. These conditions were chosen such that the resulting images showed a reasonable number of branches per discharge on the one hand, but on the other hand were not so crowded that individual streamers could no longer be identified within the two views. For comparison: a pressure of 100~mbar corresponds to the conditions in the Earth's atmosphere at 16~km altitude. The setup is schematically drawn in fig.~\ref{fig:setup}. The tip-plane geometry is depicted on the right. The surrounding vacuum vessel is omitted from the drawing. Two cameras are shown in the top and the bottom left corner. The bottom camera images the streamers through a stereographic setup as explained by Nijdam \emph{et~al.}~\cite{Nijdam2008,Nijdam2009}. Lines are added, indicating the different angles at which the streamers are imaged. \begin{figure} \includegraphics[width=\linewidth]{figure1.png} \caption{Simplified drawing of the setup used. The first camera is shown in the bottom left. The stereography setup, consisting of a central prism with reflecting sides and two mirrors, can be seen to its right. The second camera is visible in the top left. The point-plate discharge geometry is depicted on the right. The vacuum vessel enclosing the discharge is omitted for clarity. Lines are added to indicate the different viewing angles.} \label{fig:setup} \end{figure} An example image of a discharge as imaged by the bottom camera using the stereo photography setup is shown in fig.~\ref{fig:example}. It shows the discharge twice; once with a viewing angle slightly from the left and once slightly from the right. In both views the tip is in the top-right corner. Only the left half of the discharge is imaged. \begin{figure} \includegraphics[width=\linewidth]{figure2.png} \caption{Example of a discharge. The discharge is imaged through a stereo photographic setup. Therefore it is shown twice; once slightly from the left, once slightly from the right. A branching event that is explained in the text is indicated with an arrow.} \label{fig:example} \end{figure} One branching event is indicated with a white arrow. In the left view this branching appears to be a three-branch. When looking at the right view, it is however clear that in reality it is a streamer splitting in two, with a second streamer propagating in front of it. With only a single 2D image, this two-branch would have been mistaken for a three-branch. It has been noticed that, even when using the stereo photography setup, it is still possible that two streamers coincide in both viewing angles. This happens if they propagate closely behind each other, especially when they propagate (almost) horizontally. To circumvent this problem a second camera was placed in the setup. It was positioned above the original camera and images the streamers in a downward direction, as depicted in fig.~\ref{fig:setup}. In the final configuration there is a horizontal angle of $12^\circ$ between the left and the right view and a vertical angle of $15^\circ$ with the top view.
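To illustrate how two views separated by a small angle encode depth, the following fragment sketches the reconstruction for an idealized geometry (orthographic projection and symmetric viewing directions are simplifying assumptions of ours; the actual reconstruction relies on the calibration described by Nijdam \emph{et~al.}~\cite{Nijdam2008}).

\begin{verbatim}
import math

def reconstruct(x_left, x_right, angle_deg=12.0):
    """Recover lateral position x and depth z of a point from its
    horizontal image coordinates in two orthographic views rotated
    by +/- angle/2 about the vertical axis:
        x_left  = x*cos(a) - z*sin(a)
        x_right = x*cos(a) + z*sin(a),   a = angle/2.
    """
    a = math.radians(angle_deg / 2)
    x = (x_left + x_right) / (2 * math.cos(a))
    z = (x_right - x_left) / (2 * math.sin(a))
    return x, z
\end{verbatim}

With a $12^\circ$ stereo angle the depth follows from the disparity amplified by a factor $1/(2\sin 6^\circ)\approx 4.8$, which makes explicit why the depth coordinate is less accurate than the lateral ones for small stereo angles.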
\section*{Results} Figure~\ref{fig:threebranch1} shows the two images of one discharge acquired with both cameras. The top image shows the image from the top, downward-looking, camera and the bottom image shows the images acquired through the stereo photography setup, showing the left and the right view. \begin{figure} \includegraphics[width=\linewidth]{figure3.png} \caption{Example of a discharge viewed from three directions as described in the text. A branching event that is explained in the text is indicated with an arrow.} \label{fig:threebranch1} \end{figure} The branching event indicated with the arrow in the figure is a three-branch. This is visible from all three viewing angles. This confirms the existence of three-branches in laboratory experiments. In total, 2187 discharges have been imaged. From these images, a total of 18 three-branches have been identified. On average $1.6 \pm 1.3$ branching events per picture can be identified in all three views, where the indicated error is the standard deviation of counting the number of visible branchings in 98 pictures. We estimate that roughly one out of 200 branching events under the conditions used is a three-branch. It should however be noted that linking the branches in the different views is a manual task, and estimating the possible identifiability of three-branches from an image is highly tedious. In the present study only one set of conditions (10~kV pulses in 100~mbar artificial air) is investigated. Therefore no conclusion on the influence of different conditions on the number of three-branches can be drawn. No events of streamers branching into four or more branches have been observed in the present study. If they exist, they are obviously rarer than three-branches under the given conditions. \section*{Branching distance} A three-branch can also be interpreted as a streamer forming a two-branch twice within a short propagation distance. If the propagation between these two subsequent branching events is small (of the order of the streamer thickness), it is reported as a three-branch. Measurements of the distribution of the distance between subsequent branchings would indicate whether branching into three is a special case or just an extreme in the tail of this distance distribution. For this comparison, the data measured by Nijdam \emph{et~al.}~\cite{Nijdam2008} have been analysed. They measured the ratio between the streamer length between two branchings and its width for 94 streamers. This was done for discharges with 47~kV pulses in a 14~cm point-plane gap geometry filled with 200, 565 or 1000~mbar of ambient air. Figure~\ref{fig:stream_length_to_widths_ratio} shows a histogram of the natural logarithm $\ln(L/d)$ of this ratio. Note that in this figure data for all three pressures have been combined, as no significant difference in the ratio was found for the different pressures. \begin{figure} \includegraphics[width=\linewidth]{figure4.png} \caption{Normalised histogram of the natural logarithm of the ratio of the streamer length between two branchings and its width as measured by Nijdam \emph{et~al.}~\cite{Nijdam2008} (blue hatched bars) with a Gaussian distribution fit (red line).} \label{fig:stream_length_to_widths_ratio} \end{figure} A Gaussian distribution has been fitted to the ratio distribution, as shown in fig.~\ref{fig:stream_length_to_widths_ratio}.
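The probability of a ratio $L/d\leq 1$ then follows as the weight of the fitted Gaussian below $\ln(L/d)=0$. The sketch below illustrates the computation; the parameters $\mu$ and $\sigma$ are placeholder values of ours, chosen only to reproduce the order of magnitude quoted below, as the fitted values are not repeated here.

\begin{verbatim}
from scipy.stats import norm

# Hypothetical fit parameters for ln(L/d); placeholders, not the actual fit.
mu, sigma = 2.0, 0.65

# Chance that a streamer branches again within one width, i.e. ln(L/d) <= 0.
p_ratio_le_1 = norm.cdf(0.0, loc=mu, scale=sigma)
print(f"P(L/d <= 1) = {p_ratio_le_1:.1e}")   # about 1e-3 for these values
\end{verbatim}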
It should be noted that the choice of fitting a Gaussian distribution to the logarithm of the ratio is purely based on the visible shape of the shown histogram and not on a physical theory regarding the expected distribution. From the Gaussian fit it has been calculated that there is a chance of 1:1000 that a streamer branches twice within propagating its own width (i.e. ratio~$\leq$~1). This is rarer than the rate of one three-branch out of $200$ branching events reported above. It should be noted that the ratio measurements performed by Nijdam \emph{et~al.} were conducted under different conditions than the present measurements, namely in a smaller gap with a higher applied voltage at higher pressures in a slightly different gas (ambient air versus artificial air). The ratio measurements, however, did not appear to depend on the gas pressure. Secondly, it should be noted that the available data set is limited. Few data points are available in the low-ratio region; therefore a large error is introduced in the extrapolation of the fitted distribution. Taking the extremes of the 95\% confidence interval for the fitted parameters of the Gaussian distribution, the chance of a ratio~$\leq$~1 can range from 1:100 to 1:10\,000. As explained above, the choice of a Gaussian distribution is arbitrary and has no physical basis. Therefore the range given above can be even larger when assuming other distributions. Further measurements of the distance between subsequent streamer branchings are thus desirable. \section*{Streamer widths} Figure~\ref{fig:threebranch_widths_before} shows a histogram of the widths of the 18~streamers just before a three-branch. The widths have been determined in the same manner as explained by Nijdam \emph{et~al.}~\cite{Nijdam2010}. The streamer width is determined as the full width at half maximum of the average of multiple cross sections along the streamer channel. Note that the widths shown in the figure are the average of the widths measured in the left and the right view of the discharge. \begin{figure} \includegraphics[width=\linewidth]{figure5.png} \caption{Normalised histogram of the width of a streamer before a two-branch (blue hatched bars) and a three-branch (red solid bars).} \label{fig:threebranch_widths_before} \end{figure} Besides the three-branches, many other (two-)branches are seen in the imaged discharges. For comparison, a normalised histogram of the width of 55~streamers before such a two-branch is also displayed in the figure. It can be seen that relatively thick streamers are more likely to form a three-branch than thinner streamers. The average thickness of a streamer before a two-branch is $3.8 \pm 0.8$~mm, whereas the average thickness before a three-branch is $4.3 \pm 1.0$~mm. The given uncertainties are the standard deviations of the width distributions. Note that according to Student's t-test the chance of these width distributions being from populations with equal means is 4.7\%. This significance level is limited by the low number of measured three-branches.
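This significance level can be checked from the summary statistics alone, for instance with SciPy's two-sample t-test from summary data (the sample sizes are the 55 two-branches and 18 three-branches mentioned above; with the rounded means and standard deviations quoted in the text, the p-value comes out at the few-percent level, consistent with the 4.7\% given above).

\begin{verbatim}
from scipy.stats import ttest_ind_from_stats

# Widths (mm) before branching: 55 two-branches vs 18 three-branches.
t, p = ttest_ind_from_stats(mean1=3.8, std1=0.8, nobs1=55,
                            mean2=4.3, std2=1.0, nobs2=18)
print(f"t = {t:.2f}, p = {p:.3f}")   # p of a few percent
\end{verbatim}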
Streamers at higher reduced electric fields are generally thicker. Therefore different conditions may lead to more three-branches. This might explain why streamers splitting into more than two branches are observed more often in sprite streamers than in laboratory experiments, as their reduced diameter is larger~\cite{Kanmae2012}. Note that the width a streamer appears to have in an image depends on the distance to the camera. Especially if the streamer is not in the focal plane, it will appear wider than it really is. The high voltage electrode is in the focal plane of the camera. However, as the streamer discharge is three-dimensional, some streamers will propagate in front of or behind this focal plane. Therefore the reported diameters are an upper limit for the real streamer diameters. As no dependence of the appearance of two- or three-branches on the position has been found, this effect will be equally large for two- and three-branches. Therefore comparison between the two is valid even though the widths are somewhat overestimated. Figure~\ref{fig:threebranch_widths_after} shows a histogram of the widths of streamer branches after a two- and a three-branch. These data are obtained from the same branching events as the data in fig.~\ref{fig:threebranch_widths_before}, but now for the two or three streamers after the branch. It is immediately clear that these branches are on average thinner than the streamers before the branching event: $2.5 \pm 0.6$~mm and $2.1 \pm 0.5$~mm after a two-branch and a three-branch, respectively. According to the t-test, the p-values for the null hypothesis of equal means are less than 1\%. \begin{figure} \includegraphics[width=\linewidth]{figure6.png} \caption{Normalised histogram of the width of the streamer branches after a two-branch (blue hatched bars) and a three-branch (red solid bars).} \label{fig:threebranch_widths_after} \end{figure} Above, it was shown that the streamers before a three-branch are on average thicker than before a two-branch. This last figure, however, indicates that streamers after a three-branch are thinner than after a two-branch. This means that a three-branch reduces the streamer diameter more than a two-branch. This effect is shown in more detail in fig.~\ref{fig:threebranch_widths_ratio}. Here, the ratio of the width of a streamer after a branch to its width before the branch is shown for two- and three-branches. For a two-branch this ratio is $0.68 \pm 0.18$ and for a three-branch it is $0.51 \pm 0.15$. This ratio is thus smaller for three-branches, meaning these branches result in relatively thinner streamers. With the t-test, the p-value for the null hypothesis is found to be less than 1\%. \begin{figure} \includegraphics[width=\linewidth]{figure7.png} \caption{Normalised histogram of the ratio of the streamer widths after to before a two-branch (blue hatched bars) and a three-branch (red solid bars).} \label{fig:threebranch_widths_ratio} \end{figure} If one assumes that the total cross-sectional area of the streamers before and after the branching is constant, the width ratio would be~$\sqrt{1/2} \approx 0.71$ and $\sqrt{1/3} \approx 0.58$ for a two-branch and a three-branch, respectively. These values are comparable to the $0.68 \pm 0.18$ and $0.51 \pm 0.15$ found in fig.~\ref{fig:threebranch_widths_ratio}. This indicates that the total cross-sectional area of the streamers before and after the branching indeed remains approximately constant. As a theoretical consideration: if the maximal electric field at the tip of the streamers is the same, even if they have different diameters before and after branching, the surface charge density, which determines the difference between the electric field inside and ahead of the streamer, is approximately equal. If, in addition, the electric charge of the streamer is mainly concentrated at the streamer tip and the total charge is conserved, the cross sections of parent and daughter streamers are roughly the same.
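The comparison with this equal-cross-section hypothesis amounts to the following arithmetic (plain Python; the measured ratios are the ones quoted above):

\begin{verbatim}
import math

# Predicted width ratio if the total cross-sectional area is conserved:
# a parent of diameter d splits into k daughters of diameter d*sqrt(1/k).
for k, (measured, err) in [(2, (0.68, 0.18)), (3, (0.51, 0.15))]:
    predicted = math.sqrt(1 / k)
    print(f"{k}-branch: predicted {predicted:.2f}, "
          f"measured {measured:.2f} +/- {err:.2f}")
\end{verbatim}

Both predictions lie well within one standard deviation of the corresponding measured ratios.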
\section*{Conclusion} It has been shown that streamer branching into three does occur in laboratory discharges created with 10~kV pulses in a 160~mm point-plane geometry filled with 100~mbar of artificial air. More than two viewing angles are required for this assessment. Under the investigated conditions it only occurs in roughly one out of $200$~branching events. This was compared to the expectation from the distance between subsequent branchings. Not enough data on the statistical distribution of this length were available for a decisive conclusion on whether a three-branch is a special case or just the lower limit of the distance between two branchings. It is shown that three-branches on average occur in thicker streamers compared to two-branches. This might explain why they are observed more often in sprite discharges. Streamer branches are thinner than their parent streamer after both a two-branch and a three-branch. The reduction in diameter is larger over a three-branch than over a two-branch. The ratio between streamer widths before and after a branching coincides with the value determined assuming a constant total streamer cross-sectional area. \bibliographystyle{unsrt} \balance
\section{Introduction.} Among the results of the loop approach to non-perturbative quantum gravity there are several which tell us that the picture of geometry on scales small compared with our usual scales (on Planckian scales) looks quite different from the habitual picture of Riemannian geometry. The usual classical picture seems to arise only upon coarse-graining. The fundamental excitations of the emerging quantum geometry are one-dimensional loop excitations, and the whole quantum picture is of an essentially discrete, combinatorial character. (See, for instance, the recent works \cite{QGeometry,GeomEig}, which also contain extensive references to the previous papers on the loop approach). Let us assume now that we can formulate a reasonable approximation criterion, and for any classical geometry configuration find the set of quantum states approximating it. It is natural to expect that if the precision of approximation is chosen not too high, there will be a lot of quantum states corresponding to the same `geometry'. It is quite tempting to consider the usual Riemannian geometry description as a macroscopic one and to regard all quantum states approximating a Riemannian metric as micro-states corresponding to the same macro-state. This point of view brings up a lot of interesting possibilities. Indeed, recall that the notion of entropy arises in statistical mechanics from the distinction between macro- and micro-states of a system. Entropy is a function which depends on a macroscopic state of the system. It can be thought of as a function which for each macroscopic state gives the (logarithm of the) number of different microscopic states corresponding to this macro-state. More precisely, the space of states of the system should be divided into compartments, where all micro-states belonging to the same compartment are macroscopically indistinguishable. Then the entropy of a macro-state is given\footnote{% In fact, the usual thermodynamical entropy is proportional to this number, but it is convenient to work in the units in which this proportionality coefficient is chosen to be unity.} by the logarithm of the `volume' of the compartment corresponding to this macro-state. Let us return to our quantum description of geometry. It is natural to introduce a coarse-graining of the space of quantum states in such a way that quantum states approximating different `geometries' belong to different compartments. Having divided the space of states this way, it is natural to introduce the function which for any geometry configuration gives the logarithm of the `volume' of the corresponding compartment. This gives rise to the notion of {\it geometrical entropy}. Thus, geometrical entropy tells us `how many' different quantum states correspond to a given geometry configuration. To be more precise, in the cases when a compartment is itself a linear space the entropy is given by the logarithm of the dimension of the corresponding compartment. In this paper we try to implement the idea of geometrical entropy following a very simple choice of the `ensemble' of quantum states. Although some of our results are surprising, the aim of this paper is not to argue the physical significance of the results obtained. Rather, in order to understand whether the general idea of geometrical entropy makes sense, we consider a particularly simple case, in which the analysis can be accomplished, and develop a technique that may prove to be useful for future developments.
Our choice of the `ensemble' of quantum states is as follows. First, we restrict our consideration to the gravitational degrees of freedom of a 2-dimensional surface arbitrarily chosen in space. Namely, we consider Lorentzian 3+1 general relativity in the framework of (real) Ashtekar variables, and the quantization given by loop quantum gravity, and study the gravitational system induced by the full theory on some 2-surface $S$ embedded in the spatial manifold $\Sigma$. Thus, the degrees of freedom of our system are just the degrees of freedom of general relativity that live on the surface $S$. Second, we specify a macro-state of our system simply by fixing the total area of the surface $S$. In this case, as we shall see later, it is easy to pick out all quantum states which approximate a given macro-state, and the analysis becomes almost straightforward. Thus, to illustrate the general idea of geometrical entropy we stick here to the case when a macro-state of our system is specified simply by the total area $A$ of the surface. Let us note that such a statistical mechanical consideration of surface degrees of freedom is of special interest because of its possible connection with black hole thermodynamics. Indeed, there is a common belief that it is the degrees of freedom living on the horizon surface of a black hole which account for the black hole entropy. To try to reveal the connection between quantum gravity and thermodynamics that is suggested by black hole physics is one of the motivations for our investigation. The other, and maybe even more important, motivation is that loop quantum gravity itself is a new approach. In such a situation it is necessary to apply the formalism to simple problems, just in order to see if it gives a reasonable picture. The set of problems concerning statistical properties of the theory might serve as one such test. The paper is organized as follows. In the next section we briefly recall what the surface quantum states look like and discuss the issue of correspondence between macro- and micro-descriptions. In Section \ref{sec:3} we calculate the geometrical entropy of surface degrees of freedom considering the case of an open surface. Section \ref{sec:4} contains a generalization of our result to the case of closed surfaces. We conclude with a discussion. \section{Surface quantum states.} \label{sec:2} Let us recall the description of general relativity in terms of (real) Ashtekar variables. In the Hamiltonian framework general relativity can be formulated as a theory of an $SU(2)$-connection over the spatial manifold. The connection field plays the role of a configurational variable; the momentum variable is represented by the canonically conjugate field. The dynamics of the theory is determined by a set of constraint functionals. The degrees of freedom induced on an arbitrarily chosen surface $S$ are described by pull-backs of the connection and momentum fields into $S$, which we shall denote by $a_a^{AB}$ and ${\tilde e}^{a\,AB}$ respectively ($A,B$ stand for two-component spinor indices). The `surface' momentum field ${\tilde e}^{a\,AB}$ carries information about the 2-metric on $S$. Let us describe the quantum theory. The loop quantization of 3+1 general relativity is described in detail in \cite{Asht1}.
For our purposes it is sufficient to recall that there exists an orthogonal decomposition of the Hilbert space of gauge invariant states of quantum general relativity into subspaces, which are labeled by the so-called spin network states. Spin network states are labeled by closed graphs in $\Sigma$ with spins (or, equivalently, with irreducible representations of the gauge group) assigned to each edge and intertwining operators assigned to each vertex of the graph.

Let us now specify the space of quantum surface states. Given a 2-d (not necessarily closed) surface $S$ embedded into the spatial manifold $\Sigma$ and a 3-d spin network $\Gamma$ one can consider the intersection of this spin network with $S$. Let us call the intersection points vertices. Generally, there can be vertices of any valence not less than two\footnote{In the case of a theory without fermionic degrees of freedom, which we consider here, the valence of vertices of a spin network state should be not less than two in order to have a gauge invariant state. When fermionic degrees of freedom are present in the theory the valence of vertices can be equal to one. In this case open ends of a spin network describe fermionic degrees of freedom.} (the valence of a vertex is the number of edges of a spin network state which meet in this vertex). Also, there can be edges lying entirely on the surface $S$ among the edges of $\Gamma$; we shall call such edges tangential (see Fig. \ref{fig:1}). \begin{figure} \centerline{\hbox{\psfig{figure=vertex.ps}}} \caption{Vertex of a 3-d spin network (a) and its intersection with the surface S (b). Edges $1,2,3$ are tangential ones.} \label{fig:1} \end{figure} The intersection of $\Gamma$ with the surface $S$ defines what we shall call a surface spin network on $S$. A surface spin network is a graph lying entirely on the surface $S$, with spins assigned to each edge, and intertwining operators assigned to each vertex. The intertwiners assigned to each vertex are just those of the corresponding 3-d spin network. This means that vertices of a 2-d surface spin network `remember' what the spins of the edges incident at the surface were (see Fig. \ref{fig:2}). Note that our definition of a surface spin network is not the canonical one. A `canonical' surface spin network is defined as a graph lying on the surface, with spins assigned to each edge, and intertwiners assigned to each vertex. We use the term `surface spin network' simply to denote the intersection of a 3-d spin network with the surface $S$, or, in other words, to denote the `surface' part of the information carried by a 3-d spin network. To avoid confusion, we shall also use the term `generalized surface spin network state'. \begin{figure} \centerline{\hbox{\psfig{figure=fig2.ps}}} \caption{Intertwining operator assigned to the vertex $v$ of a generalized surface spin network `remembers' the spins $j_4,j_5,j_6$ of the edges $4,5,6$ incident at the surface.} \label{fig:2} \end{figure} The simplest non-trivial example of a surface spin network is that coming from a single edge intersecting the surface $S$ (see Fig. \ref{fig:3}). \begin{figure} \centerline{\hbox{\psfig{figure=fig3.ps}}} \caption{The valence of the simplest vertex is two.} \label{fig:3} \end{figure} Such a spin network is simply the point $v$ on $S$, with the intertwining operator being the map from one copy of the representation space $\rho^{(j)}$ ($j$ here is the spin labeling the irreducible representation of $SU(2)$) to the other copy of $\rho^{(j)}$.
The intertwiner in this case is specified (up to an overall constant) simply by the spin $j$ attached to the vertex $v$. We define the space $\cal H$ of surface quantum states as the space spanned by all (generalized) surface spin network states. This definition means that the basis in $\cal H$ is formed by surface spin network states, which gives us all that we need for our counting purposes.

We can now recall that there exists a set of well defined operators ${\hat A}_R$, which `measure' the quantum geometry of $S$. These operators correspond to areas of various regions $R$ of the surface $S$ (see \cite{QGeometry}). It turns out that 3-d spin network states are eigenstates of the operators ${\hat A}_R$. The corresponding eigenvalues are given by \begin{equation} A_{R} = \sum_{v\in R} {1\over2} \sqrt{2j_{(v)}^{d}(j_{(v)}^d+1)+ 2j_{(v)}^{u}(j_{(v)}^u+1)- j_{(v)}^{u+d}(j_{(v)}^{u+d}+1)}. \label{qarea} \end{equation} Note that we measure areas in the units of $16\pi l_p^2$, which is convenient in loop quantum gravity. Here the sum is taken over all vertices $v$ lying in the region $R$ of the surface $S$; $j_{(v)}^u$, $j_{(v)}^d$ and $j_{(v)}^{u+d}$ are the total spins of the edges lying up the surface, down the surface, and tangential to the surface, respectively (see \cite{QGeometry} for details). Although these operators are defined on 3-d quantum states, the eigenvalues (\ref{qarea}) depend only on the `surface' part of the information carried by a 3-d spin network state. Therefore, we can think of $\hat{A}_R$ as operators defined on the (generalized) surface spin network states, with eigenvalues given by (\ref{qarea}).

As we have said in the introduction, in this paper we are going to consider a geometrical entropy that corresponds to a macro-state specified simply by a total surface area $A$. For this simple case, the approximation criterion between macro- and micro-descriptions is straightforward. Namely, we can say that a quantum state (micro-state) approximates a given macro-state if the mean value of the operator ${\hat A}_S$ in this quantum state is approximately equal to the fixed value $A$. It is straightforward to see that quantum states approximating a given total surface area form a linear subspace in the space of all surface states. Its dimension is equal to the number of different surface spin network states approximating $A$.

Let us see now how many different quantum states approximate a given total area. It is easy to see that this number is infinite. Indeed, loops on the surface, which are the simplest possible parts of a surface spin network state, do not give any contribution to the areas. Therefore, one has an infinite number of spin networks which approximate one and the same total area of $S$, being different only in configurations of loops on the surface. One can argue, however, that this happens because, using terminology from statistical mechanics, a macro-state of our system is not completely specified when we fix only the total area of the surface. Indeed, areas of regions on $S$ carry information only about the degrees of freedom described by the field ${\tilde e}^{a\,AB}$. But there are also degrees of freedom described by the pull-back of the connection field on the surface, which one should take care of when specifying a macro-state. Our guess is that different configurations of loops on the surface from the above example correspond to different classical configurations of the connection field on $S$.
Indeed, as we know, in classical theory loop quantities are constructed as traced holonomies of the connection, and, therefore, are just those objects which carry information about the connection field. Therefore, since we want to forget about the degrees of freedom described by the field $a_a^{AB}$ when we specify a macro-state, to be consistent we have also to forget about those quantum states which, as we believe, contribute to $a_a^{AB}$ and do not contribute to the area of $S$. Let us, therefore, consider only quantum states which contribute to the areas of regions on $S$, but do not `contribute' to the connection field on the surface. These, according to our guess, are the states which contain no loops on the surface. We shall call the corresponding spin networks {\it open} spin networks. A (generalized) surface spin network is called open if the (surface) graph that labels it contains no closed paths (or no loops). The simplest open spin network state is the spin network containing a single vertex (see Fig. \ref{fig:3}).

Now, to find the entropy which corresponds to a macro-state of a fixed total surface area $A$ we should calculate the number of quantum states which approximate $A$, taking into account only open spin networks. However, let us first analyze the problem taking into account only some particularly simple spin network states, and then try to generalize the result obtained.

\section{Geometrical entropy: sets of punctures.} \label{sec:3} Let us consider spin networks whose vertices are bivalent, i.e., those specified simply by sets of points (vertices) on the surface with spins assigned to these points (see Fig. \ref{fig:4}). \begin{figure} \centerline{\hbox{\psfig{figure=fig4.ps}}} \caption{The simplest surface spin network states are specified by a set of punctures on the boundary.} \label{fig:4} \end{figure} Points on the surface with spins assigned are sometimes called punctures (see, for example, \cite{Linking}), and we shall use this name as well. So the quantum states which we consider now are specified by sets of punctures on the surface. For these simple quantum states there exists a simplified version of the formula (\ref{qarea}). Namely, a set $\{p,j_{p}\}$ of punctures gives for the area of a region $R$ on $S$ \begin{equation} A(\{p,j_{p}\}) = \sum_{p\in R} \sqrt{j_{p}(j_{p}+1)}. \label{areasimple} \end{equation} Here the sum is taken over all punctures which lie in the region $R$ and $j_p$ are the corresponding spins. To get the total area of the surface we just have to sum over all punctures $p\in S$.

With these simple states the problem of calculating the entropy becomes almost straightforward. Let us use the standard trick. Instead of counting states which correspond to the same area we shall take a sum over {\it all} states but take them with different statistical weights. Namely, let us consider the sum \begin{equation} Q(\alpha) = \sum_{\Gamma}\exp{\left ( -\,\alpha\,A(\Gamma) \right )} \label{StatSum} \end{equation} over all different states $\Gamma = \{p,j_{p}\}$, where $\alpha > 0$ is a parameter. Considering $p_{\Gamma} = {1\over Q(\alpha)} \exp{\left ( -\,\alpha\,A(\Gamma) \right )}$ as the {\it probability} for our system to be found in the state $\Gamma$, it is easy to see that the {\it mean} value of the area in such a statistical state is \begin{equation} A(\alpha) = - {\partial \ln Q\over\partial\alpha}.
\label{a1} \end{equation} The entropy of the system in this macro-state is given by the standard formula $S = -\,\sum_{\Gamma} p_{\Gamma}\ln p_{\Gamma}$ or, as is easy to check, by \begin{equation} S(\alpha) = \alpha\,A(\alpha) + \ln Q(\alpha). \label{SA} \end{equation} We see that the mean value of the surface area depends on $\alpha$. If the statistical `ensemble' of states is chosen properly, then one can adjust the value of $\alpha$ in such a way that $A(\alpha)$ acquires any prescribed value. There is some particular value of $S$ which corresponds to the chosen value of $A$. Excluding $\alpha$ in this way we obtain the entropy as a function of the area, $S = S(A)$. Statistical mechanics tells us that when the density $\eta(A)$ of states of our system grows sufficiently fast with $A$ it makes no difference which way of calculating $S(A)$ one chooses; one can count the logarithm of the number of different states which give one and the same area, or calculate the function $S(A)$ as described -- the results will not differ. Let us note that $S(A)$ calculated through (\ref{SA}) will be meaningful only for large $A$ (as compared with unity, i.e. with the Planckian area). This is really what we need, because only in this case do the notion of approximation and, therefore, the notion of entropy make sense.

Let us now discuss whether all the sets $\{p,j_{p}\}$ should be taken into account in (\ref{StatSum}). First of all, let us recall that we have to count not the surface spin networks themselves but the diffeomorphism equivalence classes of spin networks. This means that two surface spin networks which can be transformed one into another by a diffeomorphism on the surface should be considered as a single state in (\ref{StatSum}). Thus, the continuous set of data $\{ p, j_p \}$ ($p$ runs all over the surface $S$) reduces simply to a set of spins $\{ j_p \}$ when one identifies sets of punctures which can be mapped one into another by a diffeomorphism on the surface (note, however, the discussion following the next paragraph).

Next, we note that, if the surface $S$ is a closed one, not every set $\{ p, j_p \}$ can be obtained as a result of intersection with $S$ of some 3-d spin network in $\Sigma$. Namely, the sum $\sum_{p}j_{p}$, which is the total spin that `enters' the surface, must be an integer for gauge invariant states. This corresponds to the fact that not all eigenvalues given by the formula (\ref{qarea}) are eigenvalues of the area of a closed surface, as was first pointed out in \cite{QGeometry}. Let us consider in this section only the simpler case of an open surface $S$; we return to the case of a closed surface in Sec.\ \ref{sec:4}.

And, finally, let us consider the states specified by the following two sets of punctures (see Fig. \ref{fig:5}) \begin{equation} \{\ldots,j(p')=s,\ldots,j(p'')=q,\ldots\},\qquad \{\ldots,j(p')=q,\ldots,j(p'')=s,\ldots\}. \label{states} \end{equation} \begin{figure} \centerline{\hbox{\psfig{figure=fig5.ps}}} \caption{Two sets of punctures which are considered as specifying different quantum states.} \label{fig:5} \end{figure} These two states differ from one another only at the two points $p',p''$; they give the same total area of the boundary. Should we distinguish these two states or consider them as a single physical state? It turns out that this is the key question which determines the form of the dependence of the entropy $S(A)$ on the area $A$ of the surface.
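Before addressing it, let us note in passing that the area assignment (\ref{areasimple}) entering all of the counting below is elementary to evaluate. A minimal numerical sketch (in Python; the set of puncture spins chosen here is arbitrary and purely illustrative):
\begin{verbatim}
import math

def puncture_area(spins):
    # total area carried by a set of punctures {p, j_p}, in units of
    # 16*pi*l_p^2, from the simplified formula A = sum_p sqrt(j_p(j_p+1))
    return sum(math.sqrt(j * (j + 1)) for j in spins)

# an arbitrary illustrative set of spins (integers and half-integers)
print(puncture_area([0.5, 0.5, 1.0, 1.5]))   # ~ 5.08
\end{verbatim}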
Let us now consider these states as {\it different} and postpone the discussion of this choice to the last section. So let us denote by $N(A)$ the number of states which correspond to one and the same area $A$ in the case when all the sets (\ref{states}) are regarded as specifying different states. It is easy to see that $N(A)$ is the number of {\it ordered} sets of punctures which approximate the total area $A$. It is straightforward to compute $N(A)$ using our method with the statistical sum.\footnote{One can also do this calculation explicitly, using combinatorial methods. See \cite{Rov}.} One can easily check that the fact that we regard the sets (\ref{states}) as specifying different states (or, equivalently, the fact that we count ordered sets of punctures) means that we can sum over the spin of each puncture {\it independently} \begin{equation} Q = 1 + \sum_{n=1}^{\infty}\sum_{j_{p_{1}}=1/2}^{\infty}\cdots \sum_{j_{p_{n}}=1/2}^{\infty}\exp{-\,\alpha\, \sum_{p}\sqrt{j_{p}(j_{p}+1)}}. \label{1} \end{equation} The first sum here denotes the sum over the number of possible punctures on $S$ and the subsequent ones denote the summation over the possible spins. It is easy to see that \begin{equation} Q = {1\over 1 - z(\alpha)}, \label{2} \end{equation} where $z(\alpha)$ is given by \begin{equation} z(\alpha) = \sum_{j=1/2}^{\infty}\exp{-\,\alpha\,\sqrt{j(j+1)}}. \label{smallz} \end{equation} Note that the sum here runs over all positive integers and half-integers.

One can expect that $Q(\alpha)$ will increase as $\alpha$ gets smaller, because in any case (\ref{1}) diverges when $\alpha$ goes to zero. However, we see that (\ref{2}) diverges already at a finite value $\alpha'$ such that $z(\alpha') = 1$ (we shall see in a minute to which value of $\alpha$ this corresponds). When $\alpha$ gets smaller and approaches $\alpha'$, $Q(\alpha)$ increases and diverges at the point $\alpha'$. It can easily be checked that $A(\alpha)$ and $S(\alpha)$ also diverge when $\alpha \to \alpha'$. This means that by changing $\alpha$ slightly we obtain substantial differences in the values of $A$ and $S$. What we are interested in is the dependence $S(A)$ for large values of $A$. But we see that all large values of $A$ can be obtained by small changes in $\alpha$ when $\alpha \to \alpha'$. Thus, from (\ref{SA}) we conclude that for $A \gg 1$ \begin{equation} S \approx \alpha'\,A. \label{entropy} \end{equation} Here we neglected the term $\ln Q$, which is small compared with the main term in (\ref{entropy}). This result tells us that the entropy grows precisely as the first power of the area of the boundary.

Now let us see what $\alpha'$ amounts to. One can do this numerically, but it is also straightforward to find an approximate value. Note that $z(\alpha)$ can be rewritten as the sum over integers \begin{equation} z(\alpha) = \sum_{l=1}^{\infty} \exp{-\,{\alpha\over 2}\,\sqrt{l^2+2l}}. \end{equation} The term under the square root in the exponential can be given the form $(l+1)^{2}-1$. Because the sum starts from $l=1$ we can neglect the unity in comparison with the larger term $(l+1)^2$. Then $z(\alpha)$ can be easily computed \begin{equation} z(\alpha) = {\exp{(-\alpha)}\over 1\,-\,\exp{(-\alpha/2)}}. \label{z} \end{equation} This gives for $\alpha'$, defined by $z(\alpha')=1$, the value $\alpha'=2\,\log{{2\over\sqrt{5}-1}} \approx 0.96$. An explicit numerical investigation of the equation $z(\alpha') = 1$ gives a close value $\alpha'\approx 1.01$.
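This numerical investigation is easy to reproduce. A minimal sketch (in Python; the truncation of the spin sum and the bisection bracket are ad hoc but harmless choices) solves $z(\alpha')=1$ for the exact sum (\ref{smallz}) and compares the result with the approximate value found above:
\begin{verbatim}
import math

def z(alpha, lmax=2000):
    # z(alpha) = sum over j = 1/2, 1, 3/2, ... of exp(-alpha*sqrt(j(j+1))),
    # truncated at j = lmax/2; the tail is exponentially small for alpha ~ 1
    return sum(math.exp(-alpha * math.sqrt(0.5 * l * (0.5 * l + 1.0)))
               for l in range(1, lmax + 1))

# bisection for the root of z(alpha) = 1; z decreases monotonically in alpha
lo, hi = 0.5, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if z(mid) > 1.0:
        lo = mid    # z(mid) still too large: the root lies at larger alpha
    else:
        hi = mid

print("alpha' (numerical)   =", 0.5 * (lo + hi))   # ~ 1.01
print("alpha' (approximate) =",
      2.0 * math.log(2.0 / (math.sqrt(5.0) - 1.0)))  # ~ 0.96
\end{verbatim}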
So we have shown that for large values of $A$ the entropy depends on the area as \begin{eqnarray} S(A) = \alpha' A; \nonumber \\ \alpha' = 1.01. \label{formula} \end{eqnarray}

\section{Geometrical entropy: the case of closed surfaces.} \label{sec:4} As we have mentioned before, in the case of a closed surface $S$ we have to take into account the fact that not all eigenvalues given by the formula (\ref{qarea}) are eigenvalues of the operator $\hat{A}_S$, which measures the total area of the surface. Namely, in the case of a closed surface $S$ gauge invariant quantum states are those which satisfy the condition that the sums $\sum_{v\in S} j_{(v)}^u$ and $\sum_{v\in S} j_{(v)}^d$ over all vertices lying on the surface are integers \cite{QGeometry} (the spins $j_{(v)}^u,j_{(v)}^d$ are those defined after the formula (\ref{qarea})). This condition means that for the case of a closed surface some spin network states should be excluded from the sum (\ref{StatSum}). Recall that our result essentially follows from the fact that the statistical sum $Q(\alpha)$ diverges when $\alpha$ approaches some {\it finite} value $\alpha'$. Because we have to drop some positive terms from the statistical sum corresponding to a closed $S$, $Q(\alpha)$ could prove to be convergent for all $\alpha > 0$, and as a result we would obtain some other (non-linear) dependence $S(A)$\footnote{In fact, in this case we would obtain that $S(A)$ grows {\it slower} than $A$.}. The aim of this section is to show that the geometrical entropy $S(A)$ of (ordered) sets of punctures, considered in the previous section, still depends linearly on the total surface area for a closed surface $S$.

So, again, our states are specified by ordered sets $\{p,j_p\}$ of punctures on the surface. However, in the case of a closed surface, not all sets of punctures correspond to gauge invariant physical states. Namely, in our case gauge invariant states are those for which $\sum_{p\in S} j_p$ is an integer. To find the number of different states which approximate one and the same total area we can again apply our trick with the statistical sum. The statistical sum $Q(\alpha)$ will be given by the expression (\ref{1}), where we have to take into account only the states satisfying the above condition.

It is not hard to calculate the statistical sum $Q(\alpha)$ for our case of a closed surface $S$. Let us divide the function $z(\alpha)$ given by (\ref{smallz}) into two parts $z(\alpha)=\tilde{z}(\alpha)+\tilde{\tilde{z}}(\alpha)$. The function $\tilde{z}(\alpha)$ is the sum over all integer values $j=l$ of the spin \begin{equation} \tilde{z}=\sum_{l=1}^{\infty} \exp{-\alpha\sqrt{l(l+1)}}. \end{equation} The function $\tilde{\tilde{z}}(\alpha)$ is the sum over all half-integers $j=l-1/2$ \begin{equation} \tilde{\tilde{z}}(\alpha) = \sum_{l=1}^{\infty} \exp{-\alpha\sqrt{(l-1/2)(l+1/2)}}. \end{equation} Let us rewrite the statistical sum $Q(\alpha)$ over all sets of punctures in terms of the functions $\tilde{z},\tilde{\tilde{z}}$ introduced \begin{equation} Q(\alpha) = {1\over 1-(\tilde{z}+\tilde{\tilde{z}})} = {1\over 1-\tilde{z}} \sum_{n=0}^{\infty} \left({\tilde{\tilde{z}}\over 1-\tilde{z}} \right)^n. \label{q1} \end{equation} A moment of reflection shows that in order to get the statistical sum corresponding to the case of a closed surface, we have to drop from (\ref{q1}) all terms which contain odd powers of $\tilde{\tilde{z}}$.
Thus, we get \begin{equation} Q(\alpha)_{closed} = {1\over 1-\tilde{z}} \sum_{n=0}^{\infty} \left({\tilde{\tilde{z}}\over 1-\tilde{z}} \right)^{2n} = {1\over 1-\tilde{z}} \; {1\over 1-\left({\tilde{\tilde{z}}\over 1-\tilde{z}}\right)^2}. \end{equation} Here $\tilde{z},\tilde{\tilde{z}}$ are functions of $\alpha$. Recall now that our statistical mechanical system has a regime in which the entropy depends linearly on the area precisely when the statistical sum diverges at some finite value of $\alpha$. Let us, therefore, investigate the behavior of $Q(\alpha)_{closed}$ as $\alpha$ decreases. First, let us note that \begin{equation} Q(\alpha)_{closed} = {1-\tilde{z}\over (1-\tilde{z}+\tilde{\tilde{z}})(1-\tilde{z}-\tilde{\tilde{z}})}. \end{equation} Also, we note that $\tilde{\tilde{z}} > \tilde{z}$, because the sum in $\tilde{\tilde{z}}(\alpha)$ starts from $j=1/2$, whereas the sum in $\tilde{z}$ is taken over all integer values of spin and starts from $j=1$. We see, therefore, that the statistical sum $Q(\alpha)_{closed}$ diverges at the value $\alpha'$ for which $\tilde{z}(\alpha')+\tilde{\tilde{z}}(\alpha') = 1$. But $\tilde{z}+\tilde{\tilde{z}}=z$, so the value of $\alpha'$ here is the solution of the equation $z(\alpha') = 1$ obtained in the previous section. Thus, we get that \begin{equation} S_{closed}(A) = \alpha' \,A, \end{equation} where $\alpha'$ is the same as in (\ref{formula}).

Thus, we have proved that, although the statistical sum $Q(\alpha)_{closed}$ for the case of a closed surface is different from $Q(\alpha)$ corresponding to the case of an open $S$, it implies the same linear dependence $S(A) = \alpha' A$ of the entropy on the surface area, with the proportionality coefficient $\alpha'$ being the same as in the case of an open surface. One can say that, although we excluded some states from the statistical sum, there are still `enough' states to give the same linear dependence of the entropy on $A$ as in the case of an open surface.

\section{Discussion.} The result which we have obtained considering some particularly simple surface quantum states is that the entropy corresponding to a macro-state which is specified by a total area $A$ of the surface is proportional precisely to the area $A$. It is important that we have obtained precisely the same dependence $S(A)$ both for the case of open and closed surfaces $S$. The entropy was defined as the logarithm of the number of different quantum states that approximate one and the same area of the surface. The states which we considered were specified simply by sets of punctures on the surface. It is crucial that states which differ only by a permutation of spins (see Fig.\ \ref{fig:5}) were considered as different quantum states.

One of the reasons why we postponed the discussion of the key point of distinguishing the states (\ref{states}) was to emphasize the importance of this choice. It is this choice which implies the linear dependence of the entropy $S(A)$ on the surface area. As we show in the appendix, in the case when the states (\ref{states}) are considered as indistinguishable, the dependence of the entropy on the area is different from the linear one (in fact, the entropy turns out to be proportional to the square root of the area). Before discussing this key point, let us note that our aim in this paper is not to give a physical motivation for some particular choice of the ensemble of quantum states.
Rather we wanted to show how the entropy arises naturally when one considers the correspondence between classical and quantum pictures of geometry. We also wanted to illustrate this idea following some simple choice of the ensemble of quantum states, and to present a technique which turns out to be useful. Having this in mind, let us discuss our choice of considering the states (\ref{states}) as distinguishable ones.

We were counting the number of different diffeomorphism equivalence classes of (simple) surface spin networks approximating one and the same total surface area. This means that two surface spin networks which can be transformed one into another by a diffeomorphism on the surface should have been considered as specifying the same micro-state. Let us now consider the two states (\ref{states}). It is easy to see that there exists a diffeomorphism which maps one state into the other; this diffeomorphism simply interchanges the two points $p',p''$ on the surface. Therefore, at first sight, the states (\ref{states}) should have been considered as a single state. Does this mean that the result obtained is physically meaningless and is simply an exercise in statistical mechanics? There are some reasons to believe that the result obtained is more than that. So let us give some possible motivations for our seemingly strange choice of considering the states (\ref{states}) as different quantum states.

One possibility, which is argued by Carlo Rovelli \cite{Rov}, is that points on the surface are physically distinguishable, and so are the states (\ref{states}). This, in fact, happens in the case of some systems for which the surface $S$ plays the role of the boundary. It might be the case that {\it boundary conditions} partly (or even completely) break diffeomorphism invariance on the boundary. This would mean that some spin networks which are usually considered as specifying one and the same quantum state should, in fact, be considered as different quantum states. This case of systems with boundaries is subtle and deserves special attention. Let us only note the possible connection of our result, viewed from this perspective, with the results of Steven Carlip \cite{Carlip}.

The other possibility, which has also been argued in \cite{SM}, is that some other (in fact, loop) states on the surface make some of the states (\ref{states}) belong to different diffeomorphism equivalence classes of spin networks. Indeed, the surface loop states which we considered as giving no contribution to the areas, and thus forgot about, may greatly affect the number of different diffeomorphism equivalence classes of states which approximate one and the same area of $S$. To see this, let us introduce a loop configuration on the surface. This loop configuration divides the surface into regions. It is clear that some of the states (\ref{states}) will belong to different equivalence classes, for there will no longer be a diffeomorphism `connecting' different regions. Thus, the number of different diffeomorphism equivalence classes which approximate one and the same total area $A$ in this case is larger than in the case when there are no loops on the surface. So loops on the surface may allow one to distinguish states of the form (\ref{states}) (for more details see \cite{SM}).

Of course, neither of these motivations gives a final physical justification of the result obtained. But let us repeat that this is not what we aimed at in this paper.
We hope, however, that the above discussion shows at least that the issue deserves further investigation. Finally, let us discuss the possibility of generalizing the result obtained by considering arbitrary open surface spin network states. First, let us take into account surface spin networks which have no tangential edges, allowing, however, vertices of arbitrary valence. In this case we have to use the general formula (\ref{qarea}) for the eigenvalues of the area operators (with all $j_{(v)}^{u+d}$ being equal to zero because of the fact that we consider spin networks with no tangential edges). We would like to generalize our result by counting all (open) surface spin networks which approximate one and the same total surface area. However, we face a problem when trying to consider all states. Namely, the eigenvalues given by the formula (\ref{qarea}) are degenerate, and we have to take this degeneracy into account when calculating the entropy. Let us consider, for example, a simple state, which contains one vertex of valence two (see Fig. \ref{fig:7}). \begin{figure} \centerline{\hbox{\psfig{figure=fig7.ps}}} \caption{Surface state which does not contribute to the surface area, thus producing the degeneracy.} \label{fig:7} \end{figure} For this state we get $j^u = j^d = 0$, and, therefore, $A_S = 0$. A moment's consideration shows that there are, in fact, an infinite number of similar surface states that do not give any contribution to the area of $S$. Thus, we find that the eigenvalue $A_S=0$ is infinitely degenerate. Similarly, we find that {\it all} eigenvalues given by the formula (\ref{qarea}) are infinitely degenerate. Therefore, if we were to take into account all different surface states we would get an infinite value for our geometrical entropy.

Let us note, however, that the states which we have just considered are rather pathological. Namely, we observe that small deformations of the surface $S$ (see Fig. \ref{fig:7}) (with the spin network state itself not being deformed) cause a change in the `quantum area' of $S$. Let us consider a one parameter family of surfaces $S_\epsilon, \epsilon\in [0,1]$ such that $S_\epsilon \to S$ when $\epsilon\to 0$. For our example (see Fig. \ref{fig:7}), if the surfaces $S_\epsilon$ approach the surface $S$ from below we have $\lim_{\epsilon\to 0} A_{S_\epsilon} = A_S = 0$ (here by $A_S$ we denote an eigenvalue of the operator $\hat{A}_S$ on the quantum state we consider). However, if we choose the family of surfaces approaching $S$ from above we have $\lim_{\epsilon\to 0} A_{S_\epsilon} = \sqrt{3} \not= A_S = 0$ (we measure areas in the units $16\pi l_p^2$). Thus, we see that the states which cause the degeneracy are `pathological' when considered as surface states, for the `quantum area' of $S$ in these states behaves discontinuously under small deformations of $S$. This observation, which the author learned from A.Ashtekar, suggests that we have to exclude these states when we consider an ensemble of surface states. A natural way to do this would be to change the approximation criterion between macro- and micro-descriptions. Namely, let us strengthen our criterion in the following way. We fix a value $A$ of the total area of $S$, which defines our macro-state. We choose a one parameter family $S_\epsilon$ of two-surfaces such that $S_\epsilon\to S, \epsilon\to 0$.
Let us now say that a spin network state $\Gamma$ approximates our macro-state if $\lim_{\epsilon\to 0} A_{S_\epsilon}(\Gamma) = A$, where $A_{S_\epsilon}(\Gamma)$ is the eigenvalue of the operator $\hat{A}_{S_\epsilon}$ corresponding to the eigenstate $\Gamma$. The new approximation criterion states that we have to consider only `good' surface quantum states. By definition, the `quantum area' of $S$ in `good' surface quantum states does not change under small deformations of the surface $S$. It is easy to see that `good' quantum states are those which have only bivalent vertices, i.e.\ precisely those states which we considered in this paper. Thus, we conclude that the result obtained above gives the geometrical entropy $S(A)$ of a macro-state of a fixed total area $A$, the quantum states that account for this entropy being all states which approximate $A$ in the strong sense, i.e., those for which the `quantum area' does not change under small deformations of the surface $S$. Furthermore, the entropy $S(A)$ is the same both for open and for closed surfaces. Thus, the result obtained is general in the sense that we consider all quantum surface states which approximate $A$ in the strong sense.

Let us conclude by saying that the notion of geometrical entropy is, presumably, valid not only in the form explored here (when we fixed only one macroscopic parameter -- the surface area), but also in a more general context. For example, it is of interest to calculate the entropy $S(g)$ which corresponds to a given 2-metric on the surface, which is a genuine geometrical entropy.

\section{Acknowledgments.} I am grateful to Yuri Shtanov for the discussions in which the idea of geometrical entropy developed and for important comments on the first versions of the manuscript. I would like to thank A.Ashtekar, A.Coricci, R.Borisov, S.Major, C.Rovelli, L.Smolin and J.Zapata for discussions, comments and criticism. I am grateful to A.Ashtekar, from whom I learned the idea of sequences of surfaces, which is used here to formulate the `strong' approximation criterion. I would also like to thank the Banach Center of the Polish Academy of Sciences for its hospitality during the period when this paper was started. This work was supported, in part, by the International Soros Science Education Program (ISSEP) through grant No. PSU062052.
\section{\label{sec:intro}Introduction} Consider a lattice gauge theory with gauge group SU($N$) on a periodic lattice of time extent $N_t$, possibly containing matter fields and a chemical potential. If we integrate out all degrees of freedom under the constraint that Polyakov line holonomies are held fixed, then the resulting distribution depends only on those Polyakov line holonomies or, more precisely, on their eigenvalues. The logarithm of this distribution is defined to be the effective Polyakov line action $S_P$. If the underlying lattice gauge theory in $D=4$ dimensions has a sign problem due to a non-zero chemical potential, then $S_P$ probably also has a sign problem. However, there are indications that the sign problem may be more tractable in $S_P$ than in the underlying theory. Using strong-coupling and hopping parameter expansions, it is possible to actually carry out the integrations over gauge and matter fields mentioned above, to arrive at an action of the form~\footnote{This is the action at leading order. For the effective action determined at higher orders in the combined strong-coupling and hopping parameter expansions, cf.\ \cite{Fromm:2011qi}.} \begin{eqnarray} S_P &=& \b_P \sum_{{\vec{x}}} \sum_{i=1}^3 [\text{Tr} U_{\vec{x}}^\dagger \text{Tr} U_{{\vec{x}}+\boldsymbol{\hat{\textbf{\i}}}} + \text{Tr} U_{\vec{x}} \text{Tr} U^\dagger_{{\vec{x}}+\boldsymbol{\hat{\textbf{\i}}}}] + \k \sum_{\vec{x}} [e^\mu \text{Tr} U_{\vec{x}} + e^{-\mu} \text{Tr} U^\dagger_{\vec{x}}] \ , \label{action1} \eea where $\b_P, \k$ are calculable constants depending on the gauge coupling, quark masses, and temperature $T=1/N_t$ in the underlying theory. To minimize minus signs later on, the overall sign of $S_P$ is defined such that the Boltzmann weight is proportional to $\exp[S_P]$, rather than $\exp[-S_P]$. The Polyakov line holonomies $U_{\vec{x}} \in $ SU($N$) in \rf{action1} are also known as ``effective spins." A path integral based on an effective spin action of the form \rf{action1}, for a wide range of $\b_P,\k,\mu$, can be treated by a number of different methods, including the ``flux representation" \cite{Mercado:2012ue}, reweighting \cite{Fromm:2011qi}, and stochastic quantization \cite{Aarts:2011zn}. Even traditional mean field methods have had some degree of success in determining the phase diagram \cite{Greensite:2012xv}. The problem, of course, is that strong lattice coupling and heavy quark masses lie outside the parameter range of phenomenological interest, and it is not obvious how to extract $S_P$ for parameters inside the range of interest, even at $\mu=0$. There have been some efforts in this direction, notably the inverse Monte Carlo method of ref.\ \cite{Wozar:2007tz,*Heinzl:2005xv}, as well as early studies \cite{Gocksch:1984ih,Ogilvie:1983ss} which employed microcanonical and Migdal-Kadanoff methods, respectively. There is also a strategy for determining the phase structure of lattice gauge theory from an effective spin theory, whose form is suggested by high-order strong-coupling and hopping parameter expansions \cite{Fromm:2011qi}. Here, however, I will discuss a different approach to the problem, recently suggested in ref.\ \cite{Greensite:2012xv}, which will be illustrated for SU(2) pure gauge and gauge-Higgs theories. \section{\label{sec:method}The ``Relative Weights" Approach} Let $S_{QCD}$ be the lattice QCD action at temperature $T=1/N_t$ in lattice units, with lattice gauge coupling $\b$, and a set of quark masses denoted collectively $m_q$. 
We set chemical potential $\mu=0$ for now. It is convenient to impose a temporal gauge condition in which the timelike link variables are set to the unit matrix everywhere except on a single time slice, say at $t=0$. In that case, $U_0({\vec{x}},0)$ is the Polyakov line holonomy passing through the site $({\vec{x}},t=0)$. The effective Polyakov line action is defined in terms of the partition function \begin{eqnarray} Z(\b,T,m_q) &=& \int DU_0({\vec{x}},0) \int DU_k D\overline{\psi} D\psi ~ e^{S_{QCD}} \nonumber \\ &=& \int DU_0({\vec{x}},0) ~ e^{S_P[U_0]} \ , \label{S0} \eea or equivalently \begin{eqnarray} \exp\Bigl[S_P[U_{{\vec{x}}}]\Bigl] = \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \left\{\prod_{{\vec{x}}} \d[U_{{\vec{x}}}-U_0({\vec{x}},0)] \right\} e^{S_{QCD}} \ . \label{S_P} \eea Because temporal gauge has a residual symmetry under time-independent gauge transformations, it follows that $S_P[U_{{\vec{x}}}]$ is invariant under $U_{{\vec{x}}} \rightarrow g({\vec{x}}) U_{{\vec{x}}} g^\dagger({\vec{x}})$, which means that $S_P$ only depends on the eigenvalues of the Polyakov line holonomies. Now consider a finite set of $M$ SU($N$) ``effective spin" configurations in the three-dimensional cubic lattice $V_3$ of volume $L^3$, \begin{eqnarray} \Bigl\{ \{U^{(i)}_{{\vec{x}}}, \mbox{all~} {\vec{x}} \in V_3\}, ~ i=1,2,...,M \Bigr\} \ . \eea Each member of the set can be used to specify the timelike links on the timeslice $t=0$. Define \begin{eqnarray} {\cal Z} = \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \sum_{i=1}^M \left\{ \prod_{{\vec{x}}} \d[U^{(i)}_{{\vec{x}}} - U_0({\vec{x}},0)] \right\} e^{S_{QCD}} \ , \eea and consider the ratio \begin{eqnarray} { \exp\Bigl[S_P[U^{(j)}]\Bigr] \over \exp\Bigl[S_P[U^{(k)}]\Bigr] } &=& { \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \left\{\prod_{{\vec{x}}} \d[U^{(j)}_{{\vec{x}}} - U_0({\vec{x}},0)] \right\} e^{S_{QCD}} \over \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \left\{ \prod_{{\vec{x}}} \d[U^{(k)}_{{\vec{x}}} - U_0({\vec{x}},0)] \right\} e^{S_{QCD}} } \nonumber \\ &=& { {1\over {\cal Z}} \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \left\{\prod_{{\vec{x}}} \d[U^{(j)}_{{\vec{x}}}-U_0({\vec{x}},0)] \right\} e^{S_{QCD}} \over {1\over {\cal Z}} \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \left\{\prod_{{\vec{x}}} \d[U^{(k)}_{{\vec{x}}}-U_0({\vec{x}},0)] \right\} e^{S_{QCD}} } \ , \eea where in the second line we have merely divided both the numerator and denominator by a common factor. However, by inserting this factor, both the numerator and denominator acquire a meaning in statistical mechanics, because the factor ${\cal Z}$ can be interpreted as the partition function of a system in which the configuration of timelike link variables at $t=0$ is restricted to belong to the set $\{U^{(i)},~i=1,...,M\}$. This means that \begin{eqnarray} \mbox{Prob}[U^{(j)}] = {1\over {\cal Z}} \int DU_0({\vec{x}},0) DU_k D\overline{\psi} D\psi ~ \left\{\prod_{{\vec{x}}} \d[U^{(j)}_{\vec{x}}-U_0({\vec{x}},0)] \right\} e^{S_{QCD}} \eea is simply the probability, in this statistical system, for the $j$-th configuration $U_0({\vec{x}},0) = U^{(j)}({\vec{x}})$ to be found on the $t=0$ timeslice. This probability can be determined from a slightly modified Monte Carlo simulation of the original lattice action. The simulation proceeds by standard algorithms, for all degrees of freedom other than the timelike links at $t=0$, which are held fixed. 
Periodically, on the $t=0$ timeslice, one member of the given set of timelike link configurations is selected by the Metropolis algorithm, and all timelike links on that timeslice are updated simultaneously. Let $N_i$ be the number of times that the $i$-th configuration is selected by the algorithm, and ${N_{tot} = \sum_i N_i}$. Then $\mbox{Prob}[U^{(j)}]$ is given by \begin{eqnarray} \mbox{Prob}[U^{(j)}] = \lim_{N_{tot}\rightarrow\infty} {N_j \over N_{tot}} \ , \eea and this in turn gives us the {\it relative weights} \begin{eqnarray} { \exp\Bigl[S_P[U^{(j)}]\Bigr] \over \exp\Bigl[S_P[U^{(k)}]\Bigr] } = \lim_{N_{tot}\rightarrow\infty} {N_j \over N_{k}} \label{rw} \eea for all elements of the set. A computation of this kind allows us to test any specific proposal for $S_P$, which may be motivated by some theoretical considerations. But it might also be possible, given data on the relative weights of a variety of different sets, to guess the action that would lead to these results. In this article we will consider sets of spatially constant Polyakov line configurations, and small plane wave perturbations around a constant background. This is already sufficient to determine the potential term in $S_P$, and to suggest the form of the full action. The method described above was proposed long ago \cite{Greensite:1988rr} in connection with the Yang-Mills vacuum wavefunctional. Recently there have been some sophisticated suggestions for the form of this wavefunctional in 2+1 dimensions, and the technique was revived in order to test these ideas in ref.\ \cite{Greensite:2011pj}. The main difference between the method as applied to vacuum wavefunctionals, and as applied to determining $S_P$, is that in the former case the simulation chooses from a fixed set of spacelike link configurations on the $t=0$ timeslice, while in the latter the choice is made from a set of timelike link configurations. \subsection{Finite chemical potential} Let $S^{\mu}_{QCD}$ denote the QCD action with a chemical potential, which can be obtained from $S_{QCD}$ by the following replacement of timelike links at $t=0$: \begin{eqnarray} S^{\mu}_{QCD} = S_{QCD}\Bigr[U_0(\mathbf{x},0) \rightarrow e^{N_t \mu} U_0(\mathbf{x},0), U^\dagger_0(\mathbf{x},0) \rightarrow e^{-N_t \mu} U^\dagger_0(\mathbf{x},0)\Bigl] \ . \eea The corresponding Polyakov line action $S_P^\mu$ is in principle obtained from \rf{S_P}, with $S^{\mu}_{QCD}$ as the underlying action. Of course the integration indicated in \rf{S_P} can so far only be carried out for strong couplings and large quark masses, but it is not hard to see that each contribution to $S_P$ in the strong-coupling + hopping parameter expansion at $\mu=0$ maps into a corresponding contribution to $S^\mu_P$ by the replacement \begin{eqnarray} U_{\vec{x}} \rightarrow e^{N_t \mu} U_{\vec{x}} ~~~,~~~ U^\dagger_{\vec{x}} \rightarrow e^{-N_t \mu} U^\dagger_{\vec{x}} \ . \label{replace} \eea It is reasonable then to suppose that this mapping holds in general, i.e.\ if we have by some means obtained $S_P[U_{\vec{x}},U^\dagger_{\vec{x}}]$ beyond the range of validity of the strong-coupling + hopping parameter expansion, then the corresponding $S^\mu_P$ is obtained by making the change of variables \rf{replace}. There is, however, a possible source of ambiguity in this scheme (noted in \cite{Greensite:2012xv}), coming from identities such as \begin{eqnarray} \text{Tr} U^\dagger_{\vec{x}} = \frac{1}{2} \Bigl[ (\text{Tr} U_{\vec{x}})^2 - \text{Tr} U_{\vec{x}}^2 \Bigr] \label{identity} \eea in SU(3). 
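This identity is easy to check numerically. A minimal sketch (in Python; the random SU(3) matrix is built here from a QR decomposition, a construction chosen purely for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# a random unitary from the QR decomposition of a complex Gaussian matrix
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)
U = Q / np.linalg.det(Q) ** (1.0 / 3.0)   # rescale so that det(U) = 1

lhs = np.trace(U.conj().T)                        # Tr U^dagger
rhs = 0.5 * (np.trace(U) ** 2 - np.trace(U @ U))  # (1/2)[(Tr U)^2 - Tr U^2]
print(abs(lhs - rhs))                             # ~ 1e-15
\end{verbatim}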
One way around this ambiguity is to enlarge the range of $U_0({\vec{x}},0)$, allowing these variables to take on values \begin{eqnarray} U_0({\vec{x}},0) = e^{i\th} U({\vec{x}}) \ , \eea where $U({\vec{x}})$ is an element of SU($N$). In other words, we allow the $U_0({\vec{x}},0)$ links to take on values in the $U(N)$ group, although it will be sufficient for our purposes to let $\theta$ be ${\vec{x}}$-independent.\footnote{It is also sufficient to restrict $\th$ to $0\le \th < 2\pi/N$. The full range $[0,2\pi]$ is redundant, because of the $Z_N$ center of SU($N$).} Suppose we are able to determine $S_P$ for this enlarged domain of Polyakov line variables. Then $S_P^\mu$ is obtained by analytic continuation, $\th \rightarrow -iN_t \mu$. The essential point here is that if one can determine $S_P$ by simulations of $S_{QCD}$ at $\mu=0$, then this result can be used to determine $S_P^\mu$ at finite chemical potential. If the sign problem is in fact tractable for $S_P^\mu$, as recent results seem to suggest, then this may be a useful way of attacking the sign problem in full QCD. \subsection{Relative weights, and path-derivatives of $\mathbf S_P$} Let ${\cal C}$ be the configuration space of effective spins $\{U_{\vec{x}}\}$ on an $L^3$ lattice, and let the variable $\l$ parametrize some path $\{U_{\vec{x}}(\l)\}$ through ${\cal C}$. The method of relative weights is particularly useful in computing derivatives of the Polyakov line action \begin{eqnarray} \left({d S_P \over d \l} \right)_{\l = \l_0} \eea along the path. To see this, we begin by taking the logarithm of both sides of eq.\ \rf{rw}, and find \begin{eqnarray} S_P[U^{(j)}] - S_P[U^{(k)}] &=& \lim_{N_{tot}\rightarrow\infty} \Bigl\{ \log N_j - \log N_k \Bigr\} \nonumber \\ &=& \lim_{N_{tot}\rightarrow\infty} \left\{ \log {N_j\over N_{tot}} - \log {N_k\over N_{tot}} \right\} \ . \eea (From this point on we will drop the limit.) Now imagine parametrizing the effective spins by a parameter $\l$; each value of $\l$ gives us a different configuration $U_{\vec{x}}(\l)$. Let the configuration $U^{(j)}$ correspond to $\l=\l_0+\Delta \l$, and $U^{(k)}$ correspond to $\l=\l_0-\Delta \l$. Then \begin{eqnarray} \left({d S_P[U_{\vec{x}}(\l)] \over d\l}\right)_{\l=\l_0} \approx {1\over 2\Delta \l} \left\{ \log {N_j\over N_{tot}} - \log {N_k\over N_{tot}} \right\} \ . \eea However, rather than using only two configurations to compute the derivative, we can obtain a more accurate numerical estimate if we let $\l$ increase in increments of $\Delta \l$, e.g. \begin{eqnarray} \l_n = \l_0 + \left( n - {M+1 \over 2}\right) \Delta \l ~~~,~~~ n=1,2,...,M ~~~ \ , \eea and use all of the $M$ values obtained for $N_n$ in the simulation. For $\Delta \l$ small enough, the data for $\log N_n/N_{tot}$ vs.\ $\l_n$ will fit a straight line, and then we obtain the estimate \begin{eqnarray} \left({d S_P[U_{\vec{x}}(\l)] \over d\l}\right)_{\l=\l_0} \approx ~~ \mbox{slope of } \log {N_n \over N_{tot}} ~~ \mbox{vs.}~~ \l_n \ . \eea The procedure will be illustrated explicitly in the next section. \section{\label{sec:test}Testing the method at strong coupling} The first step is to compute $d S_P/d \l$ for a case where we know the answer analytically. As mentioned previously, $S_P$ can be readily computed in the strong-coupling + hopping parameter expansion. We will consider here the case of pure SU(2) Yang-Mills theory at a strong coupling $\b$.
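Before working out the strong-coupling formulas, it may help to see the whole chain of the method -- Metropolis selection among a fixed set of configurations, followed by the straight-line fit -- in miniature. A minimal toy sketch (in Python; the quadratic `action' below is an arbitrary stand-in for $S_P[U_{\vec{x}}(\l)]$, since the true relative weights involve the underlying $S_{QCD}$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def S(lam):
    # toy stand-in for S_P[U(lambda)]; exact derivative at lambda_0=0.5 is 2.5
    return 2.5 * lam ** 2

M, lam0, dlam = 20, 0.5, 0.01
lam = lam0 + (np.arange(1, M + 1) - 0.5 * (M + 1)) * dlam

# Metropolis selection among the M configurations, mimicking the
# periodic update of the t=0 timeslice, and a histogram of visits
N = np.zeros(M)
i = 0
for _ in range(200000):
    j = rng.integers(M)
    if np.log(rng.random()) < S(lam[j]) - S(lam[i]):
        i = j
    N[i] += 1

slope = np.polyfit(lam, np.log(N / N.sum()), 1)[0]
print(slope)   # ~ 2.5, the exact dS/dlambda at lambda_0
\end{verbatim}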
If the lattice is $N_t$ lattice spacings in the time direction, then computing the diagrammatic contributions to $S_P$ at leading and next-to-leading order in the strong-coupling/character expansion, we find \begin{eqnarray} S_P &=& \left[1 + 4N_t \left({I_2(\b) \over I_1(\b)}\right)^4 \right] \left({I_2(\b) \over I_1(\b)}\right)^{N_t} \sum_{\vec{x}} \sum_{i=1}^3 \text{Tr} U_{\vec{x}} \text{Tr} U_{{\vec{x}} + \boldsymbol{\hat{\textbf{\i}}}} \nonumber \\ &=& \b_P \sum_{\vec{x}} \sum_{i=1}^3 P_{\vec{x}} P_{{\vec{x}}+ \boldsymbol{\hat{\textbf{\i}}}} \ , \label{Sp_strong} \eea where \begin{eqnarray} P_{\vec{x}} &\equiv& \frac{1}{2} \text{Tr} U_{\vec{x}} \nonumber \\ \b_P &=& 4 \left[1 + 4N_t \left({I_2(\b) \over I_1(\b)}\right)^4 \right] \left({I_2(\b) \over I_1(\b)}\right)^{N_t} \ . \eea Let us first consider sets of spatially constant configurations with varying amplitudes in the neighborhood of $P=P_0$, i.e. \begin{eqnarray} U^{(n)}_{\vec{x}} &=& (P_0 + a_n) \mathbbm{1} + i \sqrt{1 - (P_0+a_n)^2} \sigma_3 \nonumber \\ a_n &=& \Bigl(n - \frac{1}{2}(M+1)\Bigr) \Delta a ~~,~~ n=1,2,...,M ~~~~ \ , \label{path1} \eea so in this case $a$ is the $\l$ parameter of the previous section. If we divide $S_P$ into a kinetic and potential part, which in the case of \rf{Sp_strong} is \begin{eqnarray} S_P &=& K_P + V_P \nonumber \\ K_P &=& \frac{1}{2} \b_P \sum_{\vec{x}} \sum_{i=1}^3 (P_{\vec{x}} P_{{\vec{x}} + \boldsymbol{\hat{\textbf{\i}}}} - 2P_{\vec{x}}^2 + P_{\vec{x}} P_{{\vec{x}} - \boldsymbol{\hat{\textbf{\i}}}}) \nonumber \\ V_P &=& 3 \b_P \sum_{\vec{x}} P_{\vec{x}}^2 \ , \label{strong} \eea then $dS_P/da = dV_P/dP_0$ gives us the derivative of the potential piece, which can then be reconstructed, up to an irrelevant constant, by integration. So the procedure for determining $V_P$ (assuming it were not already known from the strong-coupling expansion) is to compute $dV_P/dP_0$ numerically, fit the results to some appropriate polynomial in $P_0$, and then integrate the fit. Our sample simulation is carried out in pure SU(2) lattice gauge theory at coupling $\b=1.2$ (well within the regime of strong couplings) on a $12^3 \times 4$ lattice, using a set of $M=20$ spatially constant configurations. Figure \ref{fig1} shows the data for $\log(N_n/N_{tot})$ plotted vs.\ $(P_0+a_n) \times$ spatial lattice volume ($12^3$), at $P_0=0.5$. It is clear that the data falls quite accurately on a straight line, and the slope gives an estimate for the derivative \begin{eqnarray} {1\over L^3}\left({d S_P(U_{\vec{x}}(a)) \over da}\right)_{a=0} &=& {1\over L^3} {dV_P(P_0) \over dP_0} \label{dSda} \eea which can be compared to the value $6 \b_P P_0$ obtained from the strong-coupling expansion. The derivative obtained from numerical simulation vs.\ $P_0$ is plotted in Fig.\ \ref{fig2}, and it obviously fits a straight line. Therefore the potential $V_P$ is quadratic in $P_{\vec{x}}$, and we find, at $\b=1.2$ \begin{eqnarray} V_P = \left\{ \begin{array}{cl} 0.1721(8) \sum_{\vec{x}} \frac{1}{2} P_{\vec{x}}^2 & \mbox{relative weights method} \cr & \cr 0.1710 \sum_{\vec{x}} \frac{1}{2} P_{\vec{x}}^2 & \mbox{strong-coupling expansion} \end{array} \right. \ , \eea where we have dropped, in the upper line, an irrelevant constant of integration. The small numerical difference between the relative weights and strong-coupling results can probably be attributed to neglected higher order terms in the strong-coupling expansion.\footnote{Statistical errors are estimated from best fit slopes obtained from eight independent runs.
Where errorbars are not shown explicitly, in the two-dimensional plots shown below, they are smaller than the symbol size.} \begin{figure}[t!] \centerline{\scalebox{0.9}{\includegraphics{fig1.eps}}} \caption{The slope of the straight-line fit to the data shown gives an estimate for the derivative $L^{-3}dS_P/da$ of $S_P$ with respect to the amplitude of spatially constant effective spin configurations. In this case, the derivative is evaluated at $P_0=0.5$, for an underlying pure Yang-Mills theory at strong coupling value of $\b=1.2$, on a $12^3 \times 4$ lattice.} \label{fig1} \end{figure} \begin{figure}[t!] \centerline{\scalebox{0.9}{\includegraphics{fig2.eps}}} \caption{A plot of the values for $L^{-3} dS_P/da$ vs.\ $P_0$. Each data point is extracted from a plot similar to the previous figure. Also shown are the corresponding strong-coupling values, and a best linear fit to the data points.} \label{fig2} \end{figure} In order to investigate the kinetic term, we consider plane-wave deformations of spatially constant configurations. The path through configuration space ${\cal C}$ is again parametrized by $a$, with \begin{eqnarray} U^{(n)}_{\vec{x}} &=& P^{(n)}_{\vec{x}} \mathbbm{1} + i \sqrt{1 - (P^{(n)}_{\vec{x}})^2} \sigma_3 \nonumber \\ P^{(n)}_{\vec{x}} &=& P_0 + a_n \cos(\vk \cdot {\vec{x}}) \nonumber \\ k_i &=& {2 \pi \over L} m_i \ , \label{trajectory} \eea where the $\{m_i,~ i=1,2,3\}$ are integers, not all of which are zero. For this class of configurations we have, for the action \rf{Sp_strong} \begin{eqnarray} S_P = \b_P L^3 \left( 3P_0^2 + \frac{1}{2} a_n^2 \sum_{i=1}^3 \cos(k_i) \right) \ . \eea Since the deformation of the action is proportional to $a^2$, it is natural to consider the derivative of $S_P$ with respect to $a^2$, i.e. \begin{eqnarray} {1\over L^3}{dS_P \over d(a^2)} = \frac{1}{2} \b_P \sum_i \cos(k_i) \ , \eea and therefore we can choose to let $a_n^2$, rather than $a_n$, increase in equal increments, so that ${a_n = \sqrt{n} \Delta a}$. \begin{figure}[t!] \centerline{\scalebox{0.9}{\includegraphics{fig3.eps}}} \caption{Derivative of the action w.r.t.\ path parameter $a^2$ vs.\ squared lattice momentum. Data is taken at strong gauge coupling $\b=1.2$ for plane-wave deformations. Squares indicate the relative-weights values, while green dots are the values obtained from the strong-coupling expansion.} \label{fig3} \end{figure} The numerical procedure is similar to the determination of the potential term: we compute the derivative $L^{-3}dS_P/d(a^2)$, at fixed $P_0$ and $\vk$, from the slope of a plot of $\log(N_n/N_{tot})$ vs.\ $a_n^2 L^3$. Then these values for the derivative are plotted, at various values of $P_0$, against squared lattice momentum \begin{eqnarray} k_L^2 \equiv 4 \sum_{i=1}^3 \sin^2(\frac{1}{2} k_i) \ . \label{k2} \eea The result, at $P_0=0.5$, is shown in Fig.\ \ref{fig3}, and we find, for a trajectory \rf{trajectory} at fixed $\vk$, \begin{eqnarray} {1\over L^3} {d S_P \over d(a^2)} = -A k_L^2 + B \ , \label{deriv} \eea where \begin{eqnarray} A = 7.3(2) \times 10^{-3} ~~,~~ B = 4.30(3) \times 10^{-2} \ . \eea The simulation has also been carried out at other values of $P_0$, but the results are almost indistinguishable from Fig.\ \ref{fig3}, and so are not displayed here. The important point, however, is that the path derivative \rf{deriv} is $P_0$ independent. 
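These fitted constants can be checked directly against the strong-coupling prediction. Since, from \rf{k2}, $\sum_{i} \cos(k_i) = 3 - \frac{1}{2} k_L^2$, the expansion \rf{Sp_strong} predicts $A = \b_P/4$ and $B = 3\b_P/2$. A minimal numerical sketch (in Python, using the modified Bessel functions from SciPy):
\begin{verbatim}
from scipy.special import iv   # modified Bessel functions I_n(x)

beta, Nt = 1.2, 4
r = iv(2, beta) / iv(1, beta)              # I_2(beta)/I_1(beta)
beta_P = 4 * (1 + 4 * Nt * r**4) * r**Nt   # from eq. (Sp_strong)

print("beta_P         =", beta_P)          # ~ 0.0285
print("A = beta_P/4   =", beta_P / 4)      # compare 7.3(2) x 10^-3
print("B = 3*beta_P/2 =", 3 * beta_P / 2)  # compare 4.30(3) x 10^-2
\end{verbatim}
The same relations give $B - 6A = 0$ exactly, and, with $C = 3\b_P$ from the potential \rf{strong}, $C - 12A = 0$ as well, in accord with what is found below.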
Integrating with respect to $a^2$, we find that along any path parametrized by $a$ with fixed $P_0$ \begin{eqnarray} S_P[U_{\vec{x}}(a)] &=& L^3\{ -A a^2 k_L^2 + B a^2 + f(P_0) \} \ , \label{action2} \eea where $f(P_0)$ is a constant of integration, which can be determined from the data on the potential: \begin{eqnarray} f(P_0) = C P_0^2 ~~,~~ C = 0.0861 \pm 0.0004 \ . \eea The next step is to express $S_P$ along the path in terms of $U_{\vec{x}}$ (or $P_{\vec{x}}=\frac{1}{2} \text{Tr} U_{\vec{x}}$). From the definitions \rf{trajectory}, \rf{k2}, one easily finds that \rf{action2} can be expressed as \begin{eqnarray} S_P = 4A \sum_{\vec{x}} \sum_{i=1}^3 P_{\vec{x}} P_{{\vec{x}}+ \boldsymbol{\hat{\textbf{\i}}}} + \Bigl[(B-6A)a^2 + (C-12A) P_0^2 \Bigr] L^3 \ . \eea The constants $B-6A$ and $C-12A$ are, within statistical error, consistent with zero. So we will just drop these terms. Then along the trajectory the action has the form \begin{eqnarray} S_P = (.0292 \pm .0008) \sum_{\vec{x}} \sum_{i=1}^3 P_{\vec{x}} P_{{\vec{x}}+ \boldsymbol{\hat{\textbf{\i}}}} ~~~\mbox{(relative weights method)}\ , \eea and of course the natural conjecture is that this is the action itself, at any point in configuration space. Further checks would be to calculate numerical derivatives $dS_P/d\l$ along other trajectories, to test the consistency of this conjecture. We don't really need to do that here, since the action at strong couplings is already known analytically, and is given in eq.\ \rf{Sp_strong} to leading and next-to-leading order in the strong-coupling expansion. At $\b=1.2$ we have, from eq.\ \rf{Sp_strong}, that \begin{eqnarray} S_P = .0285 \sum_{\vec{x}} \sum_{i=1}^3 P_{\vec{x}} P_{{\vec{x}}+ \boldsymbol{\hat{\textbf{\i}}}} ~~~\mbox{(strong-coupling expansion)} \ , \eea which is a close match to what we have arrived at via the relative weights procedure. This is, perhaps, a lot of effort to derive a known result. We have gone through this exercise in order to illustrate the method, and to make sure, in a case where the answer is known, that the method actually works. \section{\label{sec:potential}Potential $\mathbf V_P$ in pure-gauge theory, weaker couplings} We now reduce the lattice coupling of the underlying SU(2) pure-gauge theory, setting $\b=2.2$ with inverse temperature $N_t=4$ in lattice units. At this coupling and temperature (which is still inside the confinement phase of the theory), the effective Polyakov line action $S_P$ is not known. The easiest task is to determine the potential part of the action. For the purposes of this article, we define the kinetic part of the action to be the piece which vanishes for spatially constant configurations, while the potential part is local. With these definitions \begin{eqnarray} V_P &=& \sum_{\vec{x}} {\cal V}(U_\mathbf{x}) \ , \eea and the function ${\cal V}(U_\mathbf{x})$ is determined by evaluating $S_P$ on configurations $U_{\vec{x}}=U$ which are constant in 3-space, i.e. \begin{eqnarray} {\cal V}(U) &=& {1\over L^3} S_P(U) \ . \eea Then by definition the kinetic part of the action is \begin{eqnarray} K_P &\equiv& S_P[U_{\vec{x}}] - V_P[U_{\vec{x}}] \ . \eea In order to determine $V_P$, we consider as before the path through configuration space \rf{path1} parametrized by the variable $a$, and once again we can identify $dS_P/da$ with $dV_P(P_0)/dP_0$ as in \rf{dSda}.
The derivatives are determined by the relative weight method described above, the dependence on $P_0$ is fit to a polynomial, and $V_P$ is then determined, up to an irrelevant constant, by integration over $P_0$. Because the $Z_2$ center symmetry is unbroken at $\b=2.2$ and $N_t=4$, and ${\cal V}(U_{\vec{x}})$ is a class function, it is natural to assume that ${\cal V}(U)$ is well represented by a few group characters $\chi_j(U)$ of zero N-ality ($j=$ integer for SU(2)), and the potential is analytic in $P_{\vec{x}}$. Surprisingly, this is {\it not} what is found. Figure \ref{sfig4a} shows the data for the derivative \begin{eqnarray} D(P_0) &\equiv& {1\over L^3} {dV_P \over dP_0} \nonumber \\ &=& {1\over L^3} {dS_P \over da} \eea at $\b=2.2$ on a $12^3 \times 4$ volume, which, as in the strong-coupling case, extrapolates linearly to zero at $P_0=0$. Also shown is a best fit of $D(P)$ to the polynomial \begin{eqnarray} f(P) = c_1 P + c_2 P^2 + c_3 P^3 \label{fitfunc} \eea with the best fit constants shown in Table \ref{tab1}. What is initially a little troubling about this fit is that upon integration, and up to an irrelevant integration constant, we must have \begin{eqnarray} {\cal V}(P_{\vec{x}}) = \frac{1}{2} c_1 P_{\vec{x}}^2 + {1\over 3} c_2 P_{\vec{x}}^3 + \frac{1}{4} c_3 P_{\vec{x}}^4 \ , \eea which appears to violate center symmetry, i.e.\ ${\cal V}(P_{\vec{x}}) = {\cal V}(-P_{\vec{x}})$ for SU(2) gauge theory. Because of center symmetry, the character expansion of ${\cal V}(P_{\vec{x}})$ contains only characters $\chi_j$ with $j=$ integer. It is a property of the SU(2) group characters that each $\chi_j$ can be expressed as a polynomial of order $2j$ in $P$, containing only even powers of $P$ for $j=$ integer, and only odd powers for $j=$ half-integer. Then if the character expansion of ${\cal V}(P_{\vec{x}})$ is truncated at some $j=j_{max}$, the $P$-derivative is a polynomial in odd powers of $P$ up to $P^{2j_{max}-1}$. \begin{figure}[ht] \centering \subfigure[~ ${1\over L^3}{dV_P \over dP}$ vs.\ $P$ with a $P^2$ term in the fitting function.]{ \resizebox{90mm}{!}{\includegraphics{fig4a.eps}} \label{sfig4a} } \subfigure[~ ${1\over L^3}{dV_P \over dP}$ vs.\ $P$. Successive approximations without a $P^2$ term in the fitting functions.]{ \resizebox{90mm}{!}{\includegraphics{fig4b.eps}} \label{sfig4b} } \subfigure[~ test if ${dV_P \over dP}$ is an odd function.]{ \resizebox{79mm}{!}{\includegraphics{fig4d.eps}} \label{sfig4d} } \subfigure[~ ${1\over L^3}{dV_P \over d(P^2)}$ vs.\ $P^2$, same data set (and fit) as (a)]{ \resizebox{79mm}{!}{\includegraphics{fig4c.eps}} \label{sfig4c} } \caption{Derivatives of the potential. Subfigure (a) shows the best fit to the data by a polynomial ${aP + bP^2 + cP^3}$, while subfigure (b) shows a best fit by polynomials with two, three, and four odd powers of $P$, which are forms that might be expected from unbroken center symmetry. (c) is a test of whether $dV_P/dP$ is an odd function of $P$. Data for the derivative at values of $P_0<0$ are multiplied by -1, for comparison with the data at $P_0>0$. (d) same data (and fit) as in subfigure (a), plotted in a different way.} \label{fig4} \end{figure} One might expect that ${\cal V}(P_{\vec{x}})$ can be accurately approximated by a handful of group characters. 
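This expectation is easy to probe numerically. The following minimal sketch (Python; the fitted form of Fig.\ \ref{sfig4a} with the Table \ref{tab1} constants is used as a stand-in for the actual data, so the numbers are illustrative only) performs least-squares fits of the derivative using odd powers of $P$ alone, which is the form a truncated zero-N-ality character expansion would dictate:

\begin{verbatim}
import numpy as np

# Stand-in for the measured derivative: the fitted form of Fig. 4(a),
# with the Table 1 constants c_1, c_2, c_3.
c1, c2, c3 = 4.61, -4.51, 1.77
P = np.linspace(-1.0, 1.0, 201)
D = c1*P + c2*np.sign(P)*P**2 + c3*P**3

# Least-squares fits with odd powers of P only; odd powers up to
# P^(2 j_max - 1) correspond to a character expansion truncated at j_max.
for jmax, powers in [(2, (1, 3)), (3, (1, 3, 5)), (4, (1, 3, 5, 7))]:
    X = np.stack([P**n for n in powers], axis=1)
    coef = np.linalg.lstsq(X, D, rcond=None)[0]
    print(f"j_max = {jmax}: max |fit - D| = {np.abs(X @ coef - D).max():.3f}")
\end{verbatim}

The slow decrease of the residual with $j_{max}$ in this toy version anticipates what is found in the actual fits below.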
However, the attempt to fit the data with only a few odd powers of $P$ is unsuccessful, in the sense that each of the three fitting functions \begin{eqnarray} f(P) = \left\{ \begin{array}{l} c_1 P + c_3 P^3 \cr c_1 P + c_3 P^3 + c_5 P^5 \cr c_1 P + c_3 P^3 + c_5 P^5 + c_7 P^7 \end{array} \right. ~~~ \ , \eea corresponding to truncated character expansions with $j_{max}=2,3,4$, respectively, gives an unacceptable fit, as seen in Fig.\ \ref{sfig4b}. The reduced $\chi^2$ values in the three cases are $440,100,25$, respectively. This is to be compared to the reduced $\chi^2 = 3.2$ for the fitting function \rf{fitfunc}. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Potential fit} \\ \hline $c_1 $ & $c_2$ & $c_3$ \\ \hline 4.61(2) & $-4.51(10)$ & 1.77(8) \\ \hline \end{tabular} \end{center} \caption{The constants $c_{1-3}$ derived from a best fit of $c_1 P + c_2 P^2 + c_3 P^3$ to the potential data.} \label{tab1} \end{table} All this seems to imply that ${\cal V}(P_{\vec{x}})$ has a term violating center symmetry, but of course that cannot be the case. In order for ${\cal V}(P_{\vec{x}})$ to be an even function of $P_{\vec{x}}$, it must be that the derivative is an odd function, $D(P_0)=-D(-P_0)$, which in turn means that the coefficient of the quadratic term in \rf{fitfunc} must change sign when $P_0 \rightarrow -P_0$. This is easy to check; we simply repeat the calculation with $P_0 < 0$ in \rf{path1}, with the result shown in Fig.\ \ref{sfig4d}. Here the squares are the data for $D(P_0)$ at $P_0>0$, while the circles are data for $(-1)\times D(P_0)$ at $P_0<0$. The fact that the corresponding data points at $\pm P_0$ lie on top of each other means that the derivative is an odd function, and the potential itself is an even function of $P_{\vec{x}}$, as it must be. The conclusion, which follows from the best fit, is that over the full range $-1\le P_{\vec{x}} \le 1$ the potential, up to an irrelevant constant, is given by \begin{eqnarray} {\cal V}(P_{\vec{x}}) = \frac{1}{2} c_1 P_{\vec{x}}^2 + {1\over 3}c_2 |P_{\vec{x}}|^3 + \frac{1}{4} c_3 P_{\vec{x}}^4 \ . \label{potential} \eea This function is non-analytic, because of the absolute value, but still center symmetric, with the constants given in Table \ref{tab1}. It should be emphasized again that this potential cannot be approximated very well by a simple sum of $j=0,1,2,3,4$ SU(2) group characters. Of course, any class function (including $|P_{\vec{x}}|^3$) can be approximated by a sufficiently large number of group characters, just as a step function can be approximated by a truncated Fourier series. But keeping only a relatively small number of group characters introduces ``wiggles" in the approximation to the potential (which are seen in Fig.\ \ref{sfig4b}) much like the truncated Fourier series does for the step function. So far we have only looked at a pure gauge theory in the confined phase, but it is also possible to compute ${\cal V}(P_{\vec{x}})$ in the deconfined phase using the same methods. In comparing the potential in the confining and deconfining phases it is useful to display the data in a slightly different way, by plotting the derivative $d V_P/d(P^2)$ vs.\ $P^2$, i.e. \begin{eqnarray} {1\over L^3} {dV_P \over d(P_0^2)} = {1\over L^3} {1\over 2P_0}{dV_P \over dP_0} \ . \label{der2} \eea When the data is plotted in this way, a curious feature does show up. First, consider the confined phase.
The data for the above derivative in the confined phase, at the same coupling $\b=2.2$ and lattice volume as before, is shown in Fig. \ref{sfig4c}. In this plot, the best fit shown in Fig.\ \ref{sfig4a} transforms to \begin{eqnarray} g(P^2) = \frac{1}{2} (c_1 + c_2 \sqrt{P^2} + c_3 P^2) \ , \label{gfunc} \eea with the same constants $c_{1-3}$ shown in Table \ref{tab1}, and this function is also plotted in Fig.\ \ref{sfig4c}. Note that if the potential didn't have a cubic term, then we would have to omit the term proportional to $\sqrt{P^2}$. But then the data should fit a straight line in Fig.\ \ref{sfig4c}, which it quite clearly does not. Now we display corresponding data in the deconfined phase. Figure \ref{fig5} shows the result for the derivative \rf{der2} at $\b=2.4$, again on a $12^3 \times 4$ lattice, which is well past the deconfinement transition. Note the peculiar ``dip" near $P_0=0$. Because of this dip, the polynomial form \rf{fitfunc} to the derivative, which translates to \rf{gfunc} for $d V_P/d(P^2)$, cannot fit the data over the full range. It {\it is} consistent with the data away from the dip, i.e.\ at $P_0^2>0.1$, and the resulting fit to data in the interval $[0.1,1]$ is also shown in Fig.\ \ref{fig5}. The relationship of the dip in the derivative near $P_0=0$ to the deconfinement phenomenon is not obvious to the author. \begin{figure}[t!] \centerline{\scalebox{0.9}{\includegraphics{fig5.eps}}} \caption{Derivative of the potential in the deconfined phase. Note the dip in the data in the interval ${0<P_0^2<0.1}$. The fit is to data at $P_0^2 \ge 0.1$.} \label{fig5} \end{figure} Finally, it is important to ask whether the potential shown in Fig.\ \ref{fig4} is dependent on the spatial volume. In Fig.\ \ref{vol} we show the previous data for the derivative of the potential, obtained on a $12^3 \times 4$ lattice, together with data for the same observable obtained on an $8^3 \times 4$ lattice. It can be seen that the volume dependence is negligible in this case. \begin{figure}[t!] \centerline{\scalebox{0.9}{\includegraphics{vol.eps}}} \caption{A test of volume dependence of the potential at $\b=2.2$. Data for the potential derivative is displayed for lattice volumes $8^3\times 4$ (open squares) and $12^3 \times 4$ (green circles). } \label{vol} \end{figure} \section{\label{sec:matter}Potential $\mathbf V_P$ in SU(2) gauge-Higgs theory} We now add a matter field to the gauge theory, to see how this will affect the potential. To keep the computation requirements very modest, we consider a scalar matter field, in the fundamental representation, with a fixed modulus (i.e.\ a ``gauge-Higgs" theory). For the SU(2) gauge group, the matter field can be mapped onto SU(2) group elements, and the action can be expressed as \begin{eqnarray} S = \b \sum_{plaq} \frac{1}{2} \mbox{Tr}[UUU^{\dagger}U^{\dagger}] + \gamma \sum_{x,\mu} \frac{1}{2} \mbox{Tr}[\phi^\dagger(x) U_\mu(x) \phi(x+\widehat{\mu})] \ . \label{ghiggs} \eea There have been many numerical studies of this action, following the work of Fradkin and Shenker \cite{Fradkin:1978dv}, itself based on a theorem by Osterwalder and Seiler \cite{Osterwalder:1977pc}, which showed that the Higgs region and the ``confinement-like" regions of the $\b-\gamma$ phase diagram are continuously connected. 
Subsequent Monte Carlo studies found that there is only a single phase at zero temperature (although a separate Coulomb phase might have been expected), and there is a line of first-order transitions between the confinement-like and Higgs regions, which eventually turns into a sharp crossover around ${\b=2.775,\gamma=0.705}$, cf.\ \cite{Bonati:2009pf} and references therein. At $\b=2.2$ the crossover occurs at $\gamma \approx 0.84$, as seen in the plaquette energy data shown in Fig.\ \ref{fig6}. There is also a steep rise in the Polyakov line expectation value as $\gamma$ increases past this point. \begin{figure}[t!] \centerline{\scalebox{0.7}{\includegraphics{fig6.eps}}} \caption{Plaquette energy vs.\ gauge-Higgs coupling $\gamma$ at fixed $\b=2.2$, for the SU(2) gauge-Higgs theory with fixed Higgs modulus, showing a sharp crossover at $\gamma \approx 0.84$.} \label{fig6} \end{figure} Fig.\ \ref{fig7a} shows the potential derivative $L^{-3} dV_P/dP_0$ vs.\ $P_0$, along with a best fit to the data, at $\b=2.2$ and $\gamma=0.75$, which is somewhat below the crossover, in the ``confinement-like" regime. We compute this derivative, again on a $12^3 \times 4$ lattice volume, at both positive and negative values of $P_0$, to test for the presence of a small center-symmetry breaking term in the potential (which is not obvious in Fig.\ \ref{fig7a}). The data over the full range is fit to the form \begin{eqnarray} f(P) = c'_0 + c'_1 P + c'_2 \text{sign}(P) P^2 + c'_3 P^3 \eea which translates, upon integration, into a potential \begin{eqnarray} {\cal V}(P_{\vec{x}}) = c'_0 P_{\vec{x}} + \frac{1}{2} c'_1 P_{\vec{x}}^2 + {1\over 3}c'_2 |P_{\vec{x}}|^3 + \frac{1}{4} c'_3 P_{\vec{x}}^4 \ , \label{potential1} \eea with a center-symmetry breaking term $c'_0 P_{\vec{x}}$. The constants obtained from the fit are shown in Table \ref{tab1a}. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Potential fit: gauge-Higgs model} \\ \hline $c'_0$ & $c'_1 $ & $c'_2$ & $c'_3$ \\ \hline 0.025(1) & 4.70(2) & $-4.70(8)$ & 1.91(7) \\ \hline \end{tabular} \end{center} \caption{The constants $c'_{0-3}$ derived from a best fit of $ c'_0 + c'_1 P + c'_2 \text{sign}(P) P^2 + c'_3 P^3$ to the potential data of the SU(2) gauge-Higgs model.} \label{tab1a} \end{table} The slight asymmetry which breaks $f(P)=-f(-P)$, and therefore center symmetry, is more evident when we expand the plot in the immediate region of $P_0=0$, as in Fig.\ \ref{fig7b}. It can be seen that the best fit through the data points does not go through $f(P_0)=0$ at $P_0=0$, but rather crosses the $y$-axis at a positive value $f(0)=c'_0=0.025$. The line shown in Fig.\ \ref{fig7b} is taken from a best fit to the full range of data, not just the near $P_0=0$ data. Since the underlying gauge-Higgs theory breaks center symmetry explicitly, a term linear in $P_{\vec{x}}$ is of course expected. The coefficient $c'_0=0.025$ of the symmetry breaking term is quite small, but the expectation value of the Polyakov line at $\gamma=0.75$ is also quite small: $\langle P_{\vec{x}} \rangle = 0.03$ at these couplings and lattice size.
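As an illustration of this fitting step, here is a minimal sketch (Python; synthetic data generated from the Table \ref{tab1a} constants plus Gaussian noise stands in for the measured derivatives, so the recovered numbers are illustrative only):

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def dVdP(P, c0, c1, c2, c3):
    # derivative of c0*P + (1/2)c1*P^2 + (1/3)c2*|P|^3 + (1/4)c3*P^4
    return c0 + c1*P + c2*np.sign(P)*P**2 + c3*P**3

# Synthetic stand-in for the gauge-Higgs data: the quoted fit constants
# c'_0..c'_3 plus Gaussian noise with an assumed uniform error bar.
true = (0.025, 4.70, -4.70, 1.91)
err = 0.01
P0 = np.linspace(-0.9, 0.9, 37)
D = dVdP(P0, *true) + rng.normal(0.0, err, P0.size)

popt, pcov = curve_fit(dVdP, P0, D, sigma=np.full(P0.size, err),
                       absolute_sigma=True)
print("c'_0 =", popt[0])   # nonzero intercept = center-symmetry breaking
\end{verbatim}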
\begin{figure}[htb] \centering \subfigure[~ ${1\over L^3}{dV_P \over dP}$ vs.\ $P$ for the gauge-Higgs theory.]{ \resizebox{79mm}{!}{\includegraphics{fig7a.eps}} \label{fig7a} } \subfigure[~ A closeup near $P_0=0$.]{ \resizebox{79mm}{!}{\includegraphics{fig7b.eps}} \label{fig7b} } \caption{Derivative of the Polyakov line potential, per unit volume, with respect to the Polyakov line value $P$, for the SU(2) gauge-Higgs theory on a $12^3 \times 4$ lattice. Data is taken at gauge coupling $\b=2.2$ and gauge-Higgs coupling $\gamma=0.75$. (a) the data over the range $-1<P<1$, together with the best fit; (b) the data in the vicinity of $P=0$, also showing the fit in this region derived from the full range of data (i.e.\ same curve as in (a)). Note that the line through the data does not pass through the origin, which implies a small breaking of center symmetry.} \label{fig7} \end{figure} \section{\label{sec:deform}Plane-wave deformations} We now return to the pure gauge theory at $\b=2.2$. So far the potential term $V_P$ of the effective Polyakov line action has been determined, but the ultimate interest is in the full action. It was not very hard to extract this action from the $\log[N_n/N_{tot}]$ data at strong couplings. Unfortunately it is not as easy to jump from the path derivatives to the full action at weaker couplings, simply because $S_P$ is not so simple (and is not known in advance!). Nevertheless, knowledge of the action along a particular trajectory in configuration space does provide some information about the full action. As in the strong coupling case, we choose to investigate the derivatives of $S_P$ along paths of the form \rf{trajectory}, i.e.\ plane waves of fixed wavenumber and varying amplitude on a constant background. The method is the same as outlined in section \ref{sec:test}, but the result is different. At $\b=1.2$, it was found that $dS_P/d(a^2)$ was linear in $k_L^2$, and independent of $P_0$. That is not the case at $\b=2.2$. What happens in this case is shown in Fig.\ \ref{add}, where we display $L^{-3}dS_P/d(a^2)$ plotted against the magnitude of lattice momentum $k_L=(k_L^2)^{1/2}$ at fixed values of $P_0=0.1$ and $P_0=0.8$. It can be seen that the $k_L$-dependence of the data in Fig.\ \ref{add1}, at $P_0=0.1$, is consistent with linear, while the $k_L$-dependence in Fig.\ \ref{add8}, at $P_0=0.8$, seems to be quadratic. This can be seen from fits to $a-bk_L$ in the former case, and to $a-bk_L^2$ in the latter. This suggests a possible interpolating form \begin{eqnarray} {1\over L^3} {d S_P \over d(a^2)}_{|_{a=0}} = f(P_0) + c \sqrt{k_L^2 + g P_0^2} \ , \label{interp} \eea whose $k_L$-dependence would vary continuously from linear, as $P_0 \rightarrow 0$, to quadratic, for ${k_L^2 \ll g P_0^2}$. Fig.\ \ref{vol0p1} is the same plot as Fig.\ \ref{add1}, except that data obtained on both an $8^3 \times 4$ lattice and a $12^3 \times 4$ volume are displayed together, and both sets of data points appear to have the same $k_L$ dependence. This is, of course, evidence of the insensitivity of our results to the spatial volume. \begin{figure}[ht] \centering \subfigure[~ $P_0=0.1$]{ \resizebox{79mm}{!}{\includegraphics{add0p1.eps}} \label{add1} } \subfigure[~ $P_0=0.8$]{ \resizebox{79mm}{!}{\includegraphics{add0p8.eps}} \label{add8} } \caption{Derivative of the action along a path of plane wave deformations. 
(a) Data at $P_0=0.1$ is consistent with a linear variation of the derivative with deformation lattice momentum $k_L$; (b) data at $P_0=0.8$ is consistent with a quadratic variation w.r.t.\ $k_L$.} \label{add} \end{figure} \begin{figure}[t!] \centerline{\scalebox{0.7}{\includegraphics{vol0p1.eps}}} \caption{A check of insensitivity to lattice volume. Parameters are the same as in Fig.\ \ref{add1}, but this time including data obtained on an $8^3 \times 4$ lattice volume ($L=8$), in addition to data on a $12^3 \times 4$ volume ($L=12$).} \label{vol0p1} \end{figure} \begin{figure}[t!] \centering \subfigure[~ ]{ \resizebox{120mm}{!}{\includegraphics{kernel1.eps}} \label{kern1} } \subfigure[~]{ \resizebox{120mm}{!}{\includegraphics{kernel2.eps}} \label{kern2} } \caption{Two views, at different viewing angles, of the data (red crosses) for $L^{-3}dS/d(a^2)$ vs.\ lattice momentum $k_L$ and Polyakov line $P_0$, and the best fit (green surface) of the form \rf{fitfun} to the data.} \label{kernel} \end{figure} If \rf{interp} is correct, then it ought to be consistent with the potential \rf{potential}. This means that $f(P_0)$ can be, at most, quadratic in $P_0$, so let us write \begin{eqnarray} {1\over L^3} {d S_P \over d(a^2)}_{|_{a=0}} = b_0 + b_1 P_0 + b_2 P_0^2 + c \sqrt{k_L^2 + g P_0^2} \ . \label{fitfun} \eea The constants shown are subject to three constraints from the potential, so if we insist on the potential \rf{potential} there are really only two independent constants. In order to derive those constraints, consider a very large lattice volume $L^3$, such that $k_L^2$ can be made very small compared to $g P_0^2$, but still non-zero, and we assume that $\{c_1,c_2,c_3\}$ do not vary much with $L$ (we have already seen evidence of this fact in Fig.\ \ref{vol}). Then the kinetic term is negligible compared to the potential term, and along the trajectory \rf{trajectory}, taking account of the spatial average $(\cos^2(\vk \cdot {\vec{x}}))_{av} = \frac{1}{2}$, we have \begin{eqnarray} {1\over L^3} {d S_P \over d(a^2)}_{|_{a=0}} = \frac{1}{4} c_1 + \frac{1}{2} c_2 P_0 + {3\over 4} c_3 P_0^2 \ . \eea Comparison with \rf{fitfun} in the $k_L^2 \ll g P_0^2$ limit calls for identifying \begin{eqnarray} b_0 = \frac{1}{4} c_1 ~~,~~ b_2 = {3\over 4} c_3 ~~,~~ b_1 + c \sqrt{g} = \frac{1}{2} c_2 \ . \label{ident} \eea Figure \ref{kernel} shows a best fit of the data to the form \rf{fitfun}, with the best fit constants given in Table \ref{tab2}. This is hardly a perfect fit through the data points, given the value of the reduced $\chi^2 \approx 30$. Still, except at very low $k_L^2, P_0^2$, the fitting function gives a reasonable account of the dependence of the data on $k_L^2$ and $P_0$. Table \ref{tab3} is a test of the constraints, listing three combinations of constants which, according to the identities \rf{ident}, should vanish.
It is seen that the second and third combinations in the table are consistent with zero, and the first combination is very nearly so.\footnote{All fits, and error estimates on fitting constants, are made using the GNUPLOT software.} \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{Surface fit} \\ \hline $b_0 $ & $b_1$ & $b_2$ & $c$ & $g$ \\ \hline 1.105(14) & 0.85(17) & 1.365(56) & $-0.529(13)$ & 33(3) \\ \hline \end{tabular} \end{center} \caption{Fitting constants $b_{0-2},c,g$ obtained from a best fit to the data points shown in Fig.\ \ref{kernel}, by a surface of the form \rf{fitfun}.} \label{tab2} \end{table} \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{Constraints} \\ \hline $b_0 - \frac{1}{4} c_1$ & $b_1 + c\sqrt{g} - \frac{1}{2} c_2$ & $b_2 - {3\over 4} c_3$ \\ \hline -0.05(2) & 0.06(23) & 0.04(8) \\ \hline \end{tabular} \end{center} \caption{The constraints \rf{ident} imply that the combinations of constants listed in the first line of the table should vanish within errorbars; the second line shows the actual values of these combinations, for the constants given in Tables \ref{tab1} and \ref{tab2}.} \label{tab3} \end{table} \section{\label{sec:towards}Towards the full action} The interesting question, of course, is what is the full action which gives rise to the variation \rf{fitfun} along the path, with the given potential \rf{potential}. We begin by noting that, with the constants shown in Tables \ref{tab1} and \ref{tab2}, the action \begin{eqnarray} S_P &=& 2c \left\{ \sum_{{\vec{x}} {\vec{y}}} P_{\vec{x}} Q_{{\vec{x}} {\vec{y}}} P_{\vec{y}} - \sum_{\vec{x}} \sqrt{gP_0^2} P_{\vec{x}}^2 \right\} + \sum_{\vec{x}} \Bigl( \frac{1}{2} c_1 P_{\vec{x}}^2 + {1\over 3} c_2 |P_{\vec{x}}|^3 + \frac{1}{4} c_3 P_{\vec{x}}^4 \Bigr) \ , \nonumber \\ &=& K_P + \sum_{\vec{x}} {\cal V}(P_{\vec{x}}) \label{pathS} \eea where $K_P$ is the kinetic term \begin{eqnarray} K_P = 2c \left\{ \sum_{{\vec{x}} {\vec{y}}} P_{\vec{x}} Q_{{\vec{x}} {\vec{y}}} P_{\vec{y}} - \sum_{\vec{x}} \sqrt{gP_0^2} P_{\vec{x}}^2 \right\} \eea and \begin{eqnarray} Q_{{\vec{x}} {\vec{y}}} &=& \Bigl(\sqrt{R}\Bigr)_{{\vec{x}} {\vec{y}}} \nonumber \\ R_{{\vec{x}} {\vec{y}}} &=& (-\nabla_L^2)_{{\vec{x}} {\vec{y}}} + gP_0^2 \d_{{\vec{x}} {\vec{y}}} \nonumber \\ &=& \sum_{i=1}^3 (2\d_{{\vec{x}} {\vec{y}}} - \d_{{\vec{x}},{\vec{y}}+\boldsymbol{\hat{\textbf{\i}}}} - \d_{{\vec{x}}+\boldsymbol{\hat{\textbf{\i}}},{\vec{y}}}) + gP_0^2 \d_{{\vec{x}} {\vec{y}}} \ , \label{QR} \eea gives the known results for the potential \rf{potential} and for the variation of $S_P$ with $a^2$ \rf{fitfun} along the paths of plane wave deformations \rf{trajectory}. The operator $\nabla_L^2$ is the usual lattice Laplacian operator, and $Q$ has the spectral representation \begin{eqnarray} Q &=& \sum_\vk \left( \sqrt{k_L^2 + gP_0^2}\right) |\vk \rangle \langle \vk | \nonumber \\ Q_{{\vec{x}} {\vec{y}}} &=& {1\over L^3} \sum_\vk \left(\sqrt{k_L^2 + gP_0^2}\right) e^{i\vk \cdot ({\vec{x}}-{\vec{y}})} \ , \label{spectral} \eea where $\sum_\vk$ is shorthand for the sum over lattice wave vectors with components $k_i=(2\pi/L) m_i$, and lattice momentum $k_L$ has been defined previously in \rf{k2}. The ket vectors $|\vk \rangle$ correspond to normalized $L^{-3/2} \exp[i \vk \cdot {\vec{x}}]$ plane wave states. For the paths \rf{trajectory}, set $P_{\vec{x}} = P_0 + a \cos(\vk \cdot {\vec{x}})$, and compute the resulting action on such configurations up to leading order in $a^2$.
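The spectral representation \rf{spectral} is also easy to verify numerically before using it. The following minimal sketch (Python; the lattice is kept small, and $g$, $P_0$ are set to illustrative values) builds $R$ on a periodic $4^3$ lattice and checks that a plane wave is an eigenmode of $Q=\sqrt{R}$ with eigenvalue $\sqrt{k_L^2+gP_0^2}$:

\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

L, g, P0 = 4, 33.0, 0.5                 # small lattice; illustrative g, P0
sites = list(np.ndindex(L, L, L))
idx = {x: a for a, x in enumerate(sites)}
R = np.zeros((len(sites), len(sites)))
for a, x in enumerate(sites):
    R[a, a] = 6.0 + g * P0**2           # -Laplacian diagonal + g P0^2 term
    for i in range(3):
        for s in (1, -1):
            y = list(x); y[i] = (y[i] + s) % L
            R[a, idx[tuple(y)]] -= 1.0  # periodic nearest neighbours
Q = sqrtm(R).real                       # the kernel Q = sqrt(R) of eq. (QR)

m = np.array([1, 1, 0])                 # wavevector components k_i = 2 pi m_i/L
k = 2.0 * np.pi * m / L
kL2 = 4.0 * np.sum(np.sin(k / 2.0)**2)
v = np.array([np.cos(k @ np.array(x)) for x in sites])
print(np.allclose(Q @ v, np.sqrt(kL2 + g * P0**2) * v))   # True
\end{verbatim}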
Using the spectral representation for the operator $Q$, a short calculation gives, up to $O(a^2)$, \begin{eqnarray} S_P &=& L^3 {\cal V}(P_0) + a^2 L^3 \left\{ \frac{1}{4} c_1 + (\frac{1}{2} c_2 - c\sqrt{g}) P_0 + {3\over 4} c_3 P_0^2 + c\sqrt{k_L^2 + gP_0^2} \right\} \ . \eea Applying the identities \rf{ident}, which are reasonably well satisfied by the data, this becomes \begin{eqnarray} S_P &=& L^3 {\cal V}(P_0) + a^2 L^3 \left\{ b_0 + b_1 P_0 + b_2 P_0^2 + c \sqrt{k_L^2 + gP_0^2} \right\} \ . \eea So we find that for constant configurations ($a=0$), the action is simply the known potential, i.e.\ $S_P=L^3 {\cal V}(P_0)$, while the path derivative is \begin{eqnarray} {1\over L^3}{dS_P \over d(a^2)}_{|_{a=0}} = b_0 + b_1 P_0 + b_2 P_0^2 + c\sqrt{k_L^2 + gP_0^2} \ , \label{test} \eea in complete agreement with \rf{fitfun}. Denote by $P_{av}$ and $\Delta P^2$ the lattice average value and mean square deviation, respectively, of a given Polyakov line configuration. It is clear that for the paths \rf{trajectory} considered so far, $P_0=P_{av}$. One further generalization, which will not affect agreement with the data so far, is to allow the kinetic term to also depend on $\Delta P^2$, i.e.~\footnote{A generalization of \rf{pathS} which does {\it not} work is the replacement of $P_0$ by $P_{\vec{x}}$ in \rf{pathS} and \rf{QR}. This leads to additional contributions to $dS_P/d(a^2)$ which spoil the agreement with \rf{test}.} \begin{eqnarray} K_P = 2c \left\{ \sum_{{\vec{x}} {\vec{y}}} P_{\vec{x}} \Bigl(\sqrt{-\nabla_L^2 + gP_{av}^2 + g' \Delta P^2}\Bigr) _{{\vec{x}} {\vec{y}}} P_{\vec{y}} - \sum_{\vec{x}} \sqrt{ gP_{av}^2 + g' \Delta P^2} P_{\vec{x}}^2 \right\} \eea It is not hard to see that the O($a^2$) contribution that would arise from the $a^2$-dependence of the square root terms also selects, at this order, the constant $a^2$-independent part of $P_{\vec{x}}$ and $P_{\vec{y}}$. In that case $k_L=0$, and this contribution to the O($a^2$) part of the kinetic term vanishes. In order to investigate the possibility of a $\Delta P^2$-dependence a little further, let us consider trajectories consisting of plane waves, of varying amplitude $A$, with $P_{av}=0$, i.e. \begin{eqnarray} P_{\vec{x}} = A \cos(\vk \cdot {\vec{x}}) \ , \eea and study the derivative $L^{-3} dS_P/dA$ evaluated at $A=A_0$. To compute this derivative by the relative weights approach, we construct a set of configurations \begin{eqnarray} U^{(n)}_{\vec{x}} &=& P^{(n)}_{\vec{x}} \mathbbm{1} + i \sqrt{1 - (P^{(n)}_{\vec{x}})^2} \sigma_3 \nonumber \\ P^{(n)}_{\vec{x}} &=& A_n \cos(\vk \cdot {\vec{x}}) \nonumber \\ A_n &=& A_0 + \Bigl(n - \frac{1}{2}(M+1)\Bigr) \Delta A ~~,~~ n=1,2,...,M \nonumber \\ k_i &=& {2 \pi \over L} m_i \ , \label{trajectory2} \eea and proceed as before. The conjectured action is \begin{eqnarray} S_P &=& 2c \left\{ \sum_{{\vec{x}} {\vec{y}}} P_{\vec{x}} \Bigl(\sqrt{-\nabla_L^2 + gP_{av}^2 + g' \Delta P^2}\Bigr) _{{\vec{x}} {\vec{y}}} P_{\vec{y}} - \sum_{\vec{x}} \sqrt{ gP_{av}^2 + g' \Delta P^2} P_{\vec{x}}^2 \right\} \nonumber \\ & & + \sum_{\vec{x}} \Bigl( \frac{1}{2} c_1 P_{\vec{x}}^2 + {1\over 3} c_2 |P_{\vec{x}}|^3 + \frac{1}{4} c_3 P_{\vec{x}}^4 \Bigr) \label{fullS} \eea whose path derivative is~\footnote{The numbers multiplying $c_1,c_2,c_3$ are the lattice averages of $\cos^2(\vk \cdot {\vec{x}}), |\cos^3(\vk \cdot {\vec{x}})|, \cos^4(\vk \cdot {\vec{x}})$ respectively.
These numbers are almost independent of the wavenumber $\vk$ on finite lattices, so long as $\vk \ne 0$, and converge rapidly to the infinite volume limit as lattice volume increases.} \begin{eqnarray} {1\over L^3} {dS_P \over dA}_{|_{A=A_0}} &=& \frac{1}{2} c_1 A_0 + .424 c_2 A_0^2 + .375 c_3 A_0^3 + 2 c A_0 \left( \sqrt{k_L^2 + \frac{1}{2} g' A_0^2} - \sqrt{\frac{1}{2} g' A_0^2}\right) \nonumber \\ & & + \frac{1}{2} c g' A_0^3\left( {1\over \sqrt{k_L^2 + \frac{1}{2} g' A_0^2}} - {1\over \sqrt{\frac{1}{2} g' A_0^2}}\right) \label{dSa} \eea \begin{figure}[t!] \centerline{\scalebox{0.9}{\includegraphics{pkernel.eps}}} \caption{Variation of Polyakov line action with Polyakov line amplitude, $L^{-3} dS_P/dA$ evaluated at $A=A_0$, for Polyakov line configurations proportional to plane waves $P_{\vec{x}} = A\cos(\vk \cdot {\vec{x}})$, as a function of $A_0$ and lattice momentum $k_L$. Red crosses are data points, and the green surface is a best fit to the data by the analytic form \rf{dSa}.} \label{pkernel} \end{figure} Taking $c$ and $c_{1-3}$ as given in Tables \ref{tab1} and \ref{tab2}, there is only one free constant left to fit the data, and the best fit, shown in Fig.\ \ref{pkernel}, is obtained at $g'=3.45(4)$. Once again, this plot should not be interpreted as a perfect fit through the data points within errorbars, given that the reduced $\chi^2 \approx 45$. On the other hand, with only one fitting constant, the expression \rf{dSa} does seem to give a quite reasonable account of the dependence of the data on $A_0$ and $k_L$, despite the highly non-local expression $\Delta P^2$ introduced into the kinetic term. \section{Conclusions} I have presented a method for computing derivatives $dS_P/d\l$ of the effective Polyakov line action along any given path through field configuration space, parametrized by the variable $\l$. The technique is easily implemented in a lattice Monte Carlo code by simply replacing updates of timelike links, on a single timeslice, by a Metropolis step which updates that set of links simultaneously, and the potential part $V_P$ of the effective Polyakov line action can be readily determined, for any given lattice coupling, temperature, and set of matter fields, up to an irrelevant constant. It is also possible to determine, from the derivatives, the action $S_P$ along any given trajectory in field configuration space. The method has been applied here to SU(2) lattice gauge theory, both without and with a scalar matter field. At a strong coupling ($\b=1.2$) and finite temperature, the method easily determines the effective Polyakov line action, which we have checked against the known result derived from a strong-coupling expansion. At a weaker coupling ($\b=2.2$ on a $12^3 \times 4$ lattice), where the Polyakov line action is not known, it has been shown that, up to a constant, the potential term has the form \begin{eqnarray} V_P = \sum_{\vec{x}} \Bigl( \frac{1}{2} c_1 P_{\vec{x}}^2 + {1\over 3} c_2 |P_{\vec{x}}|^3 + \frac{1}{4} c_3 P_{\vec{x}}^4 \Bigr) \ , \eea with coefficients given in Table \ref{tab1}. The center-symmetric but non-analytic cubic term comes as a surprise; to the best of my knowledge such a term has not been anticipated in previous studies. It would be interesting to study the evolution of the above potential as $\b$ and $N_t$ vary.
Addition of a scalar matter field in the underlying lattice gauge theory introduces a center symmetry breaking term into the potential which is linear in $P_{\vec{x}}$, with a coefficient reported in Section \ref{sec:matter}. Data has also been obtained from small plane-wave deformations around a constant Polyakov line background (Section \ref{sec:deform}), and for Polyakov lines proportional to plane waves of variable amplitude (Section \ref{sec:towards}). It was found that the action \rf{fullS} is consistent with all of the results obtained so far, and at this point we may conjecture that \rf{fullS} approximates the desired full Polyakov line action. Of course, the kinetic term in $S_P$ could easily have a more complicated form than what is suggested in \rf{fullS}, and therefore this conjecture needs to be tested on more complicated, non-plane wave configurations. Those tests, and the extension to the SU(3) group, would be the obvious next steps in the approach introduced here. \acknowledgments{It is a pleasure to thank Kim Splittorff for many helpful suggestions. This research is supported in part by the U.S.\ Department of Energy under Grant No.\ DE-FG03-92ER40711.}
\section{Conclusion} \label{sec:conclusion} \textbf{Summary.} In this paper, we studied the problem of reinforcement learning with demonstrations from mismatched tasks under sparse rewards. Our key insight is that, although we should not purely imitate the mismatched demonstrations, we can still get useful guidance from demonstrations collected in a similar task. Concretely, we proposed conservative reward shaping from demonstrations (CRSfD), which uses reward shaping with the estimated value function of a mismatched expert to incorporate useful future information into the sparse reward, together with conservativeness techniques to handle out-of-distribution issues. Simulation and real-world robot insertion experiments show the effectiveness of the proposed method on tasks with varied environmental dynamics and reward functions. \textbf{Limitations and Future Works.} Provided with demonstrations from a mismatched task, our proposed method aids the online learning process for each new task separately. However, one may need to learn a policy to solve multiple new tasks at the same time, and exploration in these tasks may benefit each other. Future works therefore include using demonstrations to accelerate the joint learning process of multiple tasks. Another limitation is that our method is only applicable to new tasks similar to the original task. The effectiveness of CRSfD gradually decays when the tasks differ so much from the original task that the demonstrations no longer contain any useful information. It is also worth mentioning that the whole algorithm pipeline should be implementable directly on hardware, which is a promising research direction. \clearpage \acknowledgments{This work is supported by the Ministry of Science and Technology of the People’s Republic of China, the 2030 Innovation Megaprojects “Program on New Generation Artificial Intelligence” (Grant No. 2021AAA0150000).} \section{Introduction} Reinforcement learning has been applied to various real-world tasks, including robotic manipulation with large state-action spaces and sparse reward signals \cite{andrychowicz2020learning}. In these tasks, standard reinforcement learning tends to perform a lot of useless exploration and easily falls into locally optimal solutions. To alleviate this problem, previous works often use expert demonstrations to aid online learning, adopting successful trajectories to guide the exploration process \cite{nair2018overcoming,vecerik2017leveraging}. However, standard learning from demonstration algorithms often assume that the target learning task is exactly the same as the task where the demonstrations were collected \cite{ziebart2008maximum,abbeel2004apprenticeship,ho2016generative}. Under this assumption, experts need to collect the corresponding demonstrations for each new task, which can be expensive and inefficient. In this paper, we consider a new learning setting where expert data is collected under a single task, while the agent is required to solve different new tasks. For instance, as shown in Figure \ref{fig:motivation}, a robot arm aims to solve peg-in-hole tasks. The demonstration is collected on a certain type of hole, while the target tasks have different hole shapes (changes in environmental dynamics) or position shifts (changes in reward function). This can be challenging, as agents cannot directly imitate those demonstrations from mismatched tasks due to dynamics and reward function changes.
However, compared to learning from scratch, those demonstrations should still be able to provide some useful information to help exploration. \begin{figure}[H] \begin{center} \centerline{\includegraphics[width=0.9\linewidth]{figure/pipeline0_clip.pdf}} \caption{Illustration of our motivation. Demonstrations collected on a single original task are transferred to other similar but different tasks with either environmental dynamics changes (shape change) or reward function changes (position shift), and aid the learning of these tasks.} \label{fig:motivation} \end{center} \end{figure} To address the issue of learning with demonstrations from a mismatched task, previous works in imitation learning consider agent dynamics mismatch and rely on state-only demonstrations \cite{schroecker2017state,torabi2018generative,sun2019provably}. However, this approach has an implicit assumption that the new task shares the same reward function as the original task \cite{gangwani2020state}. Hester et al. and Vecerik et al. \cite{hester2018deep,vecerik2017leveraging} receive sparse rewards in the environment and add demonstrations into a prioritized replay buffer. The sparse reward signal can be propagated backward during the Bellman update and thus guide exploration. However, this propagation flow may be blocked due to the mismatch in new tasks. Another class of work \cite{pong2021offline,zhao2021offline, yu2020meta} assumes that expert data is available on multiple tasks and utilizes meta-learning methods to obtain diverse skills, which are then transferred to new tasks. However, such a strategy requires collecting a huge expert dataset, which is expensive and inefficient. In our setting, we are only provided with a few demonstrations collected under a single task. In this paper, we propose \textbf{C}onservative \textbf{R}eward \textbf{S}haping \textbf{f}rom \textbf{D}emonstration (\textbf{CRSfD}), which learns policies for new tasks accelerated by demonstrations collected in a single mismatched task. We use reward shaping \cite{ng1999policy,cheng2021heuristic} to incorporate future information into single-step rewards while keeping the optimal policy unchanged. Moreover, we explicitly deal with the out-of-distribution problem to encourage the agent to explore around the demonstrations. Experimental results on robot manipulation tasks show that our approach outperforms baseline LfD methods when learning in new tasks with mismatched demonstrations. Our contributions can be summarized as follows: \begin{itemize} \item We propose a reward shaping scheme for reinforcement learning with demonstrations from a mismatched task, which uses a value function estimated from expert demonstrations to reshape the sparse reward in new tasks. \item Built upon this scheme, we propose the conservative reward shaping from demonstrations (CRSfD) algorithm to overcome the out-of-distribution problem: we regress the value function of OOD states to zero and use a larger discount factor in new tasks, which guides the agent to explore conservatively around the expert data. \item We conduct simulation and real-world experiments on robot insertion tasks with mismatched demonstrations. The results show that CRSfD effectively guides the exploration process in new tasks and reaches higher sample efficiency and convergence performance. \end{itemize} \section{Related Works} \textbf{Learning from demonstration} A prominent research subject is how to leverage expert data to assist reinforcement learning.
Imitation learning (IL) is a broad family of such algorithms that enforce agents to directly imitate the expert. Behavior cloning (BC) is the simplest IL algorithm, which greedily imitates the step-wise actions of the expert and can suffer from the problem of distributional shift \cite{ross2011reduction}. Inverse reinforcement learning \cite{ng2000algorithms,ziebart2008maximum} and adversarial imitation learning \cite{ho2016generative} infer the expert's reward function and learn the corresponding optimal policy jointly. The above IL algorithms assume environment rewards are not available, hence their performance is upper-bounded by that of the expert \cite{rengarajan2022reinforcement}. Another line of work makes use of reward feedback from the environment and leverages expert demonstration data to overcome the sparse reward issue or learn more natural behaviors. Vecerik et al. \cite{vecerik2017leveraging} add demonstrations into a prioritized replay buffer. Rajeswaran et al. \cite{rajeswaran2017learning} add a behavioral cloning loss to the policy to speed up exploration and learn more natural and robust behaviors. Chen et al. \cite{wu2021shaping} use generative models on single-step transitions to reshape the reward of the original task. However, standard learning from demonstration algorithms require demonstrations to be collected under the same task and to be nearly optimal for this task, which is not suitable for our setting. \textbf{Generalization of demonstrations} A few works relax the requirements on demonstrations to achieve generalization from different aspects. Some works assume that demonstrations are collected by a sub-optimal policy under the same task \cite{gao2018reinforcement,brown2019extrapolating}: early work \cite{brown2019extrapolating} requires manual ranking of trajectories, and later works \cite{chen2020learning,brown2020better} remove the need for rankings by actively adding noise to demonstrations along with automatic ranking. Cao et al. \cite{cao2021learning,cao2021learning2} assume that the demonstrations are a mixture from different experts and use a classifier to separate out the expert data that is more feasible for the new task. Other works \cite{gangwani2020state,radosavovic2020state,liu2019state} assume that the target task has agent dynamics different from the task where demonstrations are collected, so they only match the state sequence of demonstrations or use an inverse dynamics model to recover the action between two states in the new task. In our work, we further consider new tasks with environment dynamics mismatch as well as reward function mismatch. Another branch of related works are meta imitation learning algorithms, which assume that expert data is available on multiple tasks and utilize meta-learning methods to solve new tasks with zero-shot or few-shot adaptation \cite{pong2021offline,zhao2021offline}. However, such a strategy usually necessitates a huge expert dataset, which may be expensive and inefficient. In contrast, we consider the problem where only a small number of demonstrations collected in a single task are provided, and the agent needs to use them to accelerate the learning of other similar but different tasks. \section{Problem Statement} \vspace{-2mm} In our problem setting, we have collected a few demonstrations under a single task and want to utilize these data in reinforcement learning for other similar but different tasks.
A task can be formalized as a standard Markov decision process (MDP), modeled as $M_i=(S,A,P_i,R_i,\gamma_i)$. The task where demonstrations are collected is denoted as $M_0=(S,A,P_0,R_0,\gamma_0)$, and the new tasks we target to solve are denoted as $M_i=(S,A,P_i,R_i,\gamma_i),i\geq 1$. $S$ and $A$ are the shared state space and action space for each task. $P_i:S\times A\times S \rightarrow [0,1]$ are the state transition probability functions of each task, and $R_i:S\times A\times S \rightarrow \mathbb{R}$ are the reward functions for task $M_i$, describing the natural reward signal in each task. Due to differences in environment and agent dynamics, $P_i$ and $R_i$ often vary between tasks. $\gamma_i$ is the discount factor of $M_i$, which reflects how much we care about the future and is typically set to a constant slightly below 1. A policy $\pi_i:S\rightarrow A$ defines a probability distribution over the action space. For a task $M_i$ and a policy $\pi_i$, the state value function $V_i^{\pi_i}(s)=\mathbb{E}_{s_0=s,\pi_i}[\Sigma_t\gamma_i^t R_i(s_t,a_t,s_{t+1})]$ estimates the discounted cumulative reward of the task under this policy $\pi_i$. $V_i^{*}(s)$ estimates the discounted cumulative reward for state $s$ under the optimal policy $\pi_i^*$. As many works \cite{oh2018self,riedmiller2018learning} point out, directly applying RL in a sparse reward environment can be sample inefficient and fail to find a good solution. In this work, we want to make use of the demonstrations $D:(\tau_0,\tau_1,...)$ collected in task $M_0$ to facilitate reinforcement learning for the different but similar new tasks $M_i$. Note that each trajectory $\tau_k$ contains a sequence of state-action transitions $[s_0,a_0,s_1,a_1,...,s_t,a_t]$ in task $M_0$. \textbf{Challenges} There are two key issues when leveraging demonstrations from mismatched tasks. First, how to get effective guidance from these mismatched demonstrations? Although we should not purely imitate these demonstrations, we do need to obtain some useful guidance from them to accelerate exploration in new tasks with sparse rewards. Second, since our goal is to maximize the reward defined under the new task, guidance from mismatched demonstrations should not influence the optimality of the learned policy in new tasks. \section{Conservative Reward Shaping from Demonstrations}\label{sec:method} \vspace{-2mm} Provided with demonstrations in a particular task $M_{0}:(S,A,P_0,R_0,\gamma_0)$, we aim to help the reinforcement learning process of different tasks $M_1,M_2...M_K$, which may have different transition functions $P_k(s'|s,a)$ and reward functions $R_k(s,a,s')$. In this work, we use SAC \cite{haarnoja2018soft} as our base reinforcement learning algorithm, as it has an excellent exploration mechanism that leads to higher sample efficiency than policy gradient algorithms \cite{schulman2015trust, schulman2017proximal} and has been shown to perform well on continuous-action tasks \cite{haarnoja2018soft}. Nevertheless, it is also possible to base our method on other RL algorithms, including on-policy ones. To make use of expert demonstrations, DDPGfD \cite{vecerik2017leveraging} proposes a mechanism compatible with off-policy methods, which adds the demonstration data into the replay buffer with prioritized sampling. Under this framework, the sparse reward signal can propagate back along the expert trajectory to guide the agent.
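As a concrete illustration of this replay mechanism, here is a minimal sketch (Python; uniform sampling with a fixed demo ratio is a simplifying assumption that stands in for prioritized sampling, and all names are hypothetical):

\begin{verbatim}
import random

def sample_minibatch(demo_buffer, agent_buffer, batch_size=256, demo_ratio=0.1):
    # Mix expert and agent transitions in every update batch, so the sparse
    # rewards on demonstrations keep propagating through Bellman backups.
    n_demo = min(int(batch_size * demo_ratio), len(demo_buffer))
    batch = random.sample(demo_buffer, n_demo)
    batch += random.sample(agent_buffer, batch_size - n_demo)
    return batch

# Toy usage: transitions are (s, a, r, s_next, done) tuples.
demo_buffer = [(0, 0, 1.0, 1, True)] * 500
agent_buffer = [(0, 0, 0.0, 1, False)] * 10_000
print(len(sample_minibatch(demo_buffer, agent_buffer)))   # 256
\end{verbatim}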
By combining SAC and DDPGfD \cite{vecerik2017leveraging}, we obtain the backbone of our method, labeled SACfD, which is also our best baseline method. \vspace{-2mm} \subsection{Reward Conflict under Mismatched Task Setting} \vspace{-2mm} Although LfD methods such as SACfD benefit from demonstrations in sparse reward reinforcement learning, they may not benefit from demonstrations when the target tasks are mismatched from that of the expert. When following the demonstrations, the agent may consistently fail and cannot obtain any sparse reward signal. As failures accumulate, the agent may assign low value to the expert trajectories, since few rewards are received. The agent will then avoid following the expert, and the demonstrations cannot provide effective guidance, resulting in inefficient exploration over the whole free space. Although exactly following the demonstrations may not yield any sparse reward in the new tasks, they can still provide useful exploration directions, since in our setting the new tasks are similar to the original one. We formally introduce our method as conservative reward shaping from demonstrations (CRSfD). Intuitively, CRSfD assigns appropriate reward signals along the demonstrations to efficiently guide the agent towards the goal, and allows exploration around the goal to maintain optimality. Details are described in the following subsection. \vspace{-2mm} \subsection{Conservative Reward Shaping from Demonstrations(CRSfD)}\label{sec:CRSfD} \vspace{-2mm} \textbf{Reward Shaping with Value Function} Reward shaping \cite{ng1999policy} provides an elegant way to modify the reward function while keeping the optimal policy unchanged. Given an original MDP $M$ and an arbitrary potential function $\Phi:S \rightarrow \mathbb{R}$, we can reshape the reward function to be: \begin{equation} \setlength{\abovedisplayskip}{6pt} \setlength{\belowdisplayskip}{6pt} R'(s,a,s')=R(s,a,s')+\gamma \Phi(s')-\Phi(s), s'\sim P(.|s,a) \end{equation} Denote the new MDP as $M'=(S,A,P,R',\gamma)$, obtained by replacing the reward function $R$ in $M$ with $R'$. Ng et al. \cite{ng1999policy} proved that the optimal policy $\pi_{M'}^{*}$ on $M'$ and the optimal policy $\pi_{M}^{*}$ on the original MDP $M$ are the same: $\pi_{M'}^{*}=\pi_{M}^{*}$. Furthermore, the optimal state-action function $Q^{*}_{M'}(s,a)$ and value function $V^{*}_{M'}(s)$ are shifted by $\Phi(s)$: \begin{equation} \setlength{\abovedisplayskip}{6pt} \setlength{\belowdisplayskip}{6pt} Q^{*}_{M'}(s,a) = Q^{*}_{M}(s,a)-\Phi(s),\quad V^{*}_{M'}(s) = V^{*}_{M}(s)-\Phi(s) \end{equation} In particular, Ng et al. \cite{ng1999policy} pointed out that when the potential function is chosen as the optimal value function of the original MDP, $\Phi(s)=V_{M}^{*}(s)$, the new MDP $M'$ becomes trivial to solve. What remains for the agent is to choose each time step's action greedily, because the transformed single-step reward already contains all the long-term information for decision making. \textbf{Conservative Value Function Estimation} The reward shaping method provides a principled way to guide the agent with useful future information while keeping the optimal policy unchanged. Ideally, an accurate $\Phi_i(s)=V_{M_i}^{*}(s)$ will lead to a simple and optimal policy in the new MDP $M'$, but a perfect $\Phi_i(s)=V_{M_i}^{*}(s)$ is unavailable in advance.
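To make the mechanics of this property concrete, here is a minimal sketch (Python, on a toy 1D chain MDP with a sparse goal reward; this is an illustration of the Ng et al. shaping rule, not the paper's implementation):

\begin{verbatim}
def shaped_reward(r, s, s_next, phi, gamma):
    # Potential-based shaping (Ng et al.): the optimal policy is unchanged.
    return r + gamma * phi(s_next) - phi(s)

# 1D chain 0..5; sparse reward 1 only on the transition into goal state 5.
gamma = 0.9
V_opt = lambda s: 0.0 if s == 5 else gamma**(4 - s)  # optimal sparse value
for s in range(1, 5):
    fwd = shaped_reward(1.0 if s + 1 == 5 else 0.0, s, s + 1, V_opt, gamma)
    back = shaped_reward(0.0, s, s - 1, V_opt, gamma)
    print(s, round(fwd, 4), round(back, 4))  # forward: 0, backward: negative
\end{verbatim}

With $\Phi=V^{*}$ every step towards the goal has shaped reward zero while every step away is penalized, so greedy single-step decisions suffice; the same mechanism, with the imperfect demonstration-based estimate introduced below, still densifies the reward along the expert trajectories.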
Practically, we estimate $\widetilde{V}_{M_0}^{D}(s)\approx V_{M_0}^{*}(s)$ using demonstrations from task $M_0$ by Monte-Carlo regression and treat $\widetilde{V}_{M_0}^{D}(s)$ as a prior guess of $V_{M_i}^{*}(s)$. We then shape the sparse reward in the new task $M_i$ to: \begin{equation}\label{shaping_equation} \setlength{\abovedisplayskip}{6pt} \setlength{\belowdisplayskip}{6pt} R_i'(s,a,s') = R_i(s,a,s')+\gamma \widetilde{V}_{M_0}^{D}(s')-\widetilde{V}_{M_0}^{D}(s) \end{equation} However, demonstration trajectories only cover a small part of the state space. For out-of-distribution states, the estimated $\widetilde{V}_{M_0}^{D}$ may output arbitrary values and lead to arbitrary single-step rewards after reward shaping, which may mislead the agent. We make two improvements over the above reward shaping method to encourage the agent to explore around the demonstrations conservatively: (1) Regress the value function $\widetilde{V}_{M_0}^{D}(s)$ of out-of-distribution states to 0, thus discouraging exploration far from the demonstrations. The OOD states are sampled randomly from free space. (2) Increase the discount factor $\gamma_i$ in new tasks. From equation \ref{shaping_equation}, we can see that increasing $\gamma_i$ gives a higher single-step reward for states with large $V_{\theta}(s')$ in the original task, thus encouraging exploration around the demonstrations. Our method can be summarized as follows ($D$ stands for the demonstration buffer, $S$ stands for the free space, $\gamma_i>\gamma_0$): \vspace{-2mm} \begin{algorithm}[H] \caption{Conservative Value Function Estimation} \label{crs} \begin{algorithmic} \STATE {\bfseries Input: } Demonstration transitions, demo discount factor $\gamma_0$, new task discount factor $\gamma_k$ ($\gamma_k>\gamma_0$), regression steps $n_r$, scale factor $\lambda$. \STATE {\bfseries Initialization: } Initialize value function $V_\theta(s)$ \STATE Monte-Carlo policy evaluation on demonstrations: calculate the cumulative reward for states in demos using $\gamma_0$: $V_{M_0}^{D}(s)=\Sigma_{i=t}^{T}\gamma_0^{i-t} r_i$ \FOR{$n$ in regression steps $n_r$} \STATE Sample minibatch $B_1$ from demo buffer $D$ with regression target $V_{M_0}^{D}(s)=\Sigma_{i=t}^{T}\gamma_0^{i-t} r_i$. Sample minibatch $B_2$ from the whole free space $S$ with regression target 0. \STATE Perform regression: $\theta = \mathop{\arg\min}\limits_{\theta} \left[ \mathbb{E}_{s_t\sim B_1}\left( V_{\theta}(s_t)-\Sigma_{i=t}^{T}\gamma_0^{i-t} r_i\right)^2 + \lambda \mathbb{E}_{s_t\sim B_2}\left( V_{\theta}(s_t)-0\right)^2 \right]$ \ENDFOR \STATE Shape the reward with $\gamma_k$: $R_i'(s,a,s') = R_i(s,a,s')+\gamma_k V_{\theta}(s')-V_{\theta}(s)$. \STATE Perform SACfD update (details can be found in the appendix). \end{algorithmic} \end{algorithm} \vspace{-6mm} \textbf{Conservative Properties} In the last paragraph, we introduced some conservativeness techniques and gave intuitive explanations of why those improvements can encourage exploration around the demonstrations under the proposed reward shaping framework. The following theorem quantifies the benefits of the proposed method. \begin{theorem} For task $M_0$ with transition $T_0$ and new task $M_k$ with transition $T_k$, define the total variation divergence $D_{TV}(s,a)=\Sigma_{s'}|T_0(s'|s,a)-T_k(s'|s,a)|=\delta$.
If we have $\delta<(\gamma_k-\gamma_0) \mathbb{E}_{T_k(s'|s,a)}[ V_{M_0}^D(s')]/\gamma_0 \max_{s'}V_{M_0}^D(s')$, then following the expert policy in the new task will result in an immediate reward greater than 0: \begin{equation} \setlength{\abovedisplayskip}{6pt} \setlength{\belowdisplayskip}{6pt} \mathbb{E}_{a\sim \pi(.|s)}r'(s,a) \geq (\gamma_k-\gamma_0)\mathbb{E}_{T_{k}(s'|s)}[ V_{M_0}^{D}(s')]-\gamma_0 \delta \max_{s'}V_{M_0}^D(s') > 0 \end{equation} \end{theorem} A detailed proof can be found in Appendix 7.5. The above theorem indicates that for similar but different tasks ($\delta$ smaller than the threshold), exploration along the demonstrations will lead to positive immediate rewards which guide the learning process. \textbf{Conservative Reward Shaping from Demonstrations} After reward shaping by demonstrations from the mismatched task, we perform online learning based on SACfD as described in Section \ref{sec:method}. Pseudocode can be found in the supplementary materials. Although the estimated $\widetilde{V}_{M_0}^{D}(s)$ can be inaccurate, it still provides enough future information and thus facilitates exploration for the agent. Moreover, the nice theoretical properties of reward shaping guarantee that we will not introduce bias into the learned policy in new tasks. \begin{figure*}[ht] \vspace{-3mm} \begin{center} \includegraphics[width=\linewidth]{figure/legend_re.pdf} \subfigure[Hole "1"]{ \includegraphics[width=0.23\linewidth]{figure/sacfd1_rebuttal.pdf}\vspace{0pt}} \subfigure[Hole "2"]{ \includegraphics[width=0.23\linewidth]{figure/sacfd2_rebuttal.pdf}\vspace{0pt}} \subfigure[Hole "3"]{ \includegraphics[width=0.23\linewidth]{figure/sacfd3_rebuttal.pdf}\vspace{0pt}} \subfigure[Hole "4"]{ \includegraphics[width=0.23\linewidth]{figure/sacfd4_rebuttal.pdf}\vspace{0pt}} \caption{Evaluation of algorithms on 4 new tasks with demonstrations from task “0”. The solid line corresponds to the mean success rate over 5 random seeds and the shaded region corresponds to the standard deviation. The Y-axis shows the success rate in [0, 1]; the X-axis shows interaction steps in [0, 3e5]. } \label{setting1} \end{center} \vspace{-5mm} \end{figure*} \section{Experimental Results} We perform experimental evaluations of the proposed CRSfD method and try to answer the following two questions: Can CRSfD help the exploration of similar sparse-reward tasks with demonstrations from a mismatched task? Will CRSfD introduce bias into the learned policy in new tasks? We choose robot insertion tasks for our experiments, which have a natural sparse reward signal: successfully inserting the peg into the hole yields a reward of +1, otherwise 0. We perform both simulation and real-world experiments. The simulation environment is built under the robosuite framework \cite{zhu2020robosuite}, powered by the Mujoco physics simulator \cite{todorov2012mujoco}. We construct a series of similar tasks where the holes have different shapes and unknown position shifts, reflecting changes in dynamics and reward functions, respectively, as shown in Figure \ref{fig:motivation}. Then we verify the effectiveness of CRSfD under the following 2 settings: (1) Transfer collected demonstrations to similar insertion tasks with environment dynamics mismatch. (2) Transfer collected demonstrations to similar tasks with both environment dynamics and reward function mismatch. Finally, we address the sim-to-real issue and deploy the learned policy on a real robot arm to perform insertion tasks with various shapes of holes in the real world.
We use a Franka Panda robot arm in both simulation and the real world. The comparison baseline algorithms are chosen as follows: \begin{itemize} \item \textbf{Behavior Cloning \cite{ross2011reduction}}: Ignore the task mismatch and directly perform behavior cloning of the demonstrations. \item \textbf{SAC \cite{haarnoja2018soft}}: A SOTA standard RL method which does not use the demonstrations and directly learns from scratch in the target tasks. \item \textbf{GAIL \cite{ho2016generative}}: Use adversarial training to recover the policy that generates the demonstrations, which alleviates the distributional shift problem of behavior cloning. \item \textbf{GAIfO \cite{torabi2018generative}}: A variant of GAIL which trains a discriminator with state transitions $(s,s')$ instead of $(s,a)$ in GAIL to alleviate dynamics mismatch. \item \textbf{POfD \cite{kang2018policy}}: A variant of GAIL which combines the intrinsic reward from the discriminator and the extrinsic reward from the new task. \item \textbf{SQIL \cite{reddy2019sqil}}: An effective off-policy imitation learning algorithm that adds demonstrations with reward +1 to the buffer and assigns reward 0 to all agent experiences. \item \textbf{SACfD \cite{haarnoja2018soft,vecerik2017leveraging}}: Incorporates the effective demonstration replay mechanism from \cite{vecerik2017leveraging} into SAC as described in Section \ref{sec:method}; this is the best baseline as well as the backbone of our method. \item \textbf{RS-GM \cite{wu2021shaping}}: Reward Shaping using Generative Models, which is an extension of discrete reward shaping methods \cite{brys2015reinforcement,hussein2017deep}. After learning a discriminator $D_{\phi}(s,a)$, they shape the reward into $R'(s,a)=R(s,a)+\gamma \lambda D_{\phi}(s',a')-\lambda D_{\phi}(s,a)$. \end{itemize} \subsection{Simulation experiments} We set a nominal hole position as the origin of our Cartesian coordinate system. Observable states include robot proprioceptive information such as joint and end-effector positions and velocities. The action space consists of the 6D pose change of the robot end-effector at 10 Hz, followed by a Cartesian impedance PD controller running at a higher frequency. Only a sparse reward of +1 is provided when the peg is fully inserted into the hole. Demonstrations are collected by a sub-optimal RL policy trained with SAC in task $M_0$ under a carefully designed dense reward, where the hole has shape "0". This process can be replaced by manual collection in the real world. The demonstrations are then tagged with the corresponding sparse reward. We collected 40 demonstration trials with 50 time steps each. \textbf{Setting 1: Tasks with environment dynamics mismatch.} To reflect environment dynamics changes of the tasks, we create an experimental domain of insertion tasks with holes of various shapes in the simulator, represented by different digits, as shown in Figure \ref{fig:motivation}. Different hole shapes lead to different contact modes and thus to different environmental dynamics. We collect demonstrations from hole "0", and our method uses them to help train similar new tasks with various hole shapes from digit 1 to 4. \textbf{Analysis 1: }The comparison results of CRSfD and the baseline algorithms under the above setting are shown in Figure \ref{setting1}. As we expected, the simplest BC algorithm simply imitates the expert actions of the original task and can only complete the insertion with a small chance.
The SAC algorithm does not make use of the demonstration data and conducts a lot of useless exploration, which leads to poor performance. The GAIL algorithm and its variants GAIfO and POfD also fail most of the time, as they purely imitate the demonstrations collected in the mismatched task. SQIL ignores the reward in the new task and only obtains a limited success rate. SACfD cannot be effectively guided by demonstrations from the mismatched task under sparse reward. Our proposed CRSfD provides guidance through reward shaping and consistently achieves the best performance on all four insertion tasks with different hole shapes. \textbf{Setting 2: Tasks with both dynamics and reward function mismatch} Next, we consider more challenging scenarios where we aim to transfer the demonstrations to new tasks with both environmental dynamics mismatch and reward function mismatch. We assume that the hole has unknown random shifts relative to the nominal position, so the reward function changes. At the beginning of each episode, the hole is uniformly initialized in a square area centered at the nominal position. This is challenging because the robot is `blind' to these unknown offsets and needs to search further for the entrance of the hole. In practice, we collected demonstrations from the task with hole "0" at a fixed hole position, and transferred them to new tasks with random hole shifts and different hole shapes. \begin{figure}[H] \begin{center} \includegraphics[width=1.0\linewidth]{figure/rebuttal2.pdf} \caption{Evaluations of CRSfD and the best baseline SACfD. The solid line corresponds to the mean of the success rate over 3 random seeds and the shaded region corresponds to the standard deviation. The X-coordinate reflects changes in the reward function and the Y-coordinate reflects changes in the environmental dynamics. Our algorithm outperforms the baseline with increasing margins as the task changes become larger.} \label{setting2} \end{center} \end{figure} \textbf{Analysis 2: }We compare our algorithm with the best baseline algorithm SACfD under varying degrees of environmental dynamics and reward function changes, as shown in Figure \ref{setting2}. Due to space limits, more comparisons can be found in Figure \ref{setting3} in the appendix. The x-coordinate represents increasing changes of the reward function, where the random range of the hole becomes larger (from 4mm*4mm and 6mm*6mm to 8mm*8mm). The y-coordinate represents increasing environmental dynamics change, from hole "0" in its original shape to hole "2" in a different shape. The coordinate origin thus represents the original task where the demonstrations are collected, and a 2D coordinate $[x,y]$ represents a new task with the corresponding degree of mismatch. From Figure \ref{setting2}, we can observe that when applied to the original task or a very similar task such as [4mm, shape '0'], our method has performance similar to the SACfD baseline. When the task changes become greater (e.g., [8mm, shape '0'], [4mm, shape '2'], [6mm, shape '2'], [8mm, shape '2']), SACfD gradually loses the guidance from the original demonstrations as the task mismatch grows, while CRSfD achieves significant performance gains with the help of conservative reward shaping using the estimated value function. \textbf{Ablation study}\label{ablation} As mentioned in Section \ref{sec:CRSfD}, we make two improvements over the reward shaping method to encourage the agent to explore around the demonstrations conservatively: (1) Regress the value function of OOD states to zero.
(2) Use a larger discount factor in the new tasks. \begin{wrapfigure}{r}{3.5cm} \includegraphics[width=3.5cm]{figure/tion3ablation.pdf} \vspace{-5mm} \caption{Ablation studies of the conservativeness techniques. (1) means regressing the value function to zero for OOD states. (2) means setting a larger discount factor.}\label{fig:ablation} \vspace{-7mm} \end{wrapfigure} We ablate these two improvements and compare their performance. The ablations are tested on the new task with hole shape ``3"; results for other shapes can be found in the supplementary materials. As shown in Figure \ref{fig:ablation}, compared to the original CRSfD algorithm, removing either of these two techniques leads to a performance drop, where the agent needs to spend more effort on exploration. \subsection{Real World Experiments} After completing the insertion tasks with various hole shapes in the simulator, we deploy the policy to the real robot arm. To overcome the sim-to-real gap, we use domain randomization in the simulation. The initial positions of the robot arm end-effector and the holes are randomized in a 6cm*6cm*6cm space and a 2mm*2mm plane respectively, and the friction coefficient of the object is also randomized in [1, 2]. We use a real Franka Panda robot arm and 3D print the holes corresponding to digits ``0-4". The holes are roughly 4cm*4cm in size, with a 1mm clearance between the peg and the hole. We performed 25 insertion trials for each hole shape and counted the success rates separately, as shown in Table \ref{success_rate}. The robot achieves high success rates in all tasks. \begin{figure}[H] \begin{minipage}{.75\linewidth} \centering \includegraphics[width=\linewidth]{figure/real_peginhole_clip.pdf} \caption{Real world robot insertion experiments.} \end{minipage} \begin{minipage}{.24\linewidth} \begin{table}[H] \renewcommand\arraystretch{1.3} \vspace{-6mm} \centering \begin{tabular}{c c } & \\ \toprule[1.5pt] \textbf{Hole} & \textbf{Success} \\ \textbf{Shape} & \textbf{Rate} \\ \hline Digit "0" & 1.0\\ Digit "1" & 1.0\\ Digit "2" & 0.92\\ Digit "3" & 0.92\\ Digit "4" & 0.96\\ \bottomrule[1.5pt] \end{tabular} \vspace{2mm} \caption{Success rates for the real world robot insertion tasks.} \label{success_rate} \vspace{-6mm} \end{table} \end{minipage} \end{figure} \section{Appendix} \subsection{Algorithm} \begin{algorithm} \begin{algorithmic} \caption{CRSfD} \STATE {\bfseries Input:} $Env$ environment for the new task $M_i$; $\theta^{\pi}$ initial policy parameters; $\theta^{Q}$ initial action-value function parameters; $\theta^{Q'}$ initial target action-value function parameters; $N$ target network update frequency. \STATE {\bfseries Input:} $B^{E}$ replay buffer initialized with demonstrations; $B$ replay buffer initialized empty; $K$ number of pre-training gradient updates; $d$ expert buffer sample ratio; $batch$ mini-batch size. \STATE {\bfseries Input:} $\theta^{V}$ initial value function (potential function), original task discount factor $\gamma_0$. \STATE {\bfseries Output:} $Q_{\theta}(s,a)$ action-value function (critic) and $\pi(.|s)$ the policy (actor). \STATE \textcolor[rgb]{0,0.7,0}{\# Estimate value function from demonstrations.} \FOR{step $t$ {\bfseries in} \{0,1,2,...T\}} \STATE Sample $batch$ transitions from $B^{E}$ and calculate their Monte-Carlo returns with discount factor $\gamma_0$.
\STATE Estimate $V_{\theta}(s)$ conservatively by Equation \ref{v_equation} \ENDFOR \STATE \textcolor[rgb]{0,0.7,0}{\# Interact with $Env$.} \FOR{episode $e$ {\bfseries in} \{0,1,2,...M\}} \STATE Initialize state $s_0 \sim Env$ \FOR{step $t$ {\bfseries in} episode length \{0,1,2,...T\}} \STATE Sample action from $\pi(.|s_t)$ \STATE Get next state and natural sparse reward $s_{t+1},r_{t}$ \STATE Shape the reward by: $r_{t}'=r_{t}+\gamma_i V(s_{t+1},\theta^{V})-V(s_{t},\theta^{V})$ \STATE Add the single-step transition $(s_t,a_t,r_t',s_{t+1})$ to the replay buffer $B$. \ENDFOR \FOR{update step $l$ {\bfseries in} \{0,1,2,...L\}} \STATE Sample with prioritization: $d*batch$ transitions from $B^{E}$ and $(1-d)*batch$ transitions from $B$. Concatenate them into a single batch. \STATE Perform the SAC update for actor and critic: $L_{Actor}(\theta^{\pi}), L_{Critic}(\theta^{Q})$. \IF{step $l \equiv 0 \pmod{N}$} \STATE Update the target critic using a moving average: $\theta^{Q'}=(1-\tau) \theta^{Q'}+\tau \theta^{Q}$ \STATE Decrease the expert buffer sample ratio: $d=d-\delta$ if $d>0$. \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Implementation Details} We implemented our CRSfD algorithm and the baseline algorithms in PyTorch; the implementation can be found in the supplementary materials. Simulated environments are based on the robosuite framework \url{https://github.com/ARISE-Initiative/robosuite}. Our CRSfD algorithm is based on \url{https://github.com/denisyarats/pytorch_sac_ae}, while the baseline algorithms are based on \url{https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail} and \url{https://github.com/ku2482/gail-airl-ppo.pytorch}. \subsection{Videos} Videos for the simulated environments and real world environments can be found in the supplementary materials. \subsection{Ablations} As mentioned in Section \ref{ablation}, we make two improvements over the reward shaping method to encourage the agent to explore around the demonstrations conservatively: (1) regress the value function of OOD states to zero; (2) use a larger discount factor in the new tasks. We ablate these two improvements and compare their performance on more environments, as shown in Figure \ref{ablation_all}. \begin{figure*}[ht] \vspace{-3mm} \begin{center} \includegraphics[width=0.6\linewidth]{figure/legend_ablation.pdf}\\ \subfigure[Hole "1"]{ \includegraphics[width=0.23\linewidth]{figure/tion1ablation.pdf}\vspace{0pt}} \subfigure[Hole "2"]{ \includegraphics[width=0.23\linewidth]{figure/tion2ablation.pdf}\vspace{0pt}} \subfigure[Hole "3"]{ \includegraphics[width=0.23\linewidth]{figure/tion3ablation-2.pdf}\vspace{0pt}} \subfigure[Hole "4"]{ \includegraphics[width=0.23\linewidth]{figure/tion4ablation.pdf}\vspace{0pt}} \caption{Ablation studies of the conservativeness techniques. (1) means regressing the value function to zero for OOD states. (2) means setting a larger discount factor. } \label{ablation_all} \end{center} \vspace{-5mm} \end{figure*} \setcounter{theorem}{0} \subsection{Proof for theorem} \begin{theorem} For task $M_0$ with transition $T_0$ and a new task $M_k$ with transition $T_k$, define the total variation divergence $D_{TV}(s,a)=\Sigma_{s'}|T_0(s'|s,a)-T_k(s'|s,a)|=\delta$.
If we have $\delta<(\gamma_k-\gamma_0) \mathbb{E}_{T_k(s'|s)}[ V_{M_0}^D(s')]/\left(\gamma_0 \max_{s'}V_{M_0}^D(s')\right)$, then following the expert policy in the new task will result in an immediate reward greater than 0: \begin{equation} \setlength{\abovedisplayskip}{0pt} \setlength{\belowdisplayskip}{0pt} \mathbb{E}_{a\sim \pi(.|s)}r'(s,a) \geq (\gamma_k-\gamma_0)\mathbb{E}_{T_{k}(s'|s)}[ V_{M_0}^{D}(s')]-\gamma_0 \delta \max_{s'}V_{M_0}^D(s') > 0 \end{equation} \end{theorem} \textbf{Proof:} For simplicity, denote the demonstration state value function in the original task by $V_1(s)=V_{M_0}^D(s)$. Start from the reward shaping equations and expand $V_1(s)$ for one more time step: \begin{equation} \begin{split} r'(s,a,s')=&r(s,a,s')+\gamma_k V_1(s')- V_1(s)\\ r'(s,a)=&r(s,a)+\gamma_k \mathbb{E}_{T_k(s'|s,a)}[ V_1(s')]-V_1(s)\\ =&(\gamma_k-\gamma_0)\mathbb{E}_{T_k(s'|s,a)}[ V_1(s')]+ \left(r(s,a)+\gamma_0 \mathbb{E}_{T_k(s'|s,a)}[ V_1(s')]-V_1(s) \right)\\ \geq&(\gamma_k-\gamma_0)\mathbb{E}_{T_k(s'|s,a)}[ V_1(s')]+\left(Q^{\pi_1}(s,a)-V_1(s)\right)- \gamma_0 \delta \max_{s'}V_1(s') \end{split} \end{equation} where the last step uses $|\mathbb{E}_{T_k(s'|s,a)}[V_1(s')]-\mathbb{E}_{T_0(s'|s,a)}[V_1(s')]|\leq \delta \max_{s'}V_1(s')$ together with $Q^{\pi_1}(s,a)=r(s,a)+\gamma_0 \mathbb{E}_{T_0(s'|s,a)}[V_1(s')]$. Taking the expectation over the demonstration policy, and noting that $\mathbb{E}_{a\sim \pi(.|s)}[Q^{\pi_1}(s,a)]=V_1(s)$ so the second term vanishes: \begin{equation} \begin{split} \mathbb{E}_{a\sim \pi(.|s)}r'(s,a) \geq& \mathbb{E}_{a\sim \pi(.|s)}\left[(\gamma_k-\gamma_0)\mathbb{E}_{T_k(s'|s,a)}[ V_1(s')]\right]- \gamma_0 \delta \max_{s'}V_1(s') \end{split} \end{equation} For a sparse-reward environment we have $r(s,a)=0$ almost everywhere, so \begin{equation} \begin{split} \mathbb{E}_{a\sim \pi(.|s)}r'(s,a) \geq &\mathbb{E}_{a\sim \pi(.|s)}\left[(\gamma_k-\gamma_0)\mathbb{E}_{T_k(s'|s,a)}[ V_1(s')]\right]- \gamma_0 \delta \max_{s'}V_1(s')\\ =&(\gamma_k-\gamma_0)\mathbb{E}_{T_{k}(s'|s)}[ V_1(s')]-\gamma_0 \delta \max_{s'}V_1(s') \end{split} \end{equation} which is strictly positive whenever $\delta$ satisfies the condition of the theorem. \subsection{Increasingly Larger Task Mismatch} \begin{figure}[H] \begin{center} \includegraphics[width=1.0\linewidth]{figure/all_rebuttal.pdf} \caption{Increasingly larger task mismatch. Experiments are done on hole shape 0 with an increasingly random hole position.} \label{setting3} \end{center} \vspace{-5mm} \end{figure} We can observe that as the task difference increases, our method at first gradually outperforms the baseline methods. When the task mismatch becomes too large, our method gradually loses some of its advantage and performs similarly to the baselines.
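For intuition, the threshold on $\delta$ in the theorem can be checked numerically; the following is a small sketch with purely illustrative numbers (not taken from our experiments):
\begin{verbatim}
import numpy as np

gamma_0, gamma_k = 0.99, 0.995         # original / new-task discount factors
v_next = np.array([40.0, 50.0, 60.0])  # samples of V_{M_0}^D(s') under T_k
v_max = 80.0                           # max_{s'} V_{M_0}^D(s')

# TV divergences delta below this threshold yield a positive shaped reward
threshold = (gamma_k - gamma_0) * v_next.mean() / (gamma_0 * v_max)
print(threshold)  # ~0.0032 for these numbers
\end{verbatim}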
\section{Introduction} Quantum information has opened a modern approach to both quantum mechanics and information theory \cite{NielsenBook}. Very recently, this field has been developed into the so-called \textquotedblleft continuous variable\textquotedblright\ setting, where information is encoded and processed using quantum systems with infinite dimensional Hilbert spaces \cite{RMPWeed,BraREV,BraREV2,GaussSTATES,GaussSTATES2}. Bosonic systems, such as the radiation modes of the electromagnetic field, are today the most studied continuous variable systems, thanks to their close connection with quantum optics. In the continuous variable framework, a wide range of results have been successfully achieved, including quantum teleportation~\cite{CVtelepo,Bra98,RalphTELE,PirTeleOPMecc,Barlett2003,Sherson}, teleportation networks~\cite{TeleNET,teleREV,teleJMO} and games~\cite{Pirgames,Pir2005}, entanglement swapping protocols~\cite{Entswap,Entswap2,PirENTswap}, quantum key distribution~\cite{QKD0,QKD1,Weed,Weed2,Chris}, two-way quantum cryptography~\cite{PirNATURE,Pir2way}, quantum computation~\cite{Qcomp1,Qcomp2,Qcomp2b,Qcomp2c,Qcomp2d,Qcomp2e,Qcomp2f} and cluster quantum computation~\cite{Qcomp3,Qcomp5,Qcomp6,Qcomp7}. Other studies have led to the full classification of Gaussian channels and collective Gaussian attacks~\cite{HolevoCAN,CharacATT,LectNOTES}, the computation of secret-key capacities and their reverse counterpart~\cite{Devetak,Deve2,Deve3,PirSKcapacity,RevCOHE}, and possible schemes for quantum direct communication~\cite{PirDirectCommunication,PirDcomm2}. One of the key resources in many protocols of quantum information is quantum entanglement. In the bosonic setting, quantum entanglement is usually present in the form of Einstein-Podolsky-Rosen (EPR) correlations~\cite{EPRpaper}, where the quadrature operators of two separate bosonic modes are so correlated as to beat the standard quantum limit~\cite{note1}. The simplest source of EPR correlations is the two-mode squeezed vacuum (TMSV) state. In the number-ket representation this state is defined by \[ \left\vert \xi\right\rangle =(\cosh\xi)^{-1}\sum_{n=0}^{\infty}(\tanh\xi )^{n}\left\vert n\right\rangle _{s}\left\vert n\right\rangle _{i}~, \] where $\xi$ is the squeezing parameter and $\{s,i\}$ is an arbitrary pair of bosonic modes, that we may call \textquotedblleft signal\textquotedblright\ and \textquotedblleft idler\textquotedblright. In particular, $\xi$ quantifies the signal-idler entanglement and gives the mean number of photons \textrm{sinh}$^{2}\xi$ in each mode. Since it is entangled, the TMSV state cannot be prepared by applying local operations and classical communications (LOCCs) to a couple of vacua $\left\vert 0\right\rangle _{s}\otimes\left\vert 0\right\rangle _{i}$ or any other kind of tensor product state. For this reason, the TMSV state cannot be expressed as a classical mixture of coherent states $\left\vert \alpha\right\rangle _{s}\otimes\left\vert \beta \right\rangle _{i}$ with $\alpha$ and $\beta$ arbitrary complex amplitudes. In other words, its P-representation~\cite{Prepres,Prepres2} \[ \left\vert \xi\right\rangle \left\langle \xi\right\vert =\int\int d^{2}\alpha d^{2}\beta\boldsymbol{~}\mathcal{P}(\alpha,\beta)~\left\vert \alpha \right\rangle _{s}\left\langle \alpha\right\vert \otimes\left\vert \beta\right\rangle _{i}\left\langle \beta\right\vert ~, \] involves a function $\mathcal{P}$ which is non-positive and, therefore, cannot be considered as a genuine probability distribution.
For this reason, the TMSV\ state is a particular kind of \textquotedblleft nonclassical\textquotedblright\ state. Other kinds are single-mode squeezed states and Fock states. By contrast, a bosonic state is called \textquotedblleft classical\textquotedblright\ when its P-representation is positive, meaning that the state can be written as a classical mixture of coherent states. Thus a classical source of light is composed of a set of $m$ bosonic modes in a state \begin{equation} \rho=\int d^{2}\alpha_{1}\cdots\int d^{2}\alpha_{m}\boldsymbol{~}\mathcal{P}(\alpha_{1},\cdots,\alpha_{m})~\otimes_{k=1}^{m}\left\vert \alpha_{k}\right\rangle \left\langle \alpha_{k}\right\vert ~, \label{Prepres} \end{equation} where $\mathcal{P}$ is positive and normalized to $1$. Typically, classical sources are just made of collections of coherent states with amplitudes $\{\bar{\alpha}_{1},\cdots,\bar{\alpha}_{m}\}$, i.e., $\rho=\otimes_{k=1}^{m}\left\vert \bar{\alpha}_{k}\right\rangle \left\langle \bar{\alpha}_{k}\right\vert $, which corresponds to having \[ \mathcal{P}=\prod_{k=1}^{m}\delta^{2}(\alpha_{k}-\bar{\alpha}_{k})~. \] In other situations, where the sources are particularly chaotic, they are better described by a collection of thermal states with mean photon numbers $\{\bar{n}_{1},\cdots,\bar{n}_{m}\}$, so that \[ \mathcal{P}=\prod_{k=1}^{m}\frac{\exp(-\left\vert \alpha_{k}\right\vert ^{2}/\bar{n}_{k})}{\pi\bar{n}_{k}}~. \] More generally, we can have classical states which are not just tensor products but have (classical) correlations among different bosonic modes. The comparison between classical and nonclassical states has clearly triggered a lot of interest. The main idea is to compare the use of a candidate nonclassical state, like the EPR state, with all the classical states for specific information tasks. One of these tasks has been the detection of low-reflectivity objects in far target regions under the condition of extremely low signal-to-noise ratios. This scenario has been called \textquotedblleft quantum illumination\textquotedblright\ and has been investigated in a series of papers~\cite{QIll1,QIll2,QIll3,Guha,Devi,YuenNair}. Most recently, the EPR correlations have been exploited for a completely different task in a completely different regime of parameters. In the model of \textquotedblleft quantum reading\textquotedblright\ \cite{QreadingPRL}, the EPR correlations have been used to retrieve information from digital memories which are reminiscent of today's optical disks, such as CDs and DVDs. A digital memory can in fact be modelled as a sequence of cells corresponding to beam splitters with two possible reflectivities $r_{0}$ and $r_{1}$ (used to encode a bit of information). By fixing the mean total number of photons $N$\ irradiated over each memory cell, it is possible to show that a non-classical source of light with EPR correlations retrieves more information than any classical source~\cite{QreadingPRL}. In general, the improvement is found in the regime of few photons ($N=1\div100$)\ and for memories with high reflectivities, as is typical for optical memories. In this regime, the gain of information given by the quantum reading can be dramatic, i.e., close to $1$ bit for each bit of the memory. An important point in the study of Ref.~\cite{QreadingPRL} is that the quantum-classical comparison is performed under a global energy constraint, i.e., by fixing the total number of photons $N$\ which are irradiated over each memory cell (on average).
Under this assumption, it is possible to construct an EPR transmitter, made of a suitable number of TMSV states, which is able to outperform \textit{any} classical source composed of \textit{any} number of modes. In the following we consider a different and easier comparison: we fix the number of signal modes irradiated over the target cell ($M$) and the mean number of photons \textit{per signal mode} ($N_{S}$). Under these assumptions, we compare an EPR transmitter with a classical source. Then, for fixed $N_{S}$, we determine the critical number of signal modes $M^{(N_{S})}$ after which an EPR\ transmitter with $M>M^{(N_{S})}$ is able to beat any classical source (with the same number of signals $M$). \section{Readout mechanism} Here we briefly review the basic readout mechanism of Ref.~\cite{QreadingPRL}, specializing the study to the case of a local energy constraint. Let us consider a model of digital optical memory (or disk) where the memory cells are beam splitter mirrors with different reflectivities $r=r_{0},r_{1}$ (with $r_{1}\geq r_{0}$). In particular, the bit-value $u=0$ is encoded in a lower-reflectivity mirror ($r=r_{0}$), that we may call a \textit{pit}, while the bit-value $u=1$ is encoded in a higher-reflectivity mirror ($r=r_{1}$), that we may call a \textit{land} (see\ Fig.~\ref{QreadPIC}). Close to the disk, a reader aims to retrieve the value of the bit $u$ which is stored in each memory cell. To this end, the reader exploits a transmitter (to probe a target cell) and a receiver (to measure the corresponding output). In general, the transmitter consists of two quantum systems, called \textit{signal} $S$ and \textit{idler} $I$, respectively. The signal system $S$ is a set of $M$ bosonic modes which are directly shined on the target cell. The mean total number of photons of this system is simply given by $N=MN_{S}$, where $N_{S}$ is the mean number of photons per signal mode (simply called \textquotedblleft energy\textquotedblright\ hereafter). At the output of the cell, the reflected system $R$ is combined with the idler system $I$, which is a supplementary set of bosonic modes whose number $L$ can be completely arbitrary. Both the systems $R$ and $I$ are finally measured by the receiver (see\ Fig.~\ref{QreadPIC}). \begin{figure}[ptbh] \vspace{-1.0cm} \par \begin{center} \includegraphics[width=0.75\textwidth] {Qread_PIC.eps} \end{center} \par \vspace{-2.7cm}\caption{\textbf{Model of memory}. Digital information is stored in a disk whose memory cells are beam splitter mirrors with different reflectivities: $r=r_{0}$ encoding bit-value $u=0$ and $r=r_{1}$ encoding bit-value $u=1$. \textbf{Readout}. A reader is generally composed of a transmitter and a receiver. It retrieves a stored bit by probing a memory cell with a signal system $S$ ($M$ bosonic modes) and detecting the reflected system $R$ together with an idler system $I$ ($L$ bosonic modes). In general, the output system $R$ combines the signal system $S$ with a bath system $B$ ($M$ bosonic modes in thermal states). The transmitter is in a state $\rho$ which can be classical (classical transmitter) or non-classical (quantum transmitter). In our work, we consider a quantum transmitter with EPR\ correlations between signal and idler systems.\label{QreadPIC}} \end{figure} We assume that Alice's apparatus is very close to the disk, so that no significant source of noise is present in the gap between the disk and the reader.
However, we assume that non-negligible noise comes from the thermal bath present at the other side of the disk. This bath generally describes stray photons, transmitted by previous cells and bouncing back to hit the next ones. For this reason, the reflected system $R$ combines the signal system $S$ with a bath system $B$ of $M$ modes. These environmental modes are assumed to be in a tensor product of thermal states, each one with $N_{B}$ mean photons (white thermal noise). In this model we identify five basic parameters: the reflectivities of the memory $\{r_{0},r_{1}\}$, the temperature of the bath $N_{B}$, and the profile of the signal $\{M,N_{S}\}$, which is given by the number of signals $M$\ and the energy $N_{S}$. In general, for a fixed input state $\rho$ at the transmitter (systems $S,I$), Alice will get two possible output states $\sigma_{0}$ and $\sigma_{1}$ at the receiver (systems $R,I$). These output states are the effect of two different quantum channels, $\mathcal{E}_{0}$ and $\mathcal{E}_{1}$, which depend on the bit $u=0,1$ stored in the target cell. In particular, we have $\sigma_{u}=(\mathcal{E}_{u}\otimes\mathcal{I})(\rho)$, where the conditional channel $\mathcal{E}_{u}$ acts on the signal system, while the identity channel $\mathcal{I}$ acts on the idler system. More exactly, we have $\mathcal{E}_{u}=\mathcal{R}_{u}^{\otimes M}$, where $\mathcal{R}_{u}$ is a one-mode lossy channel with conditional loss $r_{u}$ and fixed thermal noise $N_{B}$. Now, the minimum error probability $P_{err}$ affecting the decoding of $u$ is just the error probability affecting the statistical discrimination of the two output states, $\sigma_{0}$ and $\sigma_{1}$, via an optimal receiver. This quantity is equal to $P_{err}=[1-D(\sigma_{0},\sigma_{1})]/2$, where $D(\sigma_{0},\sigma_{1})$ is the trace distance between $\sigma_{0}$ and $\sigma_{1}$~\cite{Helstrom,Fuchs,FuchsThesis}. Clearly, the value of $P_{err}$ determines the average amount of information which is decoded for each bit stored in the memory. This quantity is equal to $J=1-H(P_{err})$, where $H(x):=-x\log_{2}x-(1-x)\log_{2}(1-x)$ is the usual formula for the binary Shannon entropy. In the following, we compare the performance of decoding in two paradigmatic situations, one where the transmitter is described by a non-classical state (quantum transmitter) and one where the transmitter is in a classical state (classical transmitter). In particular, we show how a quantum transmitter with EPR correlations (EPR transmitter) is able to outperform classical transmitters. The quantum-classical comparison is performed for a fixed signal profile $\{M,N_{S}\}$. Then, for various fixed values of the energy $N_{S}$ (local energy constraint), we study the critical number of signal modes $M^{(N_{S})}$ after which an EPR transmitter (with $M>M^{(N_{S})}$ signals) is able to beat any classical transmitter (with the same number of signals $M$). \section{Quantum-classical comparison} First let us consider a classical transmitter. A classical transmitter with $M$ signals and $L$ idlers is described by a classical state $\rho$ as specified by Eq.~(\ref{Prepres}) with $m=M+L$. In other words, it is a probabilistic mixture of multi-mode coherent states $\otimes_{k=1}^{M+L}\left\vert \alpha_{k}\right\rangle \left\langle \alpha_{k}\right\vert $. Given this transmitter, we consider the corresponding error probability $P_{err}^{class}$ which affects the readout of the memory.
Remarkably, this error probability is lower-bounded by a quantity which depends on the signal profile $\{M,N_{S}\}$, but not on the number $L$ of idlers or on the explicit expression of the $\mathcal{P}$-function. In fact, we have~\cite{QreadingPRL} \begin{equation} P_{err}^{class}\geq\mathcal{C}(M,N_{S}):=\frac{1-\sqrt{1-F(N_{S})^{M}}}{2}~, \label{CB_cread} \end{equation} where $F(N_{S})$ is the fidelity between $\mathcal{R}_{0}(|N_{S}^{1/2}\rangle\langle N_{S}^{1/2}|)$ and $\mathcal{R}_{1}(|N_{S}^{1/2}\rangle\langle N_{S}^{1/2}|)$, the two possible outputs of the single-mode coherent state $|N_{S}^{1/2}\rangle\langle N_{S}^{1/2}|$. As a consequence, all the classical transmitters with signal profile $\{M,N_{S}\}$ retrieve an amount of information which is upper-bounded by $J_{class}:=1-H[\mathcal{C}(M,N_{S})]$. Now, let us construct a transmitter having the same signal profile $\{M,N_{S}\}$, but possessing EPR correlations between signals and idlers. This is realized by taking $M$ identical copies of a TMSV state, i.e., $\rho=\left\vert \xi\right\rangle \left\langle \xi\right\vert ^{\otimes M}$ where $N_{S}=\mathrm{sinh}^{2}\xi$. Given this transmitter, we consider the corresponding error probability $P_{err}^{quant}$ affecting the readout of the memory. This quantity is upper-bounded by the quantum Chernoff bound \cite{QCbound,QCbound2,QCbound3,QCbound4,MinkoPRA} \begin{equation} P_{err}^{quant}\leq\mathcal{Q}(M,N_{S}):=\frac{1}{2}\left[ \inf_{s\in (0,1)}\mathrm{Tr}(\theta_{0}^{s}\theta_{1}^{1-s})\right] ^{M}, \label{QCB_qread} \end{equation} where $\theta_{u}:=(\mathcal{R}_{u}\otimes\mathcal{I})(\left\vert \xi\right\rangle \left\langle \xi\right\vert )$. Since $\theta_{0}$ and $\theta_{1}$ are Gaussian states, we can write their symplectic decompositions~\cite{Alex} and compute the quantum Chernoff bound using the general formula for multimode Gaussian states given in Ref.~\cite{MinkoPRA}. Then, we can easily compute a lower bound $J_{quant}:=1-H[\mathcal{Q}(M,N_{S})]$ for the information which is decoded via this quantum transmitter. In order to show an improvement with respect to the classical case, it is sufficient to prove the positivity of the \textquotedblleft information gain\textquotedblright\ $G:=J_{quant}-J_{class}$. This quantity is in fact a lower bound for the average information which is gained by using the EPR quantum transmitter instead of any classical transmitter. Roughly speaking, the value of $G$ estimates the minimum information which is gained by the quantum readout for each bit of the memory. In general, $G$ is a function of all the basic parameters of the model, i.e., $G=G(M,N_{S},r_{0},r_{1},N_{B})$. Numerically, we can easily find signal profiles $\{M,N_{S}\}$, classical memories $\{r_{0},r_{1}\}$, and thermal baths $N_{B}$, for which we have the quantum effect $G>0$.
Some of these values are reported in the following table: \begin{center} \begin{tabular}[c]{|c|c|c|c|c|c|}\hline $~M~$ & $~~N_{S}~~$ & $~~~~r_{0}~~~~$ & $~~~~r_{1}~~~~$ & $~~N_{B}~~$ & $~~~G~($bits$)~~~$\\\hline $1$ & $3.5$ & $0.5$ & $0.95$ & $0.01$ & $~6.2\times10^{-3}$\\\hline $10$ & $1$ & $0.2$ & $0.8$ & $0.01$ & $~3.4\times10^{-2}$\\\hline $30$ & $1$ & $0.38$ & $0.85$ & $1$ & $~1.2\times10^{-3}$\\\hline $100$ & $0.1$ & $0.25$ & $0.85$ & $0.01$ & $~5.9\times10^{-2}$\\\hline $200$ & $0.1$ & $0.6$ & $0.95$ & $0.01$ & $0.22$\\\hline $2\times10^{5}$ & $0.01$ & $0.995$ & $1$ & $0$ & $0.99$\\\hline \end{tabular} \end{center} Note that we can find choices of parameters where $G\simeq1$, i.e., the classical readout of the memory does not decode any information whereas the quantum readout is able to retrieve all of it. As shown in the last row of the table, this situation can occur when both the reflectivities of the memory are very close to $1$. From the first row of the table, we notice another remarkable fact: for a land-reflectivity $r_{1}$\ sufficiently close to $1$, one signal with few photons can give a positive gain. In other words, the use of a single, but sufficiently entangled, TMSV state $\left\vert \xi\right\rangle \left\langle \xi\right\vert $ can outperform every classical transmitter which uses one signal mode with the same energy (and potentially infinite idler modes). According to our numerical investigation, the quantum readout is generally more powerful when the land-reflectivity is sufficiently high (i.e., $r_{1}\gtrsim0.8$). For this reason, it is very important to analyze the scenario in the limit of ideal land-reflectivity ($r_{1}=1$). Let us call \textquotedblleft ideal memory\textquotedblright\ a classical memory with $r_{1}=1$. Clearly, this memory is completely characterized by the value of its pit-reflectivity $r_{0}$. For ideal memories, the quantum Chernoff bound of Eq.~(\ref{QCB_qread}) takes the analytical form \[ \mathcal{Q}=\frac{1}{2}\{[1+(1-\sqrt{r_{0}})N_{S}]^{2}+N_{B}(2N_{S}+1)(1-r_{0})\}^{-M}, \] and the classical bound of Eq.~(\ref{CB_cread}) can be computed using \[ F(N_{S})=\gamma^{-1}\exp[-\gamma^{-1}(1-\sqrt{r_{0}})^{2}N_{S}]~, \] where $\gamma:=1+(1-r_{0})N_{B}$~\cite{QreadingPRL}. Using these formulas, we can study the behavior of the gain $G$ in terms of the remaining parameters $\{M,N_{S},r_{0},N_{B}\}$. Let us consider an ideal memory with generic $r_{0}\in\lbrack0,1)$ in a generic thermal bath $N_{B}\geq0$. For a fixed energy $N_{S}$, we consider the minimum number of signals $M^{(N_{S})}$ above which $G>0$~\cite{note2}. This critical number can be defined independently of the thermal noise $N_{B}$ (via an implicit maximization over $N_{B}$). Then, for a given value of the energy $N_{S}$, the critical number $M^{(N_{S})}$ is a function of $r_{0}$ alone, i.e., $M^{(N_{S})}=M^{(N_{S})}(r_{0})$. Its behavior is shown in Fig.~\ref{PRLmin} for different values of the energy. \begin{figure}[ptbh] \vspace{-0.4cm} \par \begin{center} \includegraphics[width=0.55\textwidth] {Mplots.eps} \end{center} \par \vspace{-0.6cm}\caption{Number of signals $M$ (logarithmic scale)\ versus pit-reflectivity $r_{0}$. The curves refer to $N_{S}=0.01$, $0.1$ and $0.5$ photons. For each value of the energy $N_{S}$, we plot the critical number $M^{(N_{S})}(r_{0})$ as a function of $r_{0}$. All the curves have an asymptote at $r_{0}=1$. For $N_{S}\gtrsim2.5$ photons (curves not shown), we have another asymptote at $r_{0}=0$.\label{PRLmin}} \end{figure}
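The ideal-memory formulas above can be evaluated directly; the following sketch (Python with NumPy; function and variable names are ours, and, unlike the text, the critical number is computed here at fixed $N_{B}$ rather than maximizing over it) reproduces, e.g., the gain $G\simeq0.99$ quoted in the last row of the table:
\begin{verbatim}
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def gain_ideal(M, Ns, r0, Nb):
    """Lower bound G = J_quant - J_class for an ideal memory (r1 = 1)."""
    g = 1 + (1 - r0) * Nb                                # gamma
    F = np.exp(-(1 - np.sqrt(r0)) ** 2 * Ns / g) / g     # fidelity F(Ns)
    C = (1 - np.sqrt(1 - F ** M)) / 2                    # classical bound
    Q = 0.5 * ((1 + (1 - np.sqrt(r0)) * Ns) ** 2
               + Nb * (2 * Ns + 1) * (1 - r0)) ** (-M)   # quantum Chernoff bound
    return binary_entropy(C) - binary_entropy(Q)

print(gain_ideal(M=2e5, Ns=0.01, r0=0.995, Nb=0.0))  # ~0.99 (last table row)

def critical_M(Ns, r0, Nb=0.0, M_max=10**6):
    """Smallest number of signals M with G > 0, by linear scan."""
    for M in range(1, M_max):
        if gain_ideal(M, Ns, r0, Nb) > 0:
            return M
    return None
\end{verbatim}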
It is remarkable that, for low-energy signals ($N_{S}=0.01\div1$ photons), the critical number $M^{(N_{S})}(r_{0})$ is finite for every $r_{0}\in\lbrack 0,1)$. This means that, for ideal memories and low-energy signals, there always exists a finite number of signals $M^{(N_{S})}$ above which the quantum readout of the memory is more efficient than its classical readout. In other words, there is an EPR\ transmitter with $M>M^{(N_{S})}$ able to beat any classical transmitter with the same number of signals $M$. In the considered low-energy regime, $M^{(N_{S})}(r_{0})$ is relatively small for almost all the values of $r_{0}$, except for $r_{0}\rightarrow1$\ where $M^{(N_{S})}(r_{0})\rightarrow\infty$. In fact, for $r_{0}\simeq1$, we derive $M^{(N_{S})}(r_{0})\simeq\lbrack4N_{S}(2N_{S}+1)(1-r_{0})]^{-1}$, which diverges at $r_{0}=1$. Such a divergence is expected, since we must have $P_{err}^{quant}=P_{err}^{class}=1/2$\ for $r_{0}=r_{1}$ (see Appendix~\ref{app1} for details). Apart from the divergence at $r_{0}=1$, in all the other points $r_{0}\in\lbrack0,1)$, the critical number $M^{(N_{S})}(r_{0})$ decreases for increasing energy $N_{S}$ (see Fig.~\ref{PRLmin}). In particular, for $N_{S}=1$ photon, we have $M^{(N_{S})}(r_{0})\simeq1$\ for most of the reflectivities $r_{0}$. In other words, for energies around one photon, a single TMSV state is sufficient to provide a positive gain for most of the ideal memories. However, the decreasing trend of $M^{(N_{S})}(r_{0})$ does not continue for higher energies ($N_{S}\geq1$). In fact, just after $N_{S}=1$, $M^{(N_{S})}(r_{0})$ starts to increase around $r_{0}=0$. In particular, for $N_{S}\geq1$, we can derive $M^{(N_{S})}(0)\simeq(\ln2)[2\ln(1+N_{S})-N_{S}]^{-1}$, which is increasing in $N_{S}$, and becomes infinite at $N_{S}\simeq2.5$. As a consequence, for $N_{S}\gtrsim2.5$ photons, we have a second asymptote appearing at $r_{0}=0$ (see Appendix~\ref{app2} for details). This means that the use of high-energy signals ($N_{S}\gtrsim2.5$) does not assure positive gains for memories with extremal reflectivities $r_{0}=0$ and $r_{1}=1$. \section{Conclusion} In conclusion, we have considered the basic model of digital memory studied in Ref.~\cite{QreadingPRL}, which is composed of beam splitter mirrors with different reflectivities. Adopting this model, we have compared an EPR transmitter with classical sources for fixed signal profiles, finding positive information gains for memories with high land-reflectivities ($r_{1}\gtrsim0.8$). Analytical results can be derived in the limit of ideal land-reflectivity ($r_{1}=1$), which defines the regime of ideal memories. In this case, by fixing the mean number of photons per signal mode (local energy constraint), we have computed the critical number of signals after which an EPR\ transmitter gives positive information gains. For low-energy signals ($0.01\div1$ photons) this critical number is finite and relatively small for every ideal memory. In particular, an EPR\ transmitter with one TMSV state can be sufficient to achieve positive information gains for almost all the ideal memories. Finally, our results corroborate the outcomes of Ref.~\cite{QreadingPRL}, providing an alternative study which considers a local energy constraint instead of a global one. As discussed in Ref.~\cite{QreadingPRL}, potential applications are in the technology of optical digital memories and involve increasing their data-transfer rates and storage capacities.
\section{Introduction} \label{sec:Intro} The determination of the interest-rate term structure is an important subject in pricing models, risk management, the time value of money, hedging and arbitrage, among others. Research has focused on the following five aspects: the formation of the term structure, static models of the term structure, micro analysis of the shape of the term structure, dynamic models of the term structure and empirical tests of the dynamical models. In capital markets, hedgers, bond traders and portfolio managers are more concerned with anticipating changes in the term structure and positioning interest-rate based instruments. They try to estimate the movement of the interest rate and the risk exposure of the portfolio, and then hedge against the risk by adjusting the positions of instruments using quantitative methods. The first problem is how to estimate the movement of the interest rate. There are two approaches to tackle this problem: one can be called the dynamics approach, and the other the kinematics approach. The motivation of the first approach is that the interest rate is determined by the supply and demand of capital in the market, so one needs to find the impact factors (for example, certain economic variables) that drive the movement of the interest rate; representative models are the multi-factor models based on econometric methods and principal component analysis~\cite{dewachter,ang,orphanides}. The motivation of the second approach is based on the observed properties of the interest rate, such as mean reversion and random fluctuation, and one uses equilibrium models~\cite{vasicek,cox1,cox2,rendkeman} or no-arbitrage models~\cite{heath,hull,ho} to describe the movement of the yield curve. The stochastic behavior of the interest rate may arise from complicated impact factors yet unknown to us, so all existing models are only approximations, which may become invalid once the market environment changes. The second problem is how to quantify the interest-rate risk once the yield curve changes. A simple and widely used strategy is based on the concept of duration~\cite{macauley,fisher}. Duration can be used to measure the sensitivity of the price to changes in the yield, and also to calculate hedge ratios. Redington~\cite{redington} proposed a method to immunize a bond portfolio against parallel movements of the term structure by using duration. But this method gives a sensible risk measure only if the yield curve shifts in a parallel manner; thus the duration approach should be improved if the change of the yield curve is nonparallel. Nonparallel movement is more realistic in the real market. Observational data indicate that there are two types of nonparallel movement, slope change and curvature change. For example, the term structure may become steep or flat, and the changes at the two ends may differ from the change in the middle, which is called a butterfly shift. Many researchers have paid attention to nonparallel movements before. Garbade~\cite{garbade} discussed the immunization method when the slope of the term structure changes. Litterman and Scheinkman proposed a three-factor approach by quantifying the level, the slope and the curvature of the term structure~\cite{litterman}, which has been widely used and generalized by many researchers.
Chambers and Carleton~\cite{chambers} introduced the concept of multiple durations, which they called duration vectors. This method was developed further by Ho~\cite{ho1}, who introduced the concept of key-rate durations based on the interest rates at key maturities. Even though these methods are helpful for estimating interest-rate risk, they are less helpful for predetermining the trades that should be made to hedge against the risk. Because of its simplicity and tractability, the duration immunization method is still favored by many market participants and other traders. Adopting the polynomial interpolation method, we propose a method that can measure the interest-rate risk and hedge against it. This method preserves the concept of duration and takes into consideration various movements of the yield curve, such as translation, rotation and twist. One can also generalize to other cases in which more complicated evolution behaviors happen to the yield curve, provided one has a suitable number of hedging instruments on hand. This paper is organized as follows. In the next section, we introduce the main characteristics of the interest-rate term structure and its movement properties. In Sec. 3, we introduce our method to describe the changes of the yield curve, and then propose a dynamical method to immunize a single bond or a portfolio. In Sec. 4, we show empirical tests of our strategy and make comparisons with other proposed methods. The conclusion is presented in Sec. 5. \section{Statistical properties of interest rate term structure and immunization} \label{sec:IRTS} Many motivations for modeling the term structure dynamics arise from empirical observations of the interest-rate term structure. Some important movement properties of the interest-rate term structure are summarized below~\cite{bouchaud,rebonato,cont}: 1. Mean reversion: This behavior has resulted in models where interest rates are modeled as stationary processes. 2. Smoothness in maturity: This property should be viewed more as a requirement of market operators, meaning that yield curves do not present highly irregular profiles with respect to maturity. This is reflected in the practice of obtaining implied yield curves by smoothing data points using splines. 3. Irregularity in time: The time evolution of individual forward rates (with a fixed time to maturity) is very irregular. 4. Principal components: Principal component analysis of the term structure deformation indicates that at least two factors of uncertainty are needed to model term structure deformation. In particular, forward rates of different maturities are imperfectly correlated. The shapes of these principal components are stable across time periods and markets. 5. Humped term structure of volatility: Forward rates of different maturities are not equally variable. The hump is always observed to be skewed towards smaller maturities. Moreover, though the observation of a single hump is quite common~\cite{Moraleda}, multiple humps are never observed in the volatility term structure. We show the movements of the yield curve with both time ($t$) and maturity ($T$) in Figure 1. The data are selected via the Wind Financial Terminal~\footnote{http://www.wind.com.cn.} from the China Securities Index, and contain three years of daily treasury-bond spot rates for maturities of 6 months and 1, 2, 3, 4, 5, 6, 7, 8, 10, 15 and 20 years.
Figure 1 demonstrates the consistency of the term structure with the properties summarized above. The upper graph shows the evolution of the spot rates of different maturities. The lower graph shows the yield curves at different times. Table 1 demonstrates that the spot rates of different maturities are correlated at different levels, and the correlation coefficients are all larger than 0.57. \begin{figure}[!h] \begin{center} \includegraphics[width=0.7\textwidth,height = !]{spot.pdf} \label{fig:irts} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=0.7\textwidth,height = !]{correlation.pdf} \label{fig:corr} \end{center} \end{figure} For the 6-month spot rate, regression analysis demonstrates that it is not a random-walk or stationary process, and that it shows obvious serial correlation and unit-root characteristics, which differs from the summary above. This property may arise in an inefficient market~\cite{Pesando}, and may cause estimation bias~\cite{ball}. Instead of discussing the reason for this, we focus on the hedging strategy against interest-rate risk. Consider the following case: one wants to hedge a single bond against interest-rate risk with one other instrument. Suppose that the trader holds one bond $B$ with price $P$, amount $N$ and duration $D$, and, in order to hedge against the interest-rate risk, sells an appropriate amount of a standard hedging instrument $B_A$, for example a futures contract or a benchmark bond, with price $P_A$, amount $N_A$ and duration $D_A$. The total value of the combination is $V=NP+N_AP_A$, which should be independent of the yield movement $\Delta y$. Then one obtains \begin{equation} N_A=-\frac{N\Delta P}{\Delta P_A}. \label{duration} \end{equation} Specifically, if the movement $\Delta y$ is parallel or infinitesimal, the hedge ratio can be calculated as $N_A=-\frac{NPD}{P_AD_A}$, which is the ordinary duration-based hedging ratio. But for more general and realistic cases, one has to fully evaluate $\Delta P$ as \begin{equation} \Delta P=P(Y+\Delta Y)-P(Y). \end{equation} Two problems lie before us: the first is how to express $\Delta Y$, and the other is how to fully evaluate $\Delta P$ once the yield curve changes. Some studies focused on the first problem by calculating $\Delta Y$ with various approximations (for more details, see references~\cite{heiko,agca,crack}); other studies introduced duration-based approaches to solve the second problem, such as the traditional duration-convexity, the exponential duration and the discrete duration (see references~\cite{livingston,bajo}). From a mathematical point of view, the most precise solution is to accurately express $\Delta Y$ and then fully calculate $P(Y+\Delta Y)$. This means that one needs a large number of hedging instruments to cover the interest-rate risk. It is time consuming and unrealistic, and it will generate new risks such as liquidity risk and basis risk. On the other hand, some researchers find that higher-order principal components show increasingly oscillating profiles in maturity and that the variances associated with these principal components decay quickly \cite{bouchaud,rebonato}. As a result, using a large number of hedging instruments to cover the interest-rate risk may not be so efficient. Unlike the traditional duration approach, we propose a method that allows for non-parallel movements of the yield curve.
This method does not rely on historical data, and one can easily adjust the hedging positions depending on the market situation, which is flexible and not time consuming. The main objective of this strategy is to hedge against interest-rate risk with fewer instruments but higher accuracy. We limit the number of hedging instruments to three or fewer. \section{The model} \label{sec:model} The interest-rate term structure is actually a curve in 3-dimensional space, which has two degrees of freedom represented by two free parameters $(t,\, T)$, where $t$ is the time (such as the date) and $T$ is the maturity; $Y(t,\,T)$ is not static but evolves with time. This curve shows smoothness in maturity but irregularity in time, so it is difficult to express it in closed analytical form. Because of its smoothness, we choose a segment of the yield curve between $T_A$ and $T_B$ ($T_A<T_B$), which are the maturities of two hedging instruments, respectively. $T_B-T_A$ cannot be too large because, under common conditions, a portfolio manager would be unlikely to use a long-maturity bond to hedge a short-maturity bond, which would increase the liquidity and basis risks. In China's treasury-bond futures market, the maturity of the deliverable bond is between 4 and 7 years. So we can safely use an interpolation method to express the yield curve on $[T_A,\,T_B]$. In the following, we apply the physical concepts of \textit{translation}, \textit{rotation} and \textit{twist} to describe the movement of the yield curve. \begin{figure}[!h] \begin{center} \includegraphics[width=0.55\textwidth,height = !]{movement.pdf} \label{fig:movement} \end{center} \end{figure} Suppose the yield curve on $[T_A,\,T_B]$ can be approximated as a polynomial $Y(T)$; we need at least a cubic polynomial in order to quantify the twist: \begin{equation} Y(T)=\alpha+\beta T+\gamma T^2+\lambda T^3, \label{y} \end{equation} where the coefficients $\alpha,\,\beta,\,\gamma,\,\lambda$ are constants determined by the hedging instruments we choose. The translation, rotation and twist can be expressed by $\alpha$, the first-order derivative and the second-order derivative (or the curvature $K(T)=\frac{Y''(T)}{(1+Y'^2(T))^{3/2}}$), respectively. We can see from Figure 2 that translation, rotation and twist represent the common movements of the yield curve very well, but they do not co-move with each other. For this 2-dimensional curve, it is the other variable $t$ that determines the change of each kind of movement; in other words, translation, rotation and twist should be related to different functions of $t$, respectively. So we can express $\Delta Y(t,\,T)$ as follows: \begin{equation} \Delta Y(t,\,T)=a(t)+b(t)Y'(T)+c(t)Y''(T), \label{deltay} \end{equation} where $a(t),\,b(t),\,c(t)$ are independent time-dependent functions. Modifying $a(t),\,b(t),\,c(t)$ is equivalent to changing the level, the slope and the curvature of the yield curve, respectively. Equation~\ref{y} means that we need three standard instruments to hedge against the movement of the yield curve. When using two hedging instruments, we need to drop the cubic term of Equation~\ref{y}. Once we obtain the expression of $\Delta Y$, the remaining problem is how to evaluate $\Delta P$. No matter which approach is adopted (such as the exponential duration or the discrete duration)~\cite{livingston,bajo}, one needs two more hedging instruments, which leads to a more complex hedging strategy.
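As an illustration of Equations~\ref{y} and \ref{deltay}, the cubic can be pinned down by interpolating observed yields and the decomposed change evaluated directly; a minimal sketch follows (Python with NumPy; all numbers and names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def fit_cubic(maturities, yields):
    """Fit Y(T) = alpha + beta*T + gamma*T^2 + lambda*T^3 through 4 points.
    np.polyfit returns the highest-degree coefficient first."""
    lam, gamma, beta, alpha = np.polyfit(maturities, yields, 3)
    return alpha, beta, gamma, lam

def delta_Y(T, a, b, c, coeffs):
    """Delta Y(t,T) = a(t) + b(t) Y'(T) + c(t) Y''(T) for given level,
    slope and curvature shifts a, b, c at a fixed time t."""
    alpha, beta, gamma, lam = coeffs
    dY = beta + 2 * gamma * T + 3 * lam * T ** 2   # Y'(T)
    d2Y = 2 * gamma + 6 * lam * T                  # Y''(T)
    return a + b * dY + c * d2Y

# Hypothetical yields for maturities of 1, 3, 5 and 7 years
coeffs = fit_cubic(np.array([1.0, 3.0, 5.0, 7.0]),
                   np.array([0.025, 0.030, 0.033, 0.035]))
print(delta_Y(4.0, a=0.001, b=0.5, c=0.2, coeffs=coeffs))
\end{verbatim}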
Since there is no robust evidence that the exponential duration approach or the discrete duration approach outperforms the duration-convexity approach, we will adopt the traditional duration-convexity approach to calculate $\Delta P$: \begin{equation} \Delta P=P(-D\Delta Y+\frac{1}{2}C\Delta Y^2). \label{deltap} \end{equation} Suppose we hold $N$ units of a bond with price $P$, maturity $T$, duration $D$ and convexity $C$. If we have one suitable hedging instrument on hand, with price $P_A$, maturity $T_A$, duration $D_A$ and convexity $C_A$, then the most convenient and effective strategy is the duration strategy, with hedge ratio $N_A=-\frac{NPD}{P_AD_A}$. This means that with only one hedging instrument we can cover only the parallel-movement risk of the yield curve. If we hold two suitable hedging instruments, the situation begins to change. Since we then have three instruments in total, we can determine three parameters in Equation~\ref{y}, so we must ignore the cubic term. Accordingly, we have two kinds of hedging strategies, as follows. \begin{itemize} \item Ignoring the cubic term of Equation~\ref{y}, we obtain the following equation: \begin{eqnarray} NPD(a(t)+b(t)(\beta+2\gamma T)+c(t)\gamma)+N_AP_AD_A(a(t)+b(t)(\beta+2\gamma T_A)+c(t)\gamma) \nonumber \\ +N_BP_BD_B(a(t)+b(t)(\beta+2\gamma T_B)+c(t)\gamma) = 0. \end{eqnarray} Thus, the following equations should be fulfilled in order to sufficiently hedge against the movement of the yield curve: \begin{eqnarray*} NPD+N_AP_AD_A+N_BP_BD_B& = & 0,\\ NPDT+N_AP_AD_AT_A+N_BP_BD_BT_B & = &0.\end{eqnarray*} Solving these equations, we arrive at the hedge ratios $N_A,\,N_B$ as follows: \begin{eqnarray*} N_A& = & -\frac{NPD}{P_AD_A}\frac{T_B-T}{T_B-T_A},\\ N_B& = & -\frac{NPD}{P_BD_B}\frac{T-T_A}{T_B-T_A}. \label{compare1} \end{eqnarray*} We find that this result is the same as the result in~\cite{heiko}. But our results are more general and flexible if we have a suitable number of hedging instruments. In the following, we call this approach the quadratic approach. This approach means that we only consider the translation and rotation of the yield curve, so we need two hedging instruments. \item Making use of Equation~\ref{deltap}, which means that we adopt the duration-convexity approach (equivalently, $\Delta Y=a(t)$), we obtain the following equations: \begin{eqnarray} NP(-Da(t)+1/2Ca^2(t))+N_AP_A(-D_Aa(t)+1/2C_Aa^2(t)) \nonumber \\ + N_BP_B(-D_Ba(t)+1/2C_Ba^2(t)) = 0. \end{eqnarray} Thus, the following equations should be fulfilled: \begin{eqnarray*} NPD+N_AP_AD_A+N_BP_BD_B& = & 0,\\ NPC+N_AP_AC_A+N_BP_BC_B & = &0.\end{eqnarray*} Accordingly, the hedge ratios can be calculated as follows: \begin{eqnarray*} N_A& = & \frac{NP(C_BD-CD_B)}{P_A(C_AD_B-C_BD_A)},\\ N_B& = & \frac{NP(-C_AD+D_AC)}{P_B(C_AD_B-C_BD_A)}. \label{compare2}\end{eqnarray*} \end{itemize} If we hold three suitable hedging instruments, we can fully determine Equation~\ref{y}. Making use of Equation~\ref{y}, and following the same procedure, we obtain the hedge ratios for the three hedging instruments: \begin{eqnarray*} N_A& = & -\frac{NPD}{P_AD_A}\frac{(T-T_C)(T-T_B)}{(T_B-T_A)(T_C-T_A)},\\ N_B& = & -\frac{NPD}{P_BD_B}\frac{(T-T_C)(T-T_A)}{(T_B-T_A)(T_B-T_C)},\\ N_C& = & -\frac{NPD}{P_CD_C}\frac{(T_B-T)(T-T_A)}{(T_C-T_A)(T_B-T_C)}. \label{compare3} \end{eqnarray*} where we have set $T_A<T_C<T_B$; this convention does not affect the result. In the following, we call this approach the cubic approach.
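For concreteness, the hedge ratios derived above translate directly into code; a minimal sketch (Python; function names are ours):
\begin{verbatim}
def duration_ratio(N, P, D, Pa, Da):
    """Duration strategy: one hedging instrument, parallel shifts only."""
    return -N * P * D / (Pa * Da)

def quadratic_ratios(N, P, D, T, Pa, Da, Ta, Pb, Db, Tb):
    """Quadratic approach: two instruments, hedges translation and rotation."""
    Na = -N * P * D / (Pa * Da) * (Tb - T) / (Tb - Ta)
    Nb = -N * P * D / (Pb * Db) * (T - Ta) / (Tb - Ta)
    return Na, Nb

def convexity_ratios(N, P, D, C, Pa, Da, Ca, Pb, Db, Cb):
    """Duration-convexity approach: two instruments, Delta Y = a(t)."""
    denom = Ca * Db - Cb * Da
    Na = N * P * (Cb * D - C * Db) / (Pa * denom)
    Nb = N * P * (-Ca * D + Da * C) / (Pb * denom)
    return Na, Nb

def cubic_ratios(N, P, D, T, insA, insB, insC):
    """Cubic approach: three instruments given as (price, duration, maturity)
    tuples; hedges translation, rotation and twist; assumes Ta < Tc < Tb."""
    (Pa, Da, Ta), (Pb, Db, Tb), (Pc, Dc, Tc) = insA, insB, insC
    Na = -N * P * D / (Pa * Da) * (T - Tc) * (T - Tb) / ((Tb - Ta) * (Tc - Ta))
    Nb = -N * P * D / (Pb * Db) * (T - Tc) * (T - Ta) / ((Tb - Ta) * (Tb - Tc))
    Nc = -N * P * D / (Pc * Dc) * (Tb - T) * (T - Ta) / ((Tc - Ta) * (Tb - Tc))
    return Na, Nb, Nc

# Illustrative usage with hypothetical bond data (N, P, D, T, ...):
print(quadratic_ratios(100, 1.0, 5.0, 5.5, 1.0, 4.0, 4.0, 1.0, 7.0, 7.0))
\end{verbatim}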
Once the method to hedge a single bond is known, we can easily calculate the hedge ratios for a portfolio of $n$ bonds. Suppose that the maturity $T_i$ of each bond lies between $T_A$ and $T_B$. The amount, price, maturity, duration and convexity of the portfolio are $N,\,P,\,T,\,D,\,C$, respectively, which can be expressed as follows, \begin{eqnarray*} NP& = & \sum_{i=1}^{n}n_iP_i,\\ T&=& \max (T_i),\\ D& = & \frac{\sum_{i=1}^{n} n_iD_i}{\sum_{i=1}^{n}n_i},\\ C& = & \frac{\sum_{i=1}^{n} n_iC_i}{\sum_{i=1}^{n}n_i}.\\ \label{portfolio}\end{eqnarray*} Combining the quadratic approach and the cubic approach with the above equations, we can use our model to hedge the bond portfolio. In the next section, we will analyze the ability of our model to hedge against the interest-rate risk. We will compare the duration approach, the quadratic approach, the duration-convexity approach and the cubic approach. To illustrate the results, we carry out an empirical study. The representative bond and the standard hedging instruments are actively traded treasury bonds selected from the Wind Financial Terminal. \section{Comparison analysis} \label{analysis} In order to compare the hedging effect of the different methods proposed in Section~\ref{sec:model}, we select 4 representative treasury bonds and use daily data from 2007-06-04 to 2008-06-04. In practice, one tends to choose hedging instruments whose maturities are close to that of the representative bond or portfolio. For example, one would prefer instruments with maturities of up to 2 years to hedge a portfolio of short maturity. The representative treasury bonds, which are actively traded on the Shanghai Stock Exchange, are listed in Table 2 together with the maturity, price, modified duration, and convexity of each bond on the starting date (June 4th, 2007). \begin{figure}[!h] \begin{center} \includegraphics[width=0.6\textwidth,height = !]{bond.pdf} \caption{The representative treasury bonds and their characteristics on the starting date.} \end{center} \end{figure} In the following, we will compare these hedging strategies by monitoring the daily profit-and-loss under each strategy. We suppose that the bond we hold is B2 with $N=100$, and the hedging instruments are B3 and B1, with amounts $N_A$ and $N_B$, respectively. When using three hedging instruments, we also add B4, with amount $N_C$. As a comparison, we also use the duration approach to hedge against the interest-rate risk. The results of these strategies are shown in Figure~\ref{hedge1}. It is obvious that the hedging strategies proposed in this paper are much more effective than the duration approach. But we also find that if the remaining maturity drops below about six months, these approaches cannot hedge the interest-rate risk as effectively; that is, these strategies lose efficacy when hedging against ultra-short-term interest-rate risk. This may be caused by the high volatility and irregularity of the ultra-short-term interest rate. \begin{figure}[!h] \begin{center} \includegraphics[width=0.8\textwidth,height = !]{hedge1.pdf} \caption{Daily profit-and-loss of the proposed strategies compared with the duration approach.} \label{hedge1} \end{center} \end{figure} We also compare the quadratic approach and the traditional duration-convexity approach, both of which use two hedging instruments. The result is shown in Figure~\ref{hedge2}. We can see that both strategies are comparable when hedging against the interest-rate risk, but in some periods (for example, around December 2007) the quadratic approach performs better than the duration-convexity approach.
\begin{figure}[!h] \begin{center} \includegraphics[width=0.8\textwidth,height = !]{hedge2.pdf} \caption{Comparison of the quadratic approach and the traditional duration-convexity approach.} \label{hedge2} \end{center} \end{figure} Next, we compare the quadratic approach and the cubic approach. The difference between these strategies is that the latter takes the twist of the yield-curve into consideration. Figure~\ref{hedge3} shows that the cubic approach performs clearly better than the quadratic approach. One reason is that more information about the movement of the yield-curve is used; another is that we add B4, whose maturity is closer to that of B2, to immunize B2 against interest-rate risk. \begin{figure}[!h] \begin{center} \includegraphics[width=0.8\textwidth,height = !]{hedge3.pdf} \caption{Comparison of the quadratic approach and the cubic approach.} \label{hedge3} \end{center} \end{figure} \section{Conclusion} To hedge against the interest-rate risk, one should describe the movement of the interest-rate term structure. The simplest approach is the duration approach, which approximates the movement as a translation. This method needs only one instrument to hedge against the interest rate, and it is still widely used in the financial field. We propose a new method to describe the movement of the yield-curve. Since the interest-rate term structure is smooth in maturity $T$ and irregular in time $t$, we can quantify the movement of the term structure as a function of $T$ and $t$. We use the polynomial interpolation method to describe the yield-curve between $T_A$ and $T_B$; the irregular movement with $t$ is then the risk that should be hedged against. If we have two suitable hedging instruments on hand, we can use the quadratic-polynomial interpolation, which describes the translation and the rotation of the term structure. If we have three suitable hedging instruments on hand, we can use the cubic-polynomial interpolation, which describes the translation, the rotation and the twist of the term structure. For more complicated movements of the term structure, we can combine the traditional duration-convexity approach and the polynomial interpolation approach, but the drawback is that we have to use more than three hedging instruments, which would introduce additional risks such as liquidity risk and basis risk and lead to lower efficiency. The empirical analysis shows that our hedging strategies are comparable to or better than the traditional duration-convexity strategy. But all these methods lose efficacy when hedging against ultra-short-term interest-rate risk. Furthermore, we note that none of these approaches can deal with a sudden jump in the term structure, so further research is needed.
\section{Introduction} \label{sec:intro} The Language-Based Audio Retrieval task is a new form of cross modal learning \cite{Xie_2022_audio_retrieval} which aims to rank a list of audio clips according to their relevance given a query caption. These query captions \cite{drossos_clotho_2019} are descriptive natural language sentences annotated by humans, and they describe the acoustic events happening in both the foreground and background of the audio clip. Being able to model and interpret the relationship between audio clips and a text sequence is helpful for many applications. Language-Based Audio Retrieval can be applied to many practical applications in real life, such as acoustic monitoring and human-computer interaction \cite{Xie_2022_audio_retrieval}. The Language-Based Audio Retrieval task is formulated as follows. The audio clips and the query caption are passed to an audio encoder and a text encoder, respectively. From the two encoders, output audio embeddings corresponding to the audio clips and an output text embedding corresponding to the query caption are obtained. To determine the relevance between an audio clip and the query caption, a similarity measure is used to calculate similarity scores between the output text embedding of the query caption and the output audio embedding of each audio clip. Using these similarity scores, we can calculate the top 10 average precision, and the top 1, top 5, and top 10 recall scores. The baseline system proposed for Language-Based Audio Retrieval in DCASE2022 uses a dual encoder structure with two disjoint pathways to produce the output audio and text embeddings. The CRNN model \cite{crnn_gru_baseline, text-to-audio-grounding} was used as the audio encoder while the pretrained word2vec \cite{word2vec} model was used as the text encoder. The model is trained using the Triplet Ranking Loss \cite{triplet_ranking} to maximize the distance between the anchor sample and the negative sample, while minimizing the distance between the anchor sample and the positive sample. During inference, the output audio and text embeddings are extracted from the encoders. Then, the dot product is used as the similarity measure to determine the relevance of the audio clips to the query caption. We argue that training to maximize the similarity between the output embeddings of two disjoint and separate encoders for audio and text in Language-Based Audio Retrieval is non-trivial. We find that we can increase efficiency and performance by tying the audio and text encoders together and sharing their parameters. In addition to having a tied model to produce output embeddings, we find that using a contrastive loss is instrumental in getting the model to converge and perform well. Finally, we compare the computational footprint of our methods and show their efficiency. Our contributions are as follows: \begin{enumerate} \item We introduce Converging Tied Layers and show that using Converging Tied Layers for Language-Based Audio Retrieval is an efficient and straightforward method. \item We examine the importance of using contrastive loss and observe that contrastive loss is crucial for using transformers effectively. \item We demonstrate that using Converging Tied Layers and contrastive loss outperforms the baseline method by a significant factor. \end{enumerate} \section{Related Work} \label{sec:related_work} \subsection{Datasets} \label{sec:datasets} The Clotho Dataset v2.1 consists of 6974 audio samples, each 15 to 30 seconds long.
Each audio clip has 5 corresponding human-annotated captions, 8 to 20 words long, that describe the acoustic events happening in the audio. During training, the ground truth captions for an audio clip are used as the positive samples and the ground truth captions for the other audio clips are used as the negative samples. During evaluation, all of the audio clips are passed to the model to rank each clip's similarity to the query caption. The Audio Grounding dataset has also been used for Audio and Caption Retrieval \cite{text-to-audio-grounding}. Though there are many other audio captioning datasets to which Language-Based Audio Retrieval can be applied, to our knowledge active work is ongoing only on the Clotho dataset and the Audio Grounding dataset. In this work, we focus only on the Clotho Dataset v2.1 as proposed in the DCASE2022 challenge\footnote{https://dcase.community/challenge2022/task-language-based-audio-retrieval}, henceforth referred to as the Clotho Dataset. \subsection{Model Architectures} \label{model_arch_objectives} Prior work so far uses disjoint audio and text encoders to produce a vector representation of the inputs. The baseline model \cite{crnn_gru_baseline} presented in DCASE 2022 uses two disjoint audio and text models to encode the audio clip and text from the Clotho Dataset v2.1. The input audio is encoded by a Convolutional Recurrent Neural Network (CRNN) \cite{crnn_gru_baseline} trained from scratch. For the input text sequence, a pretrained word2vec \cite{word2vec} model\footnote{https://code.google.com/archive/p/word2vec/} already trained on the Google News dataset \cite{googlenews} is used to obtain a text vector representation. The pretrained word2vec is not finetuned. \cite{crnn_gru_baseline} also uses a similar approach for the Audio Grounding Dataset. \subsection{Contrastive Loss} With the advent of large-scale pretrained language models \cite{devlin2019bert, roberta}, many authors focused on different predictive objectives such as masked language modelling \cite{devlin2019bert,roberta} for pretraining. Over time, the focus shifted to contrastive losses, which learn more informative multimodal embedding spaces. There have been several variations of contrastive loss \cite{clip,audioclip,contrastive_medical}. In this work, we use the contrastive objective from CLIP \cite{clip, audioclip}. The CLIP contrastive objective extracts feature representations of the different input modalities from the model and projects these representations to a contrastive embedding space via a linear projection. The projected representations are then normalized. The cosine similarities between every possible pair of representations from different modalities in the same batch are calculated to obtain logits for both text and audio. Finally, the contrastive loss is the average of the two cross entropy losses applied to the text and audio logits, with the labels being their respective indices in the batch. \begin{figure}[ht!] \centering \includegraphics[scale=0.40]{images/training.png} \caption{\textbf{a)} Baseline system. A CRNN is trained for the audio encoder while a word2vec model pretrained on Google News is used without any finetuning. \textbf{b)} Proposed system. Both audio embeddings and text embeddings are used with frozen weights without any finetuning. We use CNN10, CNN14 for the audio embeddings and BERT, RoBERTa for the text embeddings.
Both embeddings are passed to the tied model, which is trained on both the Triplet Ranking Loss and the Contrastive Loss. Shaded red boxes in the figure refer to models with frozen parameters (not finetuned) while green boxes refer to layers/models with trainable parameters.} \label{fig:training} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale=0.45]{images/inference.png} \caption{Evaluation process: The log mel spectrogram of each audio clip in the evaluation set is encoded to obtain its corresponding vector representation. The query caption is likewise encoded to obtain the sentence vector representation. The similarity metric is then applied between each audio representation and the sentence representation to obtain a list of similarity values. These values are then ranked to obtain the relevance of each audio clip to the query caption.} \label{fig:inference} \end{figure} \section{Proposed Method} \label{sec:proposed_method} Our proposed model consists of two main parts. The first component is the use of pretrained audio and text encoders as audio and text embeddings; these embeddings are not finetuned and the weights of the encoders are frozen. The second component is the Converging Tied Layers. These layers are shared between the audio and text input. Unlike prior work \cite{text-to-audio-grounding}, where the output embeddings are extracted via two disjoint and separate models, we use the same layers to produce both audio and text embeddings. We visualize this in Figures \ref{fig:training} and \ref{fig:inference}. \subsection{Pretrained Embeddings} There is a plethora of pretrained models available publicly. These pretrained models are often used for transfer learning to another related domain \cite{koh2021automated}, hence there is usually a need for finetuning. In our case, we find that it is sufficient to simply use these pretrained models as they are, without finetuning. Therefore, the computational footprint of these embeddings is very low. However, we also performed some experiments where we finetuned these pretrained embeddings, and we found that doing so yields only a minimal performance boost. \begin{equation} \begin{array}{l} Emb_{A} = \text{pool}_{mean}(Encoder\textsubscript{A}(x_{A})) \\ Emb_{T} = Encoder\textsubscript{T}(x_{T}) \\ \end{array} \label{eqn:tied_layers1} \end{equation} We use the CNN10 and CNN14 models already pretrained on audio tagging as the audio encoder to produce audio embeddings, $Emb_{A}$. For the text embeddings, we use BERT and RoBERTa as the text encoder to produce text embeddings, $Emb_{T}$. Unless otherwise stated, these pretrained embeddings are frozen and not finetuned, thereby minimizing the training time. \subsection{Converging Tied layers} The Converging Tied Layers take in both the audio embedding, $Emb_{A}$, and the text embedding, $Emb_{T}$, and project these embeddings to a common subspace. We first pass both $Emb_{A}$ and $Emb_{T}$ through a linear layer for each modality to project these embeddings to the same dimensionality. The projected audio and text embeddings, $R_{A^\prime}$ and $R_{T^\prime}$, are then passed through several shared layers to obtain the final vector representations of the audio and text inputs, $R_A$ and $R_T$. While we defaulted to transformer encoder layers due to their ability to encode contextual information, we also experimented with simple feedforward layers.
\begin{equation} \begin{array}{l} R_{A^\prime} = \text{FFN}_A(Emb_{A}) \\ R_{T^\prime} = \text{FFN}_T(Emb_{T}) \\ R_A = \text{pool}_{mean}(\text{Transformer}_{tied}(R_{A^\prime})) \\ R_T = \text{pool}_{mean}(\text{Transformer}_{tied}(R_{T^\prime})) \end{array} \label{eqn:tied_layers2} \end{equation} These tied layers share parameters across the text and audio inputs and produce both the audio and text vector representations. We hypothesize that using a shared embedding subspace allows the model to perform better on the ranking task, as opposed to having two disjoint encoders with two disjoint embedding subspaces. \subsubsection{Contrastive Loss} \label{sec:contrastive_loss} In addition to the Triplet Ranking Loss, we also use a supplementary Contrastive Loss to jointly train the model. We find that using the Contrastive Loss is instrumental in helping the model converge. We use the same contrastive loss as that in CLIP \cite{clip,audioclip, contrastive_medical}. \begin{equation} \begin{array}{l} L = L_{Ranking} + L_{contrastive} \\ \end{array} \label{eqn:loss_combination} \end{equation} The model is trained to minimize both the triplet ranking loss, $L_{Ranking}$, from positive and negative examples in the minibatch, and the contrastive loss, $L_{contrastive}$, from predicting the correct pair in the batch \cite{contrastive_medical}. \section{Experimental Details} \label{sec:experimental_details} \subsection{Data} We use the Clotho dataset v2.1 for all our experiments as mentioned in Section \ref{sec:datasets}. We extract log mel spectrograms with 64 Mel-bands, a sampling rate of 44100, and a hop length of 441. This is identical to the settings of the previous baseline. \subsection{Training and Evaluation} \label{sec:training} Our training hyperparameters and settings are as follows. We use a batch size of 32 with no gradient accumulation steps for 150 epochs with early stopping based on the validation performance. We use a learning rate of $1 \times 10^{-3}$ without any weight decay, along with a learning rate scheduler which reduces the learning rate by a factor of 0.1 when the performance plateaus for 5 epochs. For the audio embeddings, we initialized the weights from the pretrained CNN10 and CNN14 models\footnote{Publicly available at https://github.com\/qiuqiangkong\/audioset\_tagging\_cnn}. For the text embeddings, we used the pretrained BERT and RoBERTa models provided by Hugging Face\footnote{huggingface.co/docs/transformers}. Unless explicitly stated, these pretrained embeddings are frozen in our experiments and the weights are not updated. For inference, the dot product between the vector representations of the audio clips in the evaluation set and the vector representation of each query caption in the evaluation set is used as the similarity measure to determine the relevance of the audio clips to the query caption. The metrics used to gauge performance are mean average precision at 10, and top 1, top 5 and top 10 recall. \section{Experimental Results and Analysis} \label{sec:experimental_results_analysis} We provide a summary of our best models and methods in Table \ref{tab:results_comparison_best}. In the following sections, we analyse our methods and provide some ablation studies along with more comprehensive results. As mentioned in Section \ref{sec:training}, the pretrained weights of the audio and text embeddings are frozen unless otherwise stated.
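Before turning to the detailed results, we give for concreteness a minimal PyTorch-style sketch of the converging tied layers of Equation~\ref{eqn:tied_layers2} and the CLIP-style contrastive term in Equation~\ref{eqn:loss_combination}. The embedding dimensions, the number of attention heads and the module names are illustrative assumptions, not the exact training code; the learnable temperature of CLIP is omitted for brevity.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvergingTiedLayers(nn.Module):
    def __init__(self, audio_dim=512, text_dim=768, d_model=192, n_layers=2):
        super().__init__()
        self.proj_a = nn.Linear(audio_dim, d_model)   # FFN_A
        self.proj_t = nn.Linear(text_dim, d_model)    # FFN_T
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        # The same stack is shared by both modalities (tied parameters).
        self.tied = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, emb_a, emb_t):
        # emb_a: (B, La, audio_dim), emb_t: (B, Lt, text_dim),
        # produced by the frozen pretrained encoders.
        r_a = self.tied(self.proj_a(emb_a)).mean(dim=1)  # mean pooling
        r_t = self.tied(self.proj_t(emb_t)).mean(dim=1)
        return r_a, r_t

def clip_contrastive_loss(r_a, r_t):
    # Symmetric cross entropy over in-batch cosine similarities.
    r_a = F.normalize(r_a, dim=-1)
    r_t = F.normalize(r_t, dim=-1)
    logits = r_a @ r_t.t()
    labels = torch.arange(r_a.size(0), device=r_a.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# The total objective adds the triplet ranking loss:
# loss = triplet_ranking_loss(...) + clip_contrastive_loss(r_a, r_t)
\end{verbatim}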
\begin{table}[!ht] \centering \resizebox{\columnwidth}{!}{\begin{tabular}{@{}lllllll@{}} \toprule Encoder\textsubscript{A} & Encoder\textsubscript{T} & Tied Model & R\textsubscript{1} & R\textsubscript{5} & R\textsubscript{10} & mAP\textsubscript{10}\\ \midrule CRNN & word2vec & - &0.03 & 0.11& 0.19& 0.07\\ \midrule CNN10 & RoBERTa\textsubscript{base}& 4L 96dim Transformer & 0.10& 0.29& 0.41& 0.18\\ \textit{CNN10} & \textit{{RoBERTa\textsubscript{base}}}& \textit{2L 192dim Transformer} &\textit{0.11}& \textit{0.30}& \textit{0.42}& \textit{0.19}\\ \textbf{\textit{CNN10}} & \textit{\textbf{RoBERTa\textsubscript{base}}} & \textit{\textbf{4L 96dim Transformer}} & \textit{\textbf{0.11}}& \textit{\textbf{0.32}}& \textit{\textbf{0.45}}& \textit{\textbf{0.20}}\\ \bottomrule \end{tabular} } \caption{Comparison of our best 3 models against the baseline model (1st row). Bold (4th row) indicates our best performing model. Italics (3rd and 4th rows) indicate that the model is fully trainable and no weights are frozen.} \label{tab:results_comparison_best} \end{table} \subsection{Importance of the Contrastive loss} \begin{table}[!ht] \centering \resizebox{\columnwidth}{!}{\begin{tabular}{@{}lllllll@{}} \toprule Encoder\textsubscript{A} & Encoder\textsubscript{T} & Tied Model & R\textsubscript{1} & R\textsubscript{5} & R\textsubscript{10} & mAP\textsubscript{10}\\ \midrule CRNN & word2vec & - &0.03 & 0.11& 0.19& 0.07\\ \midrule CNN10 & RoBERTa\textsubscript{base} & 2L 192dim Linear & 0.00& 0.00& 0.01 & 0.00\\ CNN10 & RoBERTa\textsubscript{large} & 2L 192dim Linear & 0.00& 0.00& 0.01 & 0.00\\ CNN10 & BERT\textsubscript{base} & 2L 192dim Linear & 0.01& 0.05& 0.10 & 0.03\\ CNN10 & BERT\textsubscript{large} & 2L 192dim Linear & 0.00& 0.00& 0.01 & 0.00\\ \bottomrule \end{tabular}} \caption{Models trained without contrastive loss. Without contrastive loss, the model fails to perform well.} \label{tab:results_comparison_contrastive} \end{table} As mentioned in Section \ref{sec:contrastive_loss}, we use a supplementary contrastive loss in addition to the Triplet Ranking Loss. We find that without the contrastive loss, the model is unable to converge and performs very poorly. Our results are shown in Table \ref{tab:results_comparison_contrastive}. For all other experiments, we defaulted to using contrastive loss as a supplementary objective. \subsection{Impact of using trainable or frozen pretrained embeddings} \begin{table}[!ht] \centering \resizebox{\columnwidth}{!}{\begin{tabular}{@{}lllllll@{}} \toprule Encoder\textsubscript{A} & Encoder\textsubscript{T} & Tied Model & R\textsubscript{1} & R\textsubscript{5} & R\textsubscript{10} & mAP\textsubscript{10}\\ \midrule CRNN & word2vec & - &0.03 & 0.11& 0.19& 0.07\\ \midrule CNN10 & BERT\textsubscript{large} & 2L 192dim Transformer & 0.06& 0.20& 0.31 & 0.12\\ CNN10 & BERT\textsubscript{base} & 2L 192dim Transformer & 0.07& 0.23& 0.34 & 0.14\\ \midrule CNN10 & RoBERTa\textsubscript{large} & 2L 192dim Transformer & 0.08& 0.24& 0.37 & 0.15\\ CNN14 & RoBERTa\textsubscript{base} & 2L 192dim Transformer & 0.09& 0.26& 0.37 & 0.16\\ CNN10 & RoBERTa\textsubscript{base} & 2L 192dim Transformer & 0.10& 0.28& 0.40 & 0.18\\ \midrule \textit{CNN10} & \textit{RoBERTa\textsubscript{base}}& \textit{2L 192dim Transformer} &\textit{0.11}&\textit{0.30}& \textit{0.42}& \textit{0.19}\\ \bottomrule \end{tabular}} \caption{Comparison of the choice of pretrained embeddings for the audio and text embeddings. Italics (last row) indicate that the model is fully trainable and no weights are frozen.
} \label{tab:results_comparison_trainable} \end{table} We compare the effectiveness of using pretrained embeddings. Results are shown in Table \ref{tab:results_comparison_trainable}. We experimented with using CNN10/CNN14 \cite{kong2020panns} as the audio embeddings and BERT/RoBERTa as the text embeddings. Using either trainable or frozen pretrained embeddings with the Converging Tied Layers surpasses the baseline performance significantly. We also note that trainable pretrained embeddings do perform marginally better than their frozen counterparts, at the cost of more computational power and memory. RoBERTa consistently performs significantly better than BERT, even though their model sizes are similar. This is expected, as RoBERTa \cite{roberta} outperforms BERT on many Natural Language Processing benchmarks such as GLUE \cite{glue_benchmark}, RACE \cite{race_benchmark} and SQuAD \cite{squad_benchmark}. Therefore, RoBERTa is regarded as a better and more robust model. This indicates that the initialization of the embeddings is important and any information stored in pretrained embeddings helps the model perform better for audio retrieval. We also observe that smaller variants of the pretrained audio and text embeddings perform significantly better than their larger variants. For instance, BERT\textsubscript{base} and RoBERTa\textsubscript{base} perform better than BERT\textsubscript{large} and RoBERTa\textsubscript{large}, with around 0.03 difference in mAP\textsubscript{10}. \subsection{Tied Transformer layers vs Tied Linear layers} \begin{table}[!ht] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}lllllll@{}} \toprule Encoder\textsubscript{A} & Encoder\textsubscript{T} & Tied Model & R\textsubscript{1} & R\textsubscript{5} & R\textsubscript{10} & mAP\textsubscript{10}\\ \midrule CRNN & word2vec & - &0.03 & 0.11& 0.19& 0.07\\ \midrule CNN14 & RoBERTa\textsubscript{base} & 2L 192dim Linear & 0.03& 0.15& 0.24 & 0.08\\ CNN14 & RoBERTa\textsubscript{base} & 2L 300dim Linear & 0.03& 0.14& 0.24 & 0.08\\ CNN14 & RoBERTa\textsubscript{base} & 2L 192dim Linear & 0.03& 0.14& 0.23 & 0.08\\ \midrule CNN14 & RoBERTa\textsubscript{base} & 2L 192dim Transformer & 0.09& 0.26& 0.37 & 0.16\\ \bottomrule \end{tabular}} \caption{Comparison of the choice of Converging Tied Layers. Converging Tied Linear layers are consistently outperformed by Converging Tied Transformer layers.} \label{tab:results_comparison_tied} \end{table} We explore the choice of the type of layer to use for the Converging Tied Layers. Results are shown in Table \ref{tab:results_comparison_tied}. Transformers are known for being able to encode contextual information \cite{wav2vec2}, while linear layers provide a transformation between features. In our experiments, Converging Tied Linear layers are consistently outperformed by Converging Tied Transformer layers. This confirms our hypothesis that using transformer encoder layers as the choice for the Converging Tied Layers allows the model to better integrate contextual information from both the audio embedding and the text embedding into a common subspace. \section{Conclusion} \label{sec:conclusion} This work introduces the use of Converging Tied Layers and the importance of contrastive loss for Language-Based Audio Retrieval. We show that Converging Tied Layers are a straightforward and efficient method that allows for minimal training. We examined and analysed several design choices, such as the choice of converging tied layers and the preference for smaller embeddings.
With our methods, we surpass the baseline model significantly on all metrics. \bibliographystyle{IEEEbib}
\section{Introduction} The possibilities of an emergent universe have been studied recently in a number of papers [1-3], in which one looks for a universe which is ever-existing and large enough so that the space-time may be treated classically. There is no time-like singularity. In these models, the universe in the infinite past is in an almost static state, but it eventually evolves into an inflationary stage. These ideas are in conformity with the Lemaitre-Eddington concepts put forward in the early days of modern cosmology, although the details and the context are different now. An emergent universe model, if developed in a consistent way, is capable of solving the well known conceptual problems of the Big-Bang model. A model of an ever-existing universe, which eventually enters at some stage into the standard Big Bang epoch and is consistent with the features precisely known to us, would be of considerable interest. The purpose of this letter is to examine the possibilities of such a scenario. We mention three models of the emergent universe which are relevant here. 1. A closed universe containing radiation and a cosmological constant, given by Harrison [4], with the scale factor \begin{equation} a(t) = a_{i} \left[ 1 + \exp \left( \frac{\sqrt{2} t}{a_{i}} \right) \right]^{1/2}. \end{equation} As $t \rightarrow - \infty$, the model goes over asymptotically to an Einstein static universe. Although ever inflating, at any time $t_{o} \gg a_{i}$ the expansion is given by a finite number of e-folds, \begin{equation} N_{o} = \ln \left( \frac{a_{o}}{a_{i}} \right) \sim \frac{t_{o}}{\sqrt{2}\, a_{i}}. \end{equation} 2. The second example has been studied by Ellis and Maartens [1] and Ellis {\it et al.} [2]. They considered a closed universe containing a minimally coupled scalar field $\phi$, which has a self-interaction given by a special potential function $V (\phi)$. This potential looks similar to what one obtains in an $R + \alpha R^{2}$ theory [5] after the conventional conformal transformation, identifying the scalar field $\phi$ as $ \phi = - \sqrt{3} \; \ln (1 + 2 \alpha R) $ with $\alpha$ negative. Although the solution is not obtained analytically, the model exhibits the features expected of an emergent universe. 3. A third example has been provided by Mukherjee {\it et al.} [3], where it was shown that the Starobinsky model has a solution which can describe an emergent universe. Here, one considers the semiclassical Einstein equation, \begin{equation} R_{\mu \nu} - \frac{1}{2} g_{\mu \nu} R = - \; 8 \pi G < T_{\mu \nu} > \end{equation} where $< T_{\mu \nu} > $ is the vacuum expectation value of the energy momentum tensor of the fields. Assuming only free, massless and conformally invariant fields, and a Robertson-Walker metric, one obtains, for a spatially flat universe, the following equation: \begin{equation} H^{2} \left( \frac{1}{ K_{3}} - H^{2} \right) = - \frac{ 6 K_{1}}{K_{3}} \left( 2 H \ddot{H} + 6 H^{2} \dot{H} - \dot{H}^{2} \right), \end{equation} where the constant $K_{3}$ is determined by the species and number of fields and $K_{1}$ is a constant which may be chosen freely. It has been shown that the equation admits a solution which describes an emergent universe, with the scale factor \begin{equation} a(t) = a_{i} \left( \beta + e^{\alpha t} \right)^{2/3}, \end{equation} where $ \alpha = \frac{3}{2} \sqrt{ \frac{1}{K_{3}}} $, $ K_{1} = - \frac{2}{27} K_{3}$, and $\beta$ is an integration constant. The general features of the model have been given in [3].
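As a quick consistency check (ours, purely illustrative), the following sympy snippet verifies symbolically that the scale factor (5) satisfies equation (4) with the stated choices of $\alpha$ and $K_{1}$.
\begin{verbatim}
import sympy as sp

t, beta, alpha, ai = sp.symbols('t beta alpha a_i', positive=True)

# Scale factor of Eq. (5): a(t) = a_i*(beta + exp(alpha*t))**(2/3)
a = ai*(beta + sp.exp(alpha*t))**sp.Rational(2, 3)
H = sp.diff(a, t)/a

# alpha = (3/2)*sqrt(1/K3)  =>  K3 = 9/(4*alpha**2);  K1 = -(2/27)*K3
K3 = sp.Rational(9, 4)/alpha**2
K1 = -sp.Rational(2, 27)*K3

lhs = H**2*(1/K3 - H**2)
rhs = -(6*K1/K3)*(2*H*sp.diff(H, t, 2) + 6*H**2*sp.diff(H, t)
                  - sp.diff(H, t)**2)

assert sp.simplify(lhs - rhs) == 0   # Eq. (4) holds identically
\end{verbatim}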
These examples indicate that solutions describing an emergent universe occur in different contexts. It will, therefore, be interesting to see if solutions describing an emergent universe can be classified and studied in a general way. A simple approach is to look for the equations of state (EOS) which lead to such solutions. In the next section we obtain such an EOS and study the general features of the relevant solutions, without referring to the actual source of the energy density. In section 3, we consider various combinations of radiation and matter, normal or exotic. We discuss our results in the last section. \section{ The Equation of State (EOS) for emergent universe } Cosmological models usually consider a linear equation of state (EOS), viz. $p = \omega \rho$, where $\omega $ is a constant depending on the nature of the constituents. A notable exception is the case of a scalar field. For a homogeneous scalar field $\phi$ interacting with a potential $V( \phi) $, $$ \omega = \frac{\frac{1}{2} \dot{\phi}^{2} - V(\phi) }{\frac{1}{2} \dot{\phi}^{2} + V(\phi) }, $$ where $\omega$ may vary between $-1$ and $+1$. Tachyonic condensates provide another case of varying $\omega$, e.g. \begin{equation} \omega = - \left(1 - \dot{\phi}^{2} \right) \end{equation} where $\omega$ is always negative. Recent astronomical data, when interpreted in the context of the Big Bang model, have provided some interesting information about the composition of the universe. The total energy density has three components: Big Bang nucleosynthesis data suggest that baryonic matter can account for only about 4\% of the total energy density, cold dark matter (CDM) contributes about 23\%, and the third part, called dark energy, constitutes the remaining 73\%. The CDM has an almost dustlike EOS and is considered to be responsible for clustering on galactic or supergalactic scales. The dark energy, on the other hand, provides a negative pressure which may explain the recent acceleration in the expansion of the universe in the context of a closed universe. In looking for a model of the emergent universe, we assume the following features for the universe: 1) The universe is isotropic and homogeneous at large scales. \\ 2) It is spatially flat, as indicated by BOOMERANG and WMAP results. \\ 3) It is ever existing. There is no singularity. \\ 4) The universe is always large enough so that a classical description of space-time is adequate. \\ 5) The matter or, in general, the source of gravity has to be described by quantum field theory. \\ 6) The universe may contain exotic matter, so that energy conditions may be violated. \\ 7) The universe is accelerating, as suggested by Supernova observations. \\ The presence of exotic components indicates that we need to revise our concepts about the primordial composition of the universe, and the EOS needs some generalisation. In the following, we consider the EOS \begin{equation} p (\rho) = A \rho - B \rho^{\frac{1}{2} } \end{equation} where $A$ and $B$ are constants. The energy density $ \rho$ may have different components, each satisfying its own equation of state. The Einstein equations for a flat universe in the RW metric are given by \begin{equation} \rho = 3 \frac{ \dot{a}^{2}}{a^{2}}, \end{equation} \begin{equation} p = - 2 \frac{ \ddot{a}}{a} - \frac{ \dot{a}^{2}}{a^{2}}.
\end{equation} Making use of eqs.~(7)--(9), with $ \dot{a} >0$, we get the equation \begin{equation} 2 \frac{ \ddot{a}}{a} + ( 3 A + 1) \frac{ \dot{a}^{2}}{a^{2}} - \sqrt{3} B \frac{ \dot{a}}{a} = 0, \end{equation} which can be integrated once to give \begin{equation} \dot{a}\, a^{ \frac{ 3 ( A + 1)}{2} - 1} = K e^{ \frac{ \sqrt{3}}{2} B t}, \end{equation} leading to the solution \begin{equation} a ( t ) = \left( \frac{ 3 K (A + 1)}{2} \left( \sigma + \frac{2}{\sqrt{3 }B } e^{\frac{\sqrt{3}}{2} B t } \right) \right)^{\frac{2}{3 (A + 1)}} \end{equation} where $K$ and $\sigma$ are two constants of integration. We note the following: 1) If $ B < 0$, the solution has a singularity and is not of interest to us here. 2) If $B > 0$, the solution describes an emergent universe if $A > -1$. The solution in this case can be written as \begin{equation} a ( t ) = a_{i} \left( \beta + e^{\alpha t} \right)^{\omega} \end{equation} where $a_{i}$ and $\beta$ are constants, $\alpha = \frac{\sqrt{3}}{2} B$, and $ \omega = \frac{2}{3 (A + 1)}$. The Hubble parameter and its derivatives are given by \begin{equation} H = \frac{\omega \alpha e^{\alpha t}}{\beta + e^{\alpha t}}, \; \dot{H} = \frac{\beta \omega \alpha^{2} e^{\alpha t}}{(\beta + e^{\alpha t})^{2}}, \; \ddot{H} = \frac{\beta \omega \alpha^{3} e^{\alpha t} (\beta - e^{\alpha t})}{(\beta + e^{\alpha t})^{3}} \end{equation} Here $H$ and $\dot{H}$ are both positive, but $\ddot{H}$ changes sign at $t = \frac{1}{\alpha} \; \ln \beta$. $H$, $\dot{H}$ and $\ddot{H}$ all tend to zero as $t \rightarrow - \infty$. On the other hand, as $t \rightarrow \infty$ the solution asymptotically gives a de Sitter universe. $\beta$ can be determined if the time when $\dot{H}$ is a maximum can be fixed from observational data. 3) The solution of the Starobinsky model, obtained by Mukherjee {\it et al.} [3], appears to be a special case with $A = 0$, $B > 0$. However, the Starobinsky model is based on a semiclassical Einstein equation, and in the initial stage there is no matter: the vacuum energy of the fields acts as the source of gravitation. It is indeed a solution of a different equation, and in that sense it does not belong to the class of solutions considered here. \section{ Composition of the emergent universe } To study the possible composition of the emergent universe, we first study the dependence of the energy density on the scale factor. Consider the energy conservation equation, \begin{equation} \dot{\rho} + 3 (\rho + p) \frac{ \dot{a}}{a} = 0. \end{equation} Making use of the EOS, equation (7), and integrating, we obtain the relation \begin{equation} \rho ( a ) = \frac{ 1}{(A + 1 )^{2}} \left[ B + \frac{ K}{ a^{\frac{3 (A+1)}{2}}} \right]^{2}, \end{equation} where $K$ is an integration constant. Since $a$ is a monotonically increasing function of $t$ in the model, one may also use $a$ to study the evolution of the universe. It may be pointed out that a minimally coupled scalar field cannot give rise to an emergent universe of the type we are considering here (spatially flat, expanding, accelerating and singularity free). To see this, we note that the conservation eq. (15) leads to the field equation \begin{equation} \ddot{\phi} + 3 H \dot{\phi} + \frac{dV(\phi)}{d\phi} = 0 \end{equation} where $V(\phi)$ is the self-interaction potential of the field $\phi$. However, $\rho + p = \dot{\phi}^{2}$ must be positive. We therefore require $\dot{\rho} < 0$. But we have $ \dot{\rho} = 6 H \dot{H}$, which is always positive in this model.
Inclusion of a cosmological constant will not change the conclusion. Note that the solutions of [1] and [2] were obtained for a closed universe. Cosmological studies have made extensive use of scalar fields and, therefore, the emergent scenario has naturally been left out of consideration. However, if the recent observations of spatial flatness and the presence of both dark matter and dark energy are confirmed, one should look for alternative sources, and the possibility of an emergent universe containing exotic matter cannot be ruled out. Equation (16) provides us with information about the components of the energy density that lead to an emergent universe. We can rewrite the equation as \begin{equation} \rho = \frac{ B^{2}}{(A + 1 )^{2}} + \frac{ 2 K B}{ (A+1)^{2}} \frac{ 1}{ a^{\frac{3(A+1)}{2}}} + \frac{ K^{2}}{ (A+1)^{2}} \frac{ 1}{ a^{3(A+1)}} \\ = \rho_{1} + \rho_{2} + \rho_{3}. \end{equation} The pressure $p$, given by equation (7), can be expressed similarly, \begin{equation} p = - \frac{ B^{2}}{(A + 1 )^{2}} + \frac{ B K (A - 1)}{ (A+1)^{2}} \frac{ 1}{ a^{\frac{3(A+1)}{2}}} + \frac{ A K^{2}}{ (A+1)^{2}} \frac{ 1}{ a^{3(A+1)}} \\ = p_{1} + p_{2} + p_{3}. \end{equation} It is now easy to identify the components: a) The first term behaves like a cosmological constant and may account for the dark energy. b) The second and third terms depend on the choice of the parameters. Thus if $A = \frac{1}{3}$, we have $p_{2} = - \frac{1}{3}\rho_{2}$, which describes cosmic strings, and $p_{3} = \frac{1}{3}\rho_{3}$, which describes radiation and ultra-relativistic particles. Thus, it seems that an emergent universe could have evolved out of a mixture of cosmic strings and radiation, along with a cosmological constant. Other possibilities also exist, as shown in Table 1. For a given cosmological constant $\Lambda = \left (\frac{B}{A + 1} \right)^{2}$, we have different possible compositions with different kinds of matter and radiation. Topological defects such as cosmic strings and domain walls have already been studied in the context of structure formation in the early universe, and it is interesting to note that the emergent universe can accommodate these exotic energy sources. If $A$ is given a value very close to 1, we have $p_{2} \sim 0$, corresponding to dust-like matter, and $p_{3} = \rho_{3}$, describing stiff matter. The stiff-matter component falls rapidly with the scale factor, $\rho_{3} \sim \frac{1}{a^{6}}$, and unless particle interactions change the relevant EOS, one may observe only the dark energy and the dust-like matter part. Note that we have four parameters in this theory: $A$, $B$, $a_{i}$ and $\beta$. As the present observational data indicate that the total energy density has three components, three of these parameters are determined. The measurement of the scale factor at any time, or when the universe is quasi-static with $a_{s} \sim a_{i} \beta^{\omega}$, will determine the fourth parameter. The composition may change, as in the standard Big Bang cosmology, due to non-gravitational interactions.
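As a consistency check (ours, purely illustrative), the following sympy snippet verifies that the decomposition in equations (18) and (19) reproduces the EOS (7), and extracts the effective EOS parameters $\omega_{2} = (A-1)/2$ and $\omega_{3} = A$ of the second and third components, which are the values used in Table 1.
\begin{verbatim}
import sympy as sp

a, A, B, K = sp.symbols('a A B K', positive=True)

x = K*a**(-sp.Rational(3, 2)*(A + 1))   # shorthand for K/a^{3(A+1)/2}

rho = (B + x)**2/(A + 1)**2             # Eq. (16)
sqrt_rho = (B + x)/(A + 1)              # positive root, since a_dot > 0
p = A*rho - B*sqrt_rho                  # EOS, Eq. (7)

# Components of Eqs. (18) and (19)
rho1 = B**2/(A + 1)**2
rho2 = 2*K*B/(A + 1)**2 * a**(-sp.Rational(3, 2)*(A + 1))
rho3 = K**2/(A + 1)**2 * a**(-3*(A + 1))
p1 = -B**2/(A + 1)**2
p2 = B*K*(A - 1)/(A + 1)**2 * a**(-sp.Rational(3, 2)*(A + 1))
p3 = A*K**2/(A + 1)**2 * a**(-3*(A + 1))

assert sp.simplify(rho - (rho1 + rho2 + rho3)) == 0
assert sp.simplify(p - (p1 + p2 + p3)) == 0

print(sp.simplify(p2/rho2))   # (A - 1)/2, the EOS parameter omega_2
print(sp.simplify(p3/rho3))   # A, the EOS parameter omega_3
\end{verbatim}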
\begin{table}[htbp] \centerline{\footnotesize Table 1.} \begin{tabular}{l c c c c c} \\ \hline \\ A & $\frac{\rho_{2}}{ \Lambda }$ in units of $\frac{ K}{B}$ & $\omega_{2}$ & $ \frac{\rho_{3}}{ \Lambda} $ in units of $(\frac{ K}{B})^{2}$ & $\omega_{3}$ & Composition \\ \\ \hline \\ $ \frac{1}{3}$ & $\frac{9}{8a^{2}}$ & $- \frac{1}{3}$ & $\frac{9}{8a^{4}}$ & $ \frac{1}{3}$& dark energy, \\ {} & & & & & cosmic string and radiation \\ \hline \\ $ - \frac{1}{3}$ & $\frac{9}{2a}$ & $- \frac{2}{3}$ & $\frac{9}{4a^{2}}$ & $- \frac{1}{3}$& dark energy, \\ {} & & & & & domain wall and cosmic string \\ \hline \\ $1$ & $\frac{1}{2a^{3}}$ & 0 & $\frac{1}{4a^{6}}$ & 1 & dark energy, \\ {} & & & & & dust and stiff matter \\ \hline \\ $ 0$ & $\frac{2}{8a^{3/2}}$ & $- \frac{1}{2}$ & $\frac{1}{a^{3}}$ & 0& dark energy,\\ {} & & & & & exotic matter and dust \\ \hline \\ \end{tabular} \caption{ \it Composition of universal matter for various values of A } \end{table} \vspace{0.5cm} \section{DISCUSSION} We have shown in this letter that emergent universe scenarios are not isolated solutions; they may occur for different combinations of radiation and matter. The recipe for an emergent universe for a given cosmological constant (dark energy), $\Lambda = \left (\frac{B}{A + 1} \right)^{2}$, has been given in Table 1. The exotic matter mentioned in the last line of the table ($A = 0$), which has the EOS $p =- \frac{1}{2} \rho$, is not yet known. It may be an unstable energy source and may have decayed into other particles or radiation. The possibility of cosmic strings or domain walls playing a role in the evolution of the universe has been studied previously in detail [6], and it is interesting to note that these topological defects, suitably combined, also lead to an emergent universe. Cosmic strings in particular could serve as seeds for galaxy formation and large-scale structure formation. They should also be observable through their gravitational lensing and through studies of the anisotropy of the microwave background radiation, background gravitational waves, etc. However, the scenario of the phase transition of the relevant scalar field leading to these topological defects remains to be worked out in this model. It will be interesting to try to develop an evolutionary scenario of the emergent universe, and this is presently under our consideration. \vspace{0.5cm} {\large {\it Acknowledgments :}} SM and BCP would like to thank the University of Zululand and the University of KwaZulu-Natal, South Africa for hospitality during their visit when a part of the work was done. They would also like to thank IUCAA, Pune and IUCAA Reference Centre, North Bengal University for providing facilities. \pagebreak
\section{Introduction} \label{sec:intro} \subsection{Background} We are interested in polynomials on finite cartesian products, for instance of the form $f(x,y)\in\mathbb{R}[x,y]$ on $A\times B$, with $A,B\subset \mathbb{R}$ and $|A| = |B| = n$. We will focus on the question of how small the image $f(A,B)$ can be in terms of $n$. For two basic examples, $x+y$ and $xy$, the image can be as small as $cn$, if $A$ and $B$ are chosen appropriately. For $f(x,y) = x+y$ one can take $A = B = [1,n]$ (or any other arithmetic progression of length $n$), so that $f(A,B) = A+B =[2,2n]$; for $f(x,y) = xy$ one can take a geometric progression like $A = B = \{2^1,2^2, \ldots, 2^n\}$, so that $f(A,B) = A\cdot B = \{2^2,2^3,\ldots, 2^{2n}\}$. Similar small images can be obtained for polynomials of the form $f(x,y) = g(k(x) + l(y))$, for nonconstant polynomials $g,k,l$, by taking $A$ so that $k(A) \subset [1,n]$, and $B$ so that $l(B) \subset [1,n]$. A similar idea works for $f(x,y) = g(k(x)\cdot l(y))$. For convenience, we will formulate the problem slightly differently: we consider the surface $z = f(x,y)$ in $\mathbb{R}^3$ and its intersection with a cartesian product $A\times B \times C$, with $|A| = |B| = |C| = n$. Then the image of $f$ is small if and only if the intersection is large; for instance, $z = x+y$ has intersection with $[1,n]^3$ of size at least $\frac{1}{4}n^2$. In 2000, Elekes and R\'onyai \cite{Elek00} proved the following converse of the above observations. \begin{theorem}[Elekes-R\'onyai Theorem] \label{thm:ER} For every $c>0$ and positive integer $d$ there exists $n_0 = n_0(c,d)$ with the following property.\\ Let $f(x,y)$ be a polynomial of degree $d$ in $\mathbb{R}[x,y]$ such that for an $n>n_0$ the graph $z = f(x,y)$ contains $c n^2$ points of $A\times B\times C$, where $A,B,C\subset \mathbb{R}$ have size $n$. Then either $$f(x,y) = g(k(x) + l(y)),~~~\mathrm{or}~~~ f(x,y) = g(k(x)\cdot l(y)),$$ where $g,k,l\in \mathbb{R}[t]$. \end{theorem} In fact, they proved that the same is true for rational functions, if one allows a third special form $f(x,y) = g((k(x)+l(y))/(1-k(x)l(y)))$. Elekes and Szab\'o \cite{ElSz, Elek09} were able to extend this theorem to implicit surfaces $F(x,y,z) = 0$, and also showed that the surface need only contain $n^{2-\gamma}$ points of the cartesian product for the conclusion to hold, for some absolute ``gap'' $\gamma>0$. Elekes and R\'onyai used their result to prove a famous conjecture of Purdy. It says that given two lines in $\mathbb{R}^2$ and $n$ points on each line, if the number of distinct distances between pairs of points, one on each line, is $cn$ for some $c>0$, then the lines are parallel or orthogonal. Elekes \cite{Elek99} also proved a ``gap version'', only requiring the number of distances to be less than $cn^{5/4}$. For details and a variation of Purdy's conjecture, using our results below, see Section~\ref{subsec:purdy}. See \cite{Elek02, Mato02, Mato11} for more detail and some related problems. \subsection{Results} In this paper we prove a number of extensions of Theorem~\ref{thm:ER}. We extend the result to one dimension higher, to asymmetric cartesian products, and to both at the same time. The proofs are based on the proof of Theorem~\ref{thm:ER} by Elekes and R\'onyai.
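As a quick numerical sanity check of the small-image constructions described above (a sketch with illustrative parameters; for generic sets the image is typically of quadratic size):
\begin{verbatim}
import random

n = 200
A_arith = list(range(1, n + 1))           # arithmetic progression
A_geom = [2**i for i in range(1, n + 1)]  # geometric progression
A_rand = random.sample(range(1, 10**9), n)

def image_size(f, A, B):
    return len({f(a, b) for a in A for b in B})

print(image_size(lambda x, y: x + y, A_arith, A_arith))  # 2n - 1 = 399
print(image_size(lambda x, y: x * y, A_geom, A_geom))    # 2n - 1 = 399
print(image_size(lambda x, y: x + y, A_rand, A_rand))    # typically ~ n^2/2
\end{verbatim}
First we consider a less symmetric cartesian product.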
\begin{theorem} \label{thm:ER2} For every $c>0$ and positive integer $d$ there exist $n_0 = n_0(c,d)$ and $\tilde{c} = \tilde{c}(c,d)$ with the following property.\\ Let $f(x,y)$ be a polynomial of degree $d$ in $\mathbb{R}[x,y]$ such that for an $n>n_0$ the graph $z = f(x,y)$ contains $c n^{11/6}$ points of $A\times B\times C$, where $A,B,C\subset \mathbb{R}$ and $|A| = n, |B| = \tilde{c}n^{5/6}$, and $|C| = n$. Then either $$f(x,y) = g(k(x) + l(y)),~~~\mathrm{or}~~~ f(x,y) = g(k(x)\cdot l(y)),$$ where $g,k,l\in \mathbb{R}[t]$. \end{theorem} Using a recent result of Amirkhanyan, Bush, Croot and Pryby \cite{Amir11} regarding a conjecture of Solymosi about the number of lines in general position that can be rich on a cartesian product (see Section \ref{subsec:linelemmas}), we get the following theorem. \begin{theorem} \label{thm:ERbest} For every $c>0$ and positive integer $d$ there exists $n_0 = n_0(c,d)$ with the following property.\\ Let $f(x,y)$ be a polynomial of degree $d$ in $\mathbb{R}[x,y]$ such that for an $n>n_0$ the graph $z = f(x,y)$ contains $c n^{3/2 + \varepsilon}$ points of $A\times B\times C$, where $A,B,C\subset \mathbb{R}$ and $|A| = n, |B| = n^{1/2 + \varepsilon}$ with $\varepsilon > 0$, and $|C| = n$. Then either $$f(x,y) = g(k(x) + l(y)),~~~\mathrm{or}~~~ f(x,y) = g(k(x)\cdot l(y)),$$ where $g,k,l\in \mathbb{R}[t]$. \end{theorem} We also extend the Elekes-R\'onyai Theorem to cartesian products of one dimension higher, i.e. to polynomials with one more variable. \begin{theorem} \label{thm:main} For every $c>0$ and positive integer $d$ there exists $n_0 = n_0(c,d)$ with the following property.\\ Let $f(x,y,z)$ be a polynomial of degree $d$ in $\mathbb{R}[x,y,z]$ such that for an $n>n_0$ the graph $w = f(x,y,z)$ contains $c n^3$ points of $A\times B\times C\times D$, where $A,B,C,D\subset \mathbb{R}$ have size $n$. Then either $$f(x,y,z) = g(k(x) + l(y) + m(z)),~~~\mathrm{or}~~~ f(x,y,z) = g(k(x)\cdot l(y)\cdot m(z)),$$ where $g,k,l,m\in \mathbb{R}[t]$. \end{theorem} We can also prove a higher-dimensional version with a less symmetric cartesian product. \begin{theorem} \label{thm:ER3} For every $c>0$ and positive integer $d$ there exists $n_0 = n_0(c,d)$ with the following property.\\ Let $f(x,y,z)$ be a polynomial of degree $d$ in $\mathbb{R}[x,y,z]$ such that for an $n>n_0$ the graph $w = f(x,y,z)$ contains $c n^{8/3+2\varepsilon}$ points of $A\times B\times C\times D$, where $A,B,C,D\subset \mathbb{R}$ and $|A| = n$, $|B|=|C| = n^{5/6+\varepsilon}$ with $\varepsilon > 0$, and $|D| = n$. Then either $$f(x,y,z) = g(k(x) + l(y) + m(z)),~~~\mathrm{or}~~~ f(x,y,z) = g(k(x)\cdot l(y)\cdot m(z)),$$ where $g,k,l,m\in \mathbb{R}[t]$. \end{theorem} Using the above-mentioned result of Amirkhanyan et al.~once more, we get the following: \begin{theorem} \label{thm:ER4} Given $c>0$ and $d$ a positive integer there exists $n_0 = n_0(c,d)$ with the following property. \\ Let $f(x,y,z)$ be a polynomial of degree $d$ in $\mathbb{R}[x,y,z]$ such that for an $n>n_0$ the graph $w = f(x,y,z)$ contains $c n^{2 + 2\varepsilon}$ points of $A\times B\times C\times D$, where $A,B,C,D\subset \mathbb{R}$ and $|A| = n$, $|B|=|C| = n^{1/2 + \varepsilon}$ with $\varepsilon > 0$, and $|D| = n$. Then either $$f(x,y,z) = g(k(x) + l(y) + m(z)), ~~~\mathrm{or}~~~ f(x,y,z) = g(k(x)\cdot l(y)\cdot m(z)),$$ where $g,k,l,m\in \mathbb{R}[t]$.
\end{theorem} In Section~\ref{subsec:parab} we will give an example of a polynomial $f(x,y,z)$ whose graph contains $cn^2$ points of $A\times B\times C\times D$, where $|A|=|D|=n$ and $|B|=|C|=c'n^{1/2}$, but $f$ does not have the required additive or multiplicative form of Theorem~\ref{thm:ER4}. This shows that Theorem~\ref{thm:ER4} is near-optimal. Note that, as in the two-variable case, the converses of Theorems~\ref{thm:main}--\ref{thm:ER4} all hold for some appropriate cartesian products. Specifically, if $f(x,y,z) = g(k(x)+l(y)+m(z))$, one can choose $A$, $B$, and $C$ so that $k(x)$, $l(y)$, and $m(z)$ have values in the same arithmetic progression. A similar construction works for the product case. Theorems~\ref{thm:ER}, \ref{thm:ER2}, \ref{thm:main} and \ref{thm:ER3} would all hold if we consider functions over $\mathbb{C}$ instead of $\mathbb{R}$, but we will restrict ourselves to $\mathbb{R}$ here. The proofs could be extended to $|B|\neq |C|$, at some cost to the exponents. It also seems possible to generalize our proofs to polynomials with even more variables. In Section \ref{subsec:outline} we give a short outline of the proof of the Elekes-R\'onyai Theorem, which provides a template for our subsequent proofs. Section \ref{sec:prelim} contains a number of concepts and results required throughout our proofs. In Section~\ref{sec:erLess} we give the proofs of Theorems \ref{thm:ER2} and \ref{thm:ERbest}, while Section \ref{sec:er4d} contains the proofs of Theorems \ref{thm:main}, \ref{thm:ER3}, and \ref{thm:ER4}. In Section~\ref{sec:apps} we give an extension of the conjecture of Purdy and an example showing the near-optimality of Theorem~\ref{thm:ER4}. \subsection{Outline of proofs}\label{subsec:outline} The following is an outline of the proof that Elekes and R\'onyai gave in \cite{Elek00} of Theorem \ref{thm:ER}. Our theorems are obtained by adjusting this proof to three-variable $f$, and by using improved Line Lemmas (see Section \ref{subsec:linelemmas}) to get the asymmetric versions. All functions below are polynomials, and we repeatedly recycle the positive constant $c$. We split up the surface $z = f(x,y)$ into the $n$ curves $$z = f_i(x) = f(x, b_i),$$ for each of the $b_i\in B$. We wish to decompose a $cn$-sized subset of the $f_i$ as $$f_i(x) = (p\circ \varphi_i\circ k)(x) = p(a_i k(x) + b_i),$$ where $\varphi_i$ is linear and $p$ and $k$ are independent of $i$. Then the $cn$ lines $u = \varphi_i(t) = a_i t + b_i$ will also be $cn$-rich on an $n\times n$ cartesian product. For such sets of lines we have various lemmas (\ref{lem:line}--\ref{cor:croot}) that say that a $cn$-sized subset of them must be all parallel or all concurrent. Given $cn$ such decompositions with the lines $\varphi_i$ all parallel, we can write $f(x,y) = p(a k(x) + b_i)$, and then conclude by an algebraic argument that there exists an $l(y)$ such that $f(x,y) = p(k(x) + l(y))$. If $cn$ of the lines are concurrent, we can write $f(x,y) = p(a_i\cdot (k(x)+b))$, and then conclude that $f(x,y) = p(k(x)\cdot l(y))$. To find the above decomposition of the $f_i$, we first remove their common inner functions (polynomials $\mu$ such that $f_i = \lambda_i\circ\mu$) up to linear equivalence. We can do this because the number of decompositions up to linear equivalence of a polynomial of degree $d$ depends only on $d$ (Lemma \ref{lem:decomp}), so for large enough $n$ there must be a $cn$-sized subset of the $f_i$ that all have the same inner function of maximal degree.
This maximal inner function will be the $k$ above, and we remove it by writing $f_i = \widehat{f}_i\circ k$. Then we have a subset of $\widehat{f}_i$ with the property that if $\widehat{f}_i = \mu_i\circ \lambda$ and $\widehat{f}_j = \mu_j\circ \lambda$, then $\lambda$ must be linear. Now we combine pairs $\widehat{f}_i, \widehat{f}_j$ into new curves $$\gamma_{ij}(t) = (\widehat{f}_i(t), \widehat{f}_j(t)).$$ We observe that these $\gamma_{ij}$ are $cn$-rich on an $n\times n$ cartesian product, and that we have $cn^2$ of them. But by a theorem of Pach and Sharir (Lemma \ref{lem:curve}), such a set of rich curves can have size at most $c'n$. This is not a contradiction: many of these $\gamma_{ij}$ may coincide as sets in $\mathbb{R}^2$. But if for instance $\gamma_{ij}$ and $\gamma_{kl}$ coincide, then by some algebra (Lemma \ref{lem:repar}) they must be reparametrizations of the same curve $(p(t), q(t))$, which means that we can write $$\begin{array}{cc} \widehat{f}_i = p\circ \varphi, & \widehat{f}_j = q\circ \varphi,\\ \widehat{f}_k = p\circ \phi, & \widehat{f}_l = q\circ \phi. \end{array}$$ Since we already removed all nonlinear common inner polynomials, $\varphi$ must be linear. If we have enough such decompositions, we can ensure that they all have the form $\widehat{f}_i = p\circ \varphi_i$ for the same $p$. This gives us the desired decompositions $$f_i = \widehat{f}_i\circ k = p\circ \varphi_i\circ k.$$ \section{Preliminaries} \label{sec:prelim} \subsection{Discrete geometry} We will make frequent use of the following well-known theorem, first proved in \cite{Szem83}. We say that a line (or any other curve) is \textit{$k$-rich} on a point set $\mathcal{P}$ if it contains at least $k$ points of $\mathcal{P}$. \begin{theorem}[Szemer\'edi-Trotter Theorem] \label{thm:szem} There exists a constant $C_{ST}>0$ such that given a set $\mathcal{P}$ of $n$ points in $\mathbb{R}^2$, the number of lines $k$-rich on $\mathcal{P}$ is at most $C_{ST}\cdot(n^2/k^3 + n/k)$. \end{theorem} This theorem was generalized by Pach and Sharir \cite{Pach90, Pach98} to continuous real planar curves without self-intersection. We will use the following corollary for algebraic curves, which follows quite easily since algebraic curves (of bounded degree) can be split up into a small number (depending on the degree) of curves without self-intersection. For details see Elekes and R\'onyai \cite{Elek00}. \begin{lemma}[Curve Lemma] \label{lem:curve} Given $c>0$ and a positive integer $d$, there exist $C_{CL} = C_{CL}(c,d)$ and $n_0 = n_0(c,d)$ such that the following holds.\\ If a set of $m$ irreducible real algebraic curves of degree $\leq d$ is $cn$-rich on $A$, where $A\subset \mathbb{R}^2$ and $|A|\leq n^2$, then for all $n>n_0$ we have \[m\leq C_{CL}\cdot n.\] \end{lemma} \subsection{Line lemmas}\label{subsec:linelemmas} In the proof of Theorem \ref{thm:ER} by Elekes and R\'onyai, an important ingredient was the following result of Elekes \cite{Elek97} about lines containing many points from a cartesian product. \begin{lemma}[Line Lemma] \label{lem:line} Suppose $A,B\subset \mathbb{R}$ and $|A| = |B| = n$. For all $c_1,c_2>0$ there exists $C_{LL}>0$, independent of $n$, such that if $m$ lines in $\mathbb{R}^2$ are $c_1n$-rich on $A\times B$, with no $c_2n$ of the lines all parallel or all concurrent, then $$m < C_{LL}\cdot n.$$ \end{lemma} We prove a generalization that will be crucial in Section~\ref{sec:erLess}. The proof is at the end of this section, and is modelled on that of Elekes.
\begin{lemma}[Generalized Line Lemma] \label{lem:genLine} Suppose $A,B\subset \mathbb{R}$ and $|A| = |B| = n$. For all $c_1,c_2>0$, and $\beta \ge 0$ there exists $C_{GLL}>0$, independent of $n$, such that if $m$ lines in $\mathbb{R}^2$ are $c_1n$-rich on $A\times B$, with no $c_2n^{\beta}$ concurrent or parallel, then $$m < C_{GLL}\cdot n^{2/3 + \beta/3}.$$ \end{lemma}
A collection of lines in $\mathbb{R}^2$ is said to be in \emph{general position} if no two lines are parallel and no three lines are concurrent. The second author conjectured the following extension of the above result. For details see \cite{Elek02}.
\begin{conjecture} Suppose $A,B\subset \mathbb{R}$ and $|A|=|B|=n$. For all $c>0$ there exists $C_S>0$ such that if $m$ lines in general position are $cn$-rich on $A\times B$ then $m<C_S$. \end{conjecture}
The following result of Amirkhanyan et al.~\cite{Amir11} is related to the above conjecture.
\begin{theorem} \label{thm:croot} For every $\varepsilon>0$ there exists $\delta>0$ such that given $n^{\varepsilon}$ lines in $\mathbb{R}^2$ in general position, they cannot all be $n^{1-\delta}$-rich on $A\times B$, where $|A|=|B|=n$. \end{theorem}
Thus if a collection of lines $\mathcal{L}$ in general position is $cn$-rich on $A\times B$ then $|\mathcal{L}| < n^{\varepsilon}$ for any $\varepsilon > 0$. We will use it in the form of the following corollary.
\begin{corollary} \label{cor:croot} If $m$ lines in $\mathbb{R}^2$ are $cn$-rich on $A\times B$, with $|A|=|B|=n$, such that no $p$ are parallel and no $q$ are concurrent, then \[m \le (p+q)n^{\varepsilon}\] for every $\varepsilon > 0$. \end{corollary}
\begin{proof} We show that the collection of lines contains at least $k=\sqrt{m}/\sqrt{p+q}$ lines in general position. We pick any line, and then successively choose a new line that is not parallel to any of the previously chosen lines, and does not go through the intersection point of any pair of them. If we have chosen $k$ such lines, then there are $k$ slopes we may not choose, which excludes less than $pk$ lines. And there are at most $\binom{k}{2}$ intersection points that we must avoid, so since there are less than $q$ lines concurrent at a point, this excludes less than $q\binom{k}{2}$ lines. Hence we can continue in this way at least until $m \le q\binom{k}{2}+pk+k < (p+q)k^2$, so we can get $k\ge \sqrt{m}/\sqrt{p+q}$ lines in general position. These lines are $cn$-rich on $A\times B$. Thus $k\le n^{\varepsilon'}$ for every $\varepsilon' > 0$. This gives $m\le (p+q)n^{\varepsilon}$ for every $\varepsilon>0$. \end{proof}
We begin the proof of Lemma~\ref{lem:genLine}. We will use the dual of a theorem of Beck \cite{Beck83}, which roughly states that given a collection of points, either ``many'' of the points are on the same line, or pairs of the points determine ``many'' distinct lines.
\begin{theorem}[Dual of Beck's Theorem] \label{thm:beck} There exists $C_{BT}>0$ such that, given $N$ lines in $\mathbb{R}^2$, either $C_{BT}N$ lines are concurrent or the lines determine $C_{BT}N^2$ distinct pairwise intersection points. \end{theorem}
\begin{proof}[Proof of Lemma~\ref{lem:genLine}] Let $L$ be the set of lines, $|L| = m = cn^{\alpha}$, so we will show that we can take $\alpha=2/3+\beta/3$ and $c$ some constant. For every pair $(\ell_i,\ell_j)\in L^2$ we define the linear functions $\gamma_{ij} = \ell_i\circ\ell_j^{-1}$ and $\Gamma_{ij} = \ell_j^{-1}\circ\ell_i$. \bigskip First we will prove that large subsets of the $\gamma_{ij}$ and $\Gamma_{ij}$ are also rich.
Consider the tripartite graph $H$ with vertex sets $A\cup L\cup B$. Given $a\in A$ and $\ell \in L$, $(\ell, a)$ is an edge in $H$ if $\ell(a) \in B$. Similarly, given $\ell \in L$ and $b\in B$, $(\ell, b)$ is an edge if $\ell^{-1}(b)\in A$. Given $\ell\in L$, let $\deg_A(\ell)$ be the number of edges between $\ell$ and $A$ and $\deg_B(\ell)$ the number of edges between $\ell$ and $B$. Since the lines in $L$ are $c_1n$-rich on $A\times B$, we have $\deg_A(\ell) \ge c_1n$ and $\deg_B(\ell) \ge c_1n$ for each $\ell \in L$. Thus we have at least $cc_1n^{1+\alpha}$ edges between $A$ and $L$ and at least $cc_1n^{1+\alpha}$ edges between $B$ and $L$. We will count cycles of length four in $H$ with one vertex in $A$ and one vertex in $B$. Every such $C_4$ gives a point in $B\times B$ on $\gamma_{ij}$ and a point in $A\times A$ on $\Gamma_{ij}$ for some pair $(i,j)$. The number of paths of length two with one endpoint in $A$ and the other in $B$ is at least \[\#P_2 = \sum_{\ell\in L}\deg_A(\ell)\deg_B(\ell) \ge c_1^2n^{2+\alpha}.\] Let $p_{a,b}$ be the number of paths of length two between $a\in A$ and $b\in B$. Then \[\#P_2 = \sum_{a\in A, b\in B}p_{a,b}.\] Now, by Jensen's Inequality, the number of $C_4$'s we are looking for is \[\#C_4 = \sum_{a\in A, b\in B}\binom{p_{a,b}}{2} \ge |A\times B|\binom{\#P_2/|A\times B|}{2} \ge \frac{c_1^4n^{2+2\alpha}}{4}.\] Suppose there are fewer than $(c_1^4/8)n^{2\alpha}$ pairs $(\ell_i,\ell_j)\in L^2$ with at least $(c_1^4/8)n^2$ $C_4$'s between them. Then $H$ would have fewer than $(c_1^4/4)n^{2+2\alpha}$ $C_4$'s, a contradiction. Thus, setting $c_3=c_1^4/8$, we have at least $c_3n^{2\alpha}$ pairs $(i,j)$ for which $\gamma_{ij}$ and $\Gamma_{ij}$ are $c_3n$-rich on $B\times B$ and $A\times A$ respectively. \bigskip Next we define a different graph $G'$ and analyze it. The vertex sets of $G'$ consist of those $\gamma_{ij}$ that are $c_3n$-rich on $B\times B$ and those $\Gamma_{kl}$ that are $c_3n$-rich on $A\times A$. If $\gamma_{ij}$ and $\gamma_{kl}$ coincide as point sets, we consider them as the same vertex. Similarly we identify any coinciding $\Gamma_{ij}$ and $\Gamma_{kl}$, but we do not identify $\gamma_{ij}$ and $\Gamma_{kl}$ should they coincide. We place an edge between $\gamma_{ij}$ and $\Gamma_{ij}$ for each pair $(i,j)$, which means the graph is bipartite. The graph may contain multiple edges, if we have $\ell_i, \ell_j, \ell_k, \ell_l\in L$ such that both $\gamma_{ij} = \gamma_{kl}$ and $\Gamma_{ij} = \Gamma_{kl}$. But this implies that the four lines are concurrent: If $\ell_i$ and $\ell_j$ intersect in $(u,v)$, then $v=\ell_i\circ\ell_j^{-1}(v)=\ell_k\circ\ell_l^{-1}(v)$ and $u=\ell_j^{-1}\circ\ell_i(u)=\ell_l^{-1}\circ\ell_k(u)$, so $(u,v)$ is also the intersection point of $\ell_k$ and $\ell_l$. But with Beck's Theorem we can get a subgraph without multiple edges. We will assume that $\beta < \alpha$, and check it at the end of the proof. Then fewer than $c_2n^{\beta} < C_{BT}cn^{\alpha} = C_{BT}|L|$ lines are concurrent, so by Theorem~\ref{thm:beck}, the lines determine $C_{BT}n^{2\alpha}$ distinct intersection points. The corresponding lines span a subgraph $G''$ without multiple edges, and at least $C_{BT}n^{2\alpha}$ edges. By the Szemer\'edi-Trotter Theorem, since all vertices are $c_3n$-rich lines, the number of vertices is at most $c_4n$ for some constant $c_4>0$. The average degree in $G''$ is then $\geq (C_{BT}/c_4)n^{2\alpha-1}$.
Thus $G''$ contains a connected component $H'$ containing at least $(C_{BT}/c_4)n^{2\alpha-1}$ vertices and at least $\frac{1}{2}(C_{BT}/c_4)^2n^{4\alpha-2}$ edges. Note that each $\gamma_{ij}$ and $\Gamma_{ij}$ have the same slope, so every vertex in $H'$ is a line with the same slope. If there are more than $cc_2n^{\alpha+\beta}$ edges in $H'$ then we would have $\gamma_{ij_1}, \gamma_{ij_2},\dots, \gamma_{ij_k}$ vertices in this component with $k \ge c_2n^{\beta}$. This implies that the lines $\ell_{j_1}, \ell_{j_2}, \dots, \ell_{j_k}$ are all parallel, which is a contradiction. So we have \[\frac{C_{BT}^2}{2c_4^2}n^{4\alpha-2} < cc_2n^{\alpha+\beta}.\] From this we see that we can choose $\alpha = 2/3 + \beta/3$ and $c> C_{BT}^2/(2c_2c_4^2)$. \end{proof}
\subsection{Algebra and graph theory} In the proofs of our higher-dimensional versions of the Elekes-R\'onyai Theorem, we will need the following generalization of the fact that if a degree-$d$ polynomial of one variable has $d+1$ or more roots, then the polynomial is identically zero.
\begin{lemma}[Vanishing Lemma] \label{lem:vanish} Let $K$ be a field, $F(y,z)\in K[y,z]$ with $\deg F = d$, and $B,C\subset K$ with $|B| = |C| = m$. If $F(y_i,z_j) =0$ for $2dm$ of the pairs $(y_i,z_j)\in B\times C$, then $F(y,z) = 0$. \end{lemma}
\begin{proof} There must be $d+1$ columns with $d+1$ zeroes, i.e. $d+1$ $y_i$ such that for each there are $d+1$ $z_j$ with $F(y_i,z_j) = 0$. Indeed, after finding $d$ such columns and removing them, we are left with at least $2md - md = md$ zeroes, distributed over the $m-d$ remaining columns, so there must be another column with $d+1$ zeroes. (Note that the exact bound is $2md - d^2+1$, but the simpler formula suffices for us.) Since a nonzero polynomial in one variable of degree at most $d$ can have no more than $d$ roots, we have $F(y_i,z) = 0$ for each of the $y_i$ with $d+1$ zeroes. Let $L = K(z)$ and define the polynomial $G(y)\in L[y]$ by $G(y) = F(y,z)$, so also $\deg G \leq d$. We have $G(y_i) =0$ for the $d+1$ different $y_i$, which implies by the same fact that $G(y) = 0$, hence also $F(y,z) = 0$. \end{proof}
We will also need the following three algebraic lemmas, which appear with proofs in \cite{Elek00}. Let $K$ be a field. We call two decompositions $f(x) = \varphi_1(\psi_1(x))$ and $f(x) = \varphi_2(\psi_2(x))$ of a polynomial $f \in K[x]$ into polynomials from $K[x]$ \emph{equivalent} if $\psi_1(t) = a\psi_2(t)+b$ for some $a,b \in K$.
\begin{lemma} \label{lem:decomp} Let $K$ be a field. Then no $f \in K[x]$ can have more than $2^d$ non-equivalent decompositions, where $d=\deg f$. \end{lemma}
\begin{lemma} \label{lem:alg} \text{ }\begin{enumerate} \item Let $E$ be a field, $\varphi \in E[x]$ a polynomial of degree $d>0$. Then every $F \in E[x]$ can be written in the form \[F = a_0 + a_1 x + \dots + a_{d-1} x^{d-1},\] where $a_i \in E(\varphi),$ in a unique way. \item Suppose further that $E=L(y)$ is a rational function field over some field $L$, and $\varphi \in L[x]$. Let $m$ be the degree of $F$ in $y$. Then the degree of $a_i$ in $y$ is at most $m(d+1)$. (Here $a_i$ is viewed as a polynomial of $\varphi$ and $y$ over $L$.) \end{enumerate} \end{lemma}
\begin{lemma}[Reparametrization Lemma] \label{lem:repar} Suppose that two parametric curves $(f_1(t), g_1(t))$ and $(f_2(t), g_2(t))$ coincide as sets, with $f_i,g_i\in K[t]$ for a field $K$.
Then there are $p,q,\varphi_1, \varphi_2\in K[t]$ such that $$\begin{array}{cc} f_1 = p\circ \varphi_1, & g_1 = q\circ \varphi_1,\\ f_2 = p\circ \varphi_2, & g_2 = q\circ \varphi_2. \end{array}$$ \end{lemma}
Finally, we need the following graph-theoretic lemma, also proved in \cite{Elek00}.
\begin{lemma}[Graph Lemma]\label{lem:graph} For every $c$ and $k$ there is a $C_{GL} = C_{GL}(c,k)$ with the following property.\\ If a graph has $N$ vertices and $cN^2$ edges, and the edges are colored so that at most $k$ colors meet at each vertex, then it has a monochromatic subgraph with $C_{GL}N^2$ edges. \end{lemma}
\section{Proof of Theorems~\ref{thm:ER2}~and~\ref{thm:ERbest}} \label{sec:erLess} Suppose $z = f(x,y)$ contains $c n^{\alpha + 1}$ points of $A\times B\times C$, where $|A| = |C| = n$ and $|B| = \tilde{c}n^\alpha$; $\tilde{c}$ and $\alpha$ will be determined later. Throughout we will use $d = \deg f$. All functions will be polynomials.
\subsection{Constructing $\widehat{f}_i$} For each of the $\tilde{c}n^\alpha$ $b_i\in B$ define $$f_i(x) = f(x, b_i).$$ Then each $f_i$ is a polynomial in $\mathbb{R}[x]$ of degree at most $d$.
\begin{lemma}\label{lem:nottoomanysame2D} If at least $d+1$ of the $f_i$ are identical, then $f(x,y) = q(x)$.\\ In particular, the conclusion of Theorems~\ref{thm:ER2}~and~\ref{thm:ERbest} holds. \end{lemma}
\begin{proof} Suppose that $f_i(x) = q(x)$ at least $d+1$ times. Then considering $F(y) = f(x,y) - q(x)$ as a polynomial in $y$ over the field $\mathbb{R}(x)$, we have $F(y)$ vanishing $d+1$ times, so $F(y) = 0$ identically. \end{proof}\noindent {\bf Assumption:} Throughout the rest of this section we will assume that at most $d$ of the $f_i$ are identical. \bigskip Let $c_1 = \min(c/2, \tilde{c}/2)$. Then at least $c_1n^{\alpha}$ of the $f_i$ are $c_1n$-rich on $A\times C$. Otherwise $z=f(x,y)$ would contain fewer than $cn^{\alpha + 1}$ points of $A\times B\times C$. We construct a graph $G$ with the $c_1n^{\alpha}$ $c_1n$-rich $f_i$ as vertices and edge set $E$ consisting of all pairs $(f_i,f_j)$.
\begin{lemma} There is a subgraph of $G$ with edge set $\widehat{E}\subset E$ of size $|\widehat{E}|\geq c_2n^{2\alpha}$, such that the following holds. There is a polynomial $k(x)$ such that for all $(f_i, f_j)\in \widehat{E}$ we can write $$f_i = \widehat{f}_i\circ k,$$ $$f_j = \widehat{f}_j\circ k,$$ and $\widehat{f}_i$ and $\widehat{f}_j$ share no non-linear common inner function.\\ The $\widehat{f}_i$ are also $c_1n$-rich on $k(A)\times C$. \end{lemma}
\begin{proof} Color each edge $(f_i, f_j)$ of $G$ with the equivalence class of a common inner function $\varphi$ of maximum degree, i.e. $f_i(x) = g(\varphi(x))$ and $f_j(x) = h(\varphi(x))$, and no such $\varphi$ of higher degree exists; two such inner functions $\varphi, \phi$ are equivalent if $\phi(x) = a\varphi(x) + b$. By Lemma~\ref{lem:decomp}, at every vertex there are at most $2^d$ colors, so by Graph Lemma \ref{lem:graph}, with $N = c_1n^{\alpha}$, there is a monochromatic subgraph with $C_{GL}N^2 = C_{GL}c_1^2n^{2\alpha}$ edges. We take $\widehat{E}$ to be the edge set of this subgraph. This means that all the $f_i$ involved in this subgraph have a common inner function $k(x)$ (actually up to equivalence, but by modifying the $\widehat{f}_i$ that is easily overcome), and no pair corresponding to an edge of $\widehat{E}$ has a common inner function of higher degree. That allows us to define the $\widehat{f}_i$ as in the lemma; they must be rich since otherwise the $f_i$ could not be rich.
\end{proof}
\subsection{Constructing $\gamma_{ij}$} For the $c_2n^{2\alpha}$ pairs $\widehat{f}_i, \widehat{f}_j$ for which $(f_i,f_j)\in \widehat{E}$, we construct the curves $$\gamma_{ij}(t) = \left(\widehat{f}_i(t), \widehat{f}_j(t)\right).$$
\begin{lemma}\label{lem:gammas}\text{ } \begin{enumerate} \item At least $c_3n^{2\alpha}$ $\gamma_{ij}$ are $c_3n$-rich on $C\times C$. \item Each $\gamma_{ij}$ is an irreducible algebraic curve of degree at most $2d$. \end{enumerate} \end{lemma}
\begin{proof} \begin{enumerate} \item We define a bipartite graph with vertex set $k(A) \cup \{\widehat{f}_i\}$, and we connect $t\in k(A)$ with $\widehat{f}_i$ if $\widehat{f}_i(t)\in C$. Since $|\widehat{E}|\geq c_2n^{2\alpha}$, the number of $\widehat{f}_i$ is at least $\sqrt{c_2}n^{\alpha}$, each of them $c_2n$-rich, so the bipartite graph has $m \ge c_2^{3/2} n^{\alpha +1}$ edges. We count the paths of length two between different $\widehat{f}_i$'s, using that $|k(A)|\leq n$: $$\#P_2 = \sum_{t\in k(A)} \binom{\deg(t)}{2} \ge |k(A)|\binom{m/|k(A)|}{2}\ge c' n^{2\alpha +1}.$$ Hence at least $c''n^{2\alpha}$ pairs $(\widehat{f}_i,\widehat{f}_j)$ share $c''n$ common neighbors $t$ in this graph. In other words, $c''n^{2\alpha}$ of the $\gamma_{ij}$ have a point in $C\times C$ for $c''n$ different $t$. It is possible that different $t$ give the same point $\gamma_{ij}(t)$, so these $\gamma_{ij}$ could have fewer than $c''n$ points in $C\times C$. However, because $\deg \widehat{f}_i \le d$, this can happen for at most $d$ different $t$ at a time, so each $\gamma_{ij}$ will certainly be $(c''/d)n$-rich. Setting $c_3 = c''/d$ we are done. \item We require the notion of the resultant of two polynomials to prove this; for details see \cite{Cox05}. Let $R(x,y)$ be the resultant with respect to $t$ (so considering $x,y$ as coefficients) of the two polynomials $x - \widehat{f}_i(t)$ and $y - \widehat{f}_j(t)$. This is an irreducible polynomial of degree $\le 2d$ with the property that $R(x,y) = 0$ if and only if there is a $t$ such that $x = \widehat{f}_i(t)$ and $y = \widehat{f}_j(t)$; in other words, $\gamma_{ij}$ is the algebraic curve $R(x,y) = 0$. \end{enumerate} \end{proof}
\subsection{Decomposing $\widehat{f}_i$}
\begin{lemma} There is a subset $S$ of $c'n^{2\alpha -1}$ of the $\gamma_{ij}$ that all coincide as point sets, and such that the set $T$ of $\widehat{f}_i$ occurring in the first coordinate of a $\gamma_{ij}\in S$ has size $c_4n^{2\alpha -1}$. \end{lemma}
\begin{proof} Since the $\gamma_{ij}$ are irreducible and have degree $\leq 2d$, we can apply the Curve Lemma \ref{lem:curve}. Thus there exists $n_0$ such that for $n>n_0$, there can be at most $C_{CL}n$ distinct $c_3n$-rich curves on $C\times C$, so $c_3n^{2\alpha}/C_{CL}n = c'n^{2\alpha -1}$ of them must coincide. Set $c_4 = c'/2d$. If fewer than $c_4n^{2\alpha -1}$ of the $\widehat{f}_i$ occurred among these coinciding $\gamma_{ij}$, then some $\widehat{f}_i$ would have to occur at least $d+1$ times, say in the first coordinate. But if $(\widehat{f}_i, \widehat{f}_j)$ and $(\widehat{f}_i, \widehat{f}_k)$ coincide, then we must have $\widehat{f}_j = \widehat{f}_k$. So we would have $d+1$ of the $\widehat{f}_i$ coinciding, hence also $d+1$ of the $f_i$, contradicting our Assumption after Lemma \ref{lem:nottoomanysame2D}. \end{proof}
\begin{lemma} There are $c_4n^{2\alpha -1}$ $f_i$ with $$f_i(x) = p(a_i k(x) + b_i)$$ where $a_i, b_i\in \mathbb{R}$ and $p\in \mathbb{R}[x]$.
\end{lemma}
\begin{proof} By the Reparametrization Lemma~\ref{lem:repar}, for each coinciding pair of curves $\gamma_{ij}$ and $\gamma_{kl}$ from $S$, we can find $p$, $\varphi_i$, and $\varphi_k$ such that $$\widehat{f}_i = p\circ\varphi_i~~~\mathrm{and}~~~\widehat{f}_k = p\circ\varphi_k.$$ Hence we have such decompositions for each pair of the $\widehat{f}_i\in T$. The $\widehat{f}_i$ were constructed so that any pair corresponding to an edge of $\widehat{E}$ has no nonlinear common inner function. That implies that the $\varphi_i$ are linear, hence invertible, which allows us to assume that all $\widehat{f}_i\in T$ can be decomposed using the same $p$. Indeed, if $\widehat{f}_i = p \circ \varphi_i = q\circ \phi_i$ and $\widehat{f}_j = q\circ \phi_j$, then $q = p\circ (\varphi_i \circ \phi_i^{-1})$, so we can write $\widehat{f}_j = p\circ (\varphi_i \circ \phi_i^{-1}\circ\phi_j)$; by repeatedly modifying the $\varphi_i$ this way we can reach all $\widehat{f}_k\in T$. Write $\varphi_i(t) = a_it + b_i$; then for the $c_4n^{2\alpha-1}$ $f_i = \widehat{f}_i\circ k$ with $\widehat{f}_i\in T$ we have $f_i = p\circ\varphi_i\circ k$. \end{proof}
\subsection{Proof of Theorem \ref{thm:ER2}} At this point we will apply the Generalized Line Lemma \ref{lem:genLine} with $\beta =0$ to obtain Theorem \ref{thm:ER2}. Then we need $2\alpha -1 = 2/3$, so we set $\alpha = 5/6$. Note that the $c_4n^{2/3}$ lines $u = \varphi_i(t) = a_it + b_i$ live on $k(A)\times p^{-1}(C)$, which is essentially an $n\times n$ cartesian product (both sets might be smaller than $n$, but we can just add arbitrary points to fill them out). They are $c_1n$-rich there, since otherwise the $f_i$ couldn't be $c_1n$-rich. We conclude that either $d+1$ of the lines $u = \varphi_i(t)$ are parallel, or $d+1$ are concurrent. Otherwise, by Lemma~\ref{lem:genLine} with $\beta = 0$ there would be fewer than $C_{GLL}n^{2/3}$ lines. But we can take $\tilde{c}$ in Theorem~\ref{thm:ER2} to be large enough so that $c_4 > C_{GLL}$. Indeed, one can easily check that each $c_i$ was an increasing unbounded function of $c_{i-1}$. By Lemma \ref{parallelcase} below, if $d+1$ of the lines are parallel, then $f$ has the additive form $f(x,y) = p(k(x)+l(y))$. By Lemma \ref{concurrentcase} below, if $d+1$ of the lines are concurrent, then $f$ has the multiplicative form $f(x,y) = p(k(x)\cdot l(y))$. That finishes the proof of Theorem \ref{thm:ER2}.
\subsection{Proof of Theorem~\ref{thm:ERbest}} \label{subsec:ER3Croot} We will now use Corollary \ref{cor:croot}, instead of Lemma \ref{lem:genLine} as above, which will result in Theorem~\ref{thm:ERbest}. We start with $\alpha = 1/2 + \varepsilon$. Then we end up with $c_4n^{2\alpha -1} = c_4n^{2\varepsilon}$ lines $u = a_i t + b_i$ which are $c_1n$-rich on an $n\times n$ cartesian product. Certainly $c_4n^{2\varepsilon} > 2(d+1)n^{\varepsilon'}$ for some $\varepsilon'>0$, so by Corollary \ref{cor:croot} with $p = q = d+1$ either $d+1$ of the lines are parallel or $d+1$ are concurrent. By Lemma \ref{parallelcase} below, the parallel case would give the additive form for $f$, and by Lemma \ref{concurrentcase} below, the concurrent case would give the multiplicative form for $f$. That finishes the proof of Theorem~\ref{thm:ERbest}.
\subsection{The parallel case}\label{subsec:ER3para}
\begin{lemma}\label{parallelcase} If $d+1$ of the lines $\varphi_i$ are parallel, then there is a polynomial $l(y)$ such that $f(x,y) = p(k(x)+l(y))$.
\end{lemma}
\begin{proof} The lines can be written as $\varphi_i(t) = at + b_i$, so by modifying $k$ we can write $f_i(x) = p(k(x) + b_i)$, for $d+1$ different $f_i$. We use the following two polynomial expansions of $f_i(x) = f(x,y_i) = p(k(x) + b_i)$: $$\sum_{l=0}^N v_l \cdot (k(x)+b_i)^l = \sum_{m=0}^Nw_m(y_i)\cdot k(x)^m.$$ The first is immediate from $p(k(x) + b_i)$; the second requires a little more thought. By Lemma~\ref{lem:alg}, there is a unique expansion of the polynomial $f$ of the form $f(x,y) = \sum_{l=0}^{D-1} c_l(k(x),y)x^l$, where $D = \deg k$. By the same lemma, we have a unique expansion $f_i(x) = \sum_{l=0}^{D-1} d_l(k(x))x^l$, so that we have $$\sum_{l=0}^{D-1} c_l(k(x),y_i)x^l = \sum_{l=0}^{D-1} d_l(k(x))x^l~~\Rightarrow~~c_l(k(x), y_i) = d_l(k(x)).$$ But since $f_i(x) = p(k(x) + b_i)$, uniqueness implies that $d_l =0$ for $l>0$, hence $c_l(k(x),y_i) = 0$ for $l>0$. Since we have this for $d+1$ different $i$, it follows that $c_l(k(x), y) = 0$ for $l>0$, so $f(x,y) = c_0(k(x),y)$, which means there is an expansion $f(x,y) = \sum w_m(y) k(x)^m$. Now plugging in $y=y_i$ gives the required expansion. Comparing the coefficients of $k(x)^{N-1}$ in the two expansions above, we get $$v_{N-1} + Nv_N b_i = w_{N-1}(y_i),$$ which implies that $b_i = \frac{1}{Nv_N} (w_{N-1}(y_i) - v_{N-1})$. If we now define the polynomial $$l(y) = \frac{1}{Nv_N} (w_{N-1}(y) - v_{N-1}),$$ we have that for $d+1$ of the $y_i$ (note that $v_l$ and $w_m$ do not depend on the choice of $y_i$) $$f(x,y_i) = p(k(x)+l(y_i)).$$ Since the degree of $f$ is $d$, this implies that $f(x,y) = p(k(x)+l(y))$. \end{proof}
\subsection{The concurrent case}\label{subsec:ER3conc}
\begin{lemma}\label{concurrentcase} If $d+1$ of the lines $\varphi_i$ are concurrent, then there are polynomials $P(t)$, $K(x)$ and $L(y)$ such that $$f(x,y) = P(K(x)\cdot L(y)).$$ \end{lemma}
\begin{proof} The lines can be written as $\varphi_i(t) = a_it + b$, so by modifying $k$ we can write $f_i(x) = p(a_i\cdot k(x))$, for $d+1$ different $f_i$. We again use two polynomial expansions of $f_i(x) = f(x,y_i) = p(a_i\cdot k(x))$: $$\sum_{l=0}^N v_l \cdot (a_i\cdot k(x))^l = \sum_{m=0}^Nw_m(y_i)\cdot k(x)^m.$$ Both are obtained in the same way as in the proof of Lemma~\ref{parallelcase}. We cannot proceed exactly as before, since $a_i$ might occur here only raised to higher powers, and we cannot take a root of a polynomial. But we can work around that as follows. Define $M$ to be the greatest common divisor of all positive exponents $m$ for which $w_m\neq 0$ in the second expansion; then we can write $M$ as an integer linear combination of these $m$, say $M = \sum \mu_m m$. Comparing the coefficients of any $k(x)^m$ with $w_m\neq 0$ in the two expansions above, we get $$a_i^m = \frac{1}{v_m}w_m(y_i),$$ which tells us that $$a_i^M = \prod (a_i^m)^{\mu_m} = L(y_i),$$ where $L(y)$ is a rational function. If we define $P(s) = p(s^{1/M})$, or equivalently $P(t^M) = p(t)$, then the definition of $M$ gives that $P(s)$ is a polynomial. We also define $K(x) = k^M(x)$. Then $$P(K(x)\cdot L(y_i)) = P(k^M(x)\cdot a_i^M) = p(k(x) \cdot a_i) = f(x,y_i).$$ Since we have this for $d+1$ of the $y_i$, we get that $f(x,y) = P(K(x) L(y))$. This also tells us that $L(y)$ is in fact a polynomial, since otherwise $f(x,y)$ could not be one. \end{proof}
\section{Proof of Theorems~\ref{thm:main}, \ref{thm:ER3}, and \ref{thm:ER4}} \label{sec:er4d} Suppose $w = f(x,y,z)$ contains $cn^{1+2\alpha}$ points of $A\times B\times C\times D$, where $|A| = |D| = n$ and $|B|= |C| = n^{\alpha}$.
For Theorem \ref{thm:main} we have $\alpha = 1$; for the other two theorems we will determine the right choice of $\alpha$ later. Throughout we will use $d = \deg f$. All functions will be polynomials. We will shorten or omit several of the proofs, because they are very similar to those in Section \ref{sec:erLess}.
\subsection{Constructing $\widehat{f}_{ij}$} For each of the $n^{2\alpha}$ points $(y_i, z_j)\in B\times C$, we cut a fibre out of the solid: $$w = f_{ij}(x) = f(x, y_i, z_j).$$
\begin{lemma}\label{nottoomanyidentical} If at least $2dn^\alpha$ of the $f_{ij}$ are identical, then $f(x,y,z) = q(x)$.\\ In particular, the conclusion of Theorems \ref{thm:main}, \ref{thm:ER3}, and \ref{thm:ER4} holds. \end{lemma}
\begin{proof} Suppose that $f_{ij}(x) = q(x)$ at least $2dn^\alpha$ times. Then for $F(y,z) = f(x,y,z) - q(x)$ and $K = \mathbb{R}(x)$, the Vanishing Lemma \ref{lem:vanish} with $m = n^\alpha$ gives $F(y,z) = 0$. \end{proof}\noindent {\bf Assumption:} Throughout the rest of this proof we will assume that fewer than $2dn^\alpha$ of the $f_{ij}$ are identical. \bigskip Let $c_1 = c/2$. Then at least $c_1n^{2\alpha}$ of the $f_{ij}$ are $c_1n$-rich on $A\times D$. Otherwise $w=f(x,y,z)$ would contain fewer than $cn^{1+2\alpha}$ points of $A\times B\times C\times D$. We construct a graph $G$ with the $c_1n^{2\alpha}$ $c_1n$-rich $f_{ij}$ as vertices and edge set $E$ consisting of the pairs $(f_{ij}, f_{kl})$.
\begin{lemma} There is a subgraph of $G$ with edge set $\widehat{E}\subset E$ of size $|\widehat{E}|\geq c_2n^{4\alpha}$, such that the following holds. There is a polynomial $k(x)$ such that for all $(f_{ij}, f_{kl})\in \widehat{E}$ we can write $$f_{ij} = \widehat{f}_{ij}\circ k,$$ $$f_{kl} = \widehat{f}_{kl}\circ k,$$ and $\widehat{f}_{ij}$ and $\widehat{f}_{kl}$ share no non-linear common inner function.\\ The $\widehat{f}_{ij}$ are also $c_2n$-rich on $k(A)\times D$. \end{lemma}
\subsection{Constructing $\gamma_{ijkl}$} For the $c_2n^{4\alpha}$ pairs $\widehat{f}_{ij}, \widehat{f}_{kl}$ for which $(f_{ij},f_{kl})\in \widehat{E}$ we construct the curves $$\gamma_{ijkl}(t) = \left(\widehat{f}_{ij}(t), \widehat{f}_{kl}(t)\right).$$
\begin{lemma}\text{ } \begin{enumerate} \item At least $c_3n^{4\alpha}$ of the $\gamma_{ijkl}$ are $c_3n$-rich on $D\times D$. \item Each $\gamma_{ijkl}$ is an irreducible algebraic curve of degree at most $2d$. \end{enumerate} \end{lemma}
\begin{proof} \begin{enumerate} \item We define a bipartite graph with vertex set $k(A) \cup \{\widehat{f}_{ij}\}$, and we connect $t\in k(A)$ with $\widehat{f}_{ij}$ if $\widehat{f}_{ij}(t)\in D$. Then this graph has $m \ge c_2^{3/2}n^{1+2\alpha}$ edges. We count the 2-paths: $$\#P_2 = \sum_{t\in k(A)} \binom{\deg(t)}{2}\geq |k(A)|\binom{m/|k(A)|}{2}\geq c' n^{1+4\alpha}.$$ Hence at least $c''n^{4\alpha}$ pairs $\widehat{f}_{ij},\widehat{f}_{kl}$ share $c''n$ common neighbors $t$ in this graph. This implies that if $c_3 = c''/d$ then $c_3n^{4\alpha}$ of the $\gamma_{ijkl}$ have at least $c_3n$ points in $D\times D$. \end{enumerate} \end{proof}
\subsection{Decomposing $\widehat{f}_{ij}$}
\begin{lemma} There is a subset of $c_4n^{4\alpha-1}$ of the $\gamma_{ijkl}$ that all coincide, and such that $c_4n^{3\alpha-1}$ of the $\widehat{f}_{ij}$ occur in these $\gamma_{ijkl}$. \end{lemma}
\begin{proof} By the Curve Lemma, for $n>n_0$, there can be at most $C_{CL}n$ distinct $c_3n$-rich curves on $D\times D$, so $c'n^{4\alpha -1}$ must coincide. Setting $c_4 = c'/2d$ gives that at least $c_4n^{3\alpha -1}$ of the $\widehat{f}_{ij}$ occur.
\end{proof}
\begin{lemma} There are $c_4n^{3\alpha-1}$ pairs $(i,j)$ for which $$f_{ij}(x) = p(a_{ij} k(x) + b_{ij})$$ where $a_{ij}, b_{ij}\in \mathbb{R}$ and $p\in\mathbb{R}[x]$. \end{lemma}
\begin{proof} For each coinciding pair of curves $\gamma_{ijkl}$ and $\gamma_{abcd}$, by the Reparametrization Lemma we can write $$\widehat{f}_{ij} = p\circ\varphi_{ij}~~~ \text{and}~~~\widehat{f}_{ab} = p\circ\varphi_{ab}.$$ By construction of the $\widehat{f}_{ij}$, the $\varphi_{ij}$ must be linear, which allows us to assume that all pairs use the same $p$. Write $\varphi_{ij}(t) = a_{ij}t + b_{ij}$; then for the $c_4n^{3\alpha-1}$ corresponding $f_{ij}$ we have $f_{ij} = p\circ\varphi_{ij}\circ k$. \end{proof}
\subsection{Proof of Theorem \ref{thm:main}} Here we set $\alpha = 1$, so we have $c_4n^2$ rich lines $u = \varphi_{ij}(t) = a_{ij}t + b_{ij}$ that are rich on the (essentially) $n\times n$ cartesian product $k(A)\times p^{-1}(D)$. We claim that either $c_5n^2$ of the lines $u = \varphi_{ij}(t)$ are parallel, or $c_5n^2$ are concurrent, counting multiplicities. By the Szemer\'edi-Trotter Theorem (\ref{thm:szem}), at most $C_{ST}n$ of the lines are distinct. By our Assumption after Lemma \ref{nottoomanyidentical}, fewer than $2dn$ are identical. This implies that for some $c'$ we can split the lines into $c'n$ classes of size at least $c'n$, such that within each class the lines are identical, and between the classes the lines are distinct. We take a representative of each class and apply the Line Lemma \ref{lem:line} to these $c'n$ representatives, telling us that $c''n$ are parallel or $c''n$ are concurrent. Taking all of the corresponding classes together gives $(c''\cdot c')n^2$ lines that are all parallel or all concurrent. By Lemma \ref{parallelcase4d} below, we only need $2dn$ parallel lines to show that $f$ has the additive form $f(x,y,z) = p(k(x)+l(y)+m(z))$, so $c''c'n^2$ will certainly suffice. Similarly, by Lemma \ref{concurrentcase4d}, if $c''c'n^2$ of the lines are concurrent, then $f$ has the multiplicative form $f(x,y,z) = p(k(x)\cdot l(y)\cdot m(z))$. That finishes the proof of Theorem \ref{thm:main}.
\subsection{Proof of Theorem \ref{thm:ER3}} We have $c_5n^{3\alpha-1}$ $c_5n$-rich lines, for an $\alpha$ to be determined below. Many of these lines may coincide, so we split them into $n^\beta$ classes of coinciding lines. The average size of a class is then $c_5n^{3\alpha - 1 - \beta}$, so for some $c'>0$ and $\varepsilon>0$ we can find a subset of $c' n^\beta$ classes that all have size at least $c'n^{3\alpha - 1 - \beta - \varepsilon}$. To apply Lemmas \ref{parallelcase4d} and \ref{concurrentcase4d} and finish the proof, we will need $2dn^\alpha$ lines that are all parallel or concurrent. To obtain these we need $\frac{2d}{c'}n^{\alpha - (3\alpha - 1 - \beta - \varepsilon)} = \frac{2d}{c'}n^{1+\beta+\varepsilon -2\alpha}$ representatives of the coinciding classes that are all parallel or concurrent, since each class has size at least $c'n^{3\alpha - 1 - \beta - \varepsilon}$. To get these representatives using Lemma \ref{lem:genLine}, we need $$c'n^{\beta} \geq \frac{2d}{c'}n^{2/3+ (1+\beta+\varepsilon -2\alpha)/3},$$ for which it suffices to have $3\beta-\varepsilon \geq 2 +1 +\beta +\varepsilon -2\alpha$, or $\beta \geq 3/2 +\varepsilon - \alpha$. On the other hand, if any of the $n^\beta$ classes contains at least $2dn^{\alpha}$ lines, then also $2dn^\alpha$ of the $f_{ij}$ would be identical, contradicting our assumption after Lemma \ref{nottoomanyidentical}.
Hence all classes are smaller than $2dn^\alpha$, which implies that $$n^\beta \geq \frac{c_5}{2d}n^{2\alpha-1},$$ hence $\beta \geq 2\alpha -1-\varepsilon$. The second inequality for $\beta$ will imply the first if $$2\alpha - 1 -\varepsilon\geq 3/2 +\varepsilon - \alpha,$$ hence $\alpha = 5/6 + \varepsilon$ will do.
\subsection{Proof of Theorem \ref{thm:ER4}} For Theorem \ref{thm:ER4}, we do the same as for Theorem \ref{thm:ER3}, except that instead of Lemma \ref{lem:genLine} we apply Corollary \ref{cor:croot}. To get the right number of parallel or concurrent lines, we set $p=q=2dn^{1+\beta+\varepsilon -2\alpha}$ in the Corollary, so we require $$c'n^\beta > (p+q)n^{\varepsilon'} = 4dn^{1+\beta+\varepsilon -2\alpha+\varepsilon'}$$ for some $\varepsilon'$. That will hold if $\beta > 1+\beta+\varepsilon -2\alpha+\varepsilon'$, or $\alpha \geq 1/2 + \varepsilon/2 + \varepsilon'/2$, which is satisfied for $\varepsilon' = \varepsilon$ and $\alpha = 1/2+\varepsilon$ as in Theorem \ref{thm:ER4}.
\subsection{The parallel case}\label{subsec:ER4para}
\begin{lemma}\label{parallelcase4d} If $2dn^\alpha$ of the lines $\varphi_{ij}$ are parallel, then there is a polynomial $r(y,z)$ such that $f(x,y,z) = p(k(x)+r(y,z))$. \end{lemma}
\begin{proof} We can write $f_{ij}(x) = p(k(x) + b_{ij})$. We use the following two polynomial expansions of $f_{ij}(x) = f(x,y_i,z_j) = p(k(x) + b_{ij})$: $$\sum_{l=0}^N v_l \cdot (k(x)+b_{ij})^l = \sum_{m=0}^Nw_m(y_i,z_j)\cdot k(x)^m.$$ The first is immediate from $p(k(x) + b_{ij})$; the second requires a little more thought. By Lemma~\ref{lem:alg}, there is a unique expansion of the polynomial $f$ of the form $f(x,y,z) = \sum_{l=0}^{D-1} c_l(k(x),y, z)x^l$, where $D = \deg k$. By the same lemma, we have a unique expansion $f_{ij}(x) = \sum_{l=0}^{D-1} d_l(k(x))x^l$, so that we have $$\sum_{l=0}^{D-1} c_l(k(x),y_i, z_j)x^l = \sum_{l=0}^{D-1} d_l(k(x))x^l~~\Rightarrow~~c_l(k(x), y_i,z_j) = d_l(k(x)).$$ But since $f_{ij}(x) = p(k(x) + b_{ij})$, uniqueness implies that $d_l =0$ for $l>0$, hence $c_l(k(x),y_i,z_j) = 0$ for $l>0$. We have this for every $y_i,z_j$ such that $\varphi_{ij}$ is one of the parallel lines. Then we have $2dn^\alpha$ zeroes of $c_l(k(x),y,z)$, so applying the Vanishing Lemma with $|B|=|C|=n^\alpha$ gives $c_l(k(x), y,z) = 0$ for $l>0$. Thus $f(x,y,z) = c_0(k(x),y,z)$, which means there is an expansion $f(x,y,z) = \sum w_m(y,z) k(x)^m$. Now plugging in $y=y_i, z = z_j$ gives the expansion required above. Comparing the coefficients of $k(x)^{N-1}$ in the two expansions above, we get $$v_{N-1} + Nv_N b_{ij} = w_{N-1}(y_i,z_j),$$ which implies that $b_{ij} = \frac{1}{Nv_N} (w_{N-1}(y_i,z_j) - v_{N-1})$. If we now define the polynomial $$r(y,z) = \frac{1}{Nv_N} (w_{N-1}(y,z) - v_{N-1}),$$ we have that for our $2dn^\alpha$ pairs $(y_i, z_j)$ (note that $v_l$ and $w_m$ do not depend on the choice of pair) $$f(x,y_i,z_j) = p(k(x)+r(y_i,z_j)).$$ By the Vanishing Lemma with $|B|=|C|=n^\alpha$, applied to $F(y,z) = f(x,y,z) - p(k(x)+r(y,z))$ over $K = \mathbb{R}(x)$, we get the desired equality $f(x,y,z) = p(k(x)+r(y,z))$. \end{proof}\noindent
\begin{lemma} There are polynomials $l$ and $m$ such that $$f(x,y,z) = p(k(x) + l(y) + m(z)).$$ \end{lemma}
\begin{proof} By applying the above with the roles of $x$ and $y$ swapped, we can also write $f(x,y,z) = P(K(y) + R(x,z))$.
Then we calculate the quotient $f_x/f_y$ (using the notation $f_x = \partial f/\partial x$) for both forms, $$\frac{f_x}{f_y} = \frac{k'(x)}{r_y(y,z)} = \frac{R_x(x,z)}{K'(y)},$$ which tells us that $r_y(y,z)$ (and $R_x(x,z)$) is independent of $z$. Integrating with respect to $y$ then gives that $r(y,z) = l(y) + m(z)$, which proves our claim. \end{proof}
\subsection{The concurrent case}\label{subsec:ER4conc}
\begin{lemma}\label{concurrentcase4d} If $2dn^\alpha$ of the lines $\varphi_{ij}$ are concurrent, then there are polynomials $P(t)$, $K(x)$ and $R(y,z)$ such that $$f(x,y,z) = P(K(x)\cdot R(y,z)).$$ \end{lemma}
\begin{proof} We can write $f_{ij}(x) = p(a_{ij}\cdot k(x))$. We again use two polynomial expansions of $f_{ij}(x) = f(x,y_i,z_j) =p(a_{ij}\cdot k(x))$: $$\sum_{l=0}^N v_l \cdot (a_{ij}\cdot k(x))^l = \sum_{m=0}^Nw_m(y_i,z_j)\cdot k(x)^m.$$ Both are obtained in the same way as in the proof of Lemma \ref{parallelcase4d}. We cannot proceed exactly as before, since $a_{ij}$ might occur here only raised to higher powers, and we cannot take a root of a polynomial. But we can work around that as follows. Define $M$ to be the greatest common divisor of all positive exponents $m$ for which $w_m\neq 0$ in the second expansion; then we can write $M$ as an integer linear combination of these $m$, say $M =\sum \mu_m m$. Comparing the coefficients of any $k(x)^m$ with $w_m\neq 0$ in the two expansions above, we get $$a_{ij}^m = \frac{1}{v_m}w_m(y_i,z_j),$$ which tells us that $$a_{ij}^M = \prod (a_{ij}^m)^{\mu_m} = R(y_i,z_j),$$ where $R(y,z)$ is a rational function. If we define $P(s) = p(s^{1/M})$, or equivalently $P(t^M) = p(t)$, then the definition of $M$ gives that $P(s)$ is a polynomial. We also define $K(x) = k^M(x)$. Then for each of the $2dn^\alpha$ pairs $y_i,z_j$ we have $$P(K(x)\cdot R(y_i,z_j)) = P(k^M(x)\cdot a_{ij}^M) = p(k(x) \cdot a_{ij}) = f(x,y_i,z_j).$$ Applying the Vanishing Lemma with $|B|=|C|=n^\alpha$ over $\mathbb{R}(x)$ to the numerator of $f(x,y,z) - P(K(x)R(y,z))$, we get that $f(x,y,z) = P(K(x) R(y,z))$. This also tells us that $R(y,z)$ is in fact a polynomial, since otherwise $f(x,y,z)$ could not be one. \end{proof} \noindent
\begin{lemma} There are polynomials $L$ and $M$ such that $f(x,y,z) = P(K(x) \cdot L(y) \cdot M(z))$. \end{lemma}
\begin{proof} By applying the above with the roles of $x$ and $y$ swapped, we can also write $f(x,y,z) = P^*(K^*(y)\cdot R^*(x,z))$. Then we calculate the quotient $f_x/f_z$ for both forms, $$\frac{f_x}{f_z} = \frac{K'(x)R(y,z)}{K(x)R_z(y,z)} = \frac{R^*_x(x,z)}{R^*_z(x,z)},$$ which tells us that $$\frac{R_z(y,z)}{R(y,z)} = \frac{\partial}{\partial z}\log(R(y,z))$$ is independent of $y$. Integrating we get that $\log(R(y,z)) = \lambda(y) + \mu(z)$, hence $$R(y,z) = e^{\lambda(y)}\cdot e^{\mu(z)} = L(y)M(z),$$ which also implies that $L(y)$ and $M(z)$ are polynomials, as desired. \end{proof} \noindent This finishes the proof.
\section{Applications and Limitations} \label{sec:apps} In this section we give some applications and limitations of the main results. We start by giving a simple condition to check whether a function has the additive or multiplicative form required in the main results. Then we give a proof of our variant of Purdy's conjecture. Finally we give a construction using parabolas that shows that the exponents in Theorem~\ref{thm:ER4} cannot be improved significantly.
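\medskip\noindent{\bf Remark.} The vanishing condition $q_f=0$ introduced in Section~\ref{check} below is easy to test in a computer algebra system. The following SymPy sketch checks it for one additive and one multiplicative example and shows that it fails for a polynomial having neither form; the test polynomials are arbitrary illustrative choices and do not come from our proofs.
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

def q(f):
    # q_f = d^2/(dx dy) log(f_x / f_y)
    return sp.simplify(sp.diff(sp.log(sp.diff(f, x) / sp.diff(f, y)), x, y))

additive       = (x**2 + y**3 + 1)**2   # p(k(x)+l(y)) with p(t) = t^2
multiplicative = (x**2 * y**3)**2 + 1   # p(k(x)l(y)) with p(t) = t^2 + 1
generic        = x**2*y + x*y**3 + x    # has neither special form

print(q(additive))        # prints 0
print(q(multiplicative))  # prints 0
print(q(generic) == 0)    # prints False: a nonzero rational function
\end{verbatim}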
\subsection{How to check if a function is additive or multiplicative}\label{check} Given a differentiable function $f(x,y):\mathbb{R}^2\to\mathbb{R}$, we define \[q_f(x,y) = \frac{\partial^2}{\partial x \partial y}\log\biggl[\frac{\partial f/\partial x}{\partial f/\partial y}\biggr].\] Suppose $f$ is of the form $f(x,y)=p(k(x)+l(y))$ or $f(x,y)=p(k(x)l(y))$, where $p,k$ and $l$ are nonconstant. Then one can check that \[q_f(x,y) = 0\] identically. So, if we have a differentiable function $f:\mathbb{R}^2\to\mathbb{R}$, and $q_f$ is not identically zero, then we know that the function does not have the additive or multiplicative form. The converse of this result also holds, although we do not need that fact here. A similar condition holds for functions $f$ of the form $f(x,y,z)=p(k(x)+l(y)+m(z))$ or $f(x,y,z)=p(k(x)l(y)m(z))$. If we define \[q_f(x,y,z) = \frac{\partial^2}{\partial x \partial y}\log\biggl[\frac{\partial f/\partial x}{\partial f/\partial y}\biggr],\] then $q_f(x,y,z) = 0$. Notice that with $f$ in the form above, \[\frac{\partial f/\partial x}{\partial f/\partial y}=\frac{k'(x)}{l'(y)}\quad\mathrm{or}\quad\frac{\partial f/\partial x}{\partial f/\partial y}=\frac{k'(x)l(y)m(z)}{k(x)l'(y)m(z)}=\frac{k'(x)l(y)}{k(x)l'(y)}\] is independent of $z$. This provides another way of checking whether a function does not have the additive or multiplicative form. Similar conditions could be checked using partial derivatives with respect to $z$. If $f(x,y,z)=p(k(x)+l(y)+m(z))$ or $f(x,y,z)=p(k(x)l(y)m(z))$ we get \[r_f(x,z) = \frac{\partial^2}{\partial x \partial z}\log\biggl[\frac{\partial f/\partial x}{\partial f/\partial z}\biggr] = 0\] and \[s_f(y,z) = \frac{\partial^2}{\partial y \partial z}\log\biggl[\frac{\partial f/\partial y}{\partial f/\partial z}\biggr] = 0.\] Note that in this case the converse does not hold. In the example in Section~\ref{subsec:parab} below $q_f=0$, $r_f=0$ and $s_f=0$, but $f$ does not have the required decomposition.
\subsection{On a conjecture of Purdy} \label{subsec:purdy} The following theorem was conjectured by G. Purdy in \cite{Bras06} and proved by Elekes and R\'onyai in \cite{Elek00}. We will use the notation $D(P,Q) = \{d(p,q) : p\in P, q\in Q\}$ for the set of distances between two point sets.
\begin{theorem} For all $c$ there is an $n_0$ such that for $n>n_0$ the following holds for any two lines $\ell_1$ and $\ell_2$ in $\mathbb{R}^2$ and sets $P_i$ of $n$ points on $\ell_i$. If $|D(P_1,P_2)|< cn$ then the two lines are parallel or orthogonal. \end{theorem}
Using Theorem~\ref{thm:ERbest} (or Theorem~\ref{thm:ER2}) we can extend it to the asymmetric case when we have fewer points on one of the lines. The proof is similar to that in \cite{Elek00}.
\begin{theorem} For every $c>0$ and $\varepsilon>0$ there is an $n_0$ such that for $n>n_0$ the following holds for any two lines $\ell_1$ and $\ell_2$ in $\mathbb{R}^2$, $P_1$ a set of $n$ points on $\ell_1$, and $P_2$ a set of $n^{1/2+\varepsilon}$ points on $\ell_2$. If $|D(P_1,P_2)|< cn$ then the two lines are parallel or orthogonal. \end{theorem}
\begin{proof} Parameterize $\ell_1$ by $x_1$ and $\ell_2$ by $x_2$, and let $X_1$ and $X_2$ represent $P_1$ and $P_2$ in this parameterization. Then the condition on the distances means by the Law of Cosines that the polynomial $f(x_1,x_2)=x_1^2+2\lambda x_1x_2 + x_2^2$ assumes $<cn$ values on $X_1\times X_2$. Then $z = f(x_1,x_2)$ contains $>c'n^{3/2 +\varepsilon}$ points of the cartesian product $X_1\times X_2\times E$ where $E = \{a^2:a\in D(P_1,P_2)\}$.
By Theorem~\ref{thm:ERbest}, this implies that $f$ has the additive or multiplicative form. Thus $q_f$, as defined in Section \ref{check}, should be identically zero. A quick calculation shows that this is only possible if $\lambda = -1,0,$ or $1$, which means that the angle between the lines is $0$ or $\pi/2$. Therefore the lines are parallel or orthogonal. \end{proof}
\subsection{Limits on the asymmetry of the cartesian product} \label{subsec:parab} In this section we show that Theorem~\ref{thm:ER4} is near-optimal. We will use the notation $[a,b] = \{a, a+1, \ldots, b-1, b\}$. Consider $$f(x,y,z) = x + (y-z)^2,$$ and let $A = D = [1,k^2]$ and $B = C = [1,k]$ for an even integer $k$. If we set $n = k^2$, then $|A| = |D| = n$ and $|B| = |C| = n^{1/2}$. We can think of the solid $w = f(x,y,z)$ as consisting of translates of the parabola $w = y^2$ from the $wy$-plane. We have $x+(y-z)^2 \in D$ when (for instance) $$x\in [1,k^2/2],\quad y \in [1,k/2] \quad \text{and}\quad z\in [1,k/2].$$ Then the solid $w = f(x,y,z)$ contains at least $\frac{1}{8}k^4 = \frac{1}{8} n^2$ points of $A\times B\times C\times D$. But the function $f(x,y,z)=(y-z)^2+x$ does not have one of the forms $p(k(x)+l(y)+m(z))$ or $P(K(x)L(y)M(z))$. Note that $q_f = 0$, $r_f = 0$ and $s_f = 0$, so we cannot use the method above to show that $f$ does not have the additive or multiplicative form. Instead we consider a degree argument. Suppose $f(x,y,z) = P(K(x)L(y)M(z))$. Since each of $P,K,L$ and $M$ must have degree at least one, we would have $\deg f\ge 3$, a contradiction. So $f$ does not have the multiplicative form. Now suppose that $f(x,y,z)=p(k(x)+l(y)+m(z))$. Then $p,k,l$ and $m$ have degree at least one and at most two. If $\deg p = 2$ then $\deg k = 1$, implying $f$ has a term of the form $cx^2$, which it doesn't. If $\deg p = 1$, then $f$ couldn't contain the term $-2yz$. So $f$ does not have the additive form either. Therefore the graph of $w=f(x,y,z)=x+(y-z)^2$ contains many points of $A\times B\times C\times D$, but $f$ cannot be written in the additive or multiplicative form. Hence any extension of Theorem~\ref{thm:main} with $|B|=|C|$ would have to have $|B|=|C|\ge cn^{1/2}$ for some constant $c>0$. Theorem~\ref{thm:ER4} supposes $|B|=|C|=n^{1/2+\varepsilon}$ for some $\varepsilon>0$, so that condition cannot be improved significantly.
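\medskip\noindent{\bf Remark.} The point count claimed for this construction is easy to confirm numerically. The following Python sketch counts the points of $A\times B\times C\times D$ on the graph of $f(x,y,z)=x+(y-z)^2$; the value of $k$ is an arbitrary illustrative choice.
\begin{verbatim}
k = 20                      # so n = k^2 = 400
n = k * k
count = sum(1
            for y in range(1, k + 1)
            for z in range(1, k + 1)
            for x in range(1, n + 1)
            if 1 <= x + (y - z)**2 <= n)
print(count, count >= n**2 / 8)   # prints 133400 True (n^2/8 = 20000)
\end{verbatim}
\bibliographystyle{plain}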
\section{Introduction} The narrow-gap semiconductors BiTe$X$ ($X$ = Cl, Br, I) have attracted considerable interest because of large Rashba-type spin-orbit splittings in their bulk and surface electronic structures \cite{Ishizaka, Eremeev_PRL, Murakawa}, which have been observed by angle-resolved photoelectron spectroscopy (ARPES) \cite{Crepaldi_PRL, Landolt_PRL,Landolt_NJP, Sakano_PRL, Moreschini} and magnetotransport measurements \cite{Martin, Bell}. The enhanced spin-splitting in these materials is driven by their non-centrosymmetric crystal structure in combination with strong atomic spin-orbit coupling and a negative crystal-field splitting of the topmost valence bands \cite{Bahramy_PRB}. The latter features have also been predicted to promote a topological insulator phase in BiTeI under application of external pressure \cite{Bahramy}. The BiTe$X$ series not only hosts the largest Rashba effect presently known among semiconductors, it also appears more suitable for possible spin-electronic applications \cite{Datta, Zutic} than artificially grown monolayer reconstructions, such as metallic surface alloys, where spin-splittings of similar magnitude can be achieved \cite{Ast, Bentmann, El-Kareh_PRL, El-Kareh_NJP}. At the surface, the non-centrosymmetric, layered unit cell of BiTe$X$ results in two possible polar terminations \cite{Crepaldi_PRL,Eremeev_NJP,Landolt_NJP, Moreschini}, Te- and $X$-terminated surfaces, which give rise to n-type or p-type band bending, respectively \cite{Crepaldi_PRL}. The surface properties may be influenced additionally by defects, as is the case for BiTeI, where bulk stacking faults induce coexisting Te- and I-terminated domains on microscopic length scales as shown by scanning tunneling microscopy (STM) \cite{Butler, Tournier, Kohsaka, Fiedler}. While the resulting lateral interfaces between surface areas of different terminations may provide interesting new physics \cite{Butler, Tournier}, the presence of multiple domains will in most instances be undesirable. For BiTeCl and BiTeBr spatially resolved surface investigations have so far been scarce \cite{Yan}. In the case of BiTeCl photoemission experiments indicate single-terminated surfaces \cite{Landolt_NJP}, in contrast to BiTeI, whereas for BiTeBr the situation is unclear. The majority of ARPES studies of BiTe$X$ point to similar Rashba-split band structures for all three compounds \cite{Sakano_PRL, Landolt_NJP, Crepaldi_PRB, Moreschini}, in agreement with theoretical predictions \cite{Eremeev_PRL, Moreschini}. However, for BiTeCl the existence of topological surface states has also been claimed based on ARPES \cite{Chen} and STM \cite{Yan}. In this work we present a combined investigation of the surface structural and electronic properties of the BiTe$X$ semiconductors. Our STM experiments show that BiTeBr and BiTeCl(0001) display single-domain surfaces with $X$- or Te-termination. The determined terrace step heights agree with the respective bulk unit cell parameters and X-ray photoemission (XPS) provides depth-dependent chemical information in line with the expected layered atomic structure. The measured core-level binding energies indicate a significant charge transfer from Bi to both $X$ and Te, in agreement with density functional theory (DFT) calculations. We systematically compare the electronic properties of Te- and $X$-terminated surfaces in terms of band bending, surface band structure, work function, atomic defects, and reaction to deposited adsorbate atoms.
\section{Methods} Our experimental setup is designed for a comprehensive analysis of the geometric and electronic properties in real and reciprocal space as described in Ref.\,\cite{Fiedler}. The system allows surface analytics by means of various techniques, i.e. LEED, SPA-LEED, STM, STS, AFM, XPS, work function and ARPES measurements in ultra-high vacuum conditions for the same sample without exposing it to air. Additional high-resolution STM measurements were performed at a separate setup with a low-temperature STM (Omicron LT-STM) at $T=5$\,K. We used a modified sample holder system, which allows us to split single crystals \textit{in situ} and to measure both corresponding surfaces of a cleave without the need to re-glue or to expose the sample to air [see Fig.~\ref{Figure1}(a)]. Thus, BiTe$X$ ($X$ = Cl, Br, I) single crystals were cleaved at room temperature along the (0001) direction at pressures below $2\cdot10^{-10}$\,mbar, revealing surfaces of about 2\,mm$\times$2\,mm on each side. A podium smaller than the sample was used to move the surface into the focal point of the electron spectrometer in order to minimize spurious signal from the sample holder. Submonolayer amounts of Cs were deposited using commercial alkali dispensers (SAES Getters). All experiments were performed at room temperature except for those carried out at the LT-STM. Tips were prepared according to Ref.\,\cite{Fiedler}. Differential conductance maps are used to obtain spatially resolved information about the sample's local density of states (DOS). For this purpose a small modulation voltage ($U_{\rm mod} = 25$\,mV) is added to the sample bias $V$ and the resulting variation of the tunneling current, $\mathrm{d}I/\mathrm{d}V$, is recorded simultaneously with the topographic, i.e.\ constant-current image. STM data were processed with the WSxM software package \cite{Horcas}. XPS measurements were done with Al K$\alpha$ radiation ($h\nu$~=~1486.6\,eV) under a photoelectron emission angle of 60\,$^\circ$ in order to enhance the surface sensitivity of the experiment. The X-ray source was not monochromatized and the spectra were satellite-corrected. The energy resolution of the XPS measurements was ca.~1~eV. ARPES data were acquired with a non-monochromatized He discharge lamp with He I$\alpha$ radiation ($h\nu$~=~21.2\,eV) and at an energy resolution of approximately 25\,meV. Work functions were determined from the secondary photoelectron cutoff with the sample held at a positive potential of 9\,V. Calibration measurements for Au(111) gave values in line with previous reports \cite{Trasatti, Rusu}. The synthesis of the charges was performed by fusing binary compounds: Bi$_2$Te$_3$ with BiCl$_3$, BiBr$_3$ and BiI$_3$, respectively. According to published data \cite{Tomokiyo, Petasch} BiTeI and BiTeBr melt congruently at 560\,$^\circ$C and 526\,$^\circ$C, while BiTeCl shows incongruent melting \cite{Petasch} at 430\,$^\circ$C with a peritectic composition around 11\,mol.\% Bi$_2$Te$_3$ +\,89 mol.\% BiCl$_3$. Therefore we have used a stoichiometric charge for BiTeI and BiTeBr, and a melt-solution system with a molar ratio Bi$_2$Te$_3$:BiCl$_3$\,=\,1:9 for the crystallization of BiTeCl. The synthesis was performed directly in the growth quartz ampoules at a temperature which is 20\,$^\circ$C above the melting point. Crystal growth was performed by the modified Bridgman method with rotating heat field \cite{CrystEngComm}.
After pulling the ampoules through the vertical temperature gradient of 15\,$^\circ$C/cm at 10\,mm/day, the furnace was switched off. Complementary first-principles calculations were performed within the framework of the density functional theory (DFT) using the projector-augmented-wave (PAW) \cite{PAW1,PAW2} basis. The generalized gradient approximation (GGA-PBE) \cite{PBE} to the exchange correlation (XC) potential as implemented in the {\sc VASP} code \cite{VASP1,VASP2} was used. Relaxed bulk lattice parameters were used. The atomic charges were estimated by means of a Bader charge analysis \cite{Bader}.
\section{Results}
\subsection{Surface morphology and bonding character}
\begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Figure1a_1f.eps} \end{center} \caption{ Crystal structure and room-temperature STM measurements for BiTe$X$. (a) Bulk unit cells of BiTeBr/I and BiTeCl and the resulting surface terminations after cleaving. The inset sketches the situation for an ideal crystal mounted between the two sample holders (black) after the cleave. STM measurements (500\,nm$\times$500\,nm) of (b) BiTeI, (c) Te-terminated and (d) Br-terminated BiTeBr, as well as (e) Te-terminated and (f) Cl-terminated BiTeCl. The gap voltage was varied from -0.05\,V to -1\,V and the tunneling current was 0.1\,nA in (b) and 0.2\,nA in (c)-(f). The outermost parts of the images are d\textit{I}/d\textit{V} maps of the areas between the lines.} \label{Figure1} \end{figure*}
Fig.~\ref{Figure1}(a) shows the unit cells of BiTe$X$. While BiTeI and BiTeBr have a unit cell of 3 atomic layers, that of BiTeCl comprises 6 layers along $z$, resulting in a unit cell height twice as large \cite{Eremeev_PRL,Shevelkov}. The inset sketches the stacking order after the cleave of an ideal single crystal, resulting in two different terminations for the two opposing surfaces. Fig.~\ref{Figure1}(b) displays a 500\,nm$\times$500\,nm STM measurement of BiTeI(0001) at 0.1\,nA tunneling current. During the scan the gap voltage was gradually decreased from -0.05\,V at the upper part of the image down to -1.0\,V at the lower part. Note that negative voltages refer to tunneling from the sample to the tip, thus reflecting the occupied DOS of the sample, which is also accessed by ARPES. Coexisting Te- and I-terminations are visible, as reported earlier \cite{Butler, Tournier, Kohsaka, Fiedler}. The outer part of the image shows the corresponding d\textit{I}/d\textit{V} map of the surface within the two white dashed lines. The Te-terminated surface shows a high DOS at -0.05\,V while at -0.3\,V the same surface appears dark in the d\textit{I}/d\textit{V} map and the I-terminated surface reveals a high intensity. This high DOS originates from the onsets of the band structures of the two different terminations, as shown in Ref.\,\cite{Fiedler}. The step edges within the same termination are around 0.7\,nm high and the ratio of Te- to I-terminated areas is roughly 50/50. Next we investigate the surface morphologies of BiTeBr and BiTeCl [see Fig.~\ref{Figure1}(c)-(f)]. The images reflect a surface area of 500\,nm$\times$500\,nm, and were obtained at 0.2\,nA tunneling current with the voltage varied from -0.05\,V to -1\,V. The surface terminations are indicated in the figures by BiTeBr-Te and BiTeBr-Br for the Te- and Br-terminated surfaces of BiTeBr and by BiTeCl-Te and BiTeCl-Cl for the Te- and Cl-terminated surfaces of BiTeCl, respectively.
Fig.~\ref{Figure1}(c) shows one side of a BiTeBr crystal split along the (0001) direction. On this surface there is no sign of a second termination like that seen in Fig.~\ref{Figure1}(b) for BiTeI. The step edges are (0.65$\pm$0.05)\,nm high, which is in agreement with the bulk unit cell height along $z$ \cite{Shevelkov}. Some adsorbates can be seen but the surface is mostly clean. An increase in the DOS close to $E_{\rm F}$ indicates that we are dealing with the Te termination of BiTeBr, as has been shown for BiTeI in Fig.~\ref{Figure1}(b) and Ref.\,\cite{Fiedler}. d\textit{I}/d\textit{V} maps taken over a larger energy range (not shown) further showed an onset of valence states at an energy of approximately -1\,eV. Fig.~\ref{Figure1}(d) shows the other side of the cleave. More adsorbates can be found on this surface, which indicates a higher chemical reactivity. The d\textit{I}/d\textit{V} map strongly deviates from the one obtained for the Te-termination. At a gap voltage of around -0.55\,V an increase in the DOS can be seen, indicating a band onset, as observed similarly for the I-termination of BiTeI in Fig.~\ref{Figure1}(b). Furthermore, the adsorbates appear dark in the d\textit{I}/d\textit{V} maps and start accumulating at the step edges before covering the terraces. The higher chemical reactivity and the determined DOS indicate that this surface is Br-terminated. For BiTeCl, observations similar to those for BiTeBr are obtained in terms of DOS and adsorbate characteristics. The STM images and d\textit{I}/d\textit{V} maps for the Te- and Cl-terminated surface are shown in Fig.~\ref{Figure1}(e) and (f), respectively, closely resembling their counterparts in BiTeBr. Notably, most of the step edges have a height of (1.25$\pm$0.05)\,nm for both terminations, matching again the height of the bulk unit cell \cite{Shevelkov}, while only 5\%--10\% of the steps have a height of $\approx$0.7\,nm, corresponding to a single BiTeCl trilayer. Our STM measurements thus reveal strikingly different surface morphologies for BiTeBr and BiTeCl as compared to BiTeI. Both compounds feature single-domain (0001) surfaces with either Te- or $X$-termination. Apparently, bulk stacking faults, giving rise to domains of different stacking order in BiTeI, are largely absent in the other two compounds. A possible explanation for this behavior could be the similar atomic radii of Te and I atoms, which might be expected to promote the formation of mixed Te/I layers during the crystal growth. Our DFT calculations indicate that the formation energy for stacking faults in the bulk is much smaller for BiTeI (1\,meV) than for BiTeBr (46\,meV) and BiTeCl (60\,meV), in line with the experimental findings. In general, BiTeBr and BiTeCl will thus be more suitable materials for spatially-averaging techniques that address the spin-polarization of the electronic bulk states. To gain further insight into the structural and chemical properties of the BiTeBr and BiTeCl(0001) surfaces we have performed XPS experiments. Fig.~\ref{Figure2}(a)-(c) shows core-level spectra directly corresponding to the different surfaces presented in Fig.~\ref{Figure1}. Comparing spectra for Te- and Br(Cl)-terminated surfaces we observe a relative shift of 200\,meV (300\,meV), which we attribute to band bending \cite{Crepaldi_PRL, Landolt_NJP, Moreschini}. The energy shifts are slightly reduced compared to values reported in Ref.\,\cite{Moreschini}, which might be due to the higher excitation energy and thus an increased probing depth in the present experiments.
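To illustrate the attenuation model underlying the following analysis, the short Python sketch below inverts the exponential damping $I(d) = I_0\exp(-d/L_{\rm eff})$ of a photoemission signal emitted at depth $d$, with $L_{\rm eff}$ the effective attenuation length along the escape path. Assuming an interlayer spacing of about $0.65\,\mathrm{nm}/3\approx 0.22$\,nm, as suggested by the measured BiTeBr step height, the damping of roughly 30\% over two atomic layers quoted below corresponds to $L_{\rm eff}\approx 1.2$\,nm, consistent in magnitude with the electron mean free path of around 1\,nm. Both the spacing and the single-exponential model are simplifying assumptions used for illustration only.
\begin{verbatim}
import math

d_layer = 0.65 / 3   # assumed interlayer spacing in nm (from the BiTeBr step height)
damping = 0.30       # ~30% damping over two atomic layers, as quoted in the text

# Effective attenuation length along the escape path implied by these numbers:
L_eff = -2 * d_layer / math.log(1 - damping)
print("implied effective attenuation length: %.2f nm" % L_eff)   # ~1.2 nm

# Expected relative intensity of a core level emitted m atomic layers deep:
for m in range(4):
    print(m, "layers deep: I/I0 = %.2f" % math.exp(-m * d_layer / L_eff))
\end{verbatim}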
\begin{figure*} \begin{centering} \includegraphics[width=0.8\textwidth]{Figure2a_2c.eps} \end{centering} \caption{X-ray photoemission data for BiTeBr in (a) and BiTeCl in (b), (c). Characteristic intensity differences in the Te and Br/Cl core level signals are observed for the different surface terminations, reflecting the changed atomic stacking orders and the finite probing depth of the experiment. Furthermore, band bending gives rise to small energy shifts between the spectra for Te-terminated and Br/Cl-terminated surfaces.} \label{Figure2} \end{figure*} \begin{table*} \begin{tabular}{cccccccc} & elem. metal [eV] & Bi$_2$Te$_3$ [eV] & BiTeCl-Cl [eV] & BiTeCl-Te [eV] & BiTeBr-Br [eV] & BiTeBr-Te [eV] & BiTeI [eV]\\ \hline \hline Bi 5d$_{5/2}$ & 24.1 & 24.6 & 25.0 & 25.3 & 25.0 & 25.2 & 25.0 \\ Te 4d$_{5/2}$ & 40.5 & 39.9 & 40.1 & 40.4 & 40.1 & 40.3 & 40.1 \\ \hline work function & & 5.1 & 6.2 & 4.5 & 6.0 & 4.7 & (5.2) \\ \end{tabular} \caption{Core level binding energies and work functions for BiTe$X$ and Bi$_2$Te$_3$. The estimated uncertainty of the measured values amounts to $\pm$0.1~eV. For comparison we also show the corresponding binding energies for elemental Bi and Te metal taken from Ref.\,\cite{Shalvoy}.} \label{Table1} \end{table*} \begin{table} \centering \begin{tabular}{cccc} & BiTeCl & BiTeBr & BiTeI \\ \hline \hline Bi & -1.09 & -1.01 & -0.91 \\ Te & +0.41 & +0.42 & +0.44 \\ $X$& +0.68 & +0.59 & +0.47 \\ \end{tabular} \caption{Calculated charge transfer based on DFT in the bulk BiTe$X$ compounds (in electrons).} \label{Table2} \end{table} Considering the peak intensities for the Te and Br(Cl) species we observe characteristic differences between the two surfaces with different terminations, resulting from the finite electron mean free path in the XPS experiment of around 1\,nm \cite{Huefner}. When going from Te- to $X$-terminated surfaces the Te signal is reduced while the $X$ signal is enhanced, directly reflecting the changed atomic stacking sequence. The spectra have been normalized to the signal of Bi, which for both terminations is expected to reside in the second atomic layer as shown in the inset of Fig.~\ref{Figure1}(a). For a quantitative estimation we assume an exponential damping of the signal, which amounts to roughly 30\% for two atomic layers under the present experimental conditions \cite{Huefner}. From the data in Fig.~\ref{Figure2}(a) we infer that the Te 4d and Br 3d signals change by 22\% and 25\%, respectively. In Fig.~\ref{Figure2}(b)-(c) the change is 30\% for the Te 4d core level and 36\% for Cl 2p. Averaged over four samples, the damping for BiTeBr is 26$\pm$5\% for Te- and 19$\pm$6\% for Br-terminated surfaces, while for BiTeCl we find 32$\pm$3\% for Te- and 24$\pm$13\% for Cl-terminated surfaces. The XPS data thus confirm the single termination and the expected termination-dependent atomic layer stacking for BiTeBr and BiTeCl. \begin{figure*} \begin{center} \includegraphics[width=0.8\textwidth]{Figure3a_3d.eps} \end{center} \caption{Angle-resolved photoemission data for BiTeBr in (a)-(b) and for BiTeCl in (c)-(d) ($h\nu =$~21.2 eV). The contrast between $E_{\rm F}$ and 0.5\,eV (indicated by horizontal lines) has been increased in (a) and (c) for better visibility of the Rashba-split band near $E_{\rm F}$. The red dotted lines serve as a guide to the eye. } \label{Figure3} \end{figure*} Table~\ref{Table1} summarizes the binding energies for the Te 4d$_{5/2}$ and Bi 5d$_{5/2}$ peaks in BiTe$X$, which contain information about the chemical bonding in the compounds \cite{Huefner}.
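As a plausibility check of the damping estimate quoted above (this sketch is not part of the original analysis), the exponential attenuation model can be evaluated directly; we assume an electron mean free path of 1\,nm and take the depth of an emitter buried by two atomic layers as two thirds of the 0.65\,nm unit cell height:
\begin{verbatim}
import numpy as np

# Assumed parameters: ~1 nm electron mean free path, and an
# interlayer spacing of one third of the 0.65 nm unit cell.
lambda_mfp = 1.0                  # mean free path in nm
d_two_layers = 2.0 * 0.65 / 3.0   # emitter depth for two layers, nm

attenuation = 1.0 - np.exp(-d_two_layers / lambda_mfp)
print(f"signal reduction: {attenuation:.0%}")
# -> ~35%, of the same order as the ~30% quoted above
\end{verbatim}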
The aforementioned band bending gives rise to small deviations between different terminations on the order of 200-300\,meV. Furthermore, when compared to the values in Bi and Te metal \cite{Shalvoy}, the Bi 5d$_{5/2}$ peaks are shifted to higher and the Te 4d$_{5/2}$ peaks to lower binding energies. The absolute shift is significantly larger for Bi than for Te. On the other hand, no clear trends along the series $X$ = Cl, Br, I are apparent. To gain a better understanding of the experimental data we have calculated by use of DFT the charge transfer in bulk BiTe$X$ as shown in Table~\ref{Table2}. As one can see, the Bi atom loses about one electron by transferring it to the Te ($\sim$0.4\,$e$) and $X$ atoms, which is in line with the experimental result. Note that among the three compounds the values for Bi vary by only 10--20\% and are basically the same for Te. This might explain the absence of clear chemical trends in the respective XPS binding energies. The considerable increase in the calculated charge transfer to $X$ along $X$ = I, Br, Cl further indicates an increasingly ionic bonding character between the $X^{-}$ and BiTe$^{+}$ layers with rising electronegativity of the halogen atoms. Additional insight into the influence of the halogen species on the bonding character may be gained by a comparison to Bi$_2$Te$_3$, which shows a layered structure similar to BiTe$X$, with a single Bi layer residing between two Te layers (see e.g. Ref.~\cite{Kuznetsov}). For this compound the chemical shift of the Bi 5d$_{5/2}$ line is considerably reduced (see Table~\ref{Table1}). This points to significant differences between BiTe$X$ and Bi$_2$Te$_3$, for which the bonding is usually assumed to be dominated by covalent contributions \cite{Wagner}. Table~\ref{Table1} also displays work functions for BiTe$X$ as determined by the secondary photoelectron cutoff. For $X$\,=\,Cl, Br large differences above 1\,eV between $X$- and Te-terminated surfaces are observed, in quantitative agreement with a recent STM study of the local work function on BiTeI(0001) \cite{Kohsaka}. This finding may indeed be understood in terms of an ionic bonding between $X^{-}$ and BiTe$^{+}$ layers creating opposite dipoles near the surface depending on termination \cite{Shevelkov, Eremeev_NJP, Crepaldi_PRL, Tournier, Moreschini, Butler, Ishizaka}. Furthermore, the larger calculated charge transfer in BiTeCl compared to BiTeBr is in line with the increased work function difference between the two terminations observed experimentally. The work function for a Bi$_2$Te$_3$(0001) surface, which is terminated by a Te layer \cite{Kuznetsov}, is considerably larger than for the Te-terminated BiTeBr and BiTeCl surfaces, again pointing to a strong effect of the halogen atoms on the microscopic charge distribution. Surprisingly, for BiTeI only one cutoff could be observed in our spectra despite the presence of Te- and I-terminated surface areas. The Te and I domains of BiTeI are on the order of 100\,nm \cite{Fiedler} and may be small enough to result in a mixed work function when measured by the secondary electron cutoff technique. The corresponding work function of 5.2~eV is given in brackets in Table~\ref{Table1} and lies in between the values found for Te- and I-terminated surface areas by STM \cite{Kohsaka}. \subsection{Surface electronic structure} \begin{figure*} \begin{centering} \includegraphics[width=0.8\textwidth]{Figure4a_4f.eps} \end{centering} \caption{Effect of Cs-adsorption on BiTeCl(0001).
(a) and (d) show core level spectra measured before and after Cs deposition on a Te- and a Cl-terminated surface, respectively. Corresponding valence band spectra taken at the $\bar\Gamma$-point are shown in (b) and (e) ($h\nu =$~21.2 eV). STM images and d\textit{I}/d\textit{V} maps acquired after deposition of Cs on Te- and Cl-terminated BiTeCl are given in (c) and (f).} \label{Figure4} \end{figure*} Fig.~\ref{Figure3} shows ARPES data obtained for BiTeBr and BiTeCl(0001) surfaces. The band structures vary greatly between Te- and Br/Cl-terminated surfaces, but, for a given termination, are similar for both materials. This is in agreement with previous results \cite{Moreschini}. On the Te-terminated surface we observe a Rashba-split band close to the Fermi level that derives from the conduction band bottom and the onset of valence band states at a binding energy of approximately 1\,eV. We note that only one set of parabolic bands is visible in our data whereas previous studies observed two to three sets of bands \cite{Sakano_PRL,Crepaldi_PRB}. In Refs.~\cite{Sakano_PRL,Crepaldi_PRB} the lowest detected bands have their minima below -0.4\,eV, while in our case they lie at roughly -0.2\,eV. This could point to a different n-type doping at the surface or in the bulk. Another possible explanation is strong cross-section effects as a function of excitation energy, which were reported recently \cite{Crepaldi_PRB}. For the Br/Cl-terminated surface, conduction band states do not appear at the Fermi level due to p-type band bending, and no surface states emerge near the valence band, in agreement with earlier ARPES measurements on BiTeCl but in contradiction with a theoretical prediction \cite{Landolt_NJP}. The onset of spectral weight derived from the valence band lies at binding energies of approximately 0.7\,-\,0.8\,eV. The electronic structure determined by ARPES is in fair agreement with the d\textit{I}/d\textit{V} maps in Fig.~\ref{Figure1}, concerning, e.g., the presence or absence of surface states at the Fermi level depending on termination. In accordance with previous findings for BiTeI we observe significant time-dependent shifts to higher binding energies in the electronic structure of the $X$-terminated surfaces, while these are much reduced for the Te-termination \cite{Fiedler}. This can be attributed to residual gas adsorption that is enhanced for the $X$-terminations, as already suggested by our STM data. More rapid energy shifts were observed during operation of the He lamp, possibly as a result of hydrogen adsorption, which might explain the discrepancy between the valence band offsets determined by ARPES (Fig.~\ref{Figure3}) and by the d\textit{I}/d\textit{V} maps in Fig.~\ref{Figure1}, as well as the absence of the surface states on the $X$ terminations. Similar to the XPS spectra in Fig.~\ref{Figure2}, the ARPES data in Fig.~\ref{Figure3} also reflect the complete surface area of our samples because the spot sizes of the light sources exceed the lateral sample dimensions. The results therefore confirm the single termination of BiTeBr and BiTeCl on a macroscopic scale, in line with the STM data in Fig.~\ref{Figure1}. This excludes any considerable admixture of different crystal phases. The measured band structures show no topological surface state that would bridge the gap between valence and conduction bands, excluding a possible topological insulator phase in BiTeCl \cite{Chen,Yan}.
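For orientation, the Rashba-split dispersion discussed above can be compared with the minimal free-electron Rashba model $E_\pm(k)=\hbar^2k^2/2m^*\pm\alpha_{\rm R}|k|$. The following sketch is purely illustrative and is not a fit to the present data; the parameter values are merely of the order reported for BiTeI in the literature:
\begin{verbatim}
import numpy as np

hbar2_over_me = 7.62  # hbar^2/m_e in eV*Angstrom^2
m_eff = 0.1           # assumed effective mass in units of m_e
alpha_R = 3.8         # assumed Rashba parameter in eV*Angstrom

k = np.linspace(-0.15, 0.15, 301)        # momentum in 1/Angstrom
kinetic = hbar2_over_me * k**2 / (2.0 * m_eff)
E_lower = kinetic - alpha_R * np.abs(k)  # lower Rashba branch

k_R = alpha_R * m_eff / hbar2_over_me    # momentum offset of minima
E_R = alpha_R**2 * m_eff / (2.0 * hbar2_over_me)  # Rashba energy
print(f"k_R = {k_R:.3f} 1/A, E_R = {1e3*E_R:.0f} meV")
print(f"numerical minimum of lower branch: {E_lower.min():.3f} eV")
\end{verbatim}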
\begin{figure*} \includegraphics[width=0.75\textwidth]{Figure5a_5f.eps} \caption{LT-STM measurements; all scans were performed at $T$\,=\,5\,K, $V$\,=\,1\,V and $I$\,=\,10\,pA. (a) The Te termination of BiTeBr shows the lowest defect density of the BiTe$X$ family. (b) Zoom-in of (a) at the green square; mainly one type of surface defect and three different types of third-layer defects (with an additional variation) are found, while no second-layer defects could be identified. (c) Te termination of BiTeCl; the defect density is the highest of the BiTe$X$ compounds. (d) Zoom-in of (c); at least two different types of defects can be found, while others might be covered. (e) Side view of a hard-ball sketch of BiTeBr-Te. Second (2nd) and third (3rd) layer defects and their effect on nearest-neighbor atoms are indicated schematically. (f) Top view of a hard-ball sketch of BiTeBr-Te. 2nd-layer defects would result in three neighboring Te atoms with different contrast, while 3rd-layer defects mainly affect three next-nearest-neighbor Te atoms, as can be seen for defect C in Fig.~\ref{Figure5}(b).} \label{Figure5} \end{figure*} Since the electronic structure of BiTe$X$ near the surface is highly termination-dependent, it is of interest to investigate additional possibilities to modify the surface electronic properties. Fig.~\ref{Figure4} summarizes the influence of Cs adsorption on the surfaces of BiTeCl. Surprisingly, we observe energy shifts in the spectra in opposite directions for the two terminations: While for the Cl-terminated surface the features shift to higher binding energy, as expected for adsorption of alkali species \cite{Crepaldi_PRL, Fiedler, Seibel}, they shift to lower binding energy for the Te-terminated surface. This trend is observed in the valence band [see Fig.~\ref{Figure4}(b),(e)] and in the core levels [see Fig.~\ref{Figure4}(a),(d)]. The shift to lower binding energy on the Te-terminated surface is rather unusual and may occur in the present case due to clustering of the Cs adsorbates, as observed by STM in Fig.~\ref{Figure4}(f). In Ref.\,\cite{Fiedler} we showed for BiTeI that the diffusion length of Cs atoms at room temperature is considerably higher for Te- than for I-terminated surfaces, which could explain the strong clustering observed in Fig.~\ref{Figure4}(f). For the Cl-termination the appearance of the Cs-induced structures in STM is different and reveals flatter areas with reduced d\textit{I}/d\textit{V} signal (see Fig.~\ref{Figure4}(c)). As seen in Fig.~\ref{Figure4}(b), the conduction band minimum shows up below the Fermi level upon Cs deposition on the Cl-terminated surface, indicating that it is located slightly above the Fermi level for the pristine surface. In summary, the results indicate that the surface termination can considerably affect the adsorption behavior of adatoms and the resulting influence on the electronic structure, which might be of relevance, e.g., for interfacing BiTe$X$ with other materials. Effects similar to those presented here for Cs/BiTeCl were also observed for Cs/BiTeBr (not shown), namely an energy shift to higher binding energies on the Br-termination and a clustering of Cs on the Te-termination in combination with an energy shift to lower binding energies. \subsection{Atomic defects} After identifying the surface termination, we re-glued the samples with a top-post and moved them to a separate LT-STM, operated at $T$\,=\,5\,K, to cleave them again.
Fig.~\ref{Figure5} shows data obtained at a positive gap voltage, usually resulting in increased (decreased) contrast for defects that act as electron donors (acceptors) \cite{Jiang}. If we assume that the sample consists of only three elements, for example Bi, Te and Br, three kinds of defects may appear, e.g. in the Br layer: a vacancy, a Te antisite and a Bi antisite. We expect that the electronegativity behaves as Bi\,\textless\,Te\,\textless\,Br (as shown in our DFT calculations) and that charge between two neighboring atoms is transferred from the one with the lower to the one with the higher electronegativity. The atomic radii behave as Bi\,\textgreater\,Te\,\textgreater\,Br. One can assume that it is more likely for a vacancy to be filled by a smaller atom than for a larger atom to form an antisite. In another publication we showed a 400\,nm$^{2}$ scan of the Te termination of BiTeI \cite{Fiedler}, which revealed defect densities of roughly 7.5/(100\,nm$^{2}$) in the third layer (I) and 2.5/(100\,nm$^{2}$) in the first layer (Te). Fig.~\ref{Figure5}(a) shows the Te termination of BiTeBr (scan area 75\,nm$\times$75\,nm) measured at 1\,V gap voltage and 10\,pA tunneling current. With the same method \cite{Jiang}, we can identify defect densities of about 2.5/(100\,nm$^{2}$) in the third layer (Br) and 1.3/(100\,nm$^{2}$) in the first layer (Te). No defects in the second layer (Bi) have been found. Adsorbates, marked by a black arrow, appear to be around 2.5\,nm high and vary in shape, while defects labeled (A) are only 25\,pm high and 1\,nm in diameter. They show an increased contrast, and in the zoom-in in Fig.~\ref{Figure5}(b) one further recognizes that the atoms around the defect center appear darker. This is an indication of a local charge transfer from the surrounding to the defect atom. Defect (B) shows a reduced DOS, indicating a charge transfer from the defect to the surrounding. Comparing the defects (A) and (B) in terms of total numbers and relative contrast, we conclude that (A) is a Br antisite while (B) is most likely a Bi antisite or a vacancy. Now we analyze the three different third-layer defects in terms of total number and relative contrast. Defect (C) appears most often and features the highest contrast. Since the third layer of the Te termination of BiTeBr is Br, having the smallest atomic radius and largest electronegativity, a Br vacancy could be a reasonable candidate. Furthermore, the basic structure of defect (D) is the same as that of (C), with an additional atom on top. A possible explanation is a Br atom which remains on the surface after the cleaving process. Defect (E) appears less often than (C) but more often than defect (F) and has the lowest contrast. The atomic radius of Br is closer to that of Te than to that of Bi, which points to a Te antisite in the Br layer. The weak contrast could also be due to the smaller difference in electronegativity between Te and Br compared to Bi. (F) is the defect that appears most rarely, which may indicate a Bi antisite in the Br layer. The high contrast contradicts this assumption, but a closer comparison between (C) and (F) shows an inversion of the contrast. While the center of defect (C) shows a higher DOS than the direct surrounding, for (F) the situation is reversed: a low intensity in the center with a bright surrounding. If we expect a Bi antisite in the Br layer, the Bi would donate an electron, which would result in a higher DOS at the location of the defect \cite{Jiang}.
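For transparency, the conversion from raw defect counts to the densities quoted above is simple bookkeeping; in the sketch below the counts are hypothetical, chosen only so that they reproduce the quoted numbers for the 75\,nm$\times$75\,nm BiTeBr-Te scan:
\begin{verbatim}
# Hypothetical raw counts consistent with the densities given above.
scan_area_nm2 = 75.0 * 75.0

def per_100_nm2(count):
    return count / (scan_area_nm2 / 100.0)

counts = {"third layer (Br)": 141,   # -> ~2.5 per (100 nm^2)
          "first layer (Te)": 73,    # -> ~1.3 per (100 nm^2)
          "second layer (Bi)": 0}
for layer, n in counts.items():
    print(f"{layer}: {per_100_nm2(n):.1f} defects per 100 nm^2")
\end{verbatim}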
Also the center of defect (E) shows a dark contrast with a brighter surrounding, which would be in line with our assumptions, since both Bi and Te are less electronegative than Br, so they would act as electron donors. Fig.~\ref{Figure5}(e) and (f) provide side- and top-view sketches of particular atomic defects in the second and third atomic layers, respectively. While a defect in a certain layer affects nearest-neighbor (NN) atoms, the resulting pattern on the surface gets more extended the deeper the defect is located. A second-layer defect (2nd) would result in a contrast change of three NN atoms on the surface. A third-layer defect (3rd) results in a contrast change of three next-nearest-neighbor surface atoms, as can be seen for defect C in Fig.~\ref{Figure5}(b). Defects like E and F appear when the third-layer defect (Br) influences the NN atoms (Bi, 2nd layer) differently, e.g. acting as an electron donor instead of an electron acceptor. The result is that each of the three neighboring Bi atoms acts like a 2nd-layer defect, in turn influencing three neighboring Te atoms each. Like on BiTeI \cite{Fiedler}, no defects below the third layer could be found, possibly due to the van der Waals gap. The whole surface seems to be corrugated, as can be seen in the bottom part of Fig.~\ref{Figure5}(a) at the dark and bright areas, which might be the result of screw dislocations. If we compare the Te terminations of BiTeBr and BiTeI, the defects E and F of Fig.~\ref{Figure5}(a) are very similar to the defects E and F from Fig.\,2 in Ref.\,\cite{Fiedler}, which could also be Te and Bi antisites. The defect density in BiTeCl [Fig.~\ref{Figure5}(c)] is much higher than in BiTeBr. It is difficult to find a vacancy in the first layer, but adsorbates (black arrow) and antisites (A) can frequently be found. Fig.~\ref{Figure5}(d) is the magnified view of the blue-framed square shown in Fig.~\ref{Figure5}(c). It is hard to identify specific defects, but (G) and (H) probably represent different third-layer defects, most likely a vacancy and a Te antisite, respectively. So far measurements in the LT-STM were only successful for the Te-terminated surfaces of BiTeBr and BiTeCl. However, third-layer defects seen on the Te termination should be equivalent to first-layer defects of the $X$ termination, as long as they are not induced by the cleaving process. This would mean, at least for BiTeBr, that the Bi layer is almost free of defects and that the Te layer has fewer defects than the Br layer. \section{Discussion} Comparing the three BiTe$X$ compounds, the most obvious difference is the presence ($X$ = I) or absence ($X$ = Cl, Br) of stacking faults in the bulk crystal structure, resulting in surfaces with mixed or single terminations, respectively. On the atomic scale, however, BiTeCl stands out with a considerably larger defect density than the two other compounds. Hence, in this respect BiTeBr currently appears to be the material with the most homogeneous structural properties. This finding nicely complements comparative studies of the surface electronic properties of BiTe$X$ that suggest BiTeBr as the best candidate for possible future applications \cite{Eremeev_NJP,Moreschini}. We further note that a possible migration of Bi atoms into the topmost Te-layer was speculated to occur in all three BiTe$X$ compounds based on the observation of a second component in the Bi $5d$ core level signal for Te-terminated surfaces \cite{Moreschini}.
In our STM measurements for BiTeBr, however, such defects involving the first (Te) and the second (Bi) layer are not found. Consistent with this, no additional component in the Bi core level spectra is observed in the present study either, in agreement with a previous report on BiTeCl \cite{Landolt_NJP}. The role of structural defects is furthermore important for a basic understanding of the electronic properties in BiTe$X$. For BiTeCl, the lift-off during the cleaving process of a thin free-standing layer (around 1 unit cell) that then remains loosely on the crystal surface has been proposed to give rise to the Rashba-split surface bands observed in ARPES and to mask the presence of a topological state on the intrinsic surface \cite{Chen}. This scenario is not supported by the present combined STM and ARPES results, which show step edge heights of the surface terraces matching the bulk unit cell and, at the same time, provide no indication of topological surface bands. It is furthermore worth noting that, while the atomic defect density observed here in STM is considerably higher for BiTeCl than for BiTeBr, the quality of the ARPES data turns out to be comparable and also the measured band structures are very similar. This observation is in contrast to a recent investigation of BiTeCl that concluded that the electronic structure changes qualitatively depending on the amount of defects near the surface \cite{Yan}. The broken inversion symmetry in BiTe$X$ in combination with the high electronegativity of the halogen atoms is assumed to induce a net dipole moment in the bulk unit cell \cite{Chen,Kohsaka} that, in turn, gives rise to n- or p-type band bending at the surface depending on termination \cite{Crepaldi_PRL}. The proposed microscopic picture of the charge distribution is often based on a covalently bound (BiTe)$^{+}$ bilayer that couples ionically to the adjacent $X^{-}$ layer \cite{Shevelkov,Crepaldi_PRL,Moreschini,Butler}. However, the bonding character has also been viewed as ionic for both Bi-Te and Bi-$X$, based on the fact that the valence (conduction) band is for the most part Te/$X$ (Bi) derived, which indicates significant charge transfer from Bi to Te and $X$ \cite{Zhu}. In some calculations even a larger charge transfer to Te than to $X$ has been obtained \cite{Chen, Ma}. Direct experimental information on this issue has so far been scarce. The present XPS measurements indeed point to a substantial charge donation from Bi to Te and $X$, which is in line with our first-principles calculations of the local atomic charges. On the other hand, the large work function differences between Te- and $X$-terminated surfaces confirm the presence of a dipole moment in the unit cell and, thus, support the view of a (BiTe)$^{+}$ block with positive net charge that forms a polar bond with the $X^{-}$ layer. \section{Summary} We have presented a comparative study of the structural and electronic surface properties of the non-centrosymmetric giant-Rashba semiconductors BiTe$X$(0001) ($X$ = Cl, Br, I). Cleaving of single-crystalline samples exposes macroscopically homogeneous surfaces with Te- and $X$-termination for BiTeCl and BiTeBr, in contrast to BiTeI where bulk stacking faults are known to give rise to mixed surface terminations. STM and XPS data confirm the unit cell heights and atomic stacking orders that are expected from the bulk crystal structure.
The electronic band structures measured by ARPES differ considerably depending on surface termination, but in no case are topological surface states observed. The chemical bonding in BiTe$X$ is found to be characterized by substantial charge transfer from Bi to Te and $X$. However, based on work function measurements we also obtain evidence for ionic bonding between (BiTe)$^{+}$ bilayers and $X^{-}$ layers, with the polarity of the bond increasing with rising electronegativity of the halogen atom. \section{Acknowledgements} This work was financially supported by the Deutsche Forschungsgemeinschaft through FOR1162 and partly by the Ministry of Education and Science of the Russian Federation (Grant No. 2.8575.2013), the Russian Foundation for Basic Research (Grants No. 15-02-01797, 15-02-02717) and Saint Petersburg State University (project 11.50.202.2015). \vspace{2ex}
\section{Introduction} Networks of MIMO (Multi-Input Multi-Output) enabled nodes can use advanced eigen-beamforming and beamnulling techniques to enable concurrent communications and increase overall network throughput. This technique is loosely referred to as space division multiple access (SDMA), and several medium access control (MAC) protocols have appeared in the literature that can deliver concurrent transmissions in an Ad Hoc network of multi-antenna, MIMO-enabled nodes \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC, 4_nullhoc, 5_MIMA_MAC}. Although SDMA and concurrent links have been well studied in cellular networks (see \cite{VT_SDMA} and the references therein), they still pose a challenging problem in Ad Hoc networks. Initially, SDMA and concurrent links were utilized in Ad Hoc networks via a simple abstract model called Degree of Freedom (DOF) \cite{6_Wireless_Comm_David, COOP_MAC}. This model uses the number of antennas to represent the number of concurrent links in the network. It assumes that the concurrent links are perfectly separated and do not interfere with one another. As such, the DOF model ignores all physical layer (PHY) impairments. At the same time, using TX/RX beamforming, the SPACEMAC, MIMAMAC and NullHoc protocols \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC, 4_nullhoc, 5_MIMA_MAC} have been proposed to support concurrent links in Ad Hoc networks. These protocols assume that the first node to ``win'' the contention window will use an omnidirectional radiation pattern, while other, secondary, users will use TX beamforming to ensure that a newly accessing link does not introduce any interference to existing links. As a result, the throughput of existing links is not affected, and additional network throughput can be gained from the newly formed concurrent links. Although this idea works well under an ideal MIMO system model, its performance is significantly affected by physical layer constraints and imperfections (e.g., channel estimation error, absence of link adaptation, etc.). In this paper we focus on the concurrent transmission window within a generic SDMA-MAC that is similar in its construction to SPACEMAC and NullHoc. These MACs were proposed for Ad Hoc networks and typically use signaling during the contention window to determine the TX {\&} RX beam patterns to be used during the concurrent transmission windows. They typically make idealized and simplified assumptions that will impact their performance during the concurrent transmission window. In this paper our aim is to migrate an idealized SDMA-MAC system, such as the ones found in \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC, 4_nullhoc}, towards a more realistic one that incorporates (a) channel estimation error, (b) the use of a more practical MMSE detection algorithm, (c) incorporation of link adaptation, and (d) combined TX and RX beamforming techniques. Our study uses the generic SDMA-MAC protocol presented in section II as a baseline, but the results could be easily extended to other MACs with a similar structure. For each of the elements (a) through (d) we compare the performance of the baseline SDMA-MAC during the concurrent transmission window with and without the proposed modification. We then combine all the changes together and compare the performance of the resulting ``practical'' MAC with the baseline system. The paper is organized as follows. Section II introduces our system model, the baseline SDMA-MAC, and the simulation setup.
Section III describes each of the four elements of our proposed modifications and the associated performance gain/loss for each element in isolation. In section IV we provide side-by-side comparisons of the baseline concurrent SDMA-MAC, the realistic variant of the concurrent SDMA-MAC which includes elements a-d, and a non-concurrent MAC that utilizes MIMO links. The paper is then concluded in Section V. \section{System Description} This paper focuses on a single-hop Ad Hoc network, where each node is within the transmission range of all other nodes. There are a total of $K$ concurrent links simultaneously transmitting in the network, labeled as link $L_1$ to link $L_K$. The TX node and RX node involved in link $L_q$ are denoted as $T_q$ and $R_q$, respectively. Every node is equipped with $N_A$ antennas, and all packets are modulated using OFDM (Orthogonal Frequency Division Multiplexing), where the number of subcarriers is $N_C$. The TX power per node is the same and is denoted by $P_T$. The fast fading channel from the TX node $T_q$ to the RX node $R_q$ at the $i$th subcarrier is ${\rm {\bf H}}_{R_q ,T_q } (i)$, which is an $N_A \times N_A $ matrix of complex Gaussian random variables with zero mean and unit variance. $G_{R_q ,T_q } $ is the path loss from node $T_q $ to node $R_q $. For simplicity, we assume that each link uses a single spatial stream. However, our discussions can be easily generalized to other cases where some links might use multiple spatial streams. We denote the power normalized $N_A \times 1$ TX vector at the $i$th subcarrier of node $T_q $ as ${\rm {\bf W}}_{T_q } (i)$ (power normalized implies that ${\rm {\bf W}}_{T_q }^H (i){\rm {\bf W}}_{T_q } (i) = 1)$. Similarly, the RX vector at the $i$th subcarrier of node $R_q $ is ${\rm {\bf W}}_{R_q } (i)$ subject to ${\rm {\bf W}}_{R_q }^H (i){\rm {\bf W}}_{R_q } (i) = 1$. Also, the transmitted QAM symbol at each subcarrier has zero mean and unit variance. Finally, we use ${\rm {\bf A}}(i)$ to represent the matrix corresponding to the $i$th subcarrier, and ${\rm {\bf A}}(i,j)$ is the $j$th column of matrix ${\rm {\bf A}}(i)$. $[ \cdot ]^H$ and $[ \cdot ]^T$ denote the Hermitian transpose and the transpose, respectively. \subsection{Overview of the Generic SDMA-MAC} Our baseline SDMA-MAC is designed to represent a class of MACs such as SPACEMAC \cite{1_SPACEMAC, 2_MIMOMAN} and NullHoc \cite{3_NULLHOC, 4_nullhoc}. It is built on the principle that links have an access hierarchy, in that a newly accessing link should cause no interference to the existing links. This process is described mathematically in the following subsection. \subsection{Mathematical Description of a Generic SDMA-MAC} Let the access order of $K$ concurrent links in the network range from $L_1 $ (link 1) to $L_K $ (link $K$). Suppose the first $(Q - 1)$ links have already accessed the channel, and consider the access process of link $L_Q $, $1 \le Q \le K$. According to our generic SDMA-MAC protocol, during the concurrent transmission window, link $L_Q $ should use a TX vector, ${\rm {\bf W}}_{T_Q } (i)$, that is orthogonal to the effective channels seen by the existing links' receivers, or equivalently: \begin{eqnarray} \label{eq1} {\sqrt {P_T G_{R_q ,T_Q } / N_C } {\rm {\bf W}}_{R_q }^H (i){\rm {\bf H}}_{R_q ,T_Q } (i)}{\rm {\bf W}}_{T_Q }(i)=0, \nonumber\\ 1 \le q \le (Q - 1).
\end{eqnarray} To calculate the TX vector, ${\rm {\bf W}}_{T_Q } (i)$, we start with ${\rm {\bf H}}_{{\rm intf},T_Q } (i)$, which represents all interference channels from node $T_Q $ to nodes $(R_1 ,R_2 ,...,R_{(Q - 1)} )$. ${\rm {\bf H}}_{{\rm intf},T_Q } (i)$ is an $N_A \times (Q - 1)$ matrix, whose $q^{\rm{th}}$ column is: \begin{align} \label{eq2} {\rm {\bf H}}_{{\rm intf},T_Q }^{{\rm col}} (i,q) =& \left\{ {\sqrt {P_T G_{R_q ,T_Q } / N_C } {\rm {\bf W}}_{R_q }^H (i){\rm {\bf H}}_{R_q ,T_Q } (i)} \right\}^H,\notag\\ &1 \le q \le (Q - 1). \end{align} Node $T_Q $ then runs a Singular Value Decomposition (SVD) on ${\rm {\bf H}}_{{\rm intf},T_Q } (i){\rm {\bf H}}_{{\rm intf},T_Q }^H (i)$. Assuming non-increasing order of the eigenvalues in the SVD result, node $T_Q $'s TX vector is calculated as: \begin{eqnarray} \label{eq3} &&{\rm {\bf H}}_{{\rm intf},T_Q } (i){\rm {\bf H}}_{{\rm intf},T_Q }^H (i) = {\rm {\bf U}}_{T_Q } (i){\rm {\bf \Lambda}} _{T_Q } (i){\rm {\bf U}}_{T_Q }^H (i), \\ &&{\rm {\bf W}}_{T_Q } (i) = {\rm {\bf U}}_{T_Q } (i,N_A ). \end{eqnarray} Here ${\rm {\bf U}}_{T_Q } (i,N_A )$ is the $N_A^{\rm th}$ column of the matrix ${\rm {\bf U}}_{T_Q } (i)$. Next we calculate ${\rm {\bf W}}_{R_Q } (i)$ based on both the desired channel coming from node $T_Q $ and the interference channels coming from nodes $(T_1 ,T_2 ,...,T_{(Q - 1)} )$. Here the interference channel from $T_q $ is denoted as $\sqrt {P_T G_{R_Q ,T_q } / N_C } {\rm {\bf H}}_{R_Q ,T_q } (i){\rm {\bf W}}_{T_q } (i)$ with $1 \le q \le (Q - 1)$. The desired channel from $T_Q $ is $\sqrt {P_T G_{R_Q ,T_Q } / N_C } {\rm {\bf H}}_{R_Q ,T_Q } (i){\rm {\bf W}}_{T_Q } (i)$. Node $R_Q $'s RX vector ${\rm {\bf W}}_{R_Q } (i)$ can be derived using a MIMO detection algorithm (zero-forcing is used in the generic SDMA-MAC). For details relating to link contention, handshaking, and channel information exchange, the reader is referred to \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC}. These are not considered here as the focus of this work is the achievable real-world performance of the concurrent SDMA-MAC during the data transmission phase of the protocol. Admittedly the structure and mechanism of the contention windows, RTS, CTS, etc., will impact the overall performance of the network. However, in the interest of maintaining focus we have chosen to defer these issues to a possible follow-on contribution. Therefore care must be taken to incorporate all MAC-specific overheads when translating the results to estimate MAC efficiency or throughput performance. At this juncture it is worth introducing some underlying assumptions or limitations in our generic SDMA-MAC which also appear in SPACEMAC \cite{1_SPACEMAC, 2_MIMOMAN} and NullHoc \cite{3_NULLHOC, 4_nullhoc}. \begin{enumerate} \item SDMA-MAC often assumes perfect channel estimation in the system \cite{1_SPACEMAC, 2_MIMOMAN}. \item TX and RX vectors are calculated using the zero-forcing or SVD based algorithm \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC, 4_nullhoc}. \item Each link simply uses a fixed modulation scheme, specifically, the 2\,Mbps mode in 802.11b. Multi-rate capabilities embedded within the concurrent links are not fully utilized. \item TX vectors are calculated to minimize the resultant interference on the existing links. However, the SNR of the desired communication is not optimized. \item Simulations in SPACEMAC and NullHoc consider at most $N_A $ concurrent links, and the capability of supporting more than $N_A $ links is not evaluated.
\end{enumerate} \subsection{Description of the Simulation Environment and Metric Used} Our simulations are conducted in a single-hop Ad Hoc network, where all concurrent links are randomly and uniformly placed in a rectangular area of 200\,m by 200\,m. Each node is equipped with $N_A=4$ antennas and uses a single spatial stream. The system bandwidth, $W$, is assumed to be $W=20$\,MHz. The modulation is assumed to be OFDM with $N_C=64$ subcarriers and the guard interval is $\rho_G=1/4$. We assume no power control in the network with the total TX power per node $P_T=25$\,dBm. Power decay between any two nodes is calculated according to the simplified path loss model \cite{6_Wireless_Comm_David} with an exponent of 3, $d_0= 1$\,m, and wavelength $\lambda= 0.125$\,m. Fast fading Rayleigh channels are kept invariant during the transmission period. Background noise power per subcarrier is $\sigma_N^2=-113$\,dBm. When link adaptation is enabled, the link can pick one of the eight modulation and coding schemes (MCS) shown in Table \ref{Table_I_MCS}. Also, all packets carry the same amount of data, namely, 100 bytes. We use MATLAB software to build our simulation framework, and each point in our results is an average over 1000 independently generated topologies. \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table} \caption{List of Modulation and Coding Schemes} \centering \label{Table_I_MCS} \begin{tabular}{|c||c||c||c|} \hline \textbf{\tabincell{c}{MCS \\ Index}} & \textbf{QAM Type} & \textbf{\tabincell{c}{Coding\\ Rate}} & \textbf{\tabincell{c}{Minimum required effective \\ PPSNR to achieve target \\ BER/PER (10{\%} PER)}} \\ \hline 0 & \textrm{BPSK} & 1/2 & \textrm{1.4 dB} \\ \hline 1 & \textrm{QPSK} & 1/2 & \textrm{4.4 dB} \\ \hline 2 & \textrm{QPSK} & 3/4 & \textrm{6.5 dB} \\ \hline 3 & \textrm{16QAM} & 1/2 & \textrm{8.6 dB} \\ \hline 4 & \textrm{16QAM} & 3/4 & \textrm{12 dB} \\ \hline 5 & \textrm{64QAM} & 2/3 & \textrm{15.8 dB} \\ \hline 6 & \textrm{64QAM} & 3/4 & \textrm{17.2 dB} \\ \hline 7 & \textrm{64QAM} & 5/6 & \textrm{18.8 dB} \\ \hline \end{tabular} \end{table} We use the network sum-throughput metric \cite{COOP_MAC, 13_Sum_rate_3} in our study, which measures the successfully delivered throughput summed over all links. However, in order to decide if a particular link is viable or not, we calculate the effective Post Processing SNR (PPSNR) \cite{8_PER_model_1} at the receiver for each link. For a given MCS, if the effective PPSNR is above the minimum required for the desired QoS (see Table \ref{Table_I_MCS}), then we declare the link viable and include the link throughput within the sum-throughput calculation. Otherwise, the link is assumed not to be usable. The effective PPSNR is calculated as follows. Once the TX vector ${\rm {\bf W}}_{T_Q } (i)$ and the RX vector ${\rm {\bf W}}_{R_Q } (i)$ have been determined for all links in the network, the PPSNR can be calculated as: \begin{align} \label{PPSNR_eq5} &\Gamma_{R_Q } (i) = \left| {\rm {\bf W}}_{R_Q }^H (i) \cdot {\rm {\bf H}}_{R_Q ,T_Q }^{{\rm Rec}} (i) \right|^2 \notag\\ &\ \ \ \ \cdot\left\{{\sum\limits_{q = 1,q \ne Q}^K {\left| {\rm {\bf W}}_{R_Q }^H (i) \cdot {\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i) \right|^2} + \sigma _N^2 }\right\}^{-1},\\ \label{H_rec} &{\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i) = \sqrt {P_T G_{R_Q ,T_q } / N_C } {\rm {\bf H}}_{R_Q ,T_q } (i){\rm {\bf W}}_{T_q } (i).
\end{align} Using the PPSNR $\Gamma_{R_Q } (i)$ and $\Gamma_{R_Q ,{\rm dB}} (i) = 10\log _{10} \left( {\Gamma_{R_Q } (i)} \right)$, we then calculate link $L_Q $'s effective PPSNR $\Gamma_{R_Q ,{\rm dB}}^{\rm eff} $ via Eqn. (\ref{PPSNR_eq6}) \cite{8_PER_model_1} as \begin{eqnarray} \label{PPSNR_eq6} \Gamma_{R_Q ,{\rm dB}}^{\rm eff} = \frac{1}{N_C }\sum\limits^{N_C}_{i = 1} {\Gamma_{R_Q ,{\rm dB}} (i)} - \alpha\cdot var \left[ {\Gamma_{R_Q ,{\rm dB}} (i) } \right]. \end{eqnarray} Here the variance $var$ is calculated over all subcarriers, and $\alpha = 0.125$ is fitted offline \cite{8_PER_model_1}. \section{Quantifying Performance with Realistic Parameters and Algorithms} This section is broken down into 4 subsections. In these subsections we separately look at the impact of MMSE, channel estimation error, link adaptation, and TX beamforming on the sum-throughput of our generic SDMA-MAC. These will be studied in isolation from one another. Section IV will then evaluate the SDMA-MAC that incorporates all four of the above elements. \subsection{MMSE vs. ZF} In this section we look at the impact of using the more common MMSE (minimum mean squared error) detector instead of the idealized ZF (zero-forcing) detector assumed in the class of SDMA-MACs \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC, 4_nullhoc}. The MMSE detector is more common because it has the same hardware complexity as the ZF detector, but does not suffer from the unwanted noise enhancement properties of the latter \cite{9_ZF_MMSE}. Additionally, one of the drawbacks of the MACs presented in \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC, 4_nullhoc} is that channel access is sequential, with the current link knowing nothing about links that might access the channel after it. In this section we also want to consider the potential benefits of relaxing this assumption. We will refer to this scheme as the Universal-MMSE scheme and describe it in subsection (3). \subsubsection{Zero-Forcing Detection} Using the same notation as in section II, link $L_Q $'s RX vector under the ZF criterion is expressed as: \begin{align} \label{ZF_eq8} &{\rm {\bf W}}_{R_Q } (i) = {\mathbb{N}}\left\{ {{\rm {\bf B}}_{R_Q } (i)\left[ {{\rm {\bf B}}_{R_Q }^H (i){\rm {\bf B}}_{R_Q } (i)} \right]^{-1}{\rm {\bf e}}_1 } \right\}, \\ &{\rm {\bf e}}_1 = {\underbrace {[1,0,...,0]}_{Q\ {\rm elements}}}^T. \end{align} Here ${\rm {\bf B}}_{R_Q } (i)=\left[{\rm {\bf H}}_{R_Q ,T_Q }^{{\rm Rec}} (i), {\rm {\bf H}}_{R_Q ,T_{Q-1} }^{{\rm Rec}} (i), \cdots, {\rm {\bf H}}_{R_Q ,T_1 }^{{\rm Rec}} (i)\right]$ is an $N_A \times Q$ matrix, and ${\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)$ is given in Eqn. (\ref{H_rec}). ${\mathbb{N}}\{ \cdot \}$ denotes the vector normalization with unit power. With the expression for ${\rm {\bf W}}_{R_Q } (i)$, we then insert it into the equation for the PPSNR (Eqn. (\ref{PPSNR_eq5}-\ref{PPSNR_eq6})) to determine if a given link is active or not. From there we calculate the sum throughput as described in section II.C.
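To make the processing chain concrete, the following single-subcarrier numpy sketch implements the sequential TX beamnulling of Eqns. (\ref{eq2})-(\ref{eq3}) together with the ZF RX vectors of Eqn. (\ref{ZF_eq8}) and the PPSNR of Eqn. (\ref{PPSNR_eq5}). It is a minimal illustration rather than our simulation code; the flat power normalization and all variable names are placeholders:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_A, K = 4, 3           # antennas per node, concurrent links
P, sigma2 = 1.0, 1e-3   # flat TX power/path gain, noise power

# i.i.d. Rayleigh channels H[rx][tx] for a single subcarrier
H = [[(rng.standard_normal((N_A, N_A))
       + 1j * rng.standard_normal((N_A, N_A))) / np.sqrt(2)
      for _ in range(K)] for _ in range(K)]

W_T, W_R = [], []
for Q in range(K):
    if Q == 0:
        w_t = np.ones(N_A) / np.sqrt(N_A)  # first link: any unit vector
    else:
        # Null towards existing receivers via the weakest
        # left-singular vector of H_intf H_intf^H.
        H_intf = np.column_stack(
            [(np.sqrt(P) * W_R[q].conj() @ H[q][Q]).conj()
             for q in range(Q)])
        U, _, _ = np.linalg.svd(H_intf @ H_intf.conj().T)
        w_t = U[:, -1]
    W_T.append(w_t)
    # ZF RX vector: desired column first, then earlier interferers.
    B = np.column_stack([np.sqrt(P) * H[Q][q] @ W_T[q]
                         for q in [Q] + list(range(Q))])
    w_r = B @ np.linalg.inv(B.conj().T @ B)[:, 0]
    W_R.append(w_r / np.linalg.norm(w_r))

for Q in range(K):  # post-processing SINR per link
    sig = abs(W_R[Q].conj() @ (np.sqrt(P) * H[Q][Q] @ W_T[Q]))**2
    intf = sum(abs(W_R[Q].conj() @ (np.sqrt(P) * H[Q][q] @ W_T[q]))**2
               for q in range(K) if q != Q)
    print(f"link {Q+1}: PPSNR = "
          f"{10*np.log10(sig/(intf+sigma2)):.1f} dB")
\end{verbatim}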
\subsubsection{MMSE Detection} Link $L_Q $'s RX vector under the MMSE criterion is expressed as: \begin{align} \label{MMSE_1} &{\rm {\bf W}}_{R_Q } (i) = {\mathbb{N}}\left\{ {{\rm {\bf C}}_{R_Q ,{\rm MMSE}}^{ - 1} (i) \cdot {\rm {\bf H}}_{R_Q ,T_Q }^{{\rm Rec}} (i)} \right\}, \\ \label{MMSE_2} &{\rm {\bf C}}_{R_Q ,{\rm MMSE}} (i) = \sum\limits_{q = 1}^{Q-1} {{\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)\left\{ {{\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)} \right\}^H} + \sigma _N^2 {\rm {\bf I}}_{N_A }. \end{align} Here ${\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)$ is given in Eqn. (\ref{H_rec}). Similar to the ZF receiver, the derived ${\rm {\bf W}}_{R_Q } (i)$ is used in the PPSNR calculation (Eqn. (\ref{PPSNR_eq5}-\ref{PPSNR_eq6})) and subsequently in the sum-throughput calculation. \subsubsection{Universal MMSE} The previous derivations for link $L_Q $'s RX vector only consider the interference channels coming from links $L_1 $ to $L_{(Q - 1)} $. The residual interference caused by $L_{(Q + 1)}$ to $L_K$ is not considered. This can have a negative impact on the PPSNR results. Here we look to answer the question of how much the performance of the system might be improved if the receive beamnulling were performed at each node with full knowledge of all $K$ transmitters. First, link $L_Q$ estimates the interference-plus-noise covariance of the signals received from all other TX nodes as: \begin{align} \label{MMSE_3} &{\rm {\bf C}}_{R_Q ,{\rm UMMSE}} (i) =\notag\\ &\sum\limits_{q = 1, q\neq Q}^K {{\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)\left\{ {{\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)} \right\}^H} + \sigma _N^2 {\rm {\bf I}}_{N_A } . \end{align} Based on the estimate ${\rm {\bf C}}_{R_Q ,{\rm UMMSE}} (i)$, the corresponding Universal MMSE RX vector is: \begin{equation} \label{MMSE_4} {\rm {\bf W}}_{R_Q } (i) = {\mathbb{N}}\left\{ {{\rm {\bf C}}_{R_Q ,{\rm UMMSE}}^{-1}(i)\cdot{\rm {\bf H}}_{R_Q ,T_Q }^{{\rm Rec}} (i)} \right\}. \end{equation} Again, ${\rm {\bf W}}_{R_Q }(i)$ is used to calculate the PPSNR (Eqn. (\ref{PPSNR_eq5}-\ref{PPSNR_eq6})) and the sum-throughput. \subsubsection{Simulation Results} We now simulate the SDMA-MAC protocol using both the ZF and the MMSE algorithms for the RX vector ${\rm {\bf W}}_{R_Q } (i)$. We assume perfect channel estimation, and as in the case of \cite{1_SPACEMAC, 2_MIMOMAN, 3_NULLHOC} we fix the MCS for each link to either MCS 0 or MCS 5. Fig. \ref{Fig1_Rx_vector} shows the sum-throughput as a function of the number of concurrent links allowed by the MAC. As expected, initially the sum-throughput increases with the number of concurrent links in the network; however, as additional concurrent links are added the interference power dominates, thus causing a decrease in the network sum-throughput. Fig. \ref{Fig1_Rx_vector} also shows that the MMSE receiver outperforms the ZF receiver by an average of 10{\%}, and a maximum of 20{\%}. The Universal MMSE scheme outperforms the ZF receiver by up to 40{\%}. This is because for link $L_Q$, the Universal MMSE protocol takes into account the residual interference from all links irrespective of the order in which they start transmission, whereas the ZF and MMSE solutions only take into account the subset of links that accessed the channel before link $L_Q$.
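For reference, the MMSE and Universal MMSE receive vectors compared above differ only in which interferers enter the covariance matrix; a minimal sketch (single subcarrier, illustrative names) is:
\begin{verbatim}
import numpy as np

def mmse_rx_vector(h_des, h_intf_list, sigma2):
    # h_des: effective desired channel H^Rec (N_A-vector);
    # h_intf_list: effective interference channels -- the earlier
    # links only for plain MMSE, all other links for Universal MMSE.
    C = sigma2 * np.eye(len(h_des), dtype=complex)
    for h in h_intf_list:
        C += np.outer(h, h.conj())   # rank-one interference terms
    w = np.linalg.solve(C, h_des)    # C^{-1} h_des
    return w / np.linalg.norm(w)
\end{verbatim}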
\begin{figure} \centering \includegraphics[width=3.3in]{Figure1_Rx_vector_compact.eps} \caption{SDMA-MAC's throughput performance comparing a ZF, an MMSE, and a Universal MMSE RX vector calculation algorithm.} \label{Fig1_Rx_vector} \end{figure} \subsection{Impact of Channel Estimation Errors} Channel estimation errors impact the sum throughput by increasing interference and also reducing the PPSNR. Given that the SDMA-MAC uses MIMO beamforming at both the TX and the RX, estimation errors will impact both the TX vector ${\rm {\bf W}}_{T_Q } (i)$ and the RX vector ${\rm {\bf W}}_{R_Q } (i)$. Let us first derive the expression of the noisy RX vector ${\rm {\bf W}}_{R_Q } (i)$. Note that this is calculated using either the ZF method (Eqn. (\ref{ZF_eq8})) or the MMSE method (Eqn. (\ref{MMSE_1}-\ref{MMSE_4})). These calculations all rely on the channel information ${\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)$. Under imperfect channel estimation, the noisy estimate of ${\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)$ is given by: \begin{align} \label{eq14} \widetilde{{\rm {\bf H}}}_{R_Q ,T_q }^{{\rm Rec}} (i) =& \sqrt {P_T G_{R_Q ,T_q } / N_C } {\rm {\bf H}}_{R_Q ,T_q } (i){\rm {\bf W}}_{T_q } (i)\nonumber\\ &+ \sqrt{\sigma _C^2 } {\rm {\bf Z}}_{R_Q ,T_q } (i), 1 \le q \le K. \end{align} Here ${\rm {\bf Z}}_{R_Q ,T_q } (i)$ represents the channel estimation noise, which is modeled as a vector of complex Gaussian random variables with zero mean and unit variance. $\sigma _C^2 $ is the variance of the estimation noise, which is dependent on the variance of the background noise per subcarrier $\sigma _N^2 $ (e.g., under $L$ training symbols and least-squares estimation, $\sigma _C^2 $ is equal to $\sigma _N^2 / L$ \cite{3_NULLHOC}). In this way, the noisy RX vector ${\rm {\bf W}}_{R_Q } (i)$ is derived by using the noisy estimate $\widetilde{{\rm {\bf H}}}_{R_Q ,T_q }^{{\rm Rec}} (i)$ rather than the perfect estimate ${\rm {\bf H}}_{R_Q ,T_q }^{{\rm Rec}} (i)$ in the ZF or MMSE methods. Now we look at the derivation of the noisy TX vector ${\rm {\bf W}}_{T_Q } (i)$. Since ${\rm {\bf W}}_{T_Q } (i)$ depends on ${\rm {\bf W}}_{R_q } (i)$ ($q=1, \ldots, Q-1$), the noisy estimate of ${\rm {\bf W}}_{T_Q } (i)$ will be a function of the noisy estimates of the other ${\rm {\bf W}}_{R_q } (i)$. Recalling Eqn. (\ref{eq3}) ${\rm {\bf W}}_{T_Q } (i) = {\rm {\bf U}}_{T_Q } (i,N_A )$, we have that ${\rm {\bf W}}_{T_Q } (i)$ is a function of the SVD results of ${\rm {\bf H}}_{{\rm intf},T_Q } (i){\rm {\bf H}}_{{\rm intf},T_Q }^H (i)$. In the presence of channel estimation errors, ${\rm {\bf H}}_{{\rm intf},T_Q } (i)$'s column, ${\rm {\bf H}}_{{\rm intf},T_Q }^{{\rm col}} (i,q)$ in Eqn. (\ref{eq2}), will be replaced by the noisy estimate: \begin{align} \label{eq15} \widetilde{{\rm {\bf H}}}_{{\rm intf},T_Q }^{{\rm col}} (i,q) =& \left\{ {\sqrt {P_T G_{R_q ,T_Q } / N_C } {\rm {\bf W}}_{R_q }^H (i){\rm {\bf H}}_{R_q ,T_Q } (i)} \right\}^H \nonumber\\ &+ \sqrt {\sigma _C^2 } {\rm {\bf Z}}_{T_Q,R_q } (i). \end{align} Again, here ${\rm {\bf Z}}_{T_Q ,R_q } (i)$ represents the channel estimation noise. Using the noisy estimate $\widetilde{{\rm {\bf H}}}_{{\rm intf},T_Q }^{{\rm col}} (i,q)$, the noisy TX vector ${\rm {\bf W}}_{T_Q } (i)$ is calculated according to Eqn. (\ref{eq3}). Finally, given the resulting noisy TX vectors and RX vectors, the effective PPSNR can be derived using Eqn. (\ref{PPSNR_eq5}-\ref{PPSNR_eq6}), and the sum-throughput performance can be evaluated accordingly.
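The estimation-noise model enters any such implementation through a one-line corruption of the effective channels; a hedged sketch (illustrative names) is given below. The beamforming vectors are then computed from the noisy channels, while the PPSNR is evaluated with the true ones:
\begin{verbatim}
import numpy as np

def noisy_channel(h_true, sigma2_C, rng):
    # Additive complex Gaussian estimation noise of variance
    # sigma2_C per entry, e.g. sigma2_C = 0.1 * sigma2_N.
    z = (rng.standard_normal(h_true.shape)
         + 1j * rng.standard_normal(h_true.shape)) / np.sqrt(2)
    return h_true + np.sqrt(sigma2_C) * z
\end{verbatim}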
\subsubsection{Simulation Results} We simulate the sum throughput of the SDMA-MAC protocol under different channel estimation errors. Here the variance of the estimation error is set to $\sigma _N^2 $, $0.5\sigma _N^2 $, $0.1\sigma _N^2 $, $0.01\sigma _N^2 $ and $0.001\sigma _N^2 $, respectively, where $\sigma _N^2 $ denotes the power of the background noise per subcarrier and is equal to -113\,dBm in our study. Each link's MCS is fixed as MCS 0 or MCS 5, and simulation results under different numbers of concurrent links are plotted in Fig. \ref{Fig2_estimation_error}. The curves in the figure show that, compared with the result under perfect channel estimation, the system's sum throughput is seriously degraded when the estimation variance is $\sigma _N^2 $ or $0.5\sigma _N^2 $. Meanwhile, even under estimation variances of $0.1\sigma _N^2 $ and $0.01\sigma _N^2 $, there is still a considerable performance loss in the sum throughput. Generally, it is safe to assume that for any Ad Hoc system, the estimation noise variance will be at best 0.1$\sigma_{N}^{2}$. \begin{figure} \centering \includegraphics[width=3.3in]{Figure2_estimation_error_compact.eps} \caption{Sum throughput performance of SDMA-MAC under different channel estimation errors. Assume that each link uses either MCS 0 or MCS 5.} \label{Fig2_estimation_error} \end{figure} \subsection{Impact of Link Adaptation} \subsubsection{Link Adaptation Design} There are 8 different MCSes in this paper, and each link can adaptively select the proper MCS based on the estimated PPSNR $\widehat{\Gamma }_{R_Q } (i)$. The derivation of $\widehat{\Gamma }_{R_Q } (i)$ is based on Eqn. (\ref{PPSNR_eq5}) but using the estimated channel information as shown in Eqn. (\ref{eq14}). To study the link adaptation in isolation, this subsection assumes perfect channel estimates ($\sigma _C^2 = 0$). Given ${\widehat{\Gamma }}_{R_Q ,{\rm dB}} (i) = 10\log _{10} \left( {\widehat{\Gamma }_{R_Q } (i)} \right)$, link $L_Q $'s effective SNR is estimated as: \begin{align} \label{eq16} {\widehat{\Gamma }}_{R_Q,{\rm dB}}^{\rm eff} = & \frac{1}{N_C}\sum\limits_{i=1}^{N_C}{\widehat{\Gamma }}_{R_Q ,{\rm dB}} (i) - \alpha \cdot var\left[{\widehat{\Gamma }}_{R_Q ,{\rm dB}} (i)\right] \nonumber\\ &- \Gamma _{L_Q }^{\rm Backoff},\ \ \ \ \alpha = 0.125 \end{align} Link $L_Q $ then selects the highest MCS whose threshold listed in Table I is smaller than the estimated effective SNR ${\widehat{\Gamma }}_{R_Q ,{\rm dB}}^{\rm eff} $. Finally, $\Gamma _{L_Q }^{\rm Backoff} $ in Eqn. (\ref{eq16}) is a correction term that makes up for the inaccuracy of the PPSNR estimation (e.g., due to imperfect channel estimation). Its value can be tuned at run-time using the actual packet error rate reported in the ACK packets. \subsubsection{Simulation Results} We evaluate the sum throughput performance of our generic SDMA-MAC by using the link adaptation process discussed above. The results are summarized in Fig. \ref{Fig3_link_adaptation}. Here we assume perfect channel estimation in the system, and RX vectors are derived via the ZF method. For completeness, we also provide the results of fixed MCS selection (MCS 0 or MCS 5) in that figure. The resultant curves underscore the importance of link adaptation in improving the network performance.
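A compact sketch of the MCS selection rule just described, with our reading of the Table \ref{Table_I_MCS} thresholds and per-symbol rates hard-coded (illustrative, not our simulation code):
\begin{verbatim}
import numpy as np

# (QAM bits x coding rate, effective-PPSNR threshold in dB), MCS 0..7
MCS_TABLE = [(0.5, 1.4), (1.0, 4.4), (1.5, 6.5), (2.0, 8.6),
             (3.0, 12.0), (4.0, 15.8), (4.5, 17.2), (5.0, 18.8)]

def effective_ppsnr_db(ppsnr_db, alpha=0.125, backoff_db=0.0):
    # Mean minus alpha times variance over subcarriers, minus the
    # run-time backoff correction.
    return np.mean(ppsnr_db) - alpha * np.var(ppsnr_db) - backoff_db

def select_mcs(ppsnr_db, backoff_db=0.0):
    # Highest MCS whose threshold lies below the effective PPSNR;
    # None means even MCS 0 fails and the link is declared unusable.
    eff = effective_ppsnr_db(ppsnr_db, backoff_db=backoff_db)
    feasible = [i for i, (_, thr) in enumerate(MCS_TABLE)
                if thr <= eff]
    return feasible[-1] if feasible else None
\end{verbatim}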
Compared with the fixed MCS selection (MCS 0 or MCS 5) with a total of 4 concurrent links, the use of link adaptation can provide additional throughput gains of around 70{\%} (relative to MCS 5) to 200{\%} (relative to MCS 0). \begin{figure} \centering \includegraphics[width=3.3in]{Figure3_link_adaptation_compact.eps} \caption{Sum throughput performance of SDMA-MAC under the usage of link adaptation. Assume perfect channel estimation and ZF based RX vector derivation.} \label{Fig3_link_adaptation} \end{figure} \subsection{Combining TX Beamforming with SDMA-MAC} In the generic SDMA-MAC, the TX node $T_Q $ of link $L_Q $ knows nothing about the channel between TX node $T_Q $ and RX node $R_Q $. In this subsection we pose the question of how the performance could be improved if node $T_Q $ knew about the channel between $T_Q $ and $R_Q $, and were able to beamform accordingly. Considering the derivation of ${\rm {\bf W}}_{T_Q } (i)$ in Eqn. (\ref{eq3}), provided that $Q \le N_A $ there are $(N_A - Q + 1)$ candidates for the TX vector ${\rm {\bf W}}_{T_Q } (i)$ (${\rm {\bf U}}_{T_Q } (i,Q),{\rm {\bf U}}_{T_Q } (i,Q + 1),...,{\rm {\bf U}}_{T_Q } (i,N_A ))$, which can all satisfy the orthogonality condition of Eqn. (\ref{eq1}). Moreover, any linear combination of these candidates is also orthogonal to the existing links in the sense of Eqn. (\ref{eq1}). This observation indicates that we can choose an optimized linear combination of these candidates, so that the resultant PPSNR in the desired communication is improved. We name this scheme TX beamforming to distinguish it from the TX beamnulling scheme (Eqn. (\ref{eq3})) used in the baseline SDMA-MAC. We apply TX beamforming only to links $L_Q $ with $Q \le N_A $; all other links (link $L_{N_A + 1} $ to link $L_K )$ will use the default TX beamnulling of the SDMA-MAC. \subsubsection{TX Beamforming Calculation} For link $L_Q $ with $Q \le N_A $, we use ${\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i)$ to denote all the TX vector candidates at node $T_Q $. This is an $N_A \times (N_A - Q + 1)$ matrix composed of columns ${\rm {\bf U}}_{T_Q } (i,Q),{\rm {\bf U}}_{T_Q } (i,Q + 1),...,{\rm {\bf U}}_{T_Q } (i,N_A )$. The resultant TX vector is given as ${\rm {\bf W}}_{T_Q } (i) = {\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i){\rm {\bf D}}_{T_Q } (i)$, where ${\rm {\bf D}}_{T_Q } (i)$ is an $(N_A - Q + 1)\times 1$ column vector with ${\rm {\bf D}}_{T_Q }^H (i){\rm {\bf D}}_{T_Q } (i) = 1$ representing the linear combination of ${\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i)$. Given a specific ${\rm {\bf D}}_{T_Q } (i)$, the calculated PPSNR at the $i$th subcarrier of link $L_Q $ under the MMSE criterion is given as: \begin{align} \label{eq18} \Gamma _{R_Q } (i) =& (P_T G_{R_Q ,T_Q } /N_C)\left\{ {{\rm {\bf H}}_{R_Q ,T_Q } (i){\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i){\rm {\bf D}}_{T_Q } (i)} \right\}^H \notag\\ &{\rm {\bf C}}_{R_Q ,{\rm MMSE}}^{ - 1} (i){\rm {\bf H}}_{R_Q ,T_Q } (i){\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i){\rm {\bf D}}_{T_Q } (i). \end{align} Here ${\rm {\bf C}}_{R_Q,{\rm MMSE}} (i)$ is given in Eqn. (\ref{MMSE_2}). Clearly, the optimal linear combination vector ${\rm {\bf D}}_{T_Q } (i)$ can be calculated as the eigenvector of $\left\{ {{\rm {\bf H}}_{R_Q ,T_Q } (i){\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i)} \right\}^H{\rm {\bf C}}_{R_Q ,{\rm MMSE}}^{ - 1} (i){\rm {\bf H}}_{R_Q ,T_Q } (i){\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i)$ corresponding to the maximum eigenvalue.
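A minimal numpy sketch of this eigenvector step (illustrative names, single subcarrier; not our simulation code):
\begin{verbatim}
import numpy as np

def optimal_combination(U_init, H_des, C_mmse):
    # U_init: N_A x (N_A-Q+1) candidate columns from the nulling SVD;
    # H_des: desired N_A x N_A channel; C_mmse: interference-plus-
    # noise covariance at the receiver.
    A = H_des @ U_init                   # channel into the subspace
    M = A.conj().T @ np.linalg.solve(C_mmse, A)
    _, V = np.linalg.eigh(M)             # ascending eigenvalues
    return V[:, -1]                      # principal eigenvector D
\end{verbatim}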
The associated TX vector ${\rm {\bf W}}_{T_Q } (i)$ can then be calculated from the optimal ${\rm {\bf D}}_{T_Q } (i)$ as ${\rm {\bf W}}_{T_Q } (i) = {\rm {\bf U}}_{T_Q }^{{\rm INIT}} (i){\rm {\bf D}}_{T_Q } (i)$. \subsubsection{Simulation Results} We evaluate the sum throughput performance of our reference SDMA-MAC with the inclusion of the TX beamforming approach combined with MMSE RX vectors. By way of comparison we also provide the throughput results of two other schemes: the first is the baseline SDMA-MAC with ZF RX vectors (Eqn. (\ref{ZF_eq8})), and the second is the baseline SDMA-MAC with MMSE RX vectors (Eqn. (\ref{MMSE_1}-\ref{MMSE_2})). We assume perfect channel estimation in the network, and the MCS in each link is either fixed at MCS 0, or varied under the link adaptation protocol. The sum throughput results are shown in Fig. \ref{Fig4_Tx_beamforming}. Compared with the performance of the baseline SDMA-MAC with ZF RX vectors, the introduction of TX beamforming provides around a 20{\%} improvement in sum throughput. When compared with the baseline SDMA-MAC plus MMSE RX vectors, the TX beamforming design still provides more than a 10{\%} throughput gain. \begin{figure} \centering \includegraphics[width=3.3in]{Figure4_Tx_beamforming_compact.eps} \caption{Sum throughput performance under our discussed TX beamforming design. Assume perfect channel estimation in the network, and each link's MCS is either fixed as MCS 0, or adaptively tuned.} \label{Fig4_Tx_beamforming} \end{figure} \section{Combined Performance Characterization} After evaluating the impact of each of the four elements introduced in this paper in isolation, we now look to compare the performance of the baseline concurrent SDMA-MAC with the variant that includes the following four elements: (a) practical MMSE algorithm; (b) channel estimation errors; (c) link adaptation mechanism; (d) incorporation of TX beamforming. For comparison purposes, this section also introduces results of a non-concurrent MAC (only one link is allowed at any given time) that can employ any number of spatial streams up to $N_A$. The non-concurrent MAC also employs MIMO TX and RX beamforming and link adaptation, and is subject to the same channel estimation errors. Detailed settings in these MAC schemes are summarized in Table \ref{Table_II_MAC}. \begin{table*} \caption{Detailed Settings in the Considered MAC Schemes} \centering \label{Table_II_MAC} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{MAC Scheme} & \textbf{TX Vectors} & \textbf{RX Vectors} & \textbf{\tabincell{c}{Link Adaptation \\ (\# of streams)}} & \textbf{\tabincell{c}{Link Adaptation \\ (MCS per stream)}} & \textbf{\tabincell{c}{Channel \\ Estimation \\ Error}} \\ \hline \textbf{\tabincell{c}{Baseline Concurrent\\ SDMA-MAC}} & \tabincell{c}{TX Beamnulling \\ Eqn. (\ref{eq3})} & \tabincell{c}{ZF method \\ Eqn. (\ref{ZF_eq8})} & \tabincell{c}{Each link uses \\ only one stream.} & \tabincell{c}{The MCS per \\ link is fixed \\ as MCS 0.} & {0.1$\sigma_N^2$} \\ \hline \textbf{\tabincell{c}{Enhanced (realistic) \\ SDMA-MAC Scheme}} & \tabincell{c}{TX Beamforming \\ Section III.D} & \tabincell{c}{MMSE method\\ Eqn. (\ref{MMSE_1}-\ref{MMSE_2})} & \tabincell{c}{Each link uses \\ only one stream.} & \tabincell{c}{The MCS per link \\ is adaptively selected. \\ (section III.C)} & {0.1$\sigma_N^2$} \\ \hline \textbf{\tabincell{c}{Enhanced Scheme \\ with the Universal \\ MMSE Scheme \\ at the RX}} & \tabincell{c}{TX Beamforming \\ Section III.D} & \tabincell{c}{Universal \\ MMSE method \\ Eqn.
(\ref{MMSE_3}-\ref{MMSE_4})} & \tabincell{c}{Each link uses \\ only one stream.} & \tabincell{c}{The MCS per link \\ is adaptively selected. \\ (section III.C)} & {0.1$\sigma_N^2$} \\ \hline \textbf{\tabincell{c}{Non-concurrent \\ MAC Scheme}} & \tabincell{c}{SVD based \\ method \\ see \cite{14_SVD}} & \tabincell{c}{SVD based \\ method \\ see \cite{14_SVD}} & \tabincell{c}{Each link can use \\ up to $N_A$ streams. \\ The number of \\ streams is adaptively \\ selected to maximize \\ the throughput.} & \tabincell{c}{The MCS per stream \\ is adaptively selected, \\ which is similar to \\ the method in \\ section III.C.} & {0.1$\sigma_N^2$} \\ \hline \end{tabular} \end{table*} \subsection{Simulation Results} The simulation results are summarized in Fig. \ref{Fig5_MAC_scheme}. Firstly, the baseline SDMA-MAC has the lowest sum throughput, mostly due to its lack of link adaptation. Secondly, the enhanced SDMA-MAC achieves 3x to 4x higher throughput than the baseline SDMA-MAC, but its results are lower than those of the non-concurrent MAC. This is because, under imperfect channel estimation, residual interference among concurrent links has a significant impact on the overall network performance. This is partly the reason why the enhanced design with the Universal MMSE scheme has the highest sum throughput. With 4 concurrent links it shows a 500{\%} improvement over the baseline SDMA-MAC and a 40{\%} improvement over the non-concurrent MAC scheme. The superior performance of our enhanced SDMA MAC over the non-concurrent MAC scheme is primarily attributed to the use of multiple antennas for spatial interference mitigation and the associated power allocation. At this juncture it is worth noting that in an actual fully functioning MAC, the contention window overhead is most likely highest for the enhanced scheme with the Universal MMSE, and smallest for the non-concurrent MAC. This is because the Universal MMSE method requires more control packets in order to gather all the information it needs. \begin{figure} \centering \includegraphics[width=3.3in]{Figure5_MAC_scheme_compact.eps} \caption{Sum throughput performance under different MAC schemes. Channel estimation error is fixed as 0.1$\sigma_N^2$.} \label{Fig5_MAC_scheme} \end{figure} \section{Conclusion} The aim of this work was to investigate how an SDMA MAC based on the notion of concurrent links would perform in a real operating network that is subject to self-interference and channel estimation errors that negatively impact performance. We also incorporated link adaptation and MMSE-based beamforming, which are part of almost any operating MIMO-based system. Our work uncovered two rather important results. The first is that significant performance improvements can be had with the combination of MMSE-based beamforming and link adaptation, even in the presence of channel estimation errors. The second is that a single-link transmission strategy that can use multiple spatial streams is rather hard to beat with a concurrent transmission strategy that looks to maximize the number of transmissions, each with a single spatial stream. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} An {\em inverse semigroup} is a semigroup $S$ together with an involution $*: S\to S$ such that for all $s\in S$ we have \[ ss^*s = s, \] and such that $s^*$ is the only element for which this equation holds. On the other hand, a {\em partial isometry} in a C*-algebra $A$ is an element $v$ such that \[ vv^*v = v. \] The link between inverse semigroups and partial isometries in C*-algebras implied by the above is hard to ignore, especially considering the wealth of important examples of C*-algebras which are generated by partial isometries. In fact, for many C*-algebras of interest one can choose a countable generating set consisting of partial isometries which is also closed under product and adjoint -- such a set is necessarily an inverse semigroup. On the one hand, C*-algebras have provided interesting examples of inverse semigroups. For examples of this, we can look to the graph inverse semigroups of \cite{Pa02} and \cite{La14}, the tiling inverse semigroup of \cite{KelCoin} and the AF inverse semigroups of \cite{LS14}. As one can imagine, each of these appears as a generating set of partial isometries in its namesake C*-algebra. On the other hand, if one knows a certain C*-algebra $A$ is generated by an inverse semigroup $S$, then (semigroup-theoretical) properties of $S$ give rise to properties of $A$. The catch is that $S$ may not tell the whole story. For example, the Cuntz algebra $\mathcal{O}_2$ and its Toeplitz extension $\mathcal{T}_2$ are both generated by inverse semigroups of partial isometries, and both of these inverse semigroups are isomorphic (as semigroups) to the same inverse semigroup $P_2$ (called a {\em polycyclic monoid} in the literature). What is happening is that there is a {\em representation} of $P_2$ in both algebras, but the relation $s_1s_1^* + s_2s_2^* = 1$ which holds in the Cuntz algebra cannot be expressed using only the multiplication and involution inside the inverse semigroup. Therefore, if one hopes to phrase simplicity (for example) of a C*-algebra in terms of properties of a generating inverse semigroup, one has to pay attention to how the inverse semigroup is represented in the C*-algebra.\footnote{It turns out that another way of approaching this problem is to specialize further to a class of inverse semigroups called {\em boolean inverse monoids}. We do not pursue this here, and the interested reader is directed to \cite{LL13} for more details.} This led Exel in \cite{Ex08} to define the notion of a {\em tight} representation of an inverse semigroup. He showed that for an inverse semigroup $S$ there always exists a C*-algebra, called the {\em tight} C*-algebra of $S$ and denoted $\Ct(S)$, which is universal for tight representations of $S$. It turns out that $\mathcal{O}_2$ is universal for tight representations of $P_2$, and $\mathcal{T}_2$ is universal for {\em all} representations of $P_2$ (a concept introduced by Paterson in \cite{Pa99}). Many C*-algebras of interest are isomorphic to the tight C*-algebra of their generating sets -- for example see \cite{EGS12} for tiling C*-algebras, \cite{Ex08} for graph and higher rank graph C*-algebras, \cite{StLCM} for boundary quotients of certain Cuntz-Li algebras, and \cite{EPSep14} for Katsura algebras and self-similar group algebras. Hence, there has been interest in relating the properties of an inverse semigroup to properties of its tight C*-algebra.
The paper \cite{EP14} of Exel and Pardo provides conditions on $S$ which guarantee that $\Ct(S)$ is simple, and in the case that $\Ct(S)$ is nuclear these conditions become necessary and sufficient (invoking the results of \cite{BCFS14} regarding an underlying groupoid). They also give a condition which further guarantees that $\Ct(S)$ is purely infinite. Similar results are obtained in \cite{Ste14}. Work of Milan and Steinberg \cite{MS14} gives conditions on $S$ which imply that $\Ct(S)$ is isomorphic to the partial crossed product of a commutative C*-algebra by a group, and further conditions which imply it is Morita equivalent to a usual crossed product. With that setup, we turn to the present paper. We are concerned with one-sided subshifts over a finite alphabet $\A$, ie closed subspaces of $\A^\NN$ which are invariant under the left shift map. In \cite{Ma97}, Matsumoto associated a C*-algebra to such a space $\X$ which generalized the construction of Cuntz-Krieger C*-algebras \cite{CK80} (which can be naturally viewed as C*-algebras associated to shifts of finite type). In a subsequent paper with Carlsen \cite{CM04} a slightly different construction was put forward, and another in \cite{Ca08}. We will deal with the C*-algebra $\OX$ defined in \cite{Ca08}, and call this a {\em Carlsen-Matsumoto algebra}. This C*-algebra has been viewed through the lens of many different constructions: in \cite{Ca08} it is presented as a Cuntz-Pimsner algebra, in \cite{CS07} it is obtained as an Exel crossed product by an endomorphism, in \cite{CarlsenThesis} it is obtained from a Renault-Deaconu groupoid, and in \cite{Th10a} Thomsen constructs it from a semi-\'etale groupoid. Here, we add another construction to the list, by constructing an inverse semigroup $\s_\X$ from $\X$ and showing that $\OX$ is isomorphic to $\Ct(\s_\X)$. Our motivations for doing so are to add another example of a C*-algebra which can be seen as the tight C*-algebra of an inverse semigroup and also to provide another interesting example of an inverse semigroup arising from a C*-algebra. The other reason mentioned above that one might want to embark on such an investigation -- that properties of the algebra can be gleaned from that of the inverse semigroup -- is less pressing in this case, as Carlsen-Matsumoto algebras are already quite well-studied. For instance, the papers \cite{Th10a} and \cite{CT12} combine to provide sharp conditions under which $\OX$ is simple and purely infinite. We do however use the results of \cite{MS14} to show that $\OX$ can be seen as a partial crossed product of a commutative C*-algebra by the free group over $\A$. This paper is organized in the following manner. After providing some background, in Section 3 we define our inverse semigroup $\s_\X$ from $\X$, and show that it satisfies some nice properties. Section 4 is first devoted to establishing the isomorphism between $\OX$ and $\Ct(\s_\X)$, and then closes with the partial crossed product result mentioned above. \section{Preliminaries and notation} We will use the following general notation. If $X$ is a set and $U\subset X$, let Id$_U$ denote the map from $U$ to $U$ which fixes every point, and let $1_{U}$ denote the characteristic function on $U$, ie $1_U: X\to \CC$ defined by $1_U(x) = 1$ if $x\in U$ and $1_U(x) = 0$ if $x\notin U$. If $F$ is a finite subset of $X$, we write $F\fs X$. We let $\NN$ denote the set of natural numbers (starting at 1).
\subsection{Inverse semigroups}\label{ISGsubsection} An {\em inverse semigroup} is a semigroup $S$ such that for every $s\in S$, there exists a unique element $s^*\in S$, with the property that \[ ss^*s = s, \hspace{1cm}s^*ss^* = s^*. \] The element $s^*$ is called the {\em inverse} of $s$. For $s,t\in S$, we have $(s^*)^* = s$ and $(st)^* = t^*s^*$. We typically assume that $S$ has a neutral element $1$ and a zero element 0 such that \[ 1s = s1 = s\text{ for all }s\in S \] \[ 0s = s0 = 0\text{ for all }s\in S. \] Even though we call $s^*$ the inverse of $s$, we need not have $ss^* = 1$, although we always have that $(ss^*)^2 = ss^*ss^* = ss^*$, which is to say that $ss^*$ (and indeed $s^*s$) is an {\em idempotent}. The set of all idempotents in $S$ is denoted \[ E(S) = \{e\in S\mid e^2 = e\}. \] It is a nontrivial fact that if $S$ is an inverse semigroup, then $E(S)$ is closed under multiplication and commutative. It is also clear that if $e\in E(S)$, then $e^* = e$. Let $X$ be a set, and let \[ \I(X) = \{f:U\to V\mid U, V\subset X, f\text{ bijective}\}. \] Then $\I(X)$ is an inverse semigroup when given the operation of composition on the largest possible domain and inverse given by function inverse; it is called the {\em symmetric inverse monoid on $X$}. If $e$ is an idempotent in $\I(X)$, then $e =$ Id$_U$ for some $U\subset X$. The function Id$_X$ is the neutral element for $\I(X)$, and the empty function is the 0 element for $\I(X)$. It is an important fact (akin to the Cayley theorem for groups) that every inverse semigroup is embeddable in $\I(X)$ for some set $X$ -- this is the Wagner-Preston theorem. Every inverse semigroup possesses a natural order structure. For elements $s,t$ of an inverse semigroup $S$ we say $s\leqslant t$ if and only if $ts^*s = s$. On idempotents, this order has a nicer form -- if $e, f\in E(S)$ then $e\leqslant f$ if and only if $ef = e$. This partial order is perhaps best understood for elements of $\I(X)$, because if $g,h\in \I(X)$, then $g\leqslant h$ if and only if $h$ extends $g$ as a function. \subsection{Subshifts} As much as possible we use notation set in \cite{Ca08, CS07}. Let $\A$ be a finite set, called the {\em alphabet}, and endow it with the discrete topology. The product space \[ \A^\NN = \prod_{n\in \NN}\A \] is called the {\em one-sided full shift over $\A$}. If $x=(x_n)_{n\in\NN}$, we will write $x$ in the shorter form \[ x = x_1x_2x_3\cdots. \] The map $\sigma:\A^\NN\to \A^\NN$ given by $\sigma(x_1x_2x_3\cdots) = x_2x_3\cdots$ is called the {\em shift}, and is a continuous surjection. A subspace $\X\subset \A^\NN$ is called a {\em subshift} if it is closed and $\sigma(\X) \subset \X$. If this is the case, we will also sometimes say that $\X$ is a {\em one-sided subshift over $\A$}. Since $\A^\NN$ is compact and metrizable, so is any subshift over $\A$. For an integer $k\geq 1$, we let $\A^k$ denote the set of words of length $k$ in elements of $\A$ -- we again write an element $w\in \A^k$ as $w_1w_2\cdots w_k$. We also let $\A^0 = \{\epsilon\}$, and call $\epsilon$ the {\em empty word}. For $w\in\A^k$ we write $|w| =k$ and say that the {\em length} of $w$ is $k$. We set $\A^* = \cup_{k\geq 0}\A^k$ and say this is the set of words in $\A$. Given $v,w\in\A^*$, we may form their concatenation \[ vw = v_1v_2\cdots v_{|v|}w_1w_2\cdots w_{|w|}\in\A^*. \] In addition, for all $v\in \A^*$, we take $v\epsilon = \epsilon v = v$.
Given $v\in \A^*$ and $x\in \A^\NN$, we may also concatenate $v$ and $x$: \[ vx = v_1v_2\cdots v_{|v|}x_1x_2\cdots\in \A^\NN. \] Again, for all $x\in \A^\NN$ we let $\epsilon x = x$. If $v\in \A^*$, $x\in \A^*\cup \A^\NN$ and $y = vx$, then we say that $v$ is a {\em prefix} of $y$. For $x\in \X$ and $k\in\NN$ we let \[ x_{[1,k]} = x_1x_2\cdots x_k,\hspace{1cm} x_{(k,\infty)} = x_{k+1}x_{k+2}\cdots. \] In addition, if $F\subset \A^*$ and $w\in \A^*$ we let $Fw = \{fw\mid f\in F\}$ and $wF = \{wf\mid f\in F\}$. For $v\in\A^*$, we let $C(v) = \{vx\in\A^\NN\mid x\in \A^\NN\}$ and call sets of this form {\em cylinder sets}. These sets are closed and open in $\A^\NN$, and generate the topology on $\A^\NN$. If $\X$ is a one-sided subshift over $\A$ and $v\in \A^*$, then we set $C_\X(v) = C(v)\cap \X$, although the subscript will frequently be dropped. \section{Inverse semigroups associated to subshifts} Given a one-sided subshift $\X$, the shift map $\sigma:\X\to \X$ is continuous, but in general it is not a local homeomorphism. Still, it is locally a {\em bijection}, in that $\left.\sigma\right|_{C(a)}$ is a bijection for all $a\in \A$. As mentioned in Section \ref{ISGsubsection}, inverse semigroups are natural objects with which to study partially defined bijections, and so we use the partial bijections above to associate an inverse semigroup $\s_\X$ to $\X$. \subsection{Construction of $\s_\X$} Following \cite{Ca08}, for $\mu, \nu\in \A^*$, we let \[ C(\mu, \nu) = \{ \nu x \in \X\mid \mu x\in \X\} \] and notice that $C(\mu, \mu)= C(\mu)$. We note that \[ C(\mu, \nu) = C(\nu)\cap\sigma^{-|\nu|}(\sigma^{|\mu|}(C(\mu))). \] Since the shift map is a closed map, $C(\mu, \nu)$ is closed for every $\mu, \nu\in\A^*$. For each $a\in \A$, let $s_a\in \I(\X)$ be defined by \[ s_a: C(a, \epsilon) \to C(a, a) \] \[ s_a(x) = ax. \] For $\mu\in \A^*\setminus\{\epsilon\}$, we define $s_\mu = s_{\mu_1}s_{\mu_2}\cdots s_{\mu_{|\mu|}}$ so that \[ s_\mu: C(\mu, \epsilon) \to C(\mu) \] \[ s_\mu(x) = \mu x. \] For the empty word $\epsilon$, we take $s_\epsilon =$ Id$_\X$. It is clear that for each $\mu\in\A^*$, the map $s_\mu$ is a bijection between subsets of $\X$. \begin{defn} Let $\X$ be a one-sided subshift over $\A$. Then we let $\s_\X$ be the inverse semigroup generated by $\{s_\epsilon, s_a\mid a\in \A\}$ inside $\I(\X)$. \end{defn} We would like to find a convenient closed form for elements of $\s_\X$. To this end, for each $F\fs \A^*$ and $\nu\in \A^*$, let\footnote{Late in the preparation of this work, we discovered that sets of this form were already considered in \cite{Th10a}, and were written $C'(\nu;F)$. We use our notation in solidarity with \cite{Ca08} and keep the ``prefix'' data in the second entry and the ``possible replacement prefixes'' data in the first entry.} \begin{eqnarray*} C(F; \nu) &=& \{\nu x\in \X\mid fx\in \X\text{ for all }f\in F\}\\ &=& \bigcap_{f\in F}C(f, \nu), \end{eqnarray*} and let $E(F;\nu) =$ Id$_{C(F;\nu)}$. We will also let $E(\mu,\nu) =$ Id$_{C(\mu, \nu)}$. A short calculation shows that \[ E(\mu, \nu) = s_\nu s^*_\mu s_\mu s^*_\nu \] \[ E(F;\nu) = s_\nu \left(\prod_{f\in F}s^*_f s_f \right) s^*_\nu. \] The collection of all such elements will be important in the sequel -- we use the notation \begin{equation}\label{EXdef} E_\X = \{E(F;v)\in \I(\X)\mid v\in \A^*, F\subset_{\text{fin}}\A^*\}\cup\{\emptyset\}. \end{equation} We note that the identity function on $\X$ is an element of $E_\X$, obtained by taking $E(F;v)$ with $v = \epsilon$ and $F = \{\epsilon\}$.
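Since everything in this section is phrased in terms of partial bijections, it may help to see the ambient operations of $\I(X)$ computed concretely. The following Python sketch is our own illustration for a finite set $X$ (the maps $s_\mu$ above act on the typically infinite space $\X$): a partial bijection is stored as a dictionary, composition is taken on the largest possible domain, and $ss^*$ comes out as the identity on the range of $s$.
\begin{verbatim}
def compose(f, g):
    # (f g)(x) = f(g(x)), defined on the largest possible domain
    return {x: f[g[x]] for x in g if g[x] in f}

def inverse(f):
    # a partial bijection is invertible from its range to its domain
    return {v: u for u, v in f.items()}

# s: a partial bijection on X = {0, 1, 2, 3}
s = {0: 2, 1: 3}
print(compose(s, inverse(s)))   # {2: 2, 3: 3} = Id on the range of s
print(compose(inverse(s), s))   # {0: 0, 1: 1} = Id on the domain of s
\end{verbatim}
In this representation the idempotents are exactly the dictionaries of the form \texttt{\{x: x for x in U\}}, matching the description of $E(\I(X))$ above.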
We also note that $E(F;v)E(G;w)\neq 0$ if and only if $C(F;v)\cap C(G;w)\neq \emptyset$. \begin{lem}\label{idempotentlemma} If $\X$ is a one-sided subshift over $\A$, then the set $E_\X$ is closed under multiplication. Furthermore, if $w\in \A^*$ and $e\in E_\X$, then $s_w^*es_w\in E_\X$. \end{lem} \begin{proof} Suppose that $F, G\fs\A^*$ and that $v, w\in \A^*$. Then \begin{equation}\label{idempotentproduct} E(F;v)E(G;w) = s_v \left(\prod_{f\in F}s^*_f s_f \right) s^*_vs_w \left(\prod_{g\in G}s^*_g s_g \right) s^*_w. \end{equation} This product will be 0 unless $v$ is a prefix of $w$ or vice-versa. If $w = vz$, then \begin{eqnarray*} E(F;v)E(G;w) & = & s_v \left(\prod_{f\in F}s^*_f s_f \right) s^*_vs_vs_z \left(\prod_{g\in G}s^*_g s_g \right) s^*_{vz}\\ & = & s_vs_z s^*_z\left(\prod_{f\in F}s^*_f s_f \right) s_zs^*_zs^*_vs_vs_z \left(\prod_{g\in G}s^*_g s_g \right) s^*_{vz}\\ & = & s_w\left(\prod_{f\in F}s^*_zs^*_f s_fs_z \right)s^*_ws_w \left(\prod_{g\in G}s^*_g s_g \right)s^*_w\\ & = & s_w \left(\prod_{f\in F}s^*_{fz} s_{fz} \right)\left(\prod_{g\in G}s^*_g s_g \right)s_w^*\\ & = & E(G\cup Fz; w). \end{eqnarray*} A similar calculation shows that if $v = wz$, then \[ E(F;v)E(G;w) = E(F\cup Gz; v). \] Furthermore, \[ s_w^*E(F;v)s_w = s_w^*s_v\left(\prod_{f\in F}s^*_f s_f \right) s^*_v s_w. \] This is zero unless $v$ is a prefix of $w$ or vice-versa. If $w = vz$, then \begin{eqnarray*} s_w^*E(F;v)s_w & = & s_w^*s_v\left(\prod_{f\in F}s^*_f s_f \right) s^*_v s_w\\ & = & s_z^*s_v^*s_v\left(\prod_{f\in F}s^*_f s_f \right)s^*_vs_vs_z\\ & = & s_z^*\left(\prod_{f\in F}s^*_f s_f \right)s_v^*s_vs_z\\ & = & \left(\prod_{f\in F}s_z^*s^*_f s_fs_z \right)s_z^*s_v^*s_vs_z\\ & = & E(\{w\}\cup Fz; \epsilon) \end{eqnarray*} If $v = wz$, then \begin{eqnarray*} s_w^*E(F;v)s_w & = & s_w^*s_ws_z\left(\prod_{f\in F}s^*_f s_f \right) s^*_zs^*_w s_w\\ & = & s_z\left(\prod_{f\in F}s^*_f s_f \right)s_z^* s^*_ws_w\\ & = & E(F;z)E(w, \epsilon)\\ & = & E(F\cup \{wz\}; z) \end{eqnarray*} where the last line is by our previous calculation. \end{proof} \begin{prop}\label{SXprop} Let $\X$ be a one-sided subshift over $\A$. Then \begin{equation}\label{SX} \s_{\X} = \{ s_\alpha E(F; v)s^*_\beta\in \I(\X)\mid \alpha, \beta, v\in \A^*, F\subset_{\text{fin}} \A^*\}\cup\{0\}. \end{equation} \end{prop} \begin{proof} We note that the containment ``$\supseteq$'' is trivial, because each element of the right hand side is a finite product of elements of $\{s_\epsilon, s_a\mid a\in \A\}$ and their inverses. Hence, we will be done if we can show that the set on the right hand side of \eqref{SX} is itself an inverse semigroup, because the right hand side contains $\{s_\epsilon, s_a\mid a\in \A\}$, and $\s_\X$ is the smallest inverse semigroup containing these elements. It is clear that \eqref{SX} is closed under inverses, so we only need to show that it is closed under multiplication. To this end, take $\alpha, \beta, \delta, \eta, v, w\in\A^*$ and $F, G\subset_{\text{fin}}\A^*$. The product $\left(s_\alpha E(F; v)s^*_\beta\right)(s_\delta E(G; w)s_\eta^*)$ will again only be nonzero if $\beta$ is a prefix of $\delta$ or vice-versa. If $\delta = \beta\gamma$, then \begin{eqnarray*} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) &=&s_\alpha E(F; v)s^*_\beta s_\beta s_\gamma E(G; w)s_\eta^*\\ &=&s_\alpha s_\gamma s_\gamma^* E(F\cup\{\beta v\}; v) s_\gamma E(G; w)s_\eta^* \end{eqnarray*} and so by Lemma \ref{idempotentlemma}, this product is in $\s_\X$. A similar argument applies to the case that $\beta = \delta\gamma$.
Hence $\s_\X$ is an inverse semigroup and we are done. \end{proof} \begin{rmk} For $s_\alpha E(F; v)s^*_\beta\in \s_\X$, if it happens that $\alpha_{|\alpha|}= \beta_{|\beta|} = a\in\A$, then \[ s_\alpha E(F; v)s^*_\beta = s_{\alpha_1\cdots\alpha_{|\alpha|-1}} E(F; av)s^*_{\beta_1\cdots\beta_{|\beta|-1}}. \] For this reason, when working with elements of $\s_\X$ we will usually assume they are in ``lowest terms''. To be more precise, we take the following form for $\s_\X$: \begin{equation}\label{SXdef} \s_\X = \{s_\alpha E(F; v)s^*_\beta \mid \alpha, \beta, v\in \A^*, F\subset_{\text{fin}} \A^*, \alpha_{|\alpha|}\neq \beta_{|\beta|}\}\cup\{0\}. \end{equation} If we take two such elements $s_\alpha E(F; v)s^*_\beta$ and $s_\delta E(G; w)s_\eta^*$ with $\alpha_{|\alpha|}\neq \beta_{|\beta|}$ and $\delta_{|\delta|}\neq \eta_{|\eta|}$, then \[ s_\alpha E(F; v)s^*_\beta = s_\delta E(G; w)s_\eta^* \Rightarrow \alpha = \delta, \beta = \eta. \] We caution that the above equality does not imply that $E(F; v) = E(G; w)$. Indeed, a short calculation shows that \[ s_\alpha E(F; v)s^*_\beta = s_\alpha E(F\cup\{\alpha v, \beta v\}; v)s^*_\beta. \] We could put a condition on $\s_\X$ similar to \eqref{SXdef} stating that we assume $F$ contains $\alpha v$ and $\beta v$ when writing $s_\alpha E(F; v)s^*_\beta$, but this will usually not be necessary. \end{rmk} In the proof of Proposition \ref{SXprop} we started the computation of the product of two elements of $\s_\X$, but stopped when it became clear that the product was again in $\s_\X$. In the following lemma, we record the details of the exact form of this product. \begin{lem}\label{productcomputation} Let $\X$ be a one-sided subshift over $\A$, and take $\alpha, \beta, \delta, \eta, v, w\in\A^*$ and $F, G\subset_{\text{fin}}\A^*$. \begin{enumerate} \item If $\delta = \beta\gamma$ and $\gamma = vz$ for some $\gamma, z\in\A^*$, then \begin{equation} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = s_{\alpha\gamma} E(Fzw\cup G \cup\{\gamma w \}\cup \{\delta w\} ; w)s^*_\eta \end{equation} \item If $\delta = \beta\gamma$, $v = \gamma z$, and $z = wr$ for some $\gamma, z, r\in\A^*$, then \begin{equation} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = s_{\alpha\gamma} E(F\cup Gr \cup\{\beta v\}; z) s^*_\eta \end{equation} \item If $\delta = \beta\gamma$, $v = \gamma z$, and $w = zr$ for some $\gamma, z, r\in\A^*$, then \begin{equation} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = s_{\alpha\gamma} E(Fr\cup G \cup\{\beta vr\}; w) s^*_\eta \end{equation} \item If $\beta = \delta\gamma$ and $\gamma = wz$ for some $\gamma, z\in \A^*$ then \begin{equation} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = s_{\alpha} E(F\cup Gzv \cup\{\gamma v\}\cup\{\beta v\}; v) s^*_{\eta\gamma} \end{equation} \item If $\beta = \delta\gamma$, $w = \gamma z$, and $z = vr$ for some $\gamma, z, r\in \A^*$, then \begin{equation} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = s_{\alpha} E(Fr\cup G \cup\{\delta w\}; z) s^*_{\eta\gamma} \end{equation} \item If $\beta = \delta\gamma$, $w = \gamma z$, and $v = zr$ for some $\gamma, z, r\in \A^*$, then \begin{equation} (s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = s_{\alpha} E(F\cup Gr \cup\{\delta wr\}; v) s^*_{\eta\gamma} \end{equation} \item If none of the conditions in 1--6 above hold, then $(s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) = 0$. \end{enumerate} \end{lem} \begin{proof} This follows from Lemma \ref{idempotentlemma}, and is left to the enthusiastic reader.
\end{proof} \begin{lem}\label{rangeandsource} Let $\X$ be a one-sided subshift over $\A$, let $\alpha, \beta, v\in\A^*$ and $F\fs \A^*$. Then \[ (s_\alpha E(F; v)s^*_\beta)(s_\alpha E(F; v)s^*_\beta)^* = E(F\cup\{\beta v\};\alpha v) \] \[ (s_\alpha E(F; v)s^*_\beta)^*(s_\alpha E(F; v)s^*_\beta) = E(F\cup\{\alpha v\};\beta v). \] \end{lem} \begin{proof} This follows from Lemma \ref{productcomputation}. \end{proof} \begin{prop} Let $\X$ be a one-sided subshift over $\A$, let $\s_\X$ be as in \eqref{SXdef}, and let $E_\X$ be as in \eqref{EXdef}. Then $E(\s_\X) = E_\X$. \end{prop} \begin{proof} This follows from Lemma \ref{rangeandsource} together with the fact that the set of idempotents of an inverse semigroup $S$ coincides with the set of elements of the form $s^*s$ for $s\in S$. \end{proof} \begin{rmk} A recent preprint of Boava, de Castro, and Mortari \cite{BdCM15} associates an inverse semigroup to every {\em labeled space}. In \cite{BCP12} Bates, Carlsen, and Pask associate a labeled space to any one-sided subshift $\X$, such that the C*-algebra associated to the constructed labeled space is isomorphic to $\OX$, see \cite[Example 4]{BCP12}. We caution that the inverse semigroup that one obtains by combining \cite{BdCM15} and \cite{BCP12} (say, $\tilde\s_\X$) will not be the same as our $\s_\X$ -- in fact $E(\tilde\s_\X)$ will be isomorphic to the Boolean algebra generated by the $C(v,w)$ as $v$ and $w$ range over $\A^*$. Hence, their set of idempotents will contain complements of the $C(v,w)$ while ours (in general) will not. For the specific situation of a subshift $\X$ our construction seems natural, as the only idempotents which appear in our construction are those which arise directly from the partial bijections arising from the shift map on $\X$. \end{rmk} \subsection{Properties of $\s_\X$} We now discuss some useful properties which our newly-defined inverse semigroup $\s_\X$ may possess. \begin{defn} Let $S$ be an inverse semigroup with identity and zero (in other words, an inverse monoid with zero). \begin{enumerate}\label{ISGproperties} \item We say that $S$ is {\em $E^*$-unitary} if $0 \neq e \leqslant s$ with $e\in E(S)$ implies that $s\in E(S)$. \item If $\Gamma$ is a group and $\phi: S\setminus\{0\}\to \Gamma$ is a map such that $\phi(st) = \phi(s)\phi(t)$ whenever $s, t\in S$ satisfy $st\neq 0$, then we say that $\phi$ is a {\em partial homomorphism} from $S$ to $\Gamma$. If, in addition, $\phi^{-1}(1_\Gamma) = E(S)$, then $\phi$ is called an {\em idempotent-pure partial homomorphism}. \item We say that $S$ is {\em strongly $E^*$-unitary} if there exists a group $\Gamma$ and an idempotent-pure partial homomorphism from $S$ to $\Gamma$. \item We say that $S$ is {\em $F^*$-inverse} if for each $s\in S$ there exists a unique maximal element above $s$. \item We say that $S$ is {\em strongly $F^*$-inverse} if there exists a group $\Gamma$ and an idempotent-pure partial homomorphism $\phi$ from $S$ to $\Gamma$ such that for all $g\in \Gamma$, $\phi^{-1}(g)$ has a maximal element whenever it is nonempty. \end{enumerate} \end{defn} \begin{lem} Let $\X$ be a one-sided subshift over $\A$, and let $\s_\X$ be as in \eqref{SXdef}. Then $\s_\X$ is $E^*$-unitary. \end{lem} \begin{proof} We note that for an idempotent $e$, $e \leqslant s$ if and only if $se = e$. Suppose that $\alpha, \beta, v, w\in \A^*$, that $F, G \fs \A^*$, that $\alpha_{|\alpha|}\neq \beta_{|\beta|}$, and that $E(G;w)$ and $s_\alpha E(F; v) s_\beta^*$ are both nonzero.
We have \begin{eqnarray*} (s_\alpha E(F; v) s_\beta^*)E(G;w) & = & s_\alpha E(F; v)(s_\beta^*E(G;w)s_\beta) s_\beta^*\\ & = & s_\alpha E(F'; v')s_\beta^*\hspace{0.5cm}\text{for some }F'\fs \A^*, v'\in\A^* \end{eqnarray*} If this is equal to $E(G;w)$, we must have that $\alpha = \beta$. Since the last letters of $\alpha, \beta$ were assumed to be unequal, this implies that $\alpha = \beta = \epsilon$, and hence $s_\alpha E(F; v) s_\beta^*$ is an idempotent as required. \end{proof} We now prove that $\s_\X$ is strongly $E^*$-unitary, which seems to make the above lemma redundant, since being strongly $E^*$-unitary evidently implies being $E^*$-unitary. Still, we believe that the above lemma is instructive, so there it stays. \begin{lem}\label{stronglyEunitarylemma} Let $\X$ be a one-sided subshift over $\A$, and let $\s_\X$ be as in \eqref{SXdef}. Then $\s_\X$ is strongly $E^*$-unitary. \end{lem} \begin{proof} Let $\mathbb{F}_\A$ denote the free group on the alphabet $\A$. For $\alpha, \beta, v\in \A^*$, $F\fs \A^*$ such that $\alpha_{|\alpha|}\neq \beta_{|\beta|}$, we define a map $\phi: \s_\X\setminus\{0\}\to \mathbb{F}_\A$ by \[ \phi(s_\alpha E(F; v) s_\beta^*) = \alpha\beta^{-1}. \] We claim that this map is a partial homomorphism. To prove this, we take $\alpha, \beta, \delta, \eta, v, w\in\A^*$ and $F, G\subset_{\text{fin}}\A^*$ such that $\alpha_{|\alpha|}\neq \beta_{|\beta|}$, $\delta_{|\delta|}\neq \eta_{|\eta|}$ and suppose that $(s_\alpha E(F; v)s^*_\beta)(s_\delta E(G; w)s_\eta^*) \neq 0$. This implies that either $\delta = \beta\gamma$ or $\beta = \delta\gamma$ for some $\gamma \in \A^*$. If $\delta = \beta\gamma$ for some $\gamma \in \A^*$, then \[ \phi(s_\alpha E(F; v)s^*_\beta)\phi(s_\delta E(G; w)s_\eta^*) = \alpha\beta^{-1}\delta\eta^{-1} = \alpha\beta^{-1}\beta\gamma\eta^{-1} = \alpha\gamma\eta^{-1}. \] On the other hand, in each of the first three cases of Lemma \ref{productcomputation}, the product of these two elements is $s_{\alpha\gamma} A s_{\eta}^*$ for some $A\in E_\X$. Hence $\phi(s_\alpha E(F; v)s^*_\beta s_\delta E(G; w)s_\eta^*) = \alpha\gamma\eta^{-1}$. The case $\beta = \delta\gamma$ is similar. Hence $\phi$ is a partial homomorphism. Furthermore, if $\phi(s_\alpha E(F; v) s_\beta^*) = \alpha\beta^{-1} = 1_{\mathbb{F}_\A}$, then $\alpha = \beta$, and as before this implies that $s_\alpha E(F; v) s_\beta^* = E(F;v)$, an idempotent. Thus, $\phi$ is idempotent-pure. \end{proof} Finally, we consider the last two properties from Definition \ref{ISGproperties}. \begin{lem}\label{maximalelementlemma} Suppose that $\alpha, \beta, v\in \A^*$, that $F\fs \A^*$, and that $\alpha_{|\alpha|}\neq \beta_{|\beta|}$. Then $s_\alpha E(F; v) s_\beta^* \leqslant s_\alpha s_\beta^*$. Furthermore, if $s\in \s_\X$ and $s_\alpha E(F; v) s_\beta^* \leqslant s$, then $s\leqslant s_\alpha s_\beta^*$. \end{lem} \begin{proof} Let $t = s_\alpha E(F; v) s_\beta^*$. We first must show that $s_\alpha s_\beta^* t^* t = t$. By Lemma \ref{rangeandsource}, $t^*t = E(F\cup\{\alpha v\}; \beta v)$.
We calculate \begin{eqnarray*} s_\alpha s_\beta^* t^*t & = & s_\alpha s_\beta^* E(F\cup\{\alpha v\}; \beta v)\\ & = & s_\alpha s_\beta^* s_\beta E(F\cup\{\alpha v\}; v)s_\beta^*\\ & = & s_\alpha E(F\cup\{\alpha v\}; v) s_\beta^* s_\beta s_\beta^*\\ & = & s_\alpha E(F\cup\{\alpha v\}; v) s_\beta^*\\ & = & s_\alpha s_v s_v^* s_\alpha^* s_\alpha s_v \left(\prod_{f\in F}s_f^*s_f\right)s_v^* s_\beta^*\\ & = & s_\alpha s_v \left(\prod_{f\in F}s_f^*s_f\right)s_v^* s_\beta^*\\ & = & s_\alpha E(F; v)s_\beta^* \end{eqnarray*} Now, take $\delta, \eta \in \A^*$ with $\delta_{|\delta|}\neq \eta_{|\eta|}$, $A\in E_\X$ and let $s = s_\delta A s_\eta^*$. Then \[ s t^* t = s_\delta A s_\eta^* E(F\cup\{\alpha v\}; \beta v) = s_\delta A s_\eta^* E(F\cup\{\alpha v\}; \beta v)s_\eta s_\eta^* = s_\delta B s_\eta^* \] for some $B\in E_\X$ by Lemma \ref{idempotentlemma}. If this is equal to $t$, then $\delta = \alpha$ and $\eta = \beta$. Thus by the above calculation, we must have $s\leqslant s_\alpha s_\beta^*$ as well. \end{proof} We can now prove the following. \begin{prop} Let $\X$ be a one-sided subshift over $\A$ and let $\s_\X$ be as in \eqref{SXdef}. Then $\s_\X$ is strongly $F^*$-inverse. \end{prop} \begin{proof} Let $\phi$ be as defined in the proof of Lemma \ref{stronglyEunitarylemma}. Then if $\phi^{-1}(g)$ is not empty, $g = \alpha\beta^{-1}$ for some $\alpha, \beta\in \A^*$, and as in the proof of Lemma \ref{maximalelementlemma}, $s_\alpha s_\beta^*$ is maximal in $\phi^{-1}(\alpha\beta^{-1})$. \end{proof} \section{C*-algebras} We now turn our attention to the C*-algebras associated to the structures we have defined. The main result of this section, Theorem \ref{mainresult}, states that given a one-sided subshift $\X$, a certain C*-algebra $\OX$ associated to $\X$ is canonically isomorphic to a certain C*-algebra $\Ct(\s_\X)$ associated to the inverse semigroup $\s_\X$. We first recall the construction of $\OX$ due to Matsumoto and Carlsen, and then the construction of $\Ct(S)$ for a general inverse semigroup $S$. Knowledge of C*-algebras is assumed -- one can find undefined terms in the excellent reference \cite{DAV}. \subsection{The Carlsen-Matsumoto algebra $\OX$}\label{CMalgebrasection} Let $\X$ be a one-sided subshift over $\A$, and consider $\ell^\infty(\X)$, the C*-algebra of bounded functions on $\X$. Define $\DX$ to be the C*-subalgebra of $\ell^\infty(\X)$ generated by $\{1_{C(\mu, \nu)} \mid \mu, \nu\in\A^*\}$. We can now define the algebra $\OX$. \begin{defn}(See \cite[Theorem 10]{CS07}) \label{OXdef} Let $\X$ be a one-sided subshift over $\A$. Then the {\em Carlsen-Matsumoto algebra} $\OX$ is the universal C*-algebra generated by partial isometries $\{S_\mu\}_{\mu\in \A^*}$ such that \begin{enumerate} \item $S_\mu S_\nu = S_{\mu\nu}$ for all $\mu, \nu\in \A^*$, and \item The map $1_{C(\mu, \nu)}\mapsto S_\nu S_\mu^* S_\mu S_\nu^*$ extends to a $*$-homomorphism from $\DX$ to the C*-algebra generated by $\{S_\mu\mid \mu\in \A^*\}$. \end{enumerate} \end{defn} So $\OX$ is generated by a set of partial isometries $\{S_\mu\}_{\mu\in\A^*}$, and we view $\DX$ as the subalgebra of $\OX$ generated by elements of the form $S_\nu S_\mu^* S_\mu S_\nu^*$. One can show that $\OX$ is unital, with unit $I_{\OX} = I_{\DX} = S_\epsilon$.
Furthermore, one can show that the elements $\{S_\mu\}_{\mu\in\A^*}$ satisfy \begin{equation}\label{cuntz} \sum_{a\in\A}S_aS_a^* = I_{\OX}, \end{equation} \begin{equation}\label{comm1} S_\mu^* S_\mu S_\nu S_\nu^* = S_\nu S_\nu^* S_\mu^* S_\mu, \end{equation} \begin{equation}\label{comm2} S_\mu^* S_\mu S_\nu^* S_\nu = S_\nu^* S_\nu S_\mu^* S_\mu. \end{equation} In addition, if $\mu, \nu\in \A^*$ with $|\mu|=|\nu|$, then \begin{equation}\label{ortho} S^*_\mu S_\nu \neq 0 \Rightarrow \mu = \nu. \end{equation} Since $\DX$ is a commutative C*-algebra, it is isomorphic to $C(\tX)$ for a certain compact Hausdorff space $\tX$. This space was presented as an inverse limit space in \cite[Chapter 2]{CarlsenThesis}, and we reproduce this presentation here because we will use it to establish an isomorphism between $\Ct(\s_\X)$ and $\OX$. For $x\in \X$ and an integer $k\geq 0$, let \[ \p_k(x)= \{\mu\in \A^*\mid \mu x\in \X, |\mu| = k\}. \] For $l\in \NN$, we say that $x, y\in \X$ are {\em $l$-past equivalent} and write $x\sim_l y$ if $\p_k(x) = \p_k(y)$ for all $k\leq l$. The $l$-past equivalence class of $x\in\X$ will be written as $[x]_l$. Let $\I = \{ (k, l)\in \NN^2 \mid k \leq l\}$. For every $(k, l)\in \I$ we define another equivalence relation $\kl{k}{l}$ on $\X$ by \[ x\kl{k}{l}y \Leftrightarrow x_{[1,k]} = y_{[1,k]} \text{ and }\p_r(x_{(k,\infty)}) = \p_r(y_{(k,\infty)})\text{ for all }r\leq l. \] We note that there is a typo in \cite[Chapter 2]{CarlsenThesis} where the above is defined with $r=l$ rather than $r\leq l$.\footnote{This was confirmed in private communication with Carlsen.} The equivalence class of $x\in\X$ under $\kl{k}{l}$ will be written as $\klclass{k}{[x]}{l}$, and the set of all such equivalence classes will be written as $\klclass{k}{\X}{l}$. It is clear that for all $(k,l)\in \I$, $\klclass{k}{\X}{l}$ is finite; we endow it with the discrete topology. There is a partial order on $\I$ which respects this equivalence relation. For $(k_1, l_1), (k_2, l_2)\in\I$ we say \[ (k_1, l_1)\leq (k_2, l_2) \Leftrightarrow k_1\leq k_2\text{ and }l_1 - k_1 \leq l_2-k_2. \] We note that if $(k,l), (r,s)\in \I$, then they have a common upper bound. Indeed, if $k =r$ then $(k, \max\{l,s\})$ is an upper bound for $(k,l)$ and $(r,s)$, and if $k<r$ then $(r, \max\{l+r-k, s\})$ is an upper bound for $(k,l)$ and $(r,s)$. If $(k_1, l_1)\leq (k_2, l_2)$ then it is straightforward that \[ x \kl{k_2}{l_2}y \Rightarrow x\kl{k_1}{l_1}y. \] Thus, for $(k_1, l_1)\leq (k_2, l_2)$, there is a map $\klclass{(k_1, l_1)}{\pi}{(k_2, l_2)}:\ \klclass{k_2}{\X}{l_2}\to \ \klclass{k_1}{\X}{l_1}$ such that $$\klclass{(k_1, l_1)}{\pi}{(k_2, l_2)}(\klclass{k_2}{[x]}{l_2}) =\ \klclass{k_1}{[x]}{l_1}$$ One can then form the inverse limit \begin{eqnarray} \tX &=& \lim_{(k, l)\in \I}(\ \klclass{k}{\X}{l}, \pi)\label{tXdef}\\ &=& \left\{ (\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I} \in \prod_{(k,l)\in\I}\ \klclass{k}{\X}{l}\mid (k_1, l_1)\leq (k_2, l_2) \Rightarrow\ \klclass{k_1}{[\klclass{k_2}{x}{l_2}]}{l_1} =\ \klclass{k_1}{[\klclass{k_1}{x}{l_1}]}{l_1}\right\}\nonumber \end{eqnarray} which is a closed subspace of the space $\prod_{(k,l)\in\I}\ \klclass{k}{\X}{l}$ when given the product topology of the discrete topologies. Let $x\in \X$ and take $(k,l)\in \I$. The set \[ U(x, k, l) = \{(\klclass{r}{[\klclass{r}{x}{s}]}{s})_{(r,s)\in\I} \in \tX \mid\ \klclass{k}{[\klclass{k}{x}{l}]}{l} =\ \klclass{k}{[x]}{l}\} \] is open and closed. Sets of this form generate the topology on $\tX$.
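To make the sets $\p_k(x)$ and the relation $\sim_l$ concrete, here is a small Python sketch for a subshift of finite type, using the golden mean shift (the factor $11$ never occurs) as a running example of our own choosing. For an SFT, membership $\mu x\in\X$ depends only on $\mu$ and a bounded prefix of $x$, so finite prefixes suffice; for a general subshift this computation is only an approximation.
\begin{verbatim}
from itertools import product

FORBIDDEN = ["11"]   # golden mean shift: no two consecutive 1s

def allowed(w):
    return not any(f in w for f in FORBIDDEN)

def past(x_prefix, k, alphabet="01"):
    # P_k(x): length-k words mu with mu x in X; for this SFT a
    # one-letter prefix of x already determines the answer exactly
    return {"".join(mu) for mu in product(alphabet, repeat=k)
            if allowed("".join(mu) + x_prefix)}

def l_past_equivalent(x_prefix, y_prefix, l):
    return all(past(x_prefix, k) == past(y_prefix, k)
               for k in range(1, l + 1))

print(past("0", 2))                    # the three words 00, 01, 10
print(l_past_equivalent("0", "1", 1))  # False: 1 may precede 0 but not 1
\end{verbatim}
In this example points beginning with $0$ and points beginning with $1$ are already distinguished at level $l=1$, which is the kind of separation exploited in the injectivity argument of Proposition \ref{xttight} below.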
We have the following lemma about the relation $\kl{k}{l}$. \begin{lem}\label{eqclasscontain} Let $\X$ be a subshift, let $v\in \A^*$, let $F\fs \A^*$, and let \[ k = |v|, \hspace{1cm} l = \max\{|f|, |v| : f\in F\}. \] Then for all $x\in \X$, and all $(r,s)\geq (k,l)$, either $\klclass{r}{[x]}{s} \subset C(F;v)$ or $\klclass{r}{[x]}{s} \cap C(F; v) = \emptyset$. \end{lem} \begin{proof} Since $(r,s)\geq (k,l)$ implies that $\klclass{r}{[x]}{s} \subset$ $\klclass{k}{[x]}{l}$, we need only prove the statement for $r=k$ and $s = l$. Suppose that we have $\klclass{k}{[x]}{l} \not\subset C(F;v)$, and take $y\in\ \klclass{k}{[x]}{l} \setminus C(F;v)$. If $v$ is not a prefix of $y$, then this is true of all elements of $\klclass{k}{[x]}{l}$ and we are done. So, suppose that $y = vy'$. There must be an element $f\in F$ such that $fy'\notin \X$. If we have some other element $z = vz'\in\ \klclass{k}{[x]}{l}$, we must have that $\p_{|f|}(z') = \p_{|f|}(y')$, and so $fz'\notin \X$. This implies that $z\notin C(F;v)$ and we are done. \end{proof} \subsection{$\OX$ as the tight C*-algebra of $\s_\X$} In this section we recall the definition of the tight C*-algebra of an inverse semigroup from \cite{Ex08}. We then show that the tight C*-algebra of $\s_\X$ is isomorphic to $\OX$. Let $S$ be an inverse semigroup with 0, and let $A$ be a C*-algebra. A map $\pi: S\to A$ is called a {\em representation} of $S$ if $\pi(0) = 0$, $\pi(st) = \pi(s)\pi(t)$ and $\pi(s)^* = \pi(s^*)$ for all $s, t\in S$. We are interested in a certain class of representations which we will now describe. For $F\subset Z\subset E(S)$, we say that $F$ {\em covers} $Z$ if for every nonzero $z\in Z$, there exists $f\in F$ such that $fz \neq 0$. If $F$ covers $\{y\in E(S)\mid y\leqslant x\}$, we say that $F$ covers $x$. Let $X,Y\fs E(S)$, and let \[ E(S)^{X, Y} = \{e\in E(S)\mid e\leqslant x \text{ for all }x\in X, ey = 0 \text{ for all }y\in Y\}. \] A representation $\pi: S\to A$ with $A$ unital is said to be {\em tight} if whenever $X, Y, Z\fs E(S)$ are such that $Z$ is a cover of $E(S)^{X,Y}$, we have \[ \bigvee_{z\in Z}\pi(z) = \prod_{x\in X}\pi(x)\prod_{y\in Y}(1-\pi(y)). \] The {\em tight C*-algebra of $S$}, denoted $\Ct(S)$, is the universal C*-algebra generated by one element for each element of $S$ subject to the relations which say that the standard map $\pi_t: S\to \Ct(S)$ is a tight representation. At this point it is not clear that $\Ct(S)$ exists, but it was explicitly constructed in \cite{Ex08} as a groupoid C*-algebra associated to an action of $S$ on a certain space $\Et(S)$ associated to $S$. We do not go into specifics about inverse semigroup actions or groupoids here, though we will define $\Et(S)$ as it is essential for establishing the isomorphism we desire. Recall that the natural partial order on $S$, when restricted to $E(S)$, takes on a simpler form: $e\leqslant f \Leftrightarrow ef = e$. A subset $\xi\subset E(S)$ is called a {\em filter} if it does not contain the zero element, is closed under products, and is ``upwards closed'', which is to say that if $e\in \xi$ and $e\leqslant f$ then $f\in \xi$. A filter is called an {\em ultrafilter} if it is not properly contained in any other filter. The set of filters is denoted $\Ef(S)$, and the set of ultrafilters is denoted $\Eu(S)$. The set $\Ef(S)$ may be viewed as a subset of the product space $\{0,1\}^{E(S)}$. We let $\Ef(S)$ inherit the subspace topology from the product topology (with $\{0,1\}$ given the discrete topology).
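In a finite toy model, filters and ultrafilters can be enumerated directly. The brute-force Python sketch below is our own illustration of the definitions (the semilattice $E(\s_\X)$ itself is infinite, and its ultrafilters are exhibited in the lemmas that follow): we represent idempotents Id$_U$ by their domains $U$, so that the product is intersection, the order is inclusion, and the zero element is the empty set.
\begin{verbatim}
from itertools import chain, combinations

def filters(E):
    # All filters in a finite semilattice of idempotents Id_U,
    # each idempotent represented by its domain U (a frozenset).
    zero = frozenset()
    candidates = chain.from_iterable(
        combinations(E, r) for r in range(1, len(E) + 1))
    out = []
    for s in candidates:
        xi = set(s)
        if zero in xi:
            continue  # a filter may not contain the zero element
        if any((u & v) not in xi for u in xi for v in xi):
            continue  # must be closed under products
        if any(u <= v and v not in xi for u in xi for v in E):
            continue  # must be upwards closed
        out.append(xi)
    return out

def ultrafilters(E):
    fs = filters(E)
    return [xi for xi in fs if not any(xi < eta for eta in fs)]

# Example: idempotents Id_U for U among the subsets of {1, 2}.
E = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
print(ultrafilters(E))   # two ultrafilters, generated by Id_{1} and Id_{2}
\end{verbatim}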
For $e\in E(S)$, let \[ D_e = \{\xi\in \Ef(S)\mid e\in \xi\}. \] Then sets of this form together with their complements form a subbasis for the topology on $\Ef(S)$. With this topology, $\Ef(S)$ is called the {\em spectrum} of $E(S)$. We also let $\Et(S) = \overline{\Eu(S)}$, and call this the {\em tight spectrum} of $E(S)$. We will shorten $D_e\cap \Et(S)$ to $D_e^t$. It is a fact that $\Ct(S)$ exists and that the C*-subalgebra of $\Ct(S)$ generated by $\pi_t(E(S))$ is $*$-isomorphic to $C(\Et(S))$ via the identification $\pi_t(e) \mapsto 1_{D_e^t}$. Now, we take $\X$ to be a one-sided subshift over $\A$, and describe $\Et(\s_\X)$. \begin{lem} If $\xi\subset E_\X$ is a filter, then there exists $x\in \X \cup \A^*$ such that if $E(F;v)\in \xi$, then $v$ is a prefix of $x$. \end{lem} \begin{proof} If $\xi$ is a filter and $E(G;v), E(H;w)\in\xi$, then Lemma \ref{idempotentlemma} shows that their product is zero unless $w$ is a prefix of $v$ or vice-versa. The result follows. \end{proof} \begin{lem} Let $\X$ be a one-sided subshift over $\A$, and let \[ \eta_x = \{E(F; v)\in E_\X\mid x\in C(F; v)\}. \] Then $\Eu(\s_\X) = \{\eta_x\mid x\in \X\}$. \end{lem} \begin{proof} First, we show that $\eta_x$ is a filter. If $E(F; v), E(G; w)\in \eta_x$, then $C(F;v)\cap C(G;w) = C(H; z)\neq \emptyset$ for some $H\fs \A^*$ and $z\in \A^*$. Hence $E(F;v)E(G;w) = E(H;z)$, and so $\eta_x$ is closed under products. It is clear that $\eta_x$ does not contain the zero element and is upwards closed, so it is a filter. Now, suppose that we have $E(F;v)$ such that $E(F;v)E(G;w)\neq 0$ for all $E(G;w)\in \eta_x$. Thus for each $n\geq 0$, we can find $y_n \in C(F;v)\cap C(x_1\cdots x_n)$, and it is clear that the $y_n$ converge to $x$ in $\X$. Since $C(F;v)$ is closed in $\X$, we must have that $x\in C(F;v)$, and so $E(F;v)\in \eta_x$. This shows that $\eta_x$ is an ultrafilter. Now, suppose that $\xi\subset E_\X$ is an ultrafilter. Then $\{C(F;v)\mid E(F;v)\in \xi\}$ is a collection of closed subsets of the compact space $\X$ which has the finite intersection property, and so the intersection \[ \bigcap_{E(F;v)\in \xi}C(F;v) \] is nonempty. Take $x$ in the above intersection. Then we must have that $\xi\subset \eta_x$, and since $\xi$ is assumed to be an ultrafilter, $\xi = \eta_x$. \end{proof} We now return to the space $\tX$ from \eqref{tXdef} which is the spectrum of the commutative C*-algebra $\DX$. Our next proposition will establish a natural homeomorphism between $\tX$ and $\Et(\s_\X)$. \begin{prop}\label{xttight} The map $\theta: \tX \to \Ef(\s_\X)$ defined by \begin{equation}\label{thetadef} \theta\left((\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I}\right) = \{E(F;v)\in E_\X\mid\ \klclass{k}{[\klclass{k}{x}{l}]}{l}\subset C(F;v)\text{ for some }(k,l)\in\I\} \end{equation} is continuous, injective, and $\theta(\tX) = \Et(\s_\X)$. Hence, it is a homeomorphism from $\tX$ to $\Et(\s_\X)$. \end{prop} \begin{proof} First we show that $\theta$ is well-defined. Take $(\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I}\in \tX$ and consider its image under $\theta$ -- it is clearly upwards closed and does not contain the zero element. To prove closure under products, suppose we have $(r,s), (t, u)\in \I$ and $E(F; v), E(G; w)\in E_\X$ such that $\klclass{r}{[\klclass{r}{x}{s}]}{s} \subset C(F; v)$ and $\klclass{t}{[\klclass{t}{x}{u}]}{u} \subset C(G; w)$. Let $(k,l)$ be an upper bound for $(r, s), (t, u)$ in $\I$.
Then $\klclass{k}{[\klclass{k}{x}{l}]}{l}$ is a subset of both $\klclass{r}{[\klclass{r}{x}{s}]}{s}$ and $\klclass{t}{[\klclass{t}{x}{u}]}{u}$, and so is contained in both $C(F;v)$ and $C(G;w)$. If $E(F;v)E(G;w) = E(H;z)$, then $\klclass{k}{[\klclass{k}{x}{l}]}{l}\subset C(H;z)$, and so $\theta((\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I})$ is a filter. We now show that $\theta$ is injective. Suppose we have $x, y\in \tX$ and that $x\neq y$. Then there must exist $(k,l)\in\I$ such that $\klclass{k}{[\klclass{k}{x}{l}]}{l} \neq \klclass{k}{[\klclass{k}{y}{l}]}{l}$. If $(\klclass{k}{x}{l})_{[1,k]} \neq (\klclass{k}{y}{l})_{[1,k]}$, then $$E\left(\bigcup_{r\leq l}\p_r((\klclass{k}{x}{l})_{(k,\infty)}); \ (\klclass{k}{x}{l})_{[1,k]}\right)\in \theta(x)$$ $$E\left(\bigcup_{r\leq l}\p_r((\klclass{k}{y}{l})_{(k,\infty)}); \ (\klclass{k}{y}{l})_{[1,k]}\right)\in \theta(y).$$ The product of these two elements is zero, so $\theta(x)\neq \theta(y)$. So, we instead suppose that there exists $v\in \A^*$ with $|v| = k$ and $\klclass{k}{x}{l} = vx'$, $\klclass{k}{y}{l} = vy'$, so that \[ \klclass{k}{[vx']}{l}\neq\ \klclass{k}{[vy']}{l}. \] Without loss of generality, there must exist $w\in \A^*$ with $|w|\leq l$ such that $wx'\in\X$ and $wy'\notin \X$. Hence, $vx'\in C(w,v)$, and $vy'\notin C(w,v)$. Thus by Lemma \ref{eqclasscontain}, we must have that $\klclass{k}{[vx']}{l} \subset C(w,v)$ and $\klclass{k}{[vy']}{l}\cap C(w,v) = \emptyset$. Similar to above, this implies that $E(w,v)\in \theta(x)$ and $E(w,v)\notin \theta(y)$. Hence $\theta(x)\neq\theta(y)$, and $\theta$ is injective. Now, we prove that $\theta$ is continuous. Take $E(F;v)\in E_\X$, and as before take $D_{E(F;v)} = \{\xi\in \Ef(\s_\X)\mid E(F;v)\in \xi\}$. Then \[ \theta^{-1}(D_{E(F;v)}) = \{(\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I}\in \tX\mid\ \klclass{r}{[\klclass{r}{x}{s}]}{s} \subset C(F;v)\text{ for some }(r,s)\in\I\}. \] If $(\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I}\in \theta^{-1}(D_{E(F;v)})$, find $(r,s)\in\I$ such that $\klclass{r}{[\klclass{r}{x}{s}]}{s} \subset C(F;v)$. Then if $(\klclass{k}{[\klclass{k}{y}{l}]}{l})_{(k,l)\in\I}\in U(\klclass{r}{x}{s},r,s)$, $\klclass{r}{[\klclass{r}{y}{s}]}{s} =\ \klclass{r}{[\klclass{r}{x}{s}]}{s} \subset C(F;v)$, and so $U(\klclass{r}{x}{s},r,s)\subset \theta^{-1}(D_{E(F;v)})$. On the other hand, \[ \theta^{-1}((D_{E(F;v)})^c) = \{(\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I}\in \tX\mid\ \klclass{r}{[\klclass{r}{x}{s}]}{s} \not\subset C(F;v)\text{ for all }(r,s)\in\I\}. \] Take $(\klclass{k}{[\klclass{k}{x}{l}]}{l})_{(k,l)\in\I}\in \theta^{-1}((D_{E(F;v)})^c)$, and set $k = |v|$ and $l = \max\{|f|, |v| : f\in F\}$. Then $\klclass{k}{[\klclass{k}{x}{l}]}{l}\cap C(F;v) = \emptyset$, and so $U(\klclass{k}{x}{l}, k, l)\subset \theta^{-1}((D_{E(F;v)})^c)$. The collection of all sets of the form $D_{E(F;v)}$ together with those of the form $(D_{E(G;w)})^c$ forms a subbasis for the topology on $\Ef(\s_\X)$, so $\theta$ is continuous. Finally, we must show that $\theta(\tX) = \Et(\s_\X)$. For $x\in \X$, let \[ \tilde x = (\klclass{k}{[x]}{l})_{(k,l)\in\I}. \] Because sets of the form $U(x, k,l)$ for $x\in \X$ and $(k,l)\in\I$ form a basis for the topology on $\tX$, the set $\{\tilde x\in \tX\mid x\in\X\}$ is dense in $\tX$. We claim that $\theta(\tilde x) = \eta_x$. If $E(F;v) \in \eta_x$, then taking $k = |v|$ and $l = \max\{|f|, |v| : f\in F\}$ gives us that $\klclass{k}{[x]}{l}\subset C(F;v)$, and so $E(F;v)\in \theta(\tilde x)$.
Conversely, if $E(F;v)\in \theta(\tilde x)$, then $\klclass{k}{[x]}{l}\subset C(F;v)$ for some $(k,l)\in \I$. Hence $x\in C(F;v)$, $E(F;v)\in \eta_x$, and so $\theta(\tilde x) = \eta_x$. So $\theta: \tX\to \Ef(\s_\X)$ is continuous, injective, and maps a dense subspace of $\tX$ bijectively onto a dense subspace of $\Et(\s_\X)$. Since $\theta$ is continuous and $\tX = \overline{\{\tilde x\mid x\in\X\}}$, we have $\theta(\tX)\subset \overline{\{\eta_x\mid x\in\X\}} = \overline{\Eu(\s_\X)} = \Et(\s_\X)$. Since $\tX$ is compact, we must have that $\theta(\tX)$ is a closed set in $\Ef(\s_\X)$ which contains $\Eu(\s_\X)$, and so it contains its closure $\Et(\s_\X)$. Therefore $\theta(\tX) = \Et(\s_\X)$, and since $\tX$ is compact and $\Et(\s_\X)$ is Hausdorff, $\theta:\tX\to \Et(\s_\X)$ is a homeomorphism. \end{proof} Now that we have the above homeomorphism, we can establish the conditions we need to use the universal property of $\OX$. \begin{prop}\label{DXEtiso} There exists a $*$-isomorphism $\Psi:\DX\to C(\Et(\s_\X))$ such that $\Psi(1_{C(w,v)}) = 1_{D^t_{E(w,v)}}$ for all $w, v\in\A^*$. Furthermore, if $E(F;v)\in E_\X$, then $\Psi(1_{C(F;v)}) = 1_{D^t_{E(F;v)}}$. \end{prop} \begin{proof} Take $w,v\in\A^*$, and let $k = |v|$, $l = \max\{|w|,|v|\}$. There are only finitely many $l$-past equivalence classes, so pick a representative from each one, say $x^l_1, x^l_2, \dots, x^l_{m(l)}$. By Lemma \ref{eqclasscontain}, $C(w,v)$ is a finite disjoint union of $\kl{k}{l}$ equivalence classes, that is, there exists $F\subset \{1, \dots, m(l)\}$ such that \begin{eqnarray} C(w,v) &=& \bigcup_{f\in F}\ \klclass{k}{[vx^l_f]}{l}\nonumber\\ &=& \bigcup_{f\in F} C(v)\cap \sigma^{-k}([x_f^l]_l).\label{Cwvdisjoint} \end{eqnarray} From the proof of Proposition \ref{xttight}, if $\theta$ is as in \eqref{thetadef}, we must have that \[ \theta^{-1}(D^t_{E(w,v)}) = \bigcup_{f\in F} U(vx^l_f, k, l) \] where again this is a disjoint union. Thus, if $\Theta$ is the $*$-isomorphism from $C(\Et(\s_\X))$ to $C(\tX)$ induced by $\theta$, we have \[ \Theta(1_{D_{E(w,v)}}) = \sum_{f\in F}1_{ U(vx^l_f, k, l)}. \] By \cite[Proposition 3 in Chapter 2]{CarlsenThesis}, there exists a $*$-isomorphism $\psi:\DX\to C(\tX)$ such that $\psi(1_{C(v)\cap \sigma^{-|v|}([x^l_f]_l)}) = 1_{U(vx_f^l, |v|, l)}$. By \eqref{Cwvdisjoint} we have \begin{eqnarray*} \Theta^{-1}\circ \psi\left( 1_{C(w,v)} \right) &=& \Theta^{-1}\circ \psi\left(1_{\bigcup_{f\in F} C(v)\cap \sigma^{-k}([x_f^l]_l)}\right)\\ &=& \Theta^{-1}\left(\sum_{f\in F}\psi\left(1_{C(v)\cap \sigma^{-k}([x_f^l]_l)}\right)\right)\\ &=& \Theta^{-1}\left(\sum_{f\in F}1_{ U(vx^l_f, k, l)}\right)\\ &=& \Theta^{-1}(\Theta(1_{D^t_{E(w,v)}}))\\ &=& 1_{D^t_{E(w,v)}}. \end{eqnarray*} Hence taking $\Psi = \Theta^{-1}\circ \psi$ verifies the first statement. The second statement follows from the fact that, for all $F\fs \A^*$ and $v\in \A^*$, we have \[ 1_{C(F;v)} = 1_{\cap_{f\in F}C(f, v)} = \prod_{f\in F}1_{C(f,v)}, \] \[ D^t_{E(F;v)} = D^t_{\prod_{f\in F}E(f,v)} = \bigcap_{f\in F}D^t_{E(f,v)}. \] \end{proof} We now establish what we need to use the universal property of $\Ct(\s_\X)$. \begin{prop}\label{SXOXtight} Let $\X$ be a one-sided subshift over $\A$. Then the map $\pi: \s_\X \to \OX$ defined by \[ \pi(s_\alpha E(F;v)s_\beta^*) = S_\alpha S_v\left(\prod_{f\in F}S_f^*S_f\right)S_v^*S_\beta^*,\hspace{1cm} F\fs \A^*; \alpha, \beta, v\in \A^* \] \[ \pi(0) = 0 \] is a tight representation of $\s_\X$.
\end{prop} \begin{proof} Because Definition \ref{OXdef}.1 and the relations \eqref{comm1}, \eqref{comm2}, \eqref{ortho} hold in $\OX$, and each $S_\mu$ is a partial isometry, the same computations from Lemma \ref{idempotentlemma} hold in $\OX$. Hence, the products computed in Lemma \ref{productcomputation} hold in $\OX$, and so $\pi$ is a representation of $\s_\X$. Now suppose we have $X, Y, Z\fs E_\X$ such that $Z$ is a cover of $E_\X^{X,Y}$. Then we know that, for the universal tight representation $\pi_t$, we have \[ \bigvee_{z\in Z}\pi_t(z) = \prod_{x\in X}\pi_t(x)\prod_{y\in Y}(1-\pi_t(y)). \] By Proposition \ref{DXEtiso}, $\pi_t(e) = \Psi\circ \pi(e)$ for all $e\in E_\X$. Thus we have \begin{eqnarray*} \bigvee_{z\in Z}\pi_t(z) &=& \prod_{x\in X}\pi_t(x)\prod_{y\in Y}(1-\pi_t(y))\\ \bigvee_{z\in Z}\Psi\circ \pi(z) &=& \prod_{x\in X}\Psi\circ \pi(x)\prod_{y\in Y}(\Psi\circ \pi(1)-\Psi\circ \pi(y))\\ \Psi\left(\bigvee_{z\in Z}\pi(z)\right) &=& \Psi\left(\prod_{x\in X} \pi(x)\prod_{y\in Y}(\pi(1)- \pi(y))\right)\\ \bigvee_{z\in Z}\pi(z) &=& \prod_{x\in X} \pi(x)\prod_{y\in Y}(I_{\OX}- \pi(y)) \end{eqnarray*} and so $\pi$ is a tight representation. \end{proof} \begin{theo}\label{mainresult} Let $\X$ be a one-sided subshift over $\A$, let $\s_\X$ be as in \eqref{SXdef}, and let $\OX$ be as in Definition \ref{OXdef}. Then $\Ct(\s_\X)$ and $\OX$ are $*$-isomorphic. \end{theo} \begin{proof} By Proposition \ref{DXEtiso} and the universal property of $\OX$, there exists a $*$-homomorphism $\kappa: \OX\to \Ct(\s_\X)$ such that $\kappa(S_\mu) = \pi_t(s_\mu)$ for all $\mu\in \A^*$. By Proposition \ref{SXOXtight} and the fact that $\Ct(\s_\X)$ is universal for tight representations of $\s_\X$, there exists a $*$-homomorphism $\tau:\Ct(\s_\X)\to \OX$ such that $\tau(\pi_t(s_\mu)) = S_\mu$ for all $\mu\in \A^*$. We therefore must have that $\kappa$ and $\tau$ are inverses of each other, and so $\Ct(\s_\X)$ and $\OX$ are $*$-isomorphic. \end{proof} \subsection{$\OX$ as a partial crossed product} We close with a nice consequence of Theorem \ref{mainresult}. Recall from Lemma \ref{stronglyEunitarylemma} that $\s_\X$ is strongly $E^*$-unitary. Any strongly $E^*$-unitary inverse semigroup $S$ admits a {\em universal group} $\U(S)$, that is, there exists an idempotent-pure partial homomorphism $\iota:S\setminus \{0\}\to \U(S)$ such that every other idempotent-pure partial homomorphism from $S$ factors through $\iota$. We have the following result about strongly $E^*$-unitary inverse semigroups from \cite{MS14}. \begin{theo}\label{MStheo}(See \cite[Theorem 5.3]{MS14}) Let $S$ be a countable strongly $E^*$-unitary inverse semigroup. Then there is a natural partial action of $\U(S)$ on $\Et(S)$ such that the partial crossed product $C(\Et(S))\rtimes \U(S)$ is isomorphic to $\Ct(S)$. \end{theo} We do not define partial actions or partial crossed products here -- the interested reader is directed to the excellent reference \cite{ExBook}. In a preprint version of this work, we concluded the paper by using the above to deduce that $\OX$ could be written as a partial crossed product by the universal group of $\s_\X$. We are grateful to the referee for pointing out that our results allow us to easily see what the universal group is and to say even more about this partial crossed product. In what remains of this paper, we implement the referee's suggestions. The following lemma is a consequence of our proof of Lemma \ref{stronglyEunitarylemma}.
\begin{lem}\label{universalfreegroup} Let $\X$ be a one-sided subshift over $\A$. Then $\U(\s_\X)$ is isomorphic to $\mathbb{F}_\A$. \end{lem} \begin{proof} Let $\phi: \s_\X\setminus\{0\}\to \mathbb{F}_\A$ be the partial homomorphism from the proof of Lemma \ref{stronglyEunitarylemma}. The group $\U(\s_\X)$ is generated by $\iota(s_a)$ for $a\in \A$, so there is a group homomorphism from $\mathbb{F}_\A$ to $\U(\s_\X)$ which sends $a\in\A$ to $\iota(s_a)$. Since $\phi$ factors through $\iota$, there exists a group homomorphism from $\U(\s_\X)$ to $\mathbb{F}_\A$ which sends $\iota(s_a)$ to $a$. These two homomorphisms are mutually inverse on generators, and therefore $\U(\s_\X)$ is isomorphic to $\mathbb{F}_\A$. \end{proof} We now have the following from Theorem \ref{MStheo}. \begin{cor}\label{FreePartialCrossedProduct} Let $\X$ be a one-sided subshift over $\A$, and let $\OX$ be as in Definition \ref{OXdef}. Then there is a partial action of $\mathbb{F}_\A$ on $\tilde\X$ such that $\OX \cong C(\tilde\X)\rtimes \mathbb{F}_\A$. \end{cor} At this point we must direct the reader to the recent preprint \cite{ED15} which constructs by hand the partial action from Corollary \ref{FreePartialCrossedProduct}, studies it in detail, and uses it to give necessary and sufficient conditions on $\X$ to guarantee that $\OX$ is simple. The article \cite{ED15} appeared after the first preprint version of this work but before the final version was accepted. Therefore, the result in Corollary \ref{FreePartialCrossedProduct} is original to \cite{ED15}. As the referee points out, one can say a little more about this partial crossed product. Given a partial action $\theta$ of a group $\Gamma$ on a space $X$, one can always construct a space $\tilde X \supset X$ and a global action $\tilde \theta$ of $\Gamma$ on $\tilde X$ such that the restriction of $\tilde \theta$ to $X$ is the original partial action -- this is called the {\em enveloping action} for $\theta$, see \cite{Ab03}. Unfortunately, even if $X$ is Hausdorff, $\tilde X$ may not be. When $X$ and $\tilde X$ are both locally compact and Hausdorff, then the partial crossed product $C_0(X)\rtimes_\theta \Gamma$ is strongly Morita equivalent to the crossed product $C_0(\tilde X)\rtimes_{\tilde\theta} \Gamma$, see \cite{Ab03} for the details. In our situation, \cite[Corollary 6.17]{MS14} says that because $\s_\X$ is $F^*$-inverse, the space for the enveloping action for the partial action in \cite{ED15} and Corollary \ref{FreePartialCrossedProduct} is Hausdorff. Therefore we have the following. \begin{cor}\label{hausdorffglobalization} Let $\X$ be a one-sided subshift over $\A$, and let $\OX$ be as in Definition \ref{OXdef}. Then there exists a locally compact Hausdorff space $\Omega$ and an action of $\mathbb{F}_\A$ on $\Omega$ such that $\OX$ is strongly Morita equivalent to $C_0(\Omega)\rtimes \mathbb{F}_\A$. \end{cor} {\bf Acknowledgment:} I am grateful to the referee for an extremely careful reading, and for pointing out that the results of this paper could be strengthened to include Lemma \ref{universalfreegroup}, Corollary \ref{FreePartialCrossedProduct}, and Corollary \ref{hausdorffglobalization}. \bibliographystyle{alpha}
\section{Analysis} \label{sec:ablation} \subsection{Out-of-Domain Evaluator} In the experiments in Section \ref{sec:experiment}, the evaluators for DE and DA were each trained on human evaluations of responses produced by the corresponding generator. However, collecting human evaluations for every generator is impractical. We therefore investigate the impact of training an evaluator on responses from a different generation method and dataset. We make the same comparisons as in Section \ref{sec:experiment}. The results are shown in Table \ref{tbl:ood}. The proposed systems outperform the baseline in this setting as well. \subsection{Which Response is Chosen?} \label{ssec:ablatioon:which-response} We analyzed which decoding methods and DAs are selected by the evaluator model. The more evenly the choices are distributed, the more effective the proposed method is: if one specific decoding scheme or DA were always chosen, simply using that scheme alone would match the proposed method. The results of the analysis are shown in Tables \ref{tbl:which-response-de} and \ref{tbl:which-response-da}. The choices are scattered, and thus the proposed method can generate diverse responses. \section*{Acknowledgements} This work was supported by a joint research grant from LINE Corporation. \section{Conclusion} \label{sec:conclusion} We developed a dialogue system that can generate engaging responses by incorporating a response evaluator within the dialogue system. We proposed a generator-evaluator model, which generates multiple responses through multiple decoding schemes or specified DAs, evaluates the responses, and selects the best one. Human evaluation showed that responses generated by the generator-evaluator model are more engaging than those by the baseline systems. However, the quality of responses generated with specified DAs still needs to be improved in future work. \section{Dataset} \label{sec:dataset} Since no sufficiently large corpus of Japanese dialogues is available, we begin with corpus construction. \subsection{Twitter Dataset} \label{ssec:dataset:twitter} Our dialogue dataset is collected from Twitter using the Twitter API. Some of the conversations are collected from single-turn conversations only~(Twitter-Single), while the others are collected from multi-turn conversations~(Twitter-Multi). \subsection{Response-Evaluation Dataset} \label{ssec:dataset:evaluation} Our Response-Evaluation dataset contains human evaluations of how well a response to a single-turn utterance meets certain viewpoints. We use the following four evaluation viewpoints: relevance, interestingness, engagingness, and empathy. We use two types of utterance-response pairs to ensure corpus diversity: the first is the Twitter-Single dataset described in Section \ref{ssec:dataset:twitter}, and the second consists of utterances from the Twitter-Single dataset paired with responses generated by generator models. We use two types of generator models: a model with multiple decoding schemes and a model that can generate responses with specified DAs. For the pairs using generated responses, we collect evaluations of multiple responses to the same utterance; these show how the evaluations differ when different responses are generated for one utterance. The evaluations are collected through crowdsourcing. We ask five workers a five-grade question and take the average as the evaluation value.
The statistics of the dataset are shown in Table~\ref{tbl:evaluation-dataset}. \subsection{DA Dataset} \label{ssec:dataset:da} We assign DAs to each utterance in the Twitter-Multi dataset described in Section \ref{ssec:dataset:twitter}. By using multi-turn conversations, we intend the dataset to capture the transition of DAs over a long conversation. We adopt the seven DA types shown in Table \ref{tbl:da}. The number of DA types was reduced to seven because the 42 types in the previous study \cite{stolcke-etal-2000-dialogue} were too fine-grained to be annotated by crowdsourcing. Since some utterances do not settle on a single DA, we allow multiple DAs for each utterance. DAs are collected through crowdsourcing. We ask five workers and adopt each DA that receives at least three votes. The number of utterances for each DA is shown in Table \ref{tbl:da-dataset}. Since this amount of data is not sufficient for training the generator model described in Section \ref{sssec:method:multi-response:da}, the dataset is instead used to train DA classifiers that are applied to the Twitter-Single dataset for data augmentation. \begin{table}[t] \centering \small \begin{tabular}{l|r|r|r} \hline Dialogue Act& Precision & Recall & F1 \\\hline \hline Advice& 0.52 & 0.57 & 0.54 \\ Emotion & 0.54 & 0.37 & 0.44 \\ Opinion & 0.60 & 0.51 & 0.55\\ Inform & 0.44 & 0.55 & 0.49 \\ Schedule & 0.41 & 0.47 & 0.44\\ Question & 0.88 & 0.51 & 0.65\\ Agree & 0.69 & 0.53 & 0.60 \\\hline \end{tabular} \caption{Results of DA classification by five-fold cross validation.} \label{tbl:da-classifier} \end{table} \begin{table}[t] \centering \small \begin{tabular}{l|r} \hline Dialogue Act& Amount \\\hline \hline Advice& 2,284 \\ Emotion & 4,195 \\ Opinion & 6,580\\ Inform & 63,652\\ Schedule & 89,990\\ Question & 33,629\\ Agree & 70,557\\\hline \end{tabular} \caption{Amount of data for each DA obtained by data augmentation with the DA classifiers.} \label{tbl:da-augmentation} \end{table} \subsubsection*{Augmentation with DA Classifiers} \label{sssec:dataset:da:classifier} We build DA classifiers by fine-tuning BERT on the DA dataset described above. These DA classifiers are binary classifiers that determine whether a response belongs to each of the DAs. The results of DA classification by each DA classifier are shown in Table \ref{tbl:da-classifier}. The metrics are precision, recall, and F1, computed using five-fold cross validation. From this table alone, the predicted DAs do not seem sufficiently precise to be used for data augmentation. However, we manually examined a sample of the predicted DAs and found that their precision was around 70\%, which led us to use them for data augmentation. We augment the DA dataset by applying the classifiers to an unlabeled dialogue corpus: we apply each binary classifier to 1.6M responses of the Twitter-Single dataset and assign DA labels to the responses judged to be positive. The amount of data obtained for each DA is shown in Table \ref{tbl:da-augmentation}. \section{Experiments} \label{sec:experiment} We conduct the evaluation by crowdsourcing. Workers are shown the outputs of two systems and asked which system they would prefer to continue the conversation with. We ask three workers per question and take a majority vote as the result; a minimal sketch of this aggregation is given below. The test corpus consists of 2,000 sentences from the Twitter-Single dataset described in Section \ref{ssec:dataset:twitter} that are not used for training.
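For concreteness, the vote aggregation can be sketched as follows; the labels \texttt{A} (a proposed system) and \texttt{B} (the baseline) and the function names are illustrative, not part of any released code.
\begin{verbatim}
from collections import Counter

def majority_vote(votes):
    # votes: the three workers' choices for one test utterance.
    return Counter(votes).most_common(1)[0][0]

def win_rate(per_utterance_votes):
    # Fraction of test utterances whose majority vote favors system A.
    wins = sum(majority_vote(v) == "A" for v in per_utterance_votes)
    return wins / len(per_utterance_votes)

# Example: win_rate([["A", "A", "B"], ["B", "A", "B"]]) == 0.5
\end{verbatim}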
\subsection{Experimental Setup} \label{ssec:experimental-setup} The proposed systems use two types of generators: one with the multiple decoding schemes~(\textbf{DE}) and one with DA-specified responses~(\textbf{DA}). Also, by combining DE and DA, the DA generator can generate responses using the multiple decoding schemes~(\textbf{DADE}). We define \textbf{DE Best}, \textbf{DA Best}, and \textbf{DADE Best}, which refer to the response judged to be the best among multiple responses by the evaluators in DE, DA, and DADE, respectively. Here, in DE, seven responses were generated by repeating sampling five times in addition to greedy search and beam search. In DA, seven responses were obtained by generating one response for each DA, including the general DA but excluding the emotion DA, whose classifier did not perform accurately. Multiple DAs were allowed during dataset construction, but only one DA was specified for generation. In DADE, seven responses are obtained for each of the seven DAs, resulting in a total of 49 responses. We perform a one-to-one comparison of each proposed system's response with the baseline system's response following~\citet{roller-etal-2021-recipes}. There are five types of responses to be compared, which are shown below. \begin{description} \item[DE Greedy] a response generated by greedy search \item[DE Random] a randomly selected response from the seven responses \item[DA General] a response generated by specifying the general DA \item[DA Random] a randomly selected response from the seven DA responses \item[DADE Random] a randomly selected response from the 49 responses \end{description} \subsection{Training} \label{ssec:training} We use T5~\cite{2020t5} pretrained with a Japanese corpus\footnotemark as the generator in DE. We fine-tune it with 800,000 pairs from the Twitter-Single dataset described in Section \ref{ssec:dataset:twitter}. The generator model used in DA is further fine-tuned from the DE generator model with the augmented DA dataset in Section \ref{ssec:dataset:da} and a part of the Twitter-Single dataset as general-DA responses; this part has the same size as the augmented DA dataset (270,000 pairs). The evaluator is a fine-tuned BERT model, constructed separately for DE and DA. The dataset used for fine-tuning is the Engagingness data of the Response-Evaluation dataset described in Section \ref{ssec:dataset:evaluation}. It consists of 4,000 pairs derived from Twitter and 4,000 pairs from the corresponding DE or DA generator. For DADE, we use the same evaluator as DA. \subsection{Result} \label{ssec:result} The evaluation results of our experiments are shown in Table \ref{tbl:result}. They show the effectiveness of generating multiple responses and selecting the best response with the evaluator. However, the results of \textbf{DADE Best vs DE Greedy} and \textbf{DADE Best vs DE Best} show that the responses of the DA generator were not rated better than the responses of the DE generator. This can be attributed to the fact that the distribution of the dataset was skewed by data augmentation, and further study is needed. Example responses generated by the proposed system are shown in Table~\ref{tbl:system-response}. \footnotetext{{https://huggingface.co/sonoisa/t5-base-japanese}} \section{Introduction} Dialogue systems based on deep neural networks (DNNs) have been widely studied. Although these dialogue systems can generate fluent responses, they often generate dull responses such as ``yes, that's right'' and lack engagingness as a conversation partner~\cite{jiang-de-rijke-2018-sequence}.
To develop an engaging dialogue system, it is necessary to generate a variety of responses so as not to bore users. However, dialogue systems that are capable of generating diverse responses are difficult to evaluate automatically. A commonly used evaluation metric is BLEU~\cite{papineni-etal-2002-bleu}, borrowed from machine translation, which measures the degree of n-gram agreement with a reference response. However, due to the one-to-many nature of dialogue~\cite{zhao-etal-2017-learning}, i.e., the existence of multiple appropriate responses to an utterance, methods that compare the response to reference responses are not appropriate. Therefore, there is a need for evaluation methods that do not use reference responses, and one of them is supervised evaluation: DNNs are trained on human evaluations of responses generated by humans and by models~\cite{zhao-etal-2020-designing,ghazarian-etal-2019-better}. Such DNN-based evaluations correlate to some extent with human evaluations. We aim to develop a dialogue system that is more engaging as a conversational partner by combining independently studied response generation and response evaluation models into a single dialogue system. Specifically, we propose a generator-evaluator model in which multiple responses are generated by the generation model, evaluated by the evaluation model, and the response with the highest evaluation score is selected. By generating multiple responses, we can obtain diverse responses; this is enabled by the response evaluator, which does not require reference responses. We generate multiple responses in two ways: with multiple decoding schemes, and with a model that can generate responses with a specified Dialogue Act (DA). Generating responses by specifying various DAs leads to a variety of responses. To evaluate the proposed method, we conducted human evaluation by crowdsourcing to compare the outputs of the proposed system and a baseline system. The evaluation results show that the proposed system outputs better responses, and indicate the effectiveness of the proposed method. We target Japanese dialogue systems and construct datasets of Japanese dialogues. \section{A Generator-Evaluator Model for an Engaging Dialogue System} \label{sec:method} \subsection{Generator-Evaluator Model} \label{ssec:method:generator-evaluator} We propose a generator-evaluator model that generates multiple responses, evaluates these responses, and selects the response with the highest evaluation score for output. The overview of the proposed model is shown in Figure~\ref{graph:system}. Two methods are used to generate multiple responses: multiple decoding schemes and a model that can generate DA-specified responses. For the evaluator, BERT is fine-tuned with the Response-Evaluation dataset described in Section \ref{ssec:dataset:evaluation}. \subsection{Multiple Response Generators} \label{ssec:method:multi-response} We use T5~\cite{2020t5} as a generator by fine-tuning it with the method described below. \subsubsection{Multiple Decoding Schemes} \label{sssec:method:multi-response:decoding-scheme} The first method for obtaining multiple responses is to use multiple decoding schemes. Three types of decoding methods are used: greedy search, beam search, and sampling. In particular, repeated sampling is expected to generate diverse responses. We use top-50 sampling~\citep{fan-etal-2018-hierarchical}. A minimal sketch of the resulting generate-then-select loop is given below.
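The sketch uses the HuggingFace \texttt{transformers} API; the T5 checkpoint is the one cited in Section~\ref{ssec:training}, while the evaluator checkpoint and its single-output regression head are illustrative assumptions rather than our released models.
\begin{verbatim}
import torch
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          AutoModelForSequenceClassification)

gen_tok = AutoTokenizer.from_pretrained("sonoisa/t5-base-japanese")
generator = AutoModelForSeq2SeqLM.from_pretrained("sonoisa/t5-base-japanese")
# Hypothetical evaluator: a BERT-style encoder fine-tuned with a
# single-output regression head on the engagingness ratings.
ev_tok = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
evaluator = AutoModelForSequenceClassification.from_pretrained(
    "cl-tohoku/bert-base-japanese", num_labels=1)

def generate_candidates(utterance, n_samples=5):
    # DE setting: greedy search and beam search give one candidate
    # each; repeated top-50 sampling gives the remaining five.
    ids = gen_tok(utterance, return_tensors="pt").input_ids
    outs = [generator.generate(ids, max_new_tokens=64),
            generator.generate(ids, num_beams=5, max_new_tokens=64)]
    for _ in range(n_samples):
        outs.append(generator.generate(ids, do_sample=True, top_k=50,
                                       max_new_tokens=64))
    return [gen_tok.decode(o[0], skip_special_tokens=True) for o in outs]

def select_best(utterance, candidates):
    # Score each (utterance, response) pair; return the argmax.
    scores = []
    with torch.no_grad():
        for resp in candidates:
            enc = ev_tok(utterance, resp, return_tensors="pt",
                         truncation=True)
            scores.append(evaluator(**enc).logits.item())
    return candidates[scores.index(max(scores))]

utterance = "週末は何をしていましたか?"
response = select_best(utterance, generate_candidates(utterance))
\end{verbatim}
The DA and DADE variants differ only in how the candidate set is produced: prompts specifying a DA (Section~\ref{sssec:method:multi-response:da}) replace or multiply the decoding schemes.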
\subsubsection{DA-Specified Response Generation} \label{sssec:method:multi-response:da} The second method to obtain multiple responses is to use a model that can generate responses with specified DAs. We achieve such a model by training a response generation model on utterance-response pairs with prompts attached that specify the DA of the response. The dataset format is as follows: (\ref{example:utterance}) represents the input and (\ref{example:response}) represents the response. The italic span denotes the prompt specifying a DA. \eenumsentence{ \item \label{example:utterance} \textit{Return a response of advice to the interlocutor} I haven't done the assignment yet. \item \label{example:response} You should read this book before you do it. } To train this model, we need a dialogue corpus annotated with DA labels. We use the DA dataset described in Section \ref{ssec:dataset:da}. A dialogue corpus without DA labels is also used, as responses with a \textit{general} DA; its prompt is \textit{Return a response}. \section{Related Work} \label{sec:related} Methods for evaluating responses by dialogue systems can be divided into human and automatic evaluations. Automatic evaluation can be further classified into evaluation with or without reference responses. As an automatic evaluation metric, BLEU~\citep{papineni-etal-2002-bleu} is mainly used. It evaluates responses in terms of n-gram agreement with the reference sentence. However, it has been shown that there is no correlation at all between BLEU and human evaluations \citep{liu-etal-2016-evaluate}. One reason for this is the one-to-many nature of dialogue~\citep{zhao-etal-2017-learning}, which means that there are multiple appropriate responses to an utterance. Considering this nature, a method that measures the degree of n-gram agreement with the reference response is inappropriate for evaluating responses. Therefore, automatic evaluation methods without any reference responses have been studied~\citep{zhao-etal-2020-designing,ghazarian-etal-2019-better}. They trained BERT~\citep{devlin-etal-2019-bert} on a dataset of human evaluations to perform response evaluation that correlates with the human evaluations. \\ \indent DA represents the role of an utterance in a dialogue. There are some datasets annotated with DAs, such as SwDA~\citep{stolcke-etal-2000-dialogue} and MRDA~\citep{shriberg-etal-2004-icsi}. However, such datasets exist only for English, and we construct a DA dataset in Japanese. \citet{raheja-tetreault-2019-dialogue,10.1145/3331184.3331375} constructed models that classify the DA of an utterance. \citet{kawano-etal-2019-neural} proposed a model to generate responses with a specified DA, achieved through adversarial learning. In this study, we use a more straightforward method to control responses.
\section{Introduction} \label{sec:intro} This article is adapted from Chapter 16 of \href{http://sachdev.physics.harvard.edu/qptweb/toc.html}{{\em Quantum Phase Transitions}, 2nd edition}, Cambridge University Press. It is not conventional to think of dilute quantum liquids as being in the vicinity of a quantum phase transition. However, there is a simple sense in which they are, although there is often no broken symmetry or order parameter associated with this quantum phase transition. We shall show below that the perspective of such a quantum phase transition allows a unified and efficient description of the universal properties of quantum liquids. Stated most generally, consider a quantum liquid with a global ${\rm U}(1)$ symmetry. We shall be particularly interested in the behavior of the conserved density, generically denoted by $Q$ (usually the particle number), associated with this symmetry. The quantum phase transition is between two phases with a specific $T=0$ behavior in the expectation value of $Q$. In one of the phases, $\langle Q \rangle$ is pinned precisely at a quantized value (often zero) and does not vary as microscopic parameters are varied. This quantization ends at the quantum critical point with a discontinuity in the derivative of $\langle Q \rangle$ with respect to the tuning parameter (usually the chemical potential); $\langle Q \rangle$ varies smoothly in the other phase, and there is no discontinuity in the value of $\langle Q \rangle$ itself. The most familiar model exhibiting such a quantum phase transition is the dilute Bose gas. We express its coherent state partition function, $Z_B$, in terms of a complex field $\Psi_B (x, \tau)$, where $x$ is a $d$-dimensional spatial co-ordinate and $\tau$ is imaginary time: \begin{eqnarray} Z_B &=& ~\int {\cal D} \Psi_B (x,\tau) \exp\left( - ~\int_0^{1/T} d \tau ~\int d^d x {\cal L}_{B} \right), \nonumber \\ {\cal L}_{B} &=& \Psi_B^{\ast} ~\frac{\partial \Psi_B}{\partial \tau} + ~\frac{1}{2m} \left| \nabla \Psi_B \right|^2 -\mu |\Psi_B|^2 + \frac{u_0}{2} |\Psi_B|^4. \label{xx0} \end{eqnarray} We can identify the charge $Q$ with the boson density $\Psi_B^{\ast} \Psi_B$ \begin{equation} \langle Q \rangle = - \frac{\partial {\cal F}_B}{\partial \mu} = \langle |\Psi_B|^2 \rangle, \label{xx0a} \end{equation} with ${\cal F}_B = - (T/V) \ln Z_B$. The quantum critical point is precisely at $\mu = 0$ and $T=0$, and there are {\em no} fluctuation corrections to this location from the terms in ${\cal L}_B$. So at $T=0$, $ \langle Q \rangle$ takes the quantized value $\langle Q \rangle = 0$ for $\mu < 0$, and $\langle Q \rangle > 0$ for $\mu > 0$; we will describe the nature of the onset at $\mu=0$ and the finite-$T$ crossovers in its vicinity. Actually, we will begin our analysis in Section~\ref{sec:fermigas} with a model simpler than $Z_B$, which displays a quantum phase transition with the same behavior in a conserved ${\rm U}(1)$ density $\langle Q \rangle$ and has many similarities in its physical properties. The model is exactly solvable and is expressed in terms of a continuum canonical spinless fermion field $\Psi_F$; its partition function is \begin{eqnarray} Z_F &=& ~\int {\cal D} \Psi_F (x,\tau) \exp\left( - ~\int_0^{1/T} d \tau ~\int d^d x {\cal L}_{F} \right), \nonumber \\ {\cal L}_{F} &=& \Psi_F^{\ast} ~\frac{\partial \Psi_F}{\partial \tau} + ~\frac{1}{2m} \left| \nabla \Psi_F \right|^2 -\mu |\Psi_F|^2 . \label{xx0b} \end{eqnarray} ${\cal L}_F$ is just a free field theory.
Like $Z_B$, $Z_F$ has a quantum critical point at $\mu=0$, $T=0$ and we will discuss its properties; in particular, we will show that all possible fermionic nonlinearities are irrelevant near it. The reader should not be misled by the apparently trivial nature of the model in (\ref{xx0b}); using the theory of quantum phase transitions to understand free fermions might seem like technological overkill. We will see that $Z_F$ exhibits crossovers that are quite similar to those near far more complicated quantum critical points, and observing them in this simple context leads to considerable insight. In general spatial dimension, $d$, the continuum theories $Z_B$ and $Z_F$ have different, though closely related, universal properties. However, we will argue that the quantum critical points of these theories are {\em exactly} equivalent in $d=1$. We will see that the bosonic theory $Z_B$ is strongly coupled in $d=1$, and will note compelling evidence that the solvable fermionic theory $Z_F$ provides its exact and universal solution in the vicinity of the $\mu=0$, $T=0$ quantum critical point. This equivalence extends to observable operators in both theories, and allows exact computation of a number of universal properties of $Z_B$ in $d=1$. Our last main topic will be a discussion of the dilute spinful Fermi gas in Section~\ref{sec:feshbach}. This generalizes $Z_F$ to a spin $S=1/2$ fermion $\Psi_{F\sigma}$, with $\sigma = \uparrow, \downarrow$. Now Fermi statistics do allow a contact quartic interaction, and so we have \begin{eqnarray} Z_{Fs} &=& ~\int {\cal D} \Psi_{F\uparrow} (x,\tau) {\cal D} \Psi_{F\downarrow} (x,\tau) \exp\left( - ~\int_0^{1/T} d \tau ~\int d^d x \, {\cal L}_{Fs} \right), \nonumber \\ {\cal L}_{Fs} &=& \Psi_{F\sigma}^{\ast} ~\frac{\partial \Psi_{F\sigma}}{\partial \tau} + ~\frac{1}{2m} \left| \nabla \Psi_{F\sigma} \right|^2 -\mu |\Psi_{F\sigma}|^2 + u_0 \Psi_{F\uparrow}^\ast \Psi_{F \downarrow}^\ast \Psi_{F\downarrow} \Psi_{F \uparrow}. \label{fesh1} \end{eqnarray} This theory conserves fermion number, and has a phase transition as a function of increasing $\mu$ from a state with fermion number 0 to a state with non-zero fermion density. However, unlike the above two cases of $Z_B$ and $Z_F$, the transition is not always at $\mu=0$. The problem defined in (\ref{fesh1}) has recently found remarkable experimental applications in the study of ultracold gases of fermionic atoms. These experiments are also able to tune the value of the interaction $u_0$ over a wide range of values, extending from repulsive to attractive. For the attractive case, the two-particle scattering amplitude has a Feshbach resonance where the scattering length diverges, and we obtain the unitarity limit. We will see that this Feshbach resonance plays a crucial role in the phase transition obtained by changing $\mu$, and leads to a rich phase diagram of the so-called ``unitary Fermi gas''. Our treatment of $Z_{Fs}$ in the experimentally important case of $d=3$ will show that it defines a strongly coupled field theory in the vicinity of the Feshbach resonance for attractive interactions. It therefore pays to find alternative formulations of this regime of the unitary Fermi gas. One powerful approach is to promote the two-fermion bound state to a separate canonical Bose field. This yields a model, $Z_{FB}$, with both elementary fermions and bosons; {\em i.e.\/}, it is a combination of $Z_B$ and $Z_{Fs}$ with interactions between the fermions and bosons.
We will define $Z_{FB}$ in Section~\ref{sec:feshbach}, and use it to obtain a number of experimentally relevant results for the unitary Fermi gas. Section~\ref{sec:fermigas} will present a thorough discussion of the universal properties of $Z_F$. This will be followed by an analysis of $Z_B$ in Section~\ref{sec:xx3}, where we will use renormalization group methods to obtain perturbative predictions for universal properties. The spinful Fermi gas will be discussed in Section~\ref{sec:feshbach}. \section{The Dilute Spinless Fermi Gas} \label{sec:fermigas} This section will study the properties of $Z_F$ in the vicinity of its $\mu=0$, $T=0$ quantum critical point. As $Z_F$ is a simple free field theory, all results can be obtained exactly and are not particularly profound in themselves. Our main purpose is to show how the results are interpreted from a scaling perspective and to obtain general lessons on the nature of crossovers at $T>0$. First, let us review the basic nature of the quantum critical point at $T=0$. A useful diagnostic for this is the conserved density $Q$, which in the present model we identify as $\Psi_F^{\dagger} \Psi_F$. As a function of the tuning parameter $\mu$, this quantity has a critical singularity at $\mu=0$: \begin{equation} \big\langle \Psi^{\dagger}_F \Psi_F \big\rangle = \left\{ \begin{array}{c@{\quad}c} (S_d /d) (2 m \mu)^{d/2}, & \mu > 0, \\ 0, & \mu < 0, \end{array} \right. \label{xx5} \end{equation} where the phase space factor $S_d = 2 /[\Gamma(d/2) (4 \pi)^{d/2}]$. We now proceed to a scaling analysis. Notice that at the quantum critical point $\mu = 0$, $T=0$, the theory ${\cal L}_F$ is invariant under the scaling transformations: \begin{eqnarray} x' &=& x e^{-\ell}, \nonumber \\ \tau' &=& \tau e^{-z\ell}, \label{xx7}\\ \Psi'_F &=& \Psi_F e^{d \ell/2}, \nonumber \end{eqnarray} provided we make the choice of the dynamic exponent \begin{equation} z = 2. \label{xx7a} \end{equation} The parameter $m$ is assumed to remain invariant under the rescaling, and its role is simply to ensure that the relative physical dimensions of space and time are compatible. The transformation (\ref{xx7}) also identifies the scaling dimension \begin{equation} \mbox{dim} [ \Psi_F ] = d/2. \label{xx7b} \end{equation} Now turning on a nonzero $\mu$, it is easy to see that $\mu$ is a relevant perturbation with \begin{equation} \mbox{dim} [\mu ] = 2. \label{xx7c} \end{equation} There will be no other relevant perturbations at this quantum critical point, and so we have for the correlation length exponent \begin{equation} \nu = 1/2. \label{xx7d} \end{equation} We can now examine the consequences of adding interactions to ${\cal L}_F$. A contact interaction such as $\int d x (\Psi^{\dagger}_F (x) \Psi_F(x))^2 $ vanishes because of the fermion anticommutation relation. (A contact interaction is, however, permitted for a spin-1/2 Fermi gas and will be discussed in Section~\ref{sec:feshbach}.) The simplest allowed term for the spinless Fermi gas is \begin{equation} {\cal L}_1 = \lambda \big( \Psi_F^{\dagger}(x,\tau) \nabla \Psi_F^{\dagger} (x, \tau) \Psi_F (x, \tau) \nabla \Psi_F (x, \tau) \big), \label{xx7e} \end{equation} where $\lambda$ is a coupling constant measuring the strength of the interaction. However, a simple analysis shows that \begin{equation} \mbox{dim}[\lambda] = -d. \label{xx7f} \end{equation} This is negative, and so $\lambda$ is irrelevant and can be neglected in the computation of universal crossovers near the point $\mu=T=0$.
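As a quick check of (\ref{xx7f}), demand that the action contribution $\int d\tau \, d^d x \, {\cal L}_1$ be invariant under the rescalings (\ref{xx7}): the measure contributes $-(d+z) = -(d+2)$ to the total scaling dimension, the four fermion fields contribute $4 \times (d/2) = 2d$ by (\ref{xx7b}), and the two spatial gradients contribute $2$, so \begin{displaymath} 0 = -(d+2) + 2d + 2 + \mbox{dim}[\lambda] \quad \Longrightarrow \quad \mbox{dim}[\lambda] = -d. \end{displaymath}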
In particular, $\lambda$ will modify the result (\ref{xx5}) only by contributions that are higher order in $\mu$. Turning to nonzero temperatures, we can write down scaling forms. Let us define the fermion Green's function \begin{equation} G_F (x, t) = \big\langle \Psi_F (x,t) \Psi_F^{\dagger} (0,0) \big\rangle; \label{xx7g} \end{equation} then the scaling dimensions above imply that it satisfies \begin{equation} G_F (x, t) = \left(2 m T \right)^{d/2} \Phi_{G_F} \left( (2 m T)^{1/2} x , Tt , \frac{\mu}{T} \right), \label{xx10} \end{equation} where $\Phi_{G_F}$ is a fully universal scaling function. For this particularly simple theory ${\cal L}_F$ we can of course obtain the result for $G_F$ in closed form: \begin{equation} G_F ( x, t) = \int \frac{d^d k}{(2 \pi)^d} \frac{e^{i k x -i(k^2/(2m) - \mu) t} }{1 + e^{-(k^2 / (2m) -\mu)/T}}, \label{xx11} \end{equation} and it is easy to verify that this obeys the scaling form (\ref{xx10}). Similarly the free energy ${\cal F}_F$ has scaling dimension $d+z$, and we have \begin{equation} {\cal F}_F = T^{d/2+1} \Phi_{{\cal F}_F} \left( \frac{\mu}{T} \right) \label{xx10a} \end{equation} with $\Phi_{{\cal F}_F}$ a universal scaling function; the explicit result is, of course, \begin{equation} {\cal F}_F = - T \int \frac{d^d k}{(2 \pi)^d} \ln \big( 1 + e^{(\mu - k^2 / (2 m))/T}\big), \label{xx12} \end{equation} which clearly obeys (\ref{xx10a}). The crossover behavior of the fermion density \begin{equation} \langle Q \rangle = \big\langle \Psi_F^{\dagger} \Psi_F\big\rangle = - \frac{\partial {\cal F}_F}{\partial \mu} \label{xx12z} \end{equation} follows by taking the appropriate derivative of the free energy. Examination of these results leads to the crossover phase diagram of Fig.~\ref{xxf2}. We will examine each of the regions of the phase diagram in turn, beginning with the two low-temperature regions. \begin{figure}[t] \centerline{\includegraphics[width=3.5in]{fig-11-2.eps}} \caption{Phase diagram of the dilute Fermi gas $Z_F$ (Eqn. (\protect\ref{xx0b})) as a function of the chemical potential $\mu$ and the temperature $T$. The regions are separated by crossovers denoted by dashed lines, and their physical properties are discussed in the text. The full lines are contours of equal density, with higher densities above lower densities; the zero density line is $\mu <0$, $T=0$. The line $\mu > 0$, $T=0$ is a line of $z=1$ critical points that controls the longest scale properties of the low-$T$ Fermi liquid region. The critical end point $\mu=0$, $T=0$ has $z=2$ and controls the global structure of the phase diagram. In $d=1$, the Fermi liquid is more appropriately labeled a Tomonaga--Luttinger liquid. The shaded region marks the boundary of applicability of the continuum theory and occurs at $\mu, T \sim w$.} \label{xxf2} \end{figure} \subsection{Dilute Classical Gas, $k_B T \ll |\mu|$, $\mu < 0$} \label{sec:xxcg} The ground state for $\mu < 0$ is the vacuum with no particles. Turning on a nonzero temperature produces particles with a small nonzero density $\sim\!\!e^{-|\mu|/T}$. The de Broglie wavelength of the particles is of order $T^{-1/2}$, which is significantly smaller than the mean spacing between the particles, which diverges as $e^{|\mu|/(dT)}$ as $ T \rightarrow 0$. This implies that the particles behave semiclassically.
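Indeed, in this regime the density follows immediately from (\ref{xx12z}): replacing the Fermi function implicit in (\ref{xx12}) by its Boltzmann limit gives \begin{displaymath} \big\langle \Psi_F^{\dagger} \Psi_F \big\rangle \approx \int \frac{d^d k}{(2 \pi)^d} \, e^{-(k^2/(2m) + |\mu|)/T} = \left( \frac{m T}{2 \pi} \right)^{d/2} e^{-|\mu|/T}, \end{displaymath} consistent with the classical free energy quoted in (\ref{xx11a}) below.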
To leading order from (\ref{xx11}), the fermion Green's function is simply the Feynman propagator of a single particle \begin{equation} G_F (x,t) = \left( \frac{m}{2 \pi i t} \right)^{d/2} \exp \left( - \frac{i m x^2}{2 t} \right), \label{xx8} \end{equation} and the exclusion of states from the other particles has only an exponentially small effect. Notice that $G_F$ is independent of $\mu$ and $T$ and (\ref{xx8}) is the exact result for $\mu=T=0$. The free energy, from (\ref{xx10a}) and (\ref{xx12}), is that of a classical Boltzmann gas \begin{equation} {\cal F}_F = - T \left( \frac{m T}{2 \pi} \right)^{d/2} e^{- |\mu|/T}. \label{xx11a} \end{equation} \subsection{Fermi Liquid, $k_B T \ll \mu$, $\mu > 0$} \label{sec:xxflt} The behavior in this regime is quite complex and rich. As we will see, and as noted in Fig.~\ref{xxf2}, the line $\mu > 0$, $T=0$ is itself a line of quantum critical points. The interplay between these critical points and those of the $\mu=0$, $T=0$ critical end point is displayed quite instructively in the exact results for $G_F$ and is worth examining in detail. It must be noted that the scaling dimensions and critical exponents of these two sets of critical points need not (and indeed will not) be the same. The behavior of the $\mu > 0$, $T=0$ critical line emerges as a particular scaling limit of the global scaling functions of the $\mu =0$, $T=0$ critical end point. Thus the latter scaling functions are globally valid everywhere in Fig.~\ref{xxf2}, and describe the physics of all its regimes. First it can be argued, for example, by studying asymptotics of the integral in (\ref{xx11}), that for very short times or distances the correlators do not notice the consequences of the other particles present because of a nonzero $T$ or $\mu$, and are therefore given by the single-particle propagator, which is the $T=\mu=0$ result in (\ref{xx8}). More precisely we have \begin{equation} \mbox{$G(x,t)$ is given by (\ref{xx8}) for}~|x| \ll \left( 2 m \mu \right)^{-1/2},\quad|t| \ll \frac{1}{\mu}. \label{xx12a} \end{equation} With increasing $x$ or $t$, the restrictions in (\ref{xx12a}) are eventually violated and the consequences of the presence of other particles, resulting from a nonzero $\mu$, become apparent. Notice that because $\mu$ is much larger than $T$, it is the first energy scale to be noticed, and as a first approximation to understand the behavior at larger $x$ we may ignore the effects of $T$. Let us therefore discuss the ground state for $\mu > 0$. It consists of a filled Fermi sea of particles (a Fermi liquid) with momenta $k < k_F = (2 m \mu)^{1/2}$. An important property of this state is that it permits excitations at arbitrarily low energies (i.e., it is {\em gapless}). These low energy excitations correspond to changes in occupation number of fermions arbitrarily close to $k_F$. As a consequence of these gapless excitations, the points $\mu > 0$ ($T=0$) form a line of quantum critical points, as claimed earlier. We will now derive the continuum field theory associated with this line of critical points. We are interested here only in $x$ and $t$ values that violate the constraints in (\ref{xx12a}), and so in occupation of states with momenta near $\pm k_F$.
So let us parameterize, in $d=1$, \begin{equation} \Psi (x, \tau) = e^{ik_F x} \Psi_R (x, \tau) + e^{-ik_F x} \Psi_L (x, \tau), \label{xx12b} \end{equation} where $\Psi_{R,L}$ describe right- and left-moving fermions and are fields that vary slowly on spatial scales $\sim\!\!1/k_F = (1/{ 2 m \mu})^{1/2}$ and temporal scales $\sim\!\!1 /\mu$; most of the results discussed below hold, with small modifications, in all $d$. Inserting the above parameterization in ${\cal L}_F$, and keeping only terms lowest order in spatial gradients, we obtain the ``effective'' Lagrangian for the Fermi liquid region, ${\cal L}_{FL}$ in $d=1$: \begin{equation} {\cal L}_{FL} = \Psi_R^{\dagger} \left( \frac{\partial}{\partial \tau} - i v_F \frac{\partial}{\partial x} \right) \Psi_R + \Psi_L^{\dagger} \left( \frac{\partial}{\partial \tau} + i v_F \frac{\partial}{\partial x} \right) \Psi_L, \label{xx13} \end{equation} where $v_F = k_F / m = (2 \mu/m)^{1/2} $ is the Fermi velocity. Now notice that ${\cal L}_{FL}$ is invariant under a scaling transformation, which is rather different from (\ref{xx7}) for the $\mu=0$, $T=0$ quantum critical point: \begin{equation} \begin{array}{rcl} x' &=& x e^{-\ell},\\[4pt] \tau' &=& \tau e^{-\ell}, \\[4pt] \Psi_{R,L}' (x', \tau') &=& \Psi_{R,L} (x, \tau) e^{\ell /2}, \\[4pt] v_F' &=& v_F. \label{xx14} \end{array} \end{equation} The above results imply \begin{equation} z=1, \label{xx14a} \end{equation} unlike $z=2$ (Eqn. (\ref{xx7a})) at the $\mu=0$ critical point, and \begin{equation} \mbox{dim}[\Psi_{R,L}] = 1/2, \label{xx14b} \end{equation} which actually holds for all $d$ and therefore differs from (\ref{xx7b}). Further notice that $v_F$, and therefore $\mu$, are {\em invariant} under rescaling, unlike (\ref{xx7c}) at the $\mu=0$ critical point. Thus $v_F$ plays a role rather analogous to that of $m$ at the $\mu=0$ critical point: It simply sets the relative physical units of the spatial and temporal scales. The transformations (\ref{xx14}) show that ${\cal L}_{FL}$ is scale invariant for each value of $\mu$, and we therefore have a line of quantum critical points as claimed earlier. It should also be emphasized that the scaling dimension of interactions such as $\lambda$ will also change; in particular not all interactions are irrelevant near the $\mu\neq 0$ critical points. These new interactions are, however, small in magnitude provided $\mu$ is small (i.e., provided we are within the domain of validity of the global scaling forms (\ref{xx10}) and (\ref{xx10a})), and so we will neglect them here. Their main consequence is to change the scaling dimension of certain operators, but they preserve the relativistic and conformal invariance of ${\cal L}_{FL}$. This more general theory of $d=1$ fermions is the Tomonaga--Luttinger liquid. \subsection{High-${T}$ Limit, $k_B T \gg |\mu|$} \label{sec:xxhight} This is the last, and in many ways the most interesting, region of Fig.~\ref{xxf2}. Now $ T $ is the most important energy scale controlling the deviation from the $\mu=0$, $T=0$ quantum critical point, and the properties will therefore have some similarities to the ``quantum critical region'' of other strongly interacting models \cite{book}. It should be emphasized that while the value of $ T$ is significantly larger than $|\mu|$, it cannot be so large that it exceeds the limits of applicability for the continuum action ${\cal L}_F$. If we imagine that ${\cal L}_F$ was obtained from a model of lattice fermions with bandwidth $w$, then we must have $T \ll w$.
We discuss first the behavior of the fermion density. In the high-$T$ limit of the continuum theory ${\cal L}_F$, $|\mu| \ll T \ll w$, we have from (\ref{xx12}) and (\ref{xx12z}) the universal result \begin{eqnarray} \big\langle \Psi^{\dagger}_F \Psi_F \big\rangle &=& \left({2 m T} \right)^{d/2} \int \frac{d^d y}{(2 \pi)^d} \frac{1}{e^{y^2} + 1} \nonumber \\[5pt] &=& \left( {2 m T} \right)^{d/2} \zeta (d/2) \frac{ ( 1 - 2^{1-d/2} )}{(4 \pi)^{d/2}}, \label{xx16} \end{eqnarray} where the second equality follows from expanding $(e^{y^2}+1)^{-1} = \sum_{n=1}^{\infty} (-1)^{n+1} e^{-n y^2}$ and integrating each Gaussian term. This density implies an interparticle spacing that is of order the de Broglie\index{de Broglie wavelength} wavelength $= (2 m T)^{-1/2}$. Hence thermal and quantum effects are equally important, and neither dominates. For completeness, let us also consider the fermion density for $T \gg w$ (the region above the shaded region in Fig.~\ref{xxf2}), to illustrate the limitations on the continuum description discussed above. Now the result depends upon the details of the nonuniversal fermion dispersion; on a hypercubic lattice with dispersion $\varepsilon_k-\mu$, we obtain \begin{eqnarray} \big\langle \Psi^{\dagger}_F \Psi_F \big\rangle &=& \int_{-\pi/a}^{\pi/a}\frac{d^d k}{(2 \pi)^d} \frac{1}{e^{(\varepsilon_k - \mu)/T} + 1} \nonumber\\[5pt] &=& \frac{1}{2 a^d} -\frac{1}{4T} \int_{-\pi/a}^{\pi/a}\frac{d^d k}{(2 \pi)^d} (\varepsilon_k - \mu) + {\cal O}(1/T^{2} ) . \end{eqnarray} The limits on the integration, which extend from $-\pi/a$ to $\pi/a$ for each momentum component, had previously been sent to infinity in the continuum limit $a \rightarrow 0$. In the presence of a lattice cutoff, we are able to make a naive expansion of the integrand in powers of $1/T$, and the result therefore contains only integer powers of $1/T$. Contrast this with the universal continuum result (\ref{xx16}), where we had noninteger powers of $T$ dependent upon the scaling dimension of $\Psi$. We return to the universal high-$T$ region, $|\mu| \ll T \ll w$, and describe the behavior of the fermionic Green's function $G_F$, given in (\ref{xx11}). At the shortest scales we again have the free quantum particle behavior of the $\mu = 0$, $T =0$ critical point: \begin{eqnarray} &\mbox{$G_F (x,t)$ is given by (\ref{xx8}) for}~|x| \ll \left( { 2 m T} \right)^{-1/2}, |t| \ll \,\frac{1}{ T}. \label{xx17} \end{eqnarray} Notice that the limits on $x$ and $t$ in (\ref{xx17}) are different from those in (\ref{xx12a}), in that they are determined by $ T$ and not $\mu$. At larger $|x|$ or $t$ the presence of the other thermally excited particles becomes apparent, and $G_F$ crosses over to a novel behavior characteristic of the high-$T$ region. We illustrate this by looking at the large-$x$ asymptotics of the equal-time $G_F$ in $d=1$ (other $d$ are quite similar): \begin{equation} G_F (x, 0) = \int \frac{dk}{2 \pi} \frac{e^{ikx}}{ 1 + e^{- k^2 /2 m T}}. \end{equation} For large $x$ this can be evaluated by a contour integration, which picks up contributions from the poles at which the denominator vanishes in the complex $k$ plane. The dominant contributions come from the poles closest to the real axis, and give the leading result \begin{eqnarray} && G_F (|x| \rightarrow \infty, 0) = - \left( \,\frac{\pi ^2}{2 m T} \right)^{1/2} \exp\left( - ( 1- i) \left( {m \pi T}\right)^{1/2} x\right). \label{xx18} \end{eqnarray} Thermal effects therefore lead to an exponential decay of equal-time correlations, with a correlation length $\xi = \left({m \pi T} \right)^{-1/2}$.
Notice that the $T$ dependence is precisely that expected from the exponent $z=2$ associated with the $\mu=0$ quantum critical point and the general scaling relation $\xi \sim T^{-1/z}$. The additional oscillatory term in (\ref{xx18}) is a reminder that quantum effects are still present at the scale $\xi$, which is clearly of order the de Broglie wavelength of the particles. \section{The Dilute Bose Gas} \label{sec:xx3} This section will study the universal properties of the quantum phase transition of the dilute Bose gas model $Z_B$ in (\ref{xx0}) in general dimensions. We will begin with a simple scaling analysis that will show that $d=2$ is the upper-critical dimension. The first subsection will analyze the case $d<2$ in some more detail, while the next subsection will consider the somewhat different properties in $d=3$. Some of the results of this section were also obtained by Kolomeisky and Straley \cite{KS1,KS2}. We begin with the analog of the simple scaling considerations presented at the beginning of Section~\ref{sec:fermigas}. At the coupling $u_0=0$, the $\mu =0 $ quantum critical point of ${\cal L}_B$ is invariant under the transformations (\ref{xx7}), after the replacement $\Psi_F \rightarrow \Psi_B$, and we have as before $z=2$ and\index{scaling dimension!dilute bosons} \begin{equation} \mbox{dim}[\Psi_B ] = d/2,\qquad \mbox{dim}[\mu] = 2; \label{xx40} \end{equation} these results will shortly be seen to be exact in all $d$. We can easily determine the scaling dimension of the quartic coupling $u_0$ at the $u_0=0$, $\mu=0$ fixed point under the bosonic analog of the transformations (\ref{xx7}); we find \begin{equation} \mbox{dim}[u_0] = 2-d. \label{xx41} \end{equation} Thus the free-field fixed point is stable for $d>2$, in which case a simple perturbative analysis of the consequences of $u_0$ is expected to be adequate. However, for $d<2$, a more careful renormalization group--based resummation of the consequences of $u_0$ is required. This identifies $d=2$ as the upper-critical dimension of the present quantum critical point. \begin{figure}[t] \centerline{\includegraphics[width=3in]{fig-11-3.eps}} \caption{The ladder series of diagrams that contribute the renormalization of the coupling $u$ in $Z_B$ for $d< 2$.} \label{xxf3} \end{figure} Our analysis of the case $d<2$ for the dilute Bose gas quantum critical point will find, somewhat surprisingly, that all the renormalizations, and the associated flow equations, can be determined exactly in closed form. We begin by considering the one-loop renormalization of the quartic coupling $u_0$ at the $\mu=0$, $T=0$ quantum critical point. It turns out that only the ladder series of Feynman diagrams shown in Fig.~\ref{xxf3} need be considered (the $T$ matrix\index{T matrix@$T$ matrix}). Evaluating the first term of the series in Fig.~\ref{xxf3} for the case of zero external frequency and momenta, we obtain the contribution \begin{equation} -u_0^2 \,\int \,\frac{d \omega}{2 \pi} \,\int \,\frac{d^d k}{(2 \pi )^d} \,\frac{1}{(- i \omega + k^2 / (2m))} \,\frac{1}{ (i \omega + k^2 / (2m))} = -u_0^2 \,\int \,\frac{d^d k}{(2 \pi )^d} \frac{m}{k^2} \label{xx42} \end{equation} (the remaining ladder diagrams\index{ladder diagrams} are powers of (\ref{xx42}) and form a simple geometric series). Notice the infrared singularity for $d<2$, which is cured by moving away from the quantum critical point, or by external momenta. We can proceed further by a simple application of the momentum shell RG.
Note that we will apply the cutoff $\Lambda$ only in momentum space. The RG then proceeds by integrating out {\em all\/} frequencies, and the momentum modes in the shell between $\Lambda e^{-\ell}$ and $\Lambda$. The renormalization of the coupling $u_0$ is then given by the first diagram in Fig.~\ref{xxf3}, and after absorbing some phase space factors by a redefinition of the interaction coupling \begin{equation} u_0 = \frac{\Lambda^{2-d}}{2 m S_d} u, \label{xx43a} \end{equation} we obtain~\cite{fishoh,fwgf} \begin{equation} \frac{du}{d \ell} = \epsilon u - \frac{u^2}{2}. \label{xx45} \end{equation} Here $S_d = 2/(\Gamma(d/2) (4 \pi)^{d/2})$ is the usual phase space factor, and \begin{equation} \epsilon = 2 - d. \label{xx44} \end{equation} Note that for $\epsilon > 0$, there is a stable fixed point at \begin{equation} u^{\ast} = 2 \epsilon, \label{xx46} \end{equation} which will control all the universal properties of $Z_B$. The flow equation (\ref{xx45}) and the fixed point value (\ref{xx46}) are {\em exact\/} to all orders in $u$ or $\epsilon$, and it is not necessary to consider $u$-dependent renormalizations to the field scale of $\Psi_B$ or any of the other couplings in $Z_B$. This result is ultimately a consequence of a very simple fact: The ground state of $Z_B$ at the quantum critical point $\mu=0$ is simply the empty vacuum with no particles. So any interactions that appear are entirely due to particles that have been created by the external fields. In particular, if we introduce the bosonic Green's function (the analog of (\ref{xx11})) \begin{equation} G_B (x, t) = \big\langle \Psi_B (x,t) \Psi_B^{\dagger} (0,0) \big\rangle, \label{xx20y} \end{equation} then for $\mu \leq 0$ and $T=0$, its Fourier transform $G_B(k, \omega)$ is given exactly by the free field expression \begin{equation} G_B (k, \omega ) = \frac{1}{-\omega + k^2 / (2 m) - \mu}. \label{xx47} \end{equation} The field $\Psi_B^{\dagger}$ creates a particle that travels freely until its annihilation at $(x,t)$ by the field $\Psi_B$; there are no other particles present at $T=0$, $\mu \leq 0$, and so the propagator is just the free field one. The simple result (\ref{xx47}) implies that the scaling dimensions in (\ref{xx40}) are exact. Turning to the renormalization of $u$, it is clear from the diagram in Fig.~\ref{xxf3} that we are considering the interactions of just two particles. For these, the only nonzero diagrams are the ones shown in Fig.~\ref{xxf3}, which involve repeated scattering of just these particles. Formally, it is possible to write down many other diagrams that could contribute to the renormalization of $u$; however, all of these vanish upon performing the integral over internal frequencies, because there is always one integral that can be closed in one half of the frequency plane where the integrand has no poles. This absence of poles is of course just a more mathematical way of stating that there are no other particles around. We will consider the application of these renormalization group results separately for the cases below and above the upper-critical dimension of $d=2$. \subsection{$d < 2$} \label{sec:xx4} First, let us note some important general implications of the theory controlled by the fixed point interaction (\ref{xx46}). As we have already noted, the scaling dimensions of $\Psi_B$ and $\mu$ are given precisely by their free field values in (\ref{xx40}), and the dynamic exponent $z$ also retains the tree-level value $z=2$.
All these scaling dimensions are identical to those obtained for the case of the spinless Fermi gas in Section~\ref{sec:fermigas}. Further, the presence of a nonzero and universal interaction strength $u^{\ast}$ in (\ref{xx46}) implies that the bosonic system is stable for the case $\mu > 0$ because the repulsive interactions will prevent the condensation of an infinite density of bosons (no such interaction was necessary for the fermion case, as the Pauli exclusion principle was already sufficient to stabilize the system). These two facts imply that the formal scaling structure of the bosonic fixed point being considered here is identical to that of the fermionic one considered in Section~\ref{sec:fermigas} and that the scaling forms of the two theories are {\em identical}. In particular, $G_B$ will obey a scaling form identical to that for $G_F$ in (\ref{xx10}) (with a corresponding scaling function $\Phi_{G_B}$), while the free energy, and associated derivatives, obey (\ref{xx10a}) (with a scaling function $\Phi_{{\cal F}_B}$). The universal functions $\Phi_{G_B}$ and $\Phi_{{\cal F}_B}$ can be determined order by order in the present $\epsilon = 2-d$ expansion, and this will be illustrated shortly. Although the fermionic and bosonic fixed points share the same scaling dimensions, they are distinct fixed points for general $d < 2$. However, these two fixed points are identical precisely in $d=1$~\cite{sss}. Evidence for this was presented in Ref.~\cite{dsbose}, where the anomalous dimension of the composite operator $\Psi_B^2$ was computed exactly in the $\epsilon$ expansion and was found to be identical to that of the corresponding fermionic operator. Assuming the identity of the fixed points, we can then make a stronger statement about the universal scaling functions: those for the free energy (and all its derivatives) are identical, $\Phi_{{\cal F}_B} = \Phi_{{\cal F}_F}$, in $d=1$. In particular, from (\ref{xx12}) and (\ref{xx12z}) we conclude that the boson density is given by \begin{equation} \langle Q \rangle = \big\langle \Psi_B^{\dagger} \Psi_B \big\rangle = \int \frac{dk}{2 \pi} \frac{1}{e^{(k^2/(2m) - \mu)/T} + 1} \label{xx47a} \end{equation} in $d=1$ only. The operators $\Psi_B$ and $\Psi_F$ are still distinct, and so there is no reason for the scaling functions of their correlators to be the same. However, in $d=1$, we can relate the universal scaling function of $\Psi_B$ to those of $\Psi_F$ via a continuum version of the Jordan-Wigner transformation \begin{equation} \Psi_B (x, t) = \exp \left( i \pi \int_{-\infty}^x dy \Psi_F^{\dagger} (y,t) \Psi_F (y,t) \right) \Psi_F (x,t). \label{xx20z} \end{equation} This identity is applied to obtain numerous exact results in Ref.~\cite{book}. As not all observables can be computed exactly in $d=1$ by the mapping to the free fermions, we will now consider the $\epsilon=2-d$ expansion. We will present a simple $\epsilon$ expansion calculation~\cite{senthil} for illustrative purposes. We focus on the density of bosons at $T=0$. Knowing that the free energy obeys the analog of (\ref{xx10a}), we can conclude that a relationship like (\ref{xx5}) holds: \begin{equation} \big\langle \Psi^{\dagger}_B \Psi_B \big\rangle = \left\{ \begin{array}{c@{\quad}c} {\cal C}_d (2 m \mu)^{d/2}, & \mu > 0, \\[3pt] 0, & \mu < 0, \end{array}\right. \label{xx48} \end{equation} at $T=0$, with ${\cal C}_d$ a universal number. The identity of the bosonic and fermionic theories in $d=1$ implies from (\ref{xx5}) or from (\ref{xx47a}) that ${\cal C}_1 = S_1/1 = 1/\pi$.
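As an elementary check, the $T=0$ density of the free fermions in $d=1$ is just that of the filled Fermi sea, \begin{displaymath} \big\langle \Psi_F^{\dagger} \Psi_F \big\rangle = \int_{-k_F}^{k_F} \frac{dk}{2\pi} = \frac{k_F}{\pi} = \frac{(2 m \mu)^{1/2}}{\pi}, \end{displaymath} which matches (\ref{xx48}) with ${\cal C}_1 = 1/\pi$; equivalently, $S_1 = 2/(\Gamma(1/2) (4\pi)^{1/2}) = 1/\pi$.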
We will show how to compute ${\cal C}_d$ in the $\epsilon$ expansion; similar techniques can be used for almost any observable. Even though the position of the fixed point is known exactly in (\ref{xx46}), not all observables can be computed exactly, because they have contributions to arbitrary order in $u$. However, universal results can be obtained order-by-order in $u$, which then become a power series in $\epsilon = 2-d$. As an example, let us examine the low order contributions to the boson density. To compute the boson density for $\mu > 0$, we anticipate that there is a condensate of the boson field $\Psi_B$, and so we write \begin{equation} \Psi_B (x,\tau) = \Psi_0 + \Psi_1 (x,\tau), \label{xx49} \end{equation} where $\Psi_1$ has no zero wavevector and frequency component. Inserting this into ${\cal L}_B$ in (\ref{xx0}), and expanding to second order in $\Psi_1$, we get \begin{eqnarray} {\cal L}_1 &=& - \mu |\Psi_0|^2 + \frac{u_0}{2} |\Psi_0|^4 + \Psi_1^{\ast} \frac{\partial \Psi_1}{\partial \tau} + \frac{1}{2m} \left| \nabla \Psi_1 \right|^2 \nonumber \\ &&-\mu |\Psi_1|^2 + \frac{u_0}{2} \left( 4 |\Psi_0|^2 |\Psi_1|^2 + \Psi_0^2 \Psi_1^{\ast 2} + \Psi_0^{\ast 2} \Psi_1^2 \right). \label{xx50} \end{eqnarray} This is a simple quadratic theory in the canonical Bose field $\Psi_1$, and its spectrum and ground state energy can be determined by the familiar Bogoliubov transformation. Carrying out this step, we obtain the following formal expression for the free energy density ${\cal F}$ as a function of the condensate $\Psi_0$ at $T=0$: \begin{eqnarray} {\cal F} ( \Psi_0 ) &=& - \mu |\Psi_0|^2 + \frac{u_0}{2} |\Psi_0|^4 + \frac{1}{2} \int \frac{d^d k}{( 2\pi )^d} \Biggl[ \bigg\{ \left(\frac{k^2}{2m} - \mu + 2 u_0 |\Psi_0|^2 \right)^2 - u_0^2 |\Psi_0|^4 \bigg\}^{1/2}\nonumber\\ && - \left(\frac{k^2}{2m} - \mu + 2 u_0 |\Psi_0|^2 \right) \Biggr].\label{xx51} \end{eqnarray} To obtain the physical free energy density, we have to minimize ${\cal F}$ with respect to variations in $\Psi_0$ and to substitute the result back into (\ref{xx51}). Finally, we can take the derivative of the resulting expression with respect to $\mu$ and obtain the required expression for the boson density, correct to the first two orders in $u_0$: \begin{equation} \big\langle \Psi_B^{\dagger} \Psi_B \big\rangle = \frac{\mu}{u_0} + \frac{1}{2} \int \frac{d^d k}{(2 \pi)^d} \left[ 1 - \frac{k^2}{\sqrt{k^2 ( k^2 + 4 m \mu )}} \right]. \label{xx52} \end{equation} To convert (\ref{xx52}) into a universal result, we need to evaluate it at the coupling appropriate to the fixed point (\ref{xx46}). This is most easily done by the field-theoretic RG. So let us translate the RG equation (\ref{xx45}) into this language. We introduce a momentum scale $\tilde{\mu}$ (the tilde is to prevent confusion with the chemical potential) and express $u_0$ in terms of a dimensionless coupling $u_R$ by \begin{equation} u_0= u_R \frac{(2m) \tilde{\mu}^{\epsilon}}{S_d} \left( 1 + \frac{u_R}{2 \epsilon} \right). \label{xx43} \end{equation} The motivation behind the choice of the renormalization factor in (\ref{xx43}) is that the renormalized four-point coupling, when expressed in terms of $u_R$ and evaluated in $d=2-\epsilon$, is free of poles in $\epsilon$, as can easily be checked explicitly using (\ref{xx42}) and the associated geometric series.
Then we compute the physical observable as a formal diagrammatic expansion in $u_0$, substitute $u_0$ in favor of $u_R$ using (\ref{xx43}), and expand the resulting expression in powers of $\epsilon$. All poles in $\epsilon$ should cancel, but the resulting expression will depend upon the arbitrary momentum scale $\tilde{\mu}$. At the fixed point value $u_R^{\ast}$, the dependence upon $\tilde{\mu}$ disappears and a universal answer remains. In this manner we obtain from (\ref{xx52}) a universal expression in the form (\ref{xx48}) with \begin{equation} {\cal C}_d = S_d \left[\frac{1}{2 \epsilon} + \frac{\ln 2 - 1}{4} + {\cal O} ( \epsilon )\right]. \label{xx53} \end{equation} \subsection{$d=3$} \label{sec:xxd3} Now we briefly discuss $2 < d < 4$; details appear elsewhere \cite{book}. In $d=2$, the upper critical dimension, there are logarithmic corrections, which were computed by Prokof'ev {\em et al.} \cite{prokofev}. Related results, obtained through somewhat different methods, are available in the literature~\cite{popov1,popov2,fishoh,sss}. The quantum critical point at $\mu =0$, $T=0$ is now above its upper-critical dimension, and we expect mean-field theory to apply. The analog of the mean-field result in the present context is the $T=0$ relation for the density \begin{equation} \big\langle \Psi^{\dagger}_B \Psi_B \big\rangle = \left\{ \begin{array}{c@{\quad}c} \mu / u_0 + \cdots, & \mu > 0, \\[2pt] 0, & \mu < 0, \end{array} \right. \label{xx54} \end{equation} where the ellipsis represents terms that vanish faster as $\mu \rightarrow 0$. Notice that this expression for the density is not universally dependent upon $\mu$; rather, it depends upon the strength of the two-body interaction $u_0$ (more precisely, it can be related to the $s$-wave scattering length $a$ by $u_0 = 4 \pi a/m$).\index{scattering length} The crossovers and phase transitions at $T>0$ are sketched in Fig.~\ref{xxf4}. \begin{figure}[t] \centerline{\includegraphics[width=3.6in]{fig-11-4.eps}} \caption{Crossovers of the dilute Bose gas in $d=3$ as a function of the chemical potential $\mu$ and the temperature $T$. The regimes labeled A, B, C are described in Ref.~\cite{book}. The solid line is the finite-temperature phase transition where the superfluid order disappears; the shaded region is where there is an effective classical description of thermal fluctuations. The contours of constant density are similar to those in Fig.~\protect\ref{xxf2} and are not displayed. } \label{xxf4} \end{figure} These are similar to those of the spinless Fermi gas, but now there can be a phase transition within one of the regions. Explicit expressions for the crossovers \cite{book} have been presented by Rasolt et al.~\cite{rasolt} and Weichman et al.~\cite{rasolt2}, and were also addressed in earlier work~\cite{kk1,kk2,creswick}. \section{The Dilute Spinful Fermi Gas: the Feshbach Resonance} \label{sec:feshbach} This section turns to the case of the spinful Fermi gas with short-range interactions; as we noted in the introduction, this is a problem which has acquired renewed importance because of the new experiments on ultracold fermionic atoms. The partition function of the theory examined in this section was displayed in (\ref{fesh1}). The renormalization group properties of this theory in the zero density limit are {\em identical\/} to those of the dilute Bose gas considered in Section~\ref{sec:xx3}.
The scaling dimensions of the couplings are the same, the scaling dimension of $\Psi_{F\sigma}$ is $d/2$ as for $\Psi_B$ in (\ref{xx40}), and the flow of $u$ is given by (\ref{xx45}). Thus, for $d<2$, a spinful Fermi gas with repulsive interactions is described by the stable fixed point in (\ref{xx46}). However, for the spinful Fermi gas we can consider another regime of parameters which is of great experimental importance. We can also allow $u$ to be attractive: unlike the Bose gas case, the $u<0$ case is not immediately unstable, because the Pauli exclusion principle can stabilize a Fermi gas even with attractive interactions. At the same time, we should also consider the physically important case with $d>2$, when $\epsilon < 0$. The distinct nature of the RG flows predicted by (\ref{xx45}) for the two signs of $\epsilon$ is shown in Fig.~\ref{fig:feshbach}. \begin{figure} \centerline{\includegraphics[width=180pt]{feshbach.eps}} \caption{The exact RG flow of (\ref{xx45}). ({\em a\/}) For $d<2$ ($\epsilon>0$), the infrared stable fixed point at $u=u^\ast > 0$ describes quantum liquids of either bosons or fermions with repulsive interactions which are generically universal in the low density limit. In $d=1$ this fixed point is described by the spinless free Fermi gas (`Tonks' gas), for all statistics and spin of the constituent particles. ({\em b\/}) For $d>2$ ($\epsilon<0$) the infrared unstable fixed point at $u=u^\ast < 0$ describes the Feshbach resonance which obtains for the case of attractive interactions. The relevant perturbation $(u-u^\ast)$ corresponds to the detuning from the resonant interaction.} \label{fig:feshbach} \end{figure} Notice the {\em unstable\/} fixed point present for $d>2$ and $u<0$. Thus accessing the fixed point requires fine-tuning of the microscopic couplings. As discussed in Refs.~\cite{nishida,predrag}, this fixed point describes a Fermi gas at a {\em Feshbach resonance,\/} where the interaction between the fermions is universal. For $u<u^\ast$, the flow is to $u \rightarrow -\infty$: this corresponds to a strong attractive interaction between the fermions, which then bind into tightly bound pairs of bosons, which then Bose condense; this corresponds to the so-called `BEC' regime. On the other hand, for $u > u^\ast$, the flow is to $u \nearrow 0$, and the weakly interacting fermions then form the Bardeen-Cooper-Schrieffer (BCS) superconducting state. Note that the fixed point at $u=u^\ast$ for $Z_{Fs}$ has {\em two} relevant directions for $d>2$. As in the other problems considered earlier, one corresponds to the chemical potential $\mu$. The other corresponds to the deviation from the critical point $u - u^\ast$, and this (from (\ref{xx45})) has RG eigenvalue $-\epsilon = d-2 > 0$. This perturbation corresponds to the ``detuning'' from the Feshbach resonance, $\nu$ (not to be confused with the symbol for the correlation length exponent); we have $\nu \propto u - u^\ast$. Thus we have \begin{equation} \mbox{dim}[\mu] = 2~~,~~\mbox{dim}[\nu] = d-2. \label{fesh0} \end{equation} These two relevant perturbations will have important consequences for the phase diagram, as we will see shortly. For now, let us understand the physics of the Feshbach resonance better. For this, it is useful to compute the two-body $T$ matrix exactly by summing the graphs in Fig.~\ref{xxf3}, along with the direct interaction at first order in $u_0$.
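Before evaluating the $T$ matrix, the two flow regimes of Fig.~\ref{fig:feshbach} are easily reproduced numerically. The following sketch assumes that (\ref{xx45}) takes the standard form $du/d\ell = \epsilon u - u^2/2$ with fixed point $u^\ast = 2\epsilon$; the precise coefficient of the quadratic term does not affect the qualitative behavior:
\begin{verbatim}
# Sketch: integrate du/dl = eps*u - u^2/2 (assumed normalization of (xx45)).
import numpy as np

def flow(u, eps, dl=1e-3, steps=20000):
    for _ in range(steps):
        u += dl * (eps * u - 0.5 * u * u)
        if abs(u) > 1e6:
            return -np.inf          # runaway u -> -infinity (BEC side)
    return u

eps = 0.5                           # d < 2: stable fixed point u* = 2*eps = 1
print([round(flow(u0, eps), 3) for u0 in (0.1, 1.0, 5.0)])  # -> ~1.0 each

eps = -0.5                          # d > 2: unstable fixed point u* = -1
print(flow(-0.99, eps), flow(-1.01, eps))   # -> ~0.0 (BCS) and -inf (BEC)
\end{verbatim}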
The second order term was already evaluated for the bosonic case in (\ref{xx42}) for zero external momentum and frequency, and has an identical value for the present fermionic case. Here, however, we want the off-shell $T$-matrix, for the case in which the incoming particles have momenta $k_{1,2}$ and frequencies $\omega_{1,2}$. Actually, for the simple momentum-independent interaction $u_0$, the $T$ matrix depends only upon the sums $k=k_1+ k_2$ and $\omega=\omega_1 + \omega_2$ and is independent of the final state of the particles, so the diagrams in Fig.~\ref{xxf3} form a geometric series. In this manner we obtain \begin{eqnarray} && \frac{1}{T (k, i \omega )} = \frac{1}{u_0} \nonumber \\ &&~~+ \int \,\frac{d \Omega}{2 \pi} \int \frac{d^d p}{(2 \pi )^d} \,\frac{1}{(- i (\Omega+\omega) + (p+k)^2 / (2m))} \,\frac{1}{ (i \Omega + p^2 / (2m))} \nonumber \\ &&= \frac{1}{u_0} + \int_0^\Lambda \frac{d^d p}{(2 \pi )^d} \frac{m}{p^2}+ \frac{\Gamma (1-d/2)}{(4 \pi)^{d/2}} m^{d/2} \left[ - i \omega + \frac{k^2}{4m} \right]^{d/2-1}. \label{fesh2} \end{eqnarray} In $d=3$, the $s$-wave scattering amplitude of the two particles, $f_0$, is related to the $T$-matrix at zero center of mass momentum and frequency $k^2/m$ by $f_0 (k) = - m T(0, k^2 /m)/(4 \pi)$, and so we obtain \begin{equation} f_0 (k) = \frac{1}{-1/a - ik} \label{fesh3} \end{equation} where the scattering length, $a$, is given by \begin{equation} \frac{1}{a} = \frac{4 \pi}{m u_0} + \int_0^\Lambda \frac{d^3 p}{(2 \pi )^3} \frac{4 \pi}{p^2}. \label{fesh4} \end{equation} For $u_0 < 0$, we see from (\ref{fesh4}) that there is a critical value of $u_0$ where the scattering length diverges and changes sign: this is the Feshbach resonance. We identify this critical value with the fixed point $u=u^\ast$ of the RG flow (\ref{xx45}). It is conventional to identify the deviation from the Feshbach resonance by the detuning $\nu$ \begin{equation} \nu \equiv - \frac{1}{a}. \label{fesh5} \end{equation} Note that $\nu \propto u - u^\ast$, as claimed earlier. For $\nu > 0$, we have weak attractive interactions, and the scattering length is negative. For $\nu < 0$, we have strong attractive interactions, and a positive scattering length. Importantly, for $\nu < 0$, there is a two-particle bound state, whose energy can be deduced from the pole of the scattering amplitude; recalling that the reduced mass in the center of mass frame is $m/2$, we obtain the bound state energy, $E_b$, \begin{equation} E_b = - \frac{\nu^2}{m}. \label{fesh6} \end{equation} We can now draw the zero temperature phase diagram \cite{predrag} of $Z_{Fs}$ as a function of $\mu$ and $\nu$, and the result is shown in Fig.~\ref{fig:unitary}. \begin{figure} \centerline{\includegraphics[width=180pt]{unitary.eps}} \caption{Universal phase diagram at zero temperature for the spinful Fermi gas in $d=3$ as a function of the chemical potential $\mu$ and the detuning $\nu$. The vacuum state (shown hatched) has no particles. The position of the $\nu < 0 $ phase boundary is determined by the energy of the two-fermion bound state in (\ref{fesh6}): $\mu = - \nu^2/(2m)$. The density of particles vanishes continuously at the second order quantum phase transition boundary of the superfluid phase, which is indicated by the thin continuous line. The quantum multicritical point at $\mu=\nu=0$ (denoted by the filled circle) controls all the universal physics of the dilute spinful Fermi gas near a Feshbach resonance.
The universal properties of the critical line $\mu=0$, $\nu > 0$ map onto the theory of Section~\ref{sec:fermigas}, while those of the critical line $\mu = - \nu^2 / (2m)$, $\nu < 0$ map onto the theory of Section~\ref{sec:xx3}. This implies that the $T>0$ crossovers in Fig.~\ref{xxf2} apply for $\nu > 0$ (the ``Fermi liquid'' region of Fig.~\ref{xxf2} now has BCS superconductivity at an exponentially small $T$), while those of Fig.~\ref{xxf4} apply for $\nu < 0$.} \label{fig:unitary} \end{figure} For $\nu > 0$, there is no bound state, and so no fermions are present for $\mu < 0$. At $\mu =0$, we have an onset of non-zero fermion density, just as in the other sections. These fermions experience a weak attractive interaction, and so undergo the Cooper instability once there is a finite density of fermions for $\mu > 0$. So the ground state for $\mu > 0$ is a paired Bardeen-Cooper-Schrieffer (BCS) superfluid, as indicated in Fig.~\ref{fig:unitary}. For small negative scattering lengths, the BCS state modifies the fermion state only near the Fermi level. Consequently, as $\mu \searrow 0$ (specifically for $\mu < \nu^2/m$), we can neglect the pairing in computing the fermion density. We therefore conclude that the universal critical properties of the line $\mu=0$, $\nu > 0$ map precisely onto two copies (for the spin degeneracy) of the non-interacting fermion model $Z_F$ studied in Section~\ref{sec:fermigas}. In particular, the $T>0$ properties for $\nu>0$ will map onto the crossovers in Fig.~\ref{xxf2}. The only change is that the BCS pairing instability will appear below an exponentially small $T$ in the ``Fermi liquid'' regime. However, the scaling functions for the density as a function of $\mu/T$ will remain unchanged. For $\nu < 0$, the situation changes dramatically. Because of the presence of the bound state (\ref{fesh6}), it will pay to introduce fermions even for $\mu < 0$. The chemical potential for a fermion pair is $2 \mu$, and so the threshold for having a non-zero density of paired fermions is $\mu = E_b/2$. This leads to the phase boundary shown in Fig.~\ref{fig:unitary} at $\mu = - \nu^2 / (2m)$. Just above the phase boundary, the density of fermion pairs is small, and so these can be treated as canonical bosons. Computations of the interactions between these bosons \cite{predrag} show that they are repulsive. Therefore we map their dynamics onto those of the dilute Bose gas studied in Section~\ref{sec:xx3}. Thus the universal properties of the critical line $\mu = - \nu^2 / (2m)$ are equivalent to those of $Z_B$. Specifically, this means that the $T>0$ properties across this critical line map onto those of Fig.~\ref{xxf4}. Thus we reach the interesting conclusion that the Feshbach resonance at $\mu=\nu=0$ is a multicritical point separating the density onset transitions of $Z_F$ (Section~\ref{sec:fermigas}) and $Z_B$ (Section~\ref{sec:xx3}). This conclusion can be used to sketch the $T>0$ extension of Fig.~\ref{fig:unitary} on either side of the $\nu=0$ line. We now need a practical method of computing universal properties of $Z_{Fs}$ near the $\mu = \nu=0$ fixed point, including its crossovers into the regimes described by $Z_F$ and $Z_B$. The fixed point of the flow (\ref{xx45}) for $Z_{Fs}$ provides an expansion of the critical theory in powers of $\epsilon = 2-d$. However, as we observe from Fig.~\ref{fig:feshbach}, the flow for $u < u^\ast$ is to $u \rightarrow -\infty$.
The latter flow describes the crossover into the dilute Bose gas theory, $Z_B$, and so this cannot be controlled by the $2-d$ expansion. The following subsections will propose two alternative analyses of the Feshbach resonant fixed point which will address this difficulty. \subsection{The Fermi-Bose Model} \label{sec:fb} One successful approach is to promote the two-fermion bound state in (\ref{fesh6}) to a canonical boson field $\Psi_B$. This boson should also be able to mix with the scattering states of two fermions. We are therefore led to consider the following model \begin{eqnarray} Z_{FB} &=& \,\int {\cal D} \Psi_{F\uparrow} (x,\tau) {\cal D} \Psi_{F\downarrow} (x,\tau) {\cal D} \Psi_{B} (x, \tau) \exp\left( - \,\int d \tau d^d x \, {\cal L}_{FB} \right), \nonumber \\ {\cal L}_{FB} &=& \Psi_{F\sigma}^{\ast} \,\frac{\partial \Psi_{F\sigma}}{\partial \tau} + \,\frac{1}{2m} \left| \nabla \Psi_{F\sigma} \right|^2 -\mu |\Psi_{F\sigma}|^2 \nonumber \\ &+& \Psi_{B}^{\ast} \,\frac{\partial \Psi_{B}}{\partial \tau} + \,\frac{1}{4m} \left| \nabla \Psi_{B} \right|^2 + (\delta -2\mu) |\Psi_{B}|^2 \nonumber \\ &-& \lambda_0 \left( \Psi_B^\ast \Psi_{F \uparrow} \Psi_{F \downarrow} + \Psi_B \Psi_{F \downarrow}^\ast \Psi_{F \uparrow}^\ast \right). \label{fesh7} \end{eqnarray} Here we have taken the bosons to have mass $2m$, because that is the expected mass of the two-fermion bound state by Galilean invariance. We have omitted numerous possible quartic terms between the bosons and fermions above; these will turn out to be irrelevant in the analysis below. The conserved U(1) charge for $Z_{FB}$ is \begin{equation} Q = \Psi_{F \uparrow}^\ast \Psi_{F \uparrow} + \Psi_{F \downarrow}^\ast \Psi_{F \downarrow} + 2 \Psi_{B}^\ast \Psi_{B} , \label{fesh8} \end{equation} and so $Z_{FB}$ is in the class of models being studied here. The factor of 2 in (\ref{fesh8}) accounts for the $2 \mu$ chemical potential for the bosons in (\ref{fesh7}). For $\mu$ sufficiently negative it is clear that $Z_{FB}$ will have neither fermions nor bosons present, and so $\langle Q \rangle = 0$. Conversely, for positive $\mu$, we expect $\langle Q \rangle \neq 0$, indicating a transition as a function of increasing $\mu$. Furthermore, for $\delta$ large and positive, the $Q$ density will be primarily fermions, while for $\delta$ negative the $Q$ density will be mainly bosons; thus we expect a Feshbach resonance at intermediate values of $\delta$, which then plays the role of the detuning parameter. We have thus argued that the phase diagram of $Z_{FB}$ as a function of $\mu$ and $\delta$ is qualitatively similar to that in Fig.~\ref{fig:unitary}, with a Feshbach resonant multicritical point near the center. The main claim of this section is that the universal properties of $Z_{FB}$ and $Z_{Fs}$ are {\em identical\/} near this multicritical point \cite{predrag,nishida}. Thus, in a strong sense, the theories $Z_{FB}$ and $Z_{Fs}$ are equivalent. Unlike the equivalence between $Z_B$ and $Z_F$, which held only in $d=1$, the present equivalence applies for $d > 2$. We will establish the equivalence by an exact RG analysis of the zero density critical theory.
We scale the spacetime co-ordinates and the fermion field as in (\ref{xx7}), but allow an anomalous dimension $\eta_b$ for the boson field relative to (\ref{xx40}): \begin{eqnarray} x' &=& x e^{-\ell}, \nonumber \\ \tau' &=& \tau e^{-z\ell}, \nonumber \\ \Psi'_{F\sigma} &=& \Psi_{F \sigma} e^{d \ell/2}, \nonumber \\ \Psi'_{B} &=& \Psi_{B} e^{(d + \eta_b) \ell/2}, \nonumber \\ \lambda'_0 &=& \lambda_0 e^{(4 - d - \eta_b) \ell/2}, \label{fesh9} \end{eqnarray} where, as before, we have $z=2$. At tree level, the theory $Z_{FB}$ with $\mu = \delta=0$ is invariant under the transformations in (\ref{fesh9}) with $\eta_b = 0$. At this level, we see that the coupling $\lambda_0$ is relevant for $d<4$, and so we will have to consider the influence of $\lambda_0$. This also suggests that we may be able to obtain a controlled expansion in powers of $(4-d)$. Upon considering corrections in powers of $\lambda_0$ in the critical theory, it is not difficult to show that there is a non-trivial contribution from only a single Feynman diagram: this is the self-energy diagram for $\Psi_B$ shown in Fig.~\ref{fig:fesh1}. \begin{figure} \centerline{\includegraphics[width=2.5in]{fesh1.eps}} \caption{Feynman diagram contributing to the RG. The dark triangle is the $\lambda_0$ vertex, the full line is the $\Psi_B$ propagator, and the dashed line is the $\Psi_F$ propagator.} \label{fig:fesh1} \end{figure} All other diagrams vanish in the zero density theory, for reasons similar to those discussed for $Z_B$ below (\ref{xx46}). This diagram is closely related to the integrals in the $T$-matrix computation in (\ref{fesh2}), and leads to the following contribution to the boson self-energy $\Sigma_B$: \begin{eqnarray} && \Sigma_B (k, i \omega) \nonumber \\ && = \lambda_0^2 \int \frac{d \Omega}{2 \pi} \int_{\Lambda e^{-\ell}}^{\Lambda} \frac{d^d p}{(2 \pi )^d} \frac{1}{(- i (\Omega+\omega) + (p+k)^2 / (2m))} \,\frac{1}{ (i \Omega + p^2 / (2m))} \nonumber \\ && = \lambda_0^2 \int_{\Lambda e^{-\ell}}^{\Lambda} \frac{d^d p}{(2 \pi )^d} \frac{1}{(- i \omega + (p+k)^2 / (2m) + p^2 / (2m))} \nonumber \\ && = \lambda_0^2 \int_{\Lambda e^{-\ell}}^{\Lambda} \frac{d^d p}{(2 \pi )^d} \frac{m}{p^2} - \lambda_0^2 \left( - i \omega + \frac{k^2}{4m} \left( 2 - \frac{4}{d} \right) \right) \int_{\Lambda e^{-\ell}}^{\Lambda} \frac{d^d p}{(2 \pi )^d} \frac{m^2}{p^4}. \label{fesh10} \end{eqnarray} The first term is a constant that can be absorbed into a redefinition of $\delta$. For the first time, we see above a special role for the spatial dimension $d=4$, where the momentum integral is logarithmic. Our computations below will turn out to be an expansion in powers of $(4-d)$, and so we will evaluate the numerical prefactors in (\ref{fesh10}) with $d=4$. The result turns out to be correct to all orders in $(4-d)$, but to see this explicitly we need to use a proper Galilean-invariant cutoff in a field theoretic approach \cite{predrag}. The simple momentum shell method being used here preserves Galilean invariance only in $d=4$. With the above reasoning, we see that the second term in the boson self-energy in (\ref{fesh10}) can be absorbed into a rescaling of the boson field under the RG. We therefore find a non-zero anomalous dimension \begin{equation} \eta_b = \lambda^2 , \label{fesh11} \end{equation} where we have absorbed phase space factors into the coupling $\lambda$ by \begin{equation} \lambda_0 = \frac{\Lambda^{2-d/2}}{m \sqrt{S_d}} \lambda .
\label{fesh12} \end{equation} With this anomalous dimension, we use (\ref{fesh9}) to obtain the exact RG equation for $\lambda$: \begin{equation} \frac{d \lambda}{d \ell} = \frac{(4-d)}{2} \lambda - \frac{\lambda^3}{2} . \label{fesh13} \end{equation} For $d<4$, this flow has a stable fixed point at $\lambda = \lambda^\ast = \sqrt{(4-d)}$. The central claim of this subsection is that the theory $Z_{FB}$ at this fixed point is identical to the theory $Z_{Fs}$ at the fixed point $u=u^\ast$ for $2 < d < 4$. Before we establish this claim, note that at the fixed point we obtain the exact result for the anomalous dimension of the boson field \begin{equation} \eta_b = 4-d .\label{fesh14} \end{equation} Let us now consider the spectrum of relevant perturbations to the $\lambda = \lambda^\ast$ fixed point. As befits a Feshbach resonant fixed point, there are 2 relevant perturbations in $Z_{FB}$: the detuning parameter $\delta$ and the chemical potential $\mu$. Apart from the tree level rescalings, at one loop we have the diagram shown in Fig.~\ref{fig:fesh2}. \begin{figure} \centerline{\includegraphics[width=2in]{fesh2.eps}} \caption{Feynman diagram for the mixing between the renormalization of the $\Psi_F^\dagger \Psi_F$ and $\Psi_B^\dagger \Psi_B$ operators. The filled circle is the $\Psi_F^\dagger \Psi_F$ source. Other notation is as in Fig.~\ref{fig:fesh1}.} \label{fig:fesh2} \end{figure} This diagram has a $\Psi_{F \sigma}^{\dagger} \Psi_{F \sigma}$ source, and it renormalizes the coefficient of $\Psi_B^{\dagger} \Psi_B$; it evaluates to \begin{eqnarray} && 2 \lambda_0^2 \int \frac{d \Omega}{2 \pi} \int_{\Lambda e^{-\ell}}^{\Lambda} \frac{d^d p}{(2 \pi)^d} \frac{1}{(- i \Omega + p^2 /(2m))^2 (i \Omega + p^2 /(2m))} \nonumber \\ &&~~~~= 2 \lambda_0^2 \int_{\Lambda e^{-\ell}}^{\Lambda} \frac{d^d p}{(2 \pi)^d} \frac{m^2}{p^4} . \label{fesh15} \end{eqnarray} Combining (\ref{fesh15}) with the tree-level rescalings, we obtain the RG flow equations \begin{eqnarray} \frac{d \mu}{d \ell} &=& 2 \mu \nonumber \\ \frac{d}{d \ell} (\delta - 2 \mu ) &=& (2 - \eta_b) (\delta - 2 \mu) - 2 \lambda^2 \mu, \label{fesh16} \end{eqnarray} where the last term arises from (\ref{fesh15}). With the value of $\eta_b$ in (\ref{fesh11}), the second equation simplifies to \begin{equation} \frac{d \delta}{d \ell} = (2 - \lambda^2) \delta. \label{fesh17} \end{equation} Thus we see that $\mu$ and $\delta$ are actually eigen-perturbations of the fixed point at $\lambda = \lambda^\ast$, and their scaling dimensions are \begin{equation} \mbox{dim}[\mu] = 2~~,~~\mbox{dim}[\delta] = d-2. \label{fesh18} \end{equation} Note that these eigenvalues coincide with those of $Z_{Fs}$ in (\ref{fesh0}), with $\delta$ identified as proportional to the detuning $\nu$. This, along with the symmetries of $Q$ conservation and Galilean invariance, establishes the equivalence of the fixed points of $Z_{FB}$ and $Z_{Fs}$. The utility of the present $Z_{FB}$ formulation is that it can provide a description of universal properties of the unitary Fermi gas in $d=3$ via an expansion in $(4-d)$. Further details of explicit computations can be found in Ref.~\cite{nishida}. \subsection{Large $N$ expansion} \label{sec:predrag} We now return to the model $Z_{Fs}$ in (\ref{fesh1}), and examine it in the limit of a large number of spin components \cite{predrag,veillette}. We also use the structure of the large $N$ perturbation theory to obtain exact results relating different experimental observables of the unitary Fermi gas.
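Before turning to the large-$N$ expansion, note that the flow (\ref{fesh13}) and the eigenvalues (\ref{fesh18}) can be checked in a few lines (a numerical sketch):
\begin{verbatim}
# Sketch: d(lam)/dl = (4-d)/2*lam - lam^3/2 flows to lam* = sqrt(4-d),
# so eta_b = lam*^2 = 4-d (fesh14) and dim[delta] = 2 - lam*^2 = d-2 (fesh18).
import numpy as np

def run_flow(lam, d, dl=1e-3, steps=20000):
    for _ in range(steps):
        lam += dl * (0.5 * (4 - d) * lam - 0.5 * lam**3)
    return lam

for d in (2.5, 3.0, 3.5):
    lam_star = run_flow(0.2, d)
    print(d, round(lam_star, 4), round(np.sqrt(4 - d), 4),
          round(2 - lam_star**2, 4))   # last entry -> d - 2
\end{verbatim}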
The basic idea of the large $N$ expansion is to endow the fermion with an additional flavor index $a = 1 \ldots N/2$, so that the fermion field is $\Psi_{F \sigma a}$, where we continue to have $\sigma=\uparrow, \downarrow$. Then, we write $Z_{Fs}$ as \begin{equation} \begin{array}{rcl} Z_{Fs} &=& \,\int {\cal D} \Psi_{F\sigma a} (x,\tau) \exp\left( - \,\int_0^{1/T} d \tau \,\int d^d x \, {\cal L}_{Fs} \right), \\[16pt] {\cal L}_{Fs} &=& \Psi_{F\sigma a}^{\ast} \,\frac{\partial \Psi_{F\sigma a}}{\partial \tau} + \,\frac{1}{2m} \left| \nabla \Psi_{F\sigma a} \right|^2 -\mu |\Psi_{F\sigma a}|^2 \\[16pt] &~&~~~~~ + \displaystyle \frac{2 u_0}{N} \Psi_{F\uparrow a}^\ast \Psi_{F \downarrow a}^\ast \Psi_{F\downarrow b} \Psi_{F \uparrow b}, \end{array} \label{fesh19} \end{equation} where there is an implied sum over $a,b = 1 \ldots N/2$. The case of interest has $N=2$, but we will consider the limit of large even $N$, where the problem becomes tractable. As written, there is an evident O($N/2$) symmetry in $Z_{Fs}$ corresponding to rotations in flavor space. In addition, there is a U(1) symmetry associated with $Q$ conservation, and an SU(2) spin rotation symmetry. Actually, the spin and flavor symmetries combine to make the global symmetry U(1)$\times$Sp($N$), but we will not make much use of this interesting observation. The large $N$ expansion proceeds by decoupling the quartic term in (\ref{fesh19}) by a Hubbard-Stratonovich transformation. For this we introduce a complex bosonic field $\Psi_B (x, \tau)$ and write \begin{equation} \begin{array}{rcl} Z_{Fs} &=& \,\int {\cal D} \Psi_{F\sigma a} (x,\tau) {\cal D} \Psi_B (x, \tau) \exp\left( - \,\int_0^{1/T} d \tau \,\int d^d x \, \widetilde{\cal L}_{Fs} \right), \\[16pt] \widetilde{\cal L}_{Fs} &=& \Psi_{F\sigma a}^{\ast} \,\frac{\partial \Psi_{F\sigma a}}{\partial \tau} + \,\frac{1}{2m} \left| \nabla \Psi_{F\sigma a} \right|^2 -\mu |\Psi_{F\sigma a}|^2 \\[16pt] &~& + \displaystyle \frac{N}{2 |u_0|} |\Psi_B|^2 - \Psi_B \Psi_{F\uparrow a}^\ast \Psi_{F \downarrow a}^\ast - \Psi_B^\ast \Psi_{F\downarrow a} \Psi_{F \uparrow a}. \end{array} \label{fesh20} \end{equation} Here, and below, we assume $u_0 < 0$, which is necessary for being near the Feshbach resonance. Note that $\Psi_B$ couples to the fermions just like the boson field in the Bose-Fermi model in (\ref{fesh7}), which is the reason for choosing this notation. If we perform the integral over $\Psi_B$ in (\ref{fesh20}), we recover (\ref{fesh19}), as required. For the large $N$ expansion, we have to integrate over $\Psi_{F \sigma a}$ first and obtain an effective action for $\Psi_B$. Because the action in (\ref{fesh20}) is Gaussian in the $\Psi_{F \sigma a}$, the integration over the fermion field involves evaluation of a functional determinant, and has the schematic form \begin{equation} \mathcal{Z}_{Fs} = \int {\cal D} \Psi_B (x, \tau) \exp\left( -N \mathcal{S}_{\rm eff} \left[ \Psi_B (x, \tau) \right] \right), \label{fesh21} \end{equation} where $\mathcal{S}_{\rm eff}$ is the logarithm of the fermion determinant of a single flavor. The key point is that the only $N$ dependence is in the prefactor in (\ref{fesh21}), and so the theory of $\Psi_B$ can be controlled in powers of $1/N$. We can expand $\mathcal{S}_{\rm eff}$ in powers of $\Psi_B$: the $p$'th term has a fermion loop with $p$ external $\Psi_B$ insertions. Details can be found in Refs.~\cite{predrag,veillette}.
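As a consistency check of the decoupling step (a sketch; the pair bilinear $A=\sum_a\Psi_{F\downarrow a}\Psi_{F\uparrow a}$ is treated here as a commuting number, which suffices to verify the Gaussian identity itself), integrating $\Psi_B$ out of (\ref{fesh20}) must regenerate the attractive quartic term of (\ref{fesh19}):
\begin{verbatim}
# Sketch: Gaussian (Hubbard-Stratonovich) identity behind (fesh19)<->(fesh20).
# With b = x + i*y, A = a_r + i*a_i and alpha = N/(2|u_0|):
#   Int dx dy exp(-alpha*|b|^2 + b*conj(A) + conj(b)*A)
#     = (pi/alpha) * exp(|A|^2/alpha) = (pi/alpha) * exp(2|u_0||A|^2/N),
# which is exactly the quartic term of (fesh19) for u_0 < 0.
import sympy as sp

x, y, ar, ai = sp.symbols('x y a_r a_i', real=True)
alpha = sp.symbols('alpha', positive=True)

integrand = sp.exp(-alpha * (x**2 + y**2) + 2 * (x * ar + y * ai))
result = sp.integrate(integrand, (x, -sp.oo, sp.oo), (y, -sp.oo, sp.oo))
print(sp.simplify(result / (sp.pi / alpha)))  # -> exp((a_r**2 + a_i**2)/alpha)
\end{verbatim}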
Here, we only note the expansion to quadratic order at $\mu=\delta=T=0$, in which case the coefficient is precisely the inverse of the fermion $T$-matrix in (\ref{fesh2}): \begin{equation} \mathcal{S}_{\rm eff} \left[ \Psi_B (x, \tau) \right] = - \frac{1}{2} \int \frac{d \omega}{2 \pi} \frac{d^d k}{( 2\pi)^d} \frac{1}{T (k, i \omega) } |\Psi_B (k, \omega)|^2 + \ldots \label{fesh22} \end{equation} Given $\mathcal{S}_{\rm eff}$, we then have to find its saddle point with respect to $\Psi_B$. At $T=0$, we will find the optimal saddle point at a $\Psi_B \neq 0$ in the region of Fig.~\ref{fig:unitary} with a non-zero density: this means that the ground state is always a superfluid of fermion pairs. The traditional expansion about this saddle point yields the $1/N$ expansion, and many experimental observables have been computed in this manner \cite{predrag,veillette,veillette2}. We conclude our discussion of the unitary Fermi gas by deriving an exact relationship between the total energy, $E$, and the momentum distribution function, $n(k)$, of the fermions \cite{tan1,tan2}. We will do this using the structure of the large $N$ expansion. However, we will drop the flavor index $a$ below, and quote results directly for the physical case of $N=2$. As usual, we define the momentum distribution function by \begin{equation} n (k) = \langle \Psi_{F \sigma}^{\dagger} (k, t) \Psi_{F \sigma} (k, t) \rangle, \label{fesh23} \end{equation} with no implied sum over the spin label $\sigma$. The Hamiltonian of the system in (\ref{fesh19}) is the sum of kinetic and interaction energies: the kinetic energy is clearly an integral over $n(k)$, and so we can write \begin{eqnarray} E &=& 2 V \int \frac{d^d k}{( 2\pi)^d} \frac{k^2}{2m} n(k) + u_0 V \langle \Psi_{F\uparrow}^\dagger \Psi_{F\downarrow}^\dagger \Psi_{F\downarrow} \Psi_{F\uparrow} \rangle \nonumber \\ &=& 2 V \int \frac{d^d k}{( 2\pi)^d} \frac{k^2}{2m} n(k) - u_0 \frac{\partial \ln Z_{Fs}}{\partial u_0}, \label{fesh24} \end{eqnarray} where $V$ is the system volume, and all the $\Psi_F$ fields are at the same $x$ and $t$. Now let us evaluate the $u_0$ derivative using the expression for $Z_{Fs}$ in (\ref{fesh20}); this leads to \begin{equation} \frac{E}{V} = 2 \int \frac{d^d k}{( 2\pi)^d} \frac{k^2}{2m} n(k) + \frac{1}{u_0} \left \langle \Psi^\ast_B (x, t) \Psi_B (x, t) \right\rangle. \label{fesh25} \end{equation} Now using the expression (\ref{fesh4}) relating $u_0$ to the scattering length $a$ in $d=3$, we can write this expression as \begin{equation} \frac{E}{V} = \frac{m}{4 \pi a} \left\langle \Psi_B^\ast \Psi_B \right \rangle + 2 \int \frac{d^3 k}{( 2\pi)^3} \frac{k^2}{2m} \left( n(k) - \frac{\left\langle \Psi_B^\ast \Psi_B \right \rangle m^2}{k^4} \right). \label{fesh26} \end{equation} This is the needed universal expression for the energy, expressed in terms of $n(k)$ and the scattering length, and independent of the short distance structure of the interactions. At this point, it is useful to introduce ``Tan's constant'' $C$, defined by \cite{tan1,tan2} \begin{equation} C = \lim_{k \rightarrow \infty} k^4 n (k). \label{fesh27} \end{equation} The requirement that the momentum integral in (\ref{fesh26}) is convergent in the ultraviolet implies that the limit in (\ref{fesh27}) exists, and further specifies its value \begin{equation} C = m^2 \left\langle\Psi_B^\ast \Psi_B \right \rangle.
\label{fesh28} \end{equation} We now note that the relationship $n (k) \rightarrow m^2 \left\langle\Psi_B^\ast \Psi_B \right \rangle/k^4$ at large $k$ is also as expected from a scaling perspective. We saw in Section~\ref{sec:fb} that the fermion field $\Psi_F$ does not acquire any anomalous dimensions, and has scaling dimension $d/2$. Consequently $n(k)$ has scaling dimension zero. Next, note that the operator $\Psi_B^\ast \Psi_B$ is conjugate to the detuning from the Feshbach critical point; from (\ref{fesh18}) the detuning has scaling dimension $d-2$, and so $\Psi_B^\ast \Psi_B$ has scaling dimension $d+z - (d-2) = 4$. Combining these scaling dimensions, we explain the $k^{-4}$ dependence of $n(k)$. It now remains to establish the claimed exact relationship in (\ref{fesh28}) as a general property of a spinful Fermi gas near unitarity. As a start, we can examine the large $k$ limit of $n(k)$ in the BCS mean field theory of the superfluid phase: the reader can easily verify that the text-book BCS expressions for $n(k)$ do indeed satisfy (\ref{fesh28}); a short numerical check is given below. However, the claim of Refs.~\cite{braaten,sonunitary} is that (\ref{fesh28}) is exact beyond mean field theory, and also holds in the non-superfluid states at non-zero temperatures. A general proof was given in Ref.~\cite{sonunitary}, and relied on the operator product expansion (OPE) applied to the field theory (\ref{fesh20}). The OPE is a general method for describing the short distance and time (or large momentum and frequency) behavior of field theories. Typically, in the Feynman graph expansion of a correlator, the large momentum behavior is dominated by terms in which the external momenta flow in only a few propagators, and the internal momentum integrals can be evaluated after factoring out these favored propagators. For the present situation, let us consider the $1/N$ correction to the fermion Green's function given by the diagram in Fig.~\ref{fig:fesh3}. \begin{figure} \centerline{\includegraphics[width=2.5in]{fesh3.eps}} \caption{Order $1/N$ correction to the fermion Green's function. Notation is as in Fig.~\ref{fig:fesh1}.} \label{fig:fesh3} \end{figure} Representing the bare fermion and boson Green's functions by $G_F$ and $G_B$ respectively, Fig.~\ref{fig:fesh3} evaluates to \begin{equation} G_F^2 (k, \omega) \int \frac{d^d p}{(2 \pi)^d} \frac{d \Omega}{2 \pi} G_B (p, \Omega) G_F (-k +p, - \omega + \Omega). \label{fesh29} \end{equation} Here $G_B$ is the propagator of the boson action $\mathcal{S}_{\rm eff}$ specified by (\ref{fesh22}). In the limit of large $k$ and $\omega$, the internal $p$ and $\Omega$ integrals are dominated by $p$ and $\Omega$ much smaller than $k$ and $\omega$; so we can approximate (\ref{fesh29}) by \begin{eqnarray} && G_F^2 (k, \omega) G_F (-k, - \omega) \int \frac{d^d p}{(2 \pi)^d} \frac{d \Omega}{2 \pi} G_B (p, \Omega) \nonumber \\ &&~~= G_F^2 (k, \omega) G_F (-k, - \omega) \left \langle \Psi_B^\ast \Psi_B \right \rangle. \label{fesh30} \end{eqnarray} This analysis can now be extended to all orders in $1/N$. Among these higher order contributions are terms which contribute self-energy corrections to the boson propagator $G_B$ in (\ref{fesh30}): it is clear that these can be summed to replace the bare $G_B$ in (\ref{fesh30}) by the exact $G_B$. Then the value of $\left \langle |\Psi_B |^2 \right \rangle$ in (\ref{fesh30}) also becomes the exact value. All remaining contributions can be shown \cite{sonunitary} to fall off faster at large $k$ and $\omega$ than the terms in (\ref{fesh30}).
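Before completing the argument, here is the mean-field check promised above (a sketch; we use the textbook BCS occupation $n(k)=\frac{1}{2}(1-\xi_k/E_k)$ with $\xi_k=k^2/(2m)-\mu$, $E_k=\sqrt{\xi_k^2+\Delta^2}$, and the convention of (\ref{fesh20}) in which the gap is $\Delta=\langle\Psi_B\rangle$, so that $\langle\Psi_B^\ast\Psi_B\rangle=\Delta^2$ in mean field):
\begin{verbatim}
# Sketch: textbook BCS n(k) obeys Tan's relation k^4 n(k) -> m^2 Delta^2.
import numpy as np

m, mu, Delta = 1.0, 0.3, 0.7
k = np.array([10.0, 30.0, 100.0])
xi = k**2 / (2 * m) - mu
E = np.sqrt(xi**2 + Delta**2)
n = 0.5 * (1 - xi / E)            # BCS momentum distribution v_k^2

print(k**4 * n)                   # -> approaches 0.49 for large k
print(m**2 * Delta**2)            # = 0.49, i.e. C = m^2 <Psi_B* Psi_B>
\end{verbatim}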
Returning to the OPE argument: (\ref{fesh30}) is thus the exact leading contribution to the fermion Green's function in the limit of large $k$ and $\omega$, after replacing $\left \langle |\Psi_B |^2 \right \rangle$ by its exact value. We can now integrate (\ref{fesh30}) over $\omega$ to obtain $n(k)$ at large $k$. Actually, the $\omega$ integral is precisely that in (\ref{fesh15}), which immediately yields the needed relation (\ref{fesh28}). Similar analyses can be applied to determine the spectral functions of other observables \cite{veillette2,punk,randeriarf,randeriarf2,combescot,braaten2,sonunitary}. Determining the specific value of Tan's constant requires numerical computations in the $1/N$ expansion of (\ref{fesh21}). From the scaling properties of the Feshbach resonant fixed point in $d=3$, we can deduce that the result obeys a scaling form similar to (\ref{xx10}): \begin{equation} C = (2 m T)^2 \Phi_C \left( \frac{\mu}{T} , \frac{ \nu}{\sqrt{2mT}} \right), \label{fesh31} \end{equation} where $\Phi_C$ is a dimensionless universal function of its dimensionless arguments; note that the arguments represent the axes of Fig.~\ref{fig:unitary}. The methods of Refs.~\cite{predrag,veillette} can now be applied to (\ref{fesh28}) to obtain numerical results for $\Phi_C$ in the $1/N$ expansion. We illustrate this method here by determining $C$ to leading order in the $1/N$ expansion at $\mu=\nu =0$. For this, we need to generalize the action (\ref{fesh22}) for $\Psi_B$ to $T>0$ and general $N$. Using (\ref{fesh2}) we can modify (\ref{fesh22}) to \begin{equation} \mathcal{S}_{\rm eff} = N T \sum_{\omega_n} \int \frac{d^3 k}{8 \pi^3} \left[ D_0 (k, \omega_n) + D_1 (k, \omega_n) \right] |\Psi_B (k, \omega_n)|^2 , \label{fesh32} \end{equation} where $D_0$ is the $T=0$ contribution, and $D_1$ is the correction at $T>0$: \begin{eqnarray} && D_0 (k, \omega_n) = \frac{m^{3/2}}{16 \pi} \sqrt{ - i \omega_n + \frac{k^2}{4 m}} \label{fesh33} \\ && D_1(k, \omega_n) = \frac{1}{2} \int \frac{d^3 p}{8 \pi^3} \frac{1}{(e^{p^2/(2 mT)} + 1)} \frac{1}{\left( -i \omega_n + p^2/(2m) + (p+k)^2/(2m) \right)} . \nonumber \end{eqnarray} We now have to evaluate $\left\langle\Psi_B^\ast \Psi_B \right \rangle$ using the Gaussian action in (\ref{fesh32}). It is useful to do this by separating the $D_0$ contribution, which allows us to properly deal with the large frequency behavior. So we can write \begin{equation} \left\langle\Psi_B^\ast \Psi_B \right \rangle = \frac{1}{N} T \sum_{\omega_n} \int \frac{d^3 k}{8 \pi^3} \left[ \frac{1}{D_0 (k, \omega_n) + D_1 (k, \omega_n)} - \frac{1}{D_0 (k, \omega_n)} \right] + D_{00}. \label{fesh34} \end{equation} In evaluating $D_{00}$ we have to use the usual time-splitting method to ensure that the bosons are normal-ordered, and evaluate the frequency summation by analytically continuing to the real axis: \begin{eqnarray} D_{00} &=& \frac{1}{N} \int \frac{d^3 k}{8 \pi^3} \lim_{\eta \rightarrow 0} T \sum_{\omega_n} \frac{e ^{i \omega_n \eta}}{D_0 (k, \omega_n)} \nonumber \\ &=& \frac{16 \pi}{N m^{3/2}} \int \frac{d^3 k}{8 \pi^3} \int_{\frac{k^2}{4m}}^{\infty} \frac{d \Omega}{\pi} \frac{1}{(e^{\Omega/T} - 1)} \frac{1}{\sqrt{ \Omega - k^2 / (4 m)}} \nonumber \\ &=& \frac{8.37758}{N}\, T^2 . \label{fesh35} \end{eqnarray} The frequency summation in (\ref{fesh34}) can be evaluated directly on the imaginary frequency axis: the series is convergent at large $\omega_n$, and is easily evaluated by a direct numerical summation.
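The constant in (\ref{fesh35}) is $8\pi/3 = 8.37758\ldots$; as an independent check (a sketch, with $m=T=N=1$), the double integral can be evaluated by direct numerical quadrature after substituting $\Omega = k^2/(4m) + s^2$ to remove the square-root singularity:
\begin{verbatim}
# Sketch: numerical check of D_00 = (8*pi/3) T^2 / N = 8.37758... (m=T=N=1).
import numpy as np
from scipy.integrate import quad

def inner(k):
    # Omega = k^2/4 + s^2, so dOmega / sqrt(Omega - k^2/4) = 2 ds
    val, _ = quad(lambda s: 2.0 / (np.exp(k**2 / 4 + s**2) - 1.0), 0, np.inf)
    return val / np.pi

outer, _ = quad(lambda k: k**2 * inner(k), 0, 30)
print(16 * np.pi / (2 * np.pi**2) * outer, 8 * np.pi / 3)   # ~8.3776 twice
\end{verbatim}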
Numerical evaluation of (\ref{fesh34}) now yields \begin{equation} C = (2mT)^2 \left( \frac{0.67987}{N} + \mathcal{O} (1/N^2) \right) \end{equation} at $\mu = \nu = 0$.
\section{Introduction}\label{sec:intro} The first mathematical study of optimal investment and consumption of an agent with intolerance for any decline in the standard of living appears in \cite{dybvig}. The constraint that the consumption process is non-decreasing, also called \textit{ratcheting of consumption}, can be seen as an extreme form of habit formation. {\footnotesize \begin{displayquote} ``The model in this paper is close to modern models of habit formation such as those of Constantinides (1990), Detemple and Zapatero (1991), Ingersoll (1992), Shrikhande (1992), or Sundaresan (1989). The main difference is in the rapidity (immediate) of the habit formation and the severity (lexicographic) of the agent's preferences for maintaining a new standard of living." \hfill \citet[p. 289]{dybvig} \end{displayquote}} Dybvig suggests that this model might be suitable for situations where consumption involves long-term commitments. {\footnotesize \begin{displayquote} ``For example, it may be a good model of a university or foundation that at least some of the expenditures cannot be decreased quickly, due to implicit and explicit long-term commitments to faculty and due to the commitments to donors about the use of buildings or equipment." \hfill\citet[p. 288]{dybvig} \end{displayquote}} By several changes of variables, a constraint that consumption can fall no faster than at some given rate (i.e., a constraint of the type $c_t\geq e^{-\int_s^t\alpha_udu}c_s$ for $t\geq s\geq 0$, where $\alpha_\cdot$ is given) can be transformed into the ratcheting constraint (see Section 2 in \cite{dybvig} for the case when $\alpha_\cdot$ is constant), thus generalizing Dybvig's model to a less rigid form of habit formation. \cite{dybvig} finds the optimal investment and consumption strategies for an infinitely-lived ratchet investor with a CRRA utility function in a market where the underlying risky asset follows a Geometric Brownian Motion (GBM) by considering the corresponding Hamilton--Jacobi--Bellman (HJB) equation. \cite{riedel} shows that, in a complete market with pricing kernel driven by a L\'evy process, an optimal consumption plan of an infinitely-lived ratchet investor with an arbitrary utility function is in fact equal to the running maximum of the optimal consumption plan of an unconstrained investor with the same utility function. Riedel's proof is elementary in the sense that, essentially, it relies on a concavity argument and integrations by parts. \cite{Koo:2012aa} generalize Dybvig's portfolio selection result under the same assumptions on the market to an arbitrary utility function by using duality and the Feynman-Kac formula. \cite{Watson-Scott} extend Riedel's result to a finite time horizon by introducing a deterministic function of time, called a coupling curve, that reflects the effects of the finiteness of the horizon in the optimization and is described as a free boundary of a certain free-boundary problem. \cite{Jeon-Koo-Shin2018} study the finite horizon GBM market case with a general utility function by transforming it into an infinite family of optimal stopping problems. The working paper of \cite{Arun2012} is the first to study the optimal investment and consumption problem under a \textit{drawdown constraint on consumption}. Under this condition, the consumption is not allowed to fall below a fixed proportion $\lambda\in[0,1]$ of the running maximum of past consumption. In particular, $\lambda=0$ corresponds to the unconstrained problem and $\lambda=1$ corresponds to consumption ratcheting.
\cite{Arun2012} finds the optimal portfolio and consumption in a GBM market over an infinite time horizon for an agent with a CRRA utility function by writing down the HJB equation and using duality and a modification of Dybvig's verification argument. \cite{Jeon:2021aa} extend Arun's result to a general class of utility functions by deriving a dual problem consisting of only the choice of an optimal (non-decreasing) maximum consumption process, converting it into an infinite two-dimensional family of optimal stopping problems, and characterizing the solutions to the latter by a family of free boundaries depending on the state variable of the maximum process. The current paper studies both the ratchet constraint and the drawdown constraint on consumption in general incomplete semimartingale markets by duality methods. Clearly, the ratchet constraint is a special case of the drawdown constraint, but in this paper the case $\lambda=1$ is thought of as the basic one and the solution for $\lambda\in(0,1)$ is derived by, loosely speaking, an interpolation between the ratchet investor problem with $\lambda=1$ and the unconstrained problem with $\lambda=0$. Our market model is taken from \cite{mostovyi} with the additional assumption that the stochastic clock is equivalent to the Lebesgue measure on $[0,\hat{T})$, where $\hat{T}$ is either a finite or infinite time horizon. \cite{mostovyi} provides a simple necessary and sufficient condition, namely, finiteness of the primal and dual value functions, for the key assertions of utility maximization theory to hold in an incomplete semimartingale market model with intermediate consumption, stochastic clock, and utility stochastic field. This approach parallels the work of \cite{Kramkov:2003aa} on maximization of the expected value of deterministic utility from terminal wealth. A different but common approach to establishing duality for expected utility maximization with time- and/or scenario-dependent utility is to require the utility to satisfy additional assumptions, including the uniform reasonable asymptotic elasticity of \cite{Karatzas-Zitkovic2003} (see also \cite{Zitkovic2005}, \cite{BK}, \cite{yu}), a generalization of the reasonable asymptotic elasticity condition of \cite{Kramkov:1999aa} for deterministic utility applied to terminal wealth. The main difficulty in applying Mostovyi's result to the ratchet/drawdown constraint is identifying the appropriate primal and dual domains. We handle this issue by introducing a natural extension of the notion of the running maximum to arbitrary non-negative optional processes, which we call a \textit{running essential supremum}, and defining the primal domain as the solid hull of all consumption plans satisfying the ratchet/drawdown constraint (formulated in terms of the running essential supremum) together with the budget constraint. The dual domains, defined as the polar sets of the primal domains in the sense of \cite{bipolar}, are characterized via a family of orderings we introduce on the set of non-negative optional processes. The corresponding ordering for the ratchet constraint, which we call \textit{chronological ordering}, implicitly appears in the convex duality method for optimization over the set of increasing processes described in \cite{BK}; however, the approach of the current paper (i) seems to be more direct, (ii) allows for generalization to the drawdown constraint with $\lambda\in(0,1)$, and (iii) allows us to add another parameter to the optimization, an essential lower bound on the consumption process.
This parameter is omnipresent in the literature on optimization under the ratchet/drawdown constraint, since, if the market is Markovian, adding this second parameter turns the problem into a Markovian one: all the information about the past consumption that is necessary for the future optimization is contained in the current running maximum, which serves (up to multiplication by $\lambda$) as a lower bound for the future consumption. Based on \cite{mostovyi}, we derive a duality result for the \textit{two-parameter} optimization problem, where the parameters are the initial wealth and the essential lower bound on consumption. In this regard, the formulation and derivation of our main optimization result (Theorem \ref{thm:main-duality}) are similar to the derivation of \cite{hug-kramkov} from the results of \cite{Kramkov:1999aa,Kramkov:2003aa}, and to the work of \cite{yu}. In a complete market, where the set of equivalent martingale deflators is a singleton, the characterization of the dual domains simplifies (Proposition \ref{prop:complete-case-D}) and allows for a more detailed description of the structure of the optimizers. It turns out that in the drawdown constraint case the optimizers exhibit three possible types of behavior: the agent consumes either at the minimal level currently allowed by the drawdown constraint, or at the current running essential supremum level, or as an unconstrained agent (i.e., not restricted by the drawdown constraint) with the same utility function but a different initial wealth. In the case of the ratchet constraint, the description of the optimizers is linked to the Bank--El Karoui Representation Theorem for stochastic processes and to the related notion of the envelope process introduced in \cite{BK}. As a special case, we derive from Corollary~\ref{cor:complete-env} Riedel's formula for the optimal consumption plans in a complete L\'evy market model with exponential time preferences and infinite time horizon. The paper is organized as follows. In Section \ref{sec:math-framework}, we describe the market model, define the domains for consumption processes, and formulate the utility maximization problem. In Section \ref{sec:run-esssup}, we define the running essential supremum process and study its properties. Section \ref{sec:domains} describes the structure of the Brannath--Schachermayer polar sets of the consumption domains. Section \ref{sec:optimization} introduces the two-parameter families of primal and dual domains and proves the main optimization result, Theorem \ref{thm:main-duality}. In Section \ref{sec:complete}, we derive more specific results for the complete market case. In the \hyperref[app:envelope]{Appendix}, we give a modification of a lemma by \cite{BK} concerning the existence and uniqueness of the envelope process, which allows us to say more about the structure of the optimizers for the ratchet constraint in a complete market and, in fact, to give an alternative solution (Proposition \ref{prop:alternative-sol}), not relying on the main duality result, for this case. \section{Mathematical framework}\label{sec:math-framework} \subsection{The market model} Let $\hat{T}\in(0,\infty]$ be a deterministic time horizon and denote $\mathcal{I}:=[0,\hat{T}]$ if $\hat{T}<\infty$ and $\mathcal{I}:=[0,\infty)$ if $\hat{T}=\infty$.
We consider a market consisting of one num\'eraire asset (for example, a savings account) and $d$ risky assets with discounted price process $S=(S_t)_{t\in\mathcal{I}}$ given by an $\mathbb{R}^d$-valued semimartingale on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in\mathcal{I}},\mathbb{P})$, where the filtration $(\mathcal{F}_t)_{t\in\mathcal{I}}$ satisfies the usual conditions. We denote the corresponding optional $\sigma$-algebra on $\Omega\times[0,\hat{T})$ by $\mathcal{O}$. We fix a stochastic clock $\kappa=(\kappa_t)_{t\in[0,\hat{T}]}$ representing the notion of time according to which consumption is assumed to occur. \begin{assume}\label{ass:clock} There is a \textit{strictly positive}, adapted process $(\dot{\kappa}_s)_{s\in\mathcal{I}}$ with values in $(0,\infty)$ such that $\kappa_t(\omega)=\int_0^t \dot{\kappa}_s(\omega)ds$ for all $(\omega,t)\in\Omega\times[0,\hat{T}]$. Additionally, $\kappa$ is uniformly bounded: $\kappa_{\hat{T}}(\omega)\leq A$ for a finite constant $A$ and for all $\omega\in\Omega$. \end{assume} Two examples of stochastic clocks satisfying these conditions are the exponential clock $\kappa(t)=\frac{1}{\nu}(1-e^{-\nu t})$, $t\geq 0$, for some $\nu>0$ and for any $\hat{T}\in(0,\infty]$, and the Lebesgue clock $\kappa(t)=t$ for $\hat{T}<\infty$. Assumption~\ref{ass:clock} implies, in particular, that the measures $\mathbb{P}\times d\kappa$ and $\mathbb{P}\times \text{Leb}([0,\hat{T}))$ on $\Omega\times[0,\hat{T})$ are equivalent. A portfolio is defined as a triplet $\Pi=(x,H,c)$ of a constant initial wealth $x$, a predictable $\mathbb{R}^d$-valued $S$-integrable process $H=(H_t)_{t\in\mathcal{I}}$ of risky asset quantities, and a non-negative finite-valued optional process $c:\Omega\times[0,\hat{T})\to[0,\infty)$ representing the consumption rate relative to $\kappa$. The discounted value process $V=(V_t)_{t\in\mathcal{I}}$ of the portfolio $\Pi$ is \begin{equation}\label{eq:wealth-process} V_t:=x+\int_0^t H_sdS_s-\int_0^tc_sd\kappa_s,\quad t\in\mathcal{I}. \end{equation} For $x>0$, a consumption plan $c$ is called \textit{$x$-admissible} if there exists a predictable $\mathbb{R}^d$-valued $S$-integrable process $H$ such that the value process $V$ of the portfolio $(x,H,c)$ is non-negative. \begin{rem}\label{rem:on-model} \begin{enumerate} \item[(i)] Our market model is essentially the model of \cite{mostovyi} but with additional assumptions on the stochastic clock $\kappa$: it is absolutely continuous with respect to the Lebesgue measure and has support of the form $\mathcal{I}=[0,\hat{T}]\cap[0,\infty)$. Clearly, if $\hat{T}<\infty$ then we can embed our model into the infinite horizon model of Mostovyi by assuming that $S_t=S_{\hat{T}}$, $\kappa_t=\kappa_{\hat{T}}$, $\mathcal{F}_t=\mathcal{F}_{\hat{T}}$ for all $t\in(\hat{T},\infty)$. This allows us to freely apply Mostovyi's duality results in what follows. The reason why we separate the finite and infinite time horizon cases is that an $x$-admissible consumption plan $c$ satisfying the drawdown constraint on $[0,\hat{T}]$ for $\hat{T}<\infty$ generally cannot be extended to an $x$-admissible consumption plan on $[0,\infty)$ still satisfying the drawdown constraint on $[0,\infty)$: if we extend $c$ by taking $c_t=0$ for $t\geq \hat{T}$, the drawdown constraint fails after time $\hat{T}$. Hence, the drawdown constraints on consumption throughout $[0,\infty)$ and on $[0,\hat{T}]$ for $\hat{T}<\infty$ are two different (though similar) conditions, and we have to separate their treatment.
The results of this paper remain valid if we assume $\hat{T}$ to be a $[0,\infty]$-valued stopping time with respect to a filtration $(\mathcal{F}_{t})_{t\in[0,\infty)}$ satisfying the usual conditions (with the exception of the supplementary results in the \hyperref[app:envelope]{Appendix}, which rely on the Representation Theorem of \cite{bank-el-karoui} requiring the stopping time $\hat{T}$ to be predictable, according to their Remark 2.1). In this case, $\mathcal{I}$ becomes a stochastic interval and we assume all processes after time $\hat{T}$ to be equal to their value at time $\hat{T}$ when $\hat{T}$ is finite. \item[(ii)] Since we work in discounted units, $V_t$ in \eqref{eq:wealth-process} is given in the number of num\'eraire assets, and so is the cumulative consumption process $C_t:=\int_0^t c_s d\kappa_s$, $t\in\mathcal{I}$. The initial model of \cite{mostovyi} allows the flexibility of changing $\dot\kappa_t$ to $\dot\kappa_t/n_t$ and $c_t$ to $c_tn_t$, where $n_t$ is a strictly positive optional process, so that $C_t$ remains the same and the uniform boundedness in Assumption \ref{ass:clock} is satisfied for the new stochastic clock with density $\dot\kappa_t/n_t$. In the utility maximization problem \eqref{eq:primal-problem} below, optimizing over $c_t$ can be reformulated as optimizing over $c_tn_t$ by a simple change of the utility function $U$, as described in \cite[Remark 2.2]{mostovyi}. However, in our case, this flexibility is no longer useful since $c_t$ is the consumption rate on which we want to impose the ratchet/drawdown constraint and hence is uniquely determined: multiplication by a time-dependent process $n_t$ does not preserve this type of constraint. Thus, if we are imposing a ratchet/drawdown constraint on the consumption rate measured in currency per unit time (with respect to the usual Lebesgue clock $dt$), $\dot\kappa_t$ is uniquely determined from \eqref{eq:wealth-process} through the num\'eraire asset: if the num\'eraire asset is $N_t:=N_0\exp\left(\int_0^t r_sds\right)$, where $N_0>0$ and an optional process $r_t$ represents an interest rate, then the appropriate stochastic clock density is given by $\dot\kappa_t=1/N_t$ and Assumption \ref{ass:clock} must be satisfied for this clock. On the other hand, if we are imposing a ratchet/drawdown constraint on the discounted consumption rate (i.e., measured in the number of num\'eraire assets per unit time with respect to $dt$), then the appropriate stochastic clock density is $\dot\kappa_t\equiv1$. In particular, this stochastic clock satisfies Assumption \ref{ass:clock} only if $\hat{T}$ is finite (or uniformly bounded in the case of a stopping time $\hat{T}$). If we want to impose a ratchet/drawdown constraint on the consumption rate measured in a different asset, this can be done in a similar way as long as Assumption \ref{ass:clock} for the corresponding clock $\kappa$ is satisfied. \end{enumerate} \end{rem} Next, we define the set of all non-negative value processes associated with portfolios of the form $\Pi=(1,H,0)$, $$\mathcal{X}:=\left\{X\geq 0:\ X_t=1+\int_0^t H_sdS_s \text{ for } t\in\mathcal{I}\right\},$$ and the set of \textit{equivalent martingale deflators}, \begin{align*} \mathcal{Z}:=\left\{\right.&\left.(Z_t)_{t\in\mathcal{I}}>0:\ Z \text{ is a c\`adl\`ag martingale such that } Z_0=1 \text{ and}\right.\\ &\left. XZ=(X_tZ_t)_{t\in\mathcal{I}} \text{ is a local martingale for every } X\in\mathcal{X}\right\}. \end{align*} We make the following assumption, which is related to the absence of arbitrage in the market.
\begin{assume}\label{ass:NA}$\mathcal{Z}\neq\emptyset$.\end{assume} If $\hat{T}$ is finite and $S$ is locally bounded then $S$ is a local martingale under any probability measure $\mathbb{Q}\sim \mathbb{P}$ given by $\frac{d\mathbb{Q}}{d\mathbb{P}}=Z_{\hat{T}}$ with $Z\in\mathcal{Z}$. Conversely, if $S$ is a local martingale under a probability measure $\mathbb{Q}\sim \mathbb{P}$ then $Z_t=\frac{d\mathbb{Q}}{d\mathbb{P}}\big\vert_{\mathcal{F}_{t}}$, $t\in\mathcal{I}$, belongs to $\mathcal{Z}$. Hence, in this case, by \cite[Corollary 1.2]{delbaen_general_1994}, Assumption \ref{ass:NA} is equivalent to the no free lunch with vanishing risk (NFLVR) condition on the market, a version of the no-arbitrage condition. According to Lemma 4.2 in \cite{mostovyi}, a consumption plan $c$ is $x$-admissible if and only if \begin{equation}\label{eq:x-admissible} \mathbb{E}\left[\int_0^{\hat{T}} c_tZ_td\kappa_t\right]\leq x,\quad \forall Z\in\mathcal{Z}. \end{equation} We adopt the following simplifying notation for non-negative optional processes $c$ and $\delta$: $$\langle c,\delta\rangle:=\mathbb{E}\left[\int_0^{\hat{T}} c_t\delta_t d\kappa_t\right],$$ so, in particular, the $x$-admissibility condition \eqref{eq:x-admissible} can be written as $\sup_{Z\in\mathcal{Z}}\langle c,Z\rangle\leq x$. \begin{rem} The results of this paper hold as well if the set $\mathcal{Z}$ of equivalent martingale deflators is replaced by the set \begin{align*} \mathcal{Z}':=\left\{\right.&\left.(Z_t)_{t\in\mathcal{I}}>0:\ Z \text{ is a c\`adl\`ag local martingale such that } Z_0=1 \text{ and}\right.\\ &\left. XZ=(X_tZ_t)_{t\in\mathcal{I}} \text{ is a local martingale for every } X\in\mathcal{X}\right\} \end{align*} of \textit{equivalent local martingale deflators}. Clearly, $\mathcal{Z}\subseteq\mathcal{Z}'$ and the assumption $\mathcal{Z}'\neq\emptyset$ is weaker than Assumption \ref{ass:NA}. By Proposition~1 in \cite{mostovyiNUPBR}, the assumption $\mathcal{Z}'\neq\emptyset$ is equivalent to another version of the no-arbitrage condition, namely no unbounded profit with bounded risk (NUPBR). Under this assumption, a consumption plan $c$ is $x$-admissible if and only if $\sup_{Z\in\mathcal{Z}'}\langle c,Z\rangle\leq x$, analogously to \eqref{eq:x-admissible} (see Lemma~1 in \cite{mostovyiNUPBR}), which makes it possible to replace $\mathcal{Z}$ with $\mathcal{Z}'$ in the arguments that follow. \end{rem} \subsection{Domains for consumption processes}\label{subsec:domains} In order to be able to formulate the drawdown constraint on the consumption rate in this generalized framework, it is necessary to have a meaningful extension of the notion of the running maximum from continuous processes to processes that are merely measurable. Such an extension, which we call a running essential supremum process, is introduced in Definition \ref{def:running-max} and studied in Section \ref{sec:run-esssup}. The running essential supremum $\bar{c}$ of a non-negative optional process $c$ belongs (by Proposition \ref{prop:run-sup-properties}) to the following class of stochastic processes: \begin{defn}\label{def:C-0-class} The class $\mathcal{C}_{\text{inc}}$ consists of predictable processes $c:\Omega\times[0,\hat{T})\to[0,\infty]$ with non-decreasing, left-continuous paths starting from $c_0=0$. \end{defn} Moreover, $\bar{c}$ is the smallest element of $\mathcal{C}_{\text{inc}}$ that is greater than or equal to $c$, $\mathbb{P}\times d\kappa-$almost surely on $\Omega\times[0,\hat{T})$ (Propositions \ref{prop:increasing-dom}, \ref{prop:minimality-of-c-bar}).
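Ahead of the formal Definition \ref{def:running-max}, a discretized illustration may be helpful (a sketch in Python with made-up data): on a time grid, the running essential supremum is approximated by the running maximum of the sampled values, and a modification of the path on a Lebesgue-null set (here an isolated spike, kept separate from the grid values) changes the pathwise running supremum but not the running essential supremum.
\begin{verbatim}
# Sketch: discretized running essential supremum vs. pathwise running supremum.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
c_grid = 1.0 + 0.5 * t * np.sin(6 * t)        # values of c off the null set
run_esssup = np.maximum.accumulate(c_grid)    # approximates \bar{c}

spike_time, spike_value = 0.5, 10.0           # modification on {0.5}: Leb-null
run_sup = np.where(t >= spike_time,
                   np.maximum(run_esssup, spike_value), run_esssup)

print(run_esssup[-1])   # ~1.15: the spike is invisible to \bar{c}
print(run_sup[-1])      # 10.0 : the pathwise running supremum sees it
\end{verbatim}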
Now we can formulate the constraint and define the corresponding domains. Let $q\in\mathbb{R}$ and $\lambda\in[0,1]$. We introduce the following drawdown constraint on consumption: \begin{equation}\label{cond:ddc-with-q} c_t(\omega)\geq \lambda\cdot(\bar{c}_t(\omega)\vee q),\quad \mathbb{P}\times d\kappa-\text{a.e. on }\Omega\times[0,\hat{T}),\tag{DC$_{\lambda,q}$} \end{equation} where ``$\vee$'' denotes the pointwise maximum of two processes (one of which is the constant process $q$ in this case). Similarly, we reserve the notation ``$\wedge$'' for the pointwise minimum of two processes. Clearly, when $q\leq 0$, the constraint \eqref{cond:ddc-with-q} reduces simply to \begin{equation}\label{cond:ddc} c_t(\omega)\geq \lambda\bar{c}_t(\omega),\quad \mathbb{P}\times d\kappa-\text{a.e. on }\Omega\times[0,\hat{T}).\tag{DC$_{\lambda}$} \end{equation} For $x>0$, $\lambda\in[0,1]$, and $q\in\mathbb{R}$, we define the primal domains as follows: \begin{equation}\label{def:primal-dom} \mathcal{C}^\lambda(x,q):=\{c\geq 0\text{ optional}:\ c\vee \lambda(\bar{c}\vee q) \text{ is } x\text{-admissible} \}, \end{equation} and denote $\mathcal{C}^\lambda(x):=\mathcal{C}^\lambda(x,0)$, the domain for optimization without the parameter $q$, and $\mathcal{C}^\lambda:=\mathcal{C}^\lambda(1)$, so that $\mathcal{C}^\lambda(x)=x\cdot\mathcal{C}^\lambda$. The set $\mathcal{C}^\lambda(x,q)$ is non-empty as long as the constant process $c\equiv\lambda(0\vee q)$ is $x$-admissible. By \eqref{eq:x-admissible}, this holds if and only if $\lambda q\cdot\sup_{Z\in\mathcal{Z}}\mathbb{E}\left[\int_0^{\hat{T}}Zd\kappa\right]\leq x$. The sets $\mathcal{C}^\lambda(x,q)$ are solid: if $c'\geq c\geq 0$, $\mathbb{P}\times d\kappa-$a.e., and $c'\in\mathcal{C}^\lambda(x,q)$, then $c\in\mathcal{C}^\lambda(x,q)$. As the following proposition states, $\mathcal{C}^\lambda(x,q)$ is in fact the solid hull of all $x$-admissible processes satisfying \eqref{cond:ddc-with-q}. We take definition \eqref{def:primal-dom} as our working definition of the domains, rather than the solid hull definition \eqref{def:primal-dom2}, because it provides a concrete way of checking whether a \textit{given} process $c$ belongs to $\mathcal{C}^\lambda(x,q)$: by checking whether $c\vee\lambda(\bar{c}\vee q)$ is $x$-admissible. \begin{prop}\label{prop:C-is-closed} Let $x>0$ and $q\in\mathbb{R}$. Every $x$-admissible process satisfying \eqref{cond:ddc-with-q} belongs to $\mathcal{C}^\lambda(x,q)$. If $c\in\mathcal{C}^\lambda(x,q)$ then $c\vee \lambda(\bar{c}\vee q)$ satisfies \eqref{cond:ddc-with-q} and belongs to $\mathcal{C}^\lambda(x,q)$. As a consequence, \begin{equation}\label{def:primal-dom2} \begin{aligned} \mathcal{C}^\lambda(x,q)=\left\{\right.&\left. c\geq 0\text{ optional}:\ \exists\ c'\geq 0 \text{ s.t. } c\leq c',\ \mathbb{P}\times d\kappa-\text{a.e.},\right.\\ &\left. c' \text{ satisfies }\eqref{cond:ddc-with-q},\text{ and }c'\text{ is } x\text{-admissible}\right\}. \end{aligned} \end{equation} \end{prop} \begin{proof} For every $x$-admissible process $c$ satisfying \eqref{cond:ddc-with-q}, $c\vee \lambda(\bar{c}\vee q)=c$, $\mathbb{P}\times d\kappa-$a.e., hence $c\vee \lambda(\bar{c}\vee q)$ is $x$-admissible and $c$ belongs to $\mathcal{C}^\lambda(x,q)$ by definition.
It is easy to check directly from Definition \ref{def:running-max} that for every non-negative optional $c$ the running essential supremum of the process $c\vee \lambda(\bar{c}\vee q)$ is equal to $\bar{c}\vee \lambda q\mathbbm{1}_{(0,\hat{T})}$ and, therefore, $c\vee \lambda(\bar{c}\vee q)$ automatically satisfies \eqref{cond:ddc-with-q}. Hence, if $c\in\mathcal{C}^\lambda(x,q)$ then $c\vee \lambda(\bar{c}\vee q)$ is $x$-admissible and satisfies \eqref{cond:ddc-with-q}, so that $c\vee \lambda(\bar{c}\vee q)$ belongs to $\mathcal{C}^\lambda(x,q)$ by the first assertion. To show \eqref{def:primal-dom2}, we take $c':=c\vee \lambda(\bar{c}\vee q)$ and use the first two assertions. \end{proof} Proposition \ref{prop:C-is-closed} implies that if a functional $\mathbb{U}$ on $\mathcal{C}^\lambda(x,q)$ satisfies the monotonicity property \begin{equation}\label{eq:monotone-functional} c_1\leq c_2,\quad \mathbb{P}\times d\kappa-\text{a.e.}\quad \Rightarrow\quad \mathbb{U}(c_1)\leq\mathbb{U}(c_2), \end{equation} and has a maximizer $c$ on $\mathcal{C}^\lambda(x,q)$, then $\hat{c}:=c\vee \lambda(\bar{c}\vee q)\geq c$ is also a maximizer and, in particular, \textit{it is a maximizer over all $x$-admissible processes satisfying \eqref{cond:ddc-with-q}}. If $\lambda=0$ then the domains $\mathcal{C}^\lambda(x,q)=\mathcal{C}^0(x)$ consist of all $x$-admissible consumption plans and this case is handled in \cite{mostovyi}, even with more general assumptions on the stochastic clock $\kappa$. Therefore, we will only consider the optimization problem for $\lambda\in(0,1]$ in this paper. If $\lambda=1$ then we can say slightly more: the constraint \eqref{cond:ddc-with-q} turns into $c_t\geq \bar{c}_t\vee q$, $\mathbb{P}\times d\kappa-$a.e., which, by Proposition \ref{prop:increasing-dom}, holds if and only if $c=\bar{c}\geq q$, $\mathbb{P}\times d\kappa-$a.e. The definition \eqref{def:primal-dom} for $\lambda=1$ turns into $$\mathcal{C}^1(x,q):=\{c\geq 0\text{ optional}:\ \bar{c}\vee q \text{ is } x\text{-admissible} \},$$ and for a functional $\mathbb{U}$ on $\mathcal{C}^1(x,q)$ satisfying \eqref{eq:monotone-functional} we obtain: if $c\in\mathcal{C}^1(x,q)$ is an optimizer then $\bar{c}\vee q\mathbbm{1}_{(0,\hat{T})}$ is also an optimizer and, in particular, it is an optimizer over all $x$-admissible processes $c'\in\mathcal{C}_{\text{inc}}$ with $c'_{0+}\geq q$. This argument allows us to completely embed an optimization problem over the set of $x$-admissible processes $c'\in\mathcal{C}_{\text{inc}}$ with $c'_{0+}\geq q$ into an optimization problem over the subset $\mathcal{C}^1(x,q)$ of non-negative optional processes. Finally, notice that the set $\mathcal{C}^1(x,q)$ contains all $x$-admissible non-negative optional $c$ such that for almost every $\omega\in\Omega$ the path $t\mapsto c_t(\omega)$ is non-decreasing and $c_{0+}(\omega)\geq q$. This holds because $c=\bar{c}\vee q$, $\mathbb{P}\times d\kappa-$a.e., for such $c$. Hence, the optimization of a functional $\mathbb{U}$ satisfying \eqref{eq:monotone-functional} over all $x$-admissible non-decreasing optional processes that are (essentially) bounded from below by $q$ can legitimately be viewed as optimization over $\mathcal{C}^1(x,q)$.
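\begin{rem} The following deterministic illustration of the constraint \eqref{cond:ddc} may help to fix ideas (we take $\hat{T}=2$ and suppress $\omega$). The plan $c=\mathbbm{1}_{[0,1)}+2\cdot\mathbbm{1}_{[1,2)}$ is non-decreasing, so $c=\bar{c}$ Lebesgue-a.e. and $c$ satisfies \eqref{cond:ddc} for every $\lambda\in[0,1]$. The plan $c'=2\cdot\mathbbm{1}_{[0,1)}+\mathbbm{1}_{[1,2)}$ has $\bar{c}'=2\cdot\mathbbm{1}_{(0,2)}$, and $c'\geq\lambda\bar{c}'$ a.e. holds if and only if $1\geq 2\lambda$; thus $c'$ satisfies \eqref{cond:ddc} precisely for $\lambda\leq 1/2$, and the drawdown of consumption from its running maximum level $2$ down to $1$ is ruled out for $\lambda>1/2$. \end{rem}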
\subsection{Formulation of the optimization problem} Let $U=U(\omega,t,x):\Omega\times[0,\hat{T})\times[0,\infty)\to\mathbb{R}\cup\{-\infty\}$ be a \textit{utility stochastic field} satisfying the following conditions (the same conditions as in \cite{mostovyi}): \begin{assume}\label{ass:utility} For every $(\omega,t)\in\Omega\times[0,\hat{T})$ the function $x\mapsto U(\omega,t,x)$ is strictly concave, increasing, continuously differentiable on $(0,\infty)$ and satisfies the Inada conditions: $$\lim_{x\downarrow 0}U'(\omega,t,x)=+\infty\quad\text{and}\quad\lim_{x\to\infty}U'(\omega,t,x)=0,$$ where $U'$ denotes the partial derivative with respect to $x$. At $x=0$ we define, by continuity, $U(\omega,t,0):=\lim_{x\downarrow 0}U(\omega,t,x)$; this value may be $-\infty$. For every $x\geq 0$ the stochastic process $U(\cdot,\cdot,x)$ is optional. \end{assume} We consider the problem where an agent maximizes his expected utility of intertemporal consumption $c$ under the $x$-admissibility constraint \eqref{eq:x-admissible} and the drawdown constraint \eqref{cond:ddc-with-q}. This problem can therefore be seen as optimization over the set $\mathcal{C}^\lambda(x,q)$ defined in \eqref{def:primal-dom}. The associated value function is \begin{equation}\label{eq:primal-problem} u(x,q):=\sup_{c\in\mathcal{C}^\lambda(x,q)}\mathbb{E}\left[\int_0^{\hat{T}} U(\omega,t,c_t)d\kappa_t\right],\quad (x,q)\in\mathcal{K}, \end{equation} where an appropriate domain $\mathcal{K}\subseteq\mathbb{R}^2$ for $(x,q)$ is to be specified later. Here we use the convention $$\mathbb{E}\left[\int_0^{\hat{T}} U(\omega,t,c_t)d\kappa_t\right]:=-\infty\quad\text{if}\quad \mathbb{E}\left[\int_0^{\hat{T}} U^{-}(\omega,t,c_t)d\kappa_t\right]=+\infty,$$ where $W^{-}$ denotes the negative part of a stochastic field $W$. Our goal is to develop duality arguments, based on \cite{mostovyi}, describing the solutions of the two-parameter optimization problem \eqref{eq:primal-problem} in a fashion similar to the treatment in \cite{hug-kramkov} of the expected utility maximization problem with random endowments at maturity. \section{Running essential supremum of a non-negative optional process}\label{sec:run-esssup} In this section, we introduce the notion of a running essential supremum. This is the appropriate generalization, to merely measurable processes, of the running maximum of a continuous process. To the best of our knowledge, this process has not been previously considered in the literature. Let $c$ be a non-negative optional process on $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\in[0,\hat{T})},\mathbb{P})$. Since $c$ is $\mathcal{F}\otimes\mathcal{B}([0,\hat{T}))$-measurable, every path of $c$ is Borel measurable. Therefore, the following pathwise definition of the running essential supremum of $c$ makes sense. \begin{defn}\label{def:running-max} For every $\omega\in\Omega$, we define $$\bar{c}_t(\omega):=\esssup_{s\in[0,t]}c_s(\omega)\in[0,\infty]\quad \text{ for } t\in(0,\hat{T}),\quad\text{ and } \bar{c}_0(\omega):=0,$$ where the essential supremum is taken with respect to the Lebesgue measure. We call $\bar{c}$ the \textit{running essential supremum} of $c$. \end{defn} In the sequel, we often denote the running essential supremum of a process by a bar over it, without stating this explicitly. \begin{rem} Let $\tilde{c}_t(\omega):=\sup_{s\in[0,t]}c_s(\omega)$ for $(\omega,t)\in\Omega\times[0,\hat{T})$. This is the smallest non-decreasing process satisfying $c_t(\omega)\leq\tilde{c}_t(\omega)$ for all $(\omega,t)\in\Omega\times[0,\hat{T})$.
Clearly, $\tilde{c}\geq \bar{c}$ and if $c\geq \lambda(\tilde{c}\vee q)$ then $c$ satisfies \eqref{cond:ddc-with-q}. However, there is a reason why we consider $\bar{c}$ and not $\tilde{c}$ in the formulation of the drawdown constraint \eqref{cond:ddc-with-q}. An optimizer for \eqref{eq:primal-problem} is determined up to $\mathbb{P}\times d\kappa-$nullsets; therefore, our definition of the drawdown constraint should be indifferent to changes of $c$ on $\mathbb{P}\times d\kappa-$nullsets. Hence, it is natural to adopt a definition of the running maximum which yields the same process, up to indistinguishability, for any two processes that are equal $\mathbb{P}\times d\kappa-$a.e. This holds for $\bar{c}$ (Proposition \ref{prop:mon-of-sup}) but not for $\tilde{c}$. It also seems reasonable from an economic perspective to have a definition of the running maximum that is unaffected by a spike of the consumption rate $c$ at a single point in time, since such a spike does not affect the expected utility functional anyway. \end{rem} We will give an alternative, equivalent definition of the running essential supremum, which makes certain properties of $\bar{c}$ easier to prove. First, we introduce the notion of the essential debut of level $l\geq 0$ by the process $c$. \subsection{Essential debut}\label{sec:ess-debut} Let $l\in[0,\infty)$. Define a stochastic process $$X_t^l(\omega)=\int_0^t\mathbbm{1}_{\{c_s(\omega)\geq l\}}ds,\quad t\in[0,\hat{T}),$$ which is non-negative, non-decreasing, continuous, and adapted. Define $$\tau^l(\omega)=\inf\{t\in[0,\hat{T}): X_t^l(\omega)>0\},$$ the hitting time of $(0,\infty)$ by the process $X^l$, where by convention the infimum of an empty set is $\hat{T}$. By construction, $\tau^l$ can also be characterized as follows: $$\tau^l(\omega)=\inf\{t\in[0,\hat{T}): \vert \left\{s\in[0,t]: c_s(\omega)\geq l\right\}\vert>0\},$$ where $\vert\cdot\vert$ denotes the Lebesgue measure of a set. That is, $\tau^l$ is the \textit{essential debut} (see \cite{dellacherie-meyerA}, Chapter IV, p.108) of the set $\{(\omega,t): c_t(\omega)\geq l\}$: given $\omega\in\Omega$, the set $\{s\in[0,t]: c_s(\omega)\geq l\}$ has positive Lebesgue measure for every $t\in(\tau^l(\omega),\hat{T})$, and $\tau^l(\omega)$ is the smallest time satisfying this property. Clearly, the essential debut $\tau^l$ is no smaller than the hitting time $\inf\{t\in[0,\hat{T}):c_t(\omega)\geq l\}$, but it can be strictly larger if the process $c$ does not spend a strictly positive amount of time (in the sense of the Lebesgue measure) at or above level $l$ right after hitting it. \begin{prop}\label{prop:stopping-time} For every $l\geq 0$, $\tau^l$ is a stopping time. \end{prop} \begin{proof} Since $\tau^l$ is the hitting time of the open set $(0,\infty)$ by the continuous adapted process $X^l$ and since the filtration $\left(\mathcal{F}_t\right)_{t\in[0,\hat{T})}$ is right-continuous, $\tau^l$ is a stopping time (see, for example, \cite{karatzas-shreve-book}, p.7, 2.6 Problem). \end{proof} \subsection{Alternative characterization of $\bar{c}_t$} In the sequel, we often omit writing $\omega$ in pathwise statements.
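\begin{rem} As a concrete deterministic illustration of the essential debut and of Definition \ref{def:running-max}, take $\hat{T}=4$ and $c=\mathbbm{1}_{\{1\}}+2\cdot\mathbbm{1}_{(2,3]}$ on $[0,4)$. Then $\tau^0=0$, $\tau^l=2$ for every $l\in(0,2]$, and $\tau^l=4=\hat{T}$ for $l>2$, so that $\bar{c}=2\cdot\mathbbm{1}_{(2,4)}$. The hitting time of level $1$ by $c$ equals $1$, whereas $\tau^1=2$: the isolated spike at $t=1$ is invisible to $\bar{c}$, in line with the preceding remark. Note also that $c_1=1>\bar{c}_1=0$, which shows that the inequality $c\leq\bar{c}$ of Proposition \ref{prop:increasing-dom} below can indeed fail at individual points in time, while the running supremum $\tilde{c}=\mathbbm{1}_{[1,2]}+2\cdot\mathbbm{1}_{(2,4)}$ registers the spike. \end{rem}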
\begin{prop}\label{prop:second-def} The definition of the running essential supremum $\bar{c}_t$ in Definition \ref{def:running-max} is equivalent to the following: $$\bar{c}_t=\sup\{l\geq 0: \tau^l<t\}=\inf\{l\geq 0: \tau^l\geq t\}.$$ \end{prop} Note that the function $l\mapsto\tau^l(\omega)$ is non-decreasing for every $\omega\in\Omega$, hence Proposition~\ref{prop:second-def} says that $t\mapsto\bar{c}_t(\omega)$ is a generalized inverse of the non-decreasing function $l\mapsto\tau^l(\omega)$. \begin{proof} The statement is obvious for $t=0$, so let us fix $\omega\in\Omega$ and $t\in(0,\hat{T})$. Since $l\mapsto\tau^l(\omega)$ is a non-decreasing function, the second equality, between the supremum and the infimum, holds and we can denote $\theta:=\sup\{l\geq 0: \tau^l<t\}=\inf\{l\geq 0: \tau^l\geq t\}$. For every $0\leq\nu<\theta$, $\tau^{\nu+\varepsilon}<t$ for all $\varepsilon>0$ small enough, therefore $$\vert\left\{s\in[0,t]: c_s>\nu\right\}\vert\geq\vert\left\{s\in[0,t]: c_s\geq\nu+\varepsilon\right\}\vert>0$$ for $\varepsilon$ small enough, and hence $\nu<\esssup_{s\in[0,t]}c_s$ by the definition of the essential supremum. This proves that $\theta\leq\esssup_{s\in[0,t]}c_s$. On the other hand, for every $\nu>\theta$, $\tau^\nu\geq t$ and therefore $\vert \left\{s\in[0,t]: c_s\geq \nu\right\}\vert=0$. This implies that $\vert \{s\in[0,t]: c_s> \theta\}\vert=0$, i.e., $\theta\geq\esssup_{s\in[0,t]}c_s$, and that the two definitions of $\bar{c}_t$, as the essential supremum and as the generalized inverse, coincide. \end{proof} \subsection{Properties of $\bar{c}_t$} Clearly, $t\mapsto\bar{c}_t(\omega)$ is non-decreasing. In the following two propositions we show that the process $\bar{c}$ is left-continuous and predictable, and that $c_t(\omega)\leq\bar{c}_t(\omega)$ for every $\omega\in\Omega$, Lebesgue-a.e. in $t$. \begin{prop}\label{prop:run-sup-properties} The running essential supremum $\bar{c}$ of a non-negative optional process $c$ is left-continuous and predictable. \end{prop} In particular, this proposition shows that $t\mapsto\bar{c}_t(\omega)$ is the \textit{left-continuous generalized inverse} of the non-decreasing function $l\mapsto\tau^l(\omega)$ (cf.~the definition of the right-continuous generalized inverse, for example, in \cite{karatzas-shreve-book}, p.174, 4.5 Problem). \begin{proof} To prove left-continuity, let us fix $\omega\in\Omega$ and $t\in(0,\hat{T})$. Since $s\mapsto\bar{c}_s$ is non-decreasing, we need to show that for every $\varepsilon>0$ there exists $\delta>0$ such that $\bar{c}_{t-\delta}\geq \bar{c}_t-\varepsilon$. Indeed, by the definition of $\bar{c}_t$, $$m:=\vert\{s\in[0,t]: c_s\geq \bar{c}_t-\varepsilon\}\vert>0,$$ and, in particular, $\vert\{s\in[0,t-m/2]: c_s\geq \bar{c}_t-\varepsilon\}\vert\geq m/2>0$. Hence $\bar{c}_{t-m/2}\geq \bar{c}_t-\varepsilon$. To prove predictability, we write, for every $\nu>0$ and any sequence of numbers $\varepsilon_n\downarrow 0$, \begin{equation}\label{eq:predictability} \left\{(\omega,t): \bar{c}_t(\omega)<\nu\right\}=\cup_{n\geq 1}\left\{(\omega,t): \tau^{\nu-\varepsilon_n}(\omega)\geq t\right\}. \end{equation} The inclusion ``$\subseteq$'' in \eqref{eq:predictability} holds because if $\bar{c}_t(\omega)<\nu$ then for $n$ large enough (i.e., for $\varepsilon_n$ small enough) $\tau^{\nu-\varepsilon_n}(\omega)\geq t$ thanks to the characterization of $\bar{c}_t$ as the generalized inverse of $\tau^l$. Conversely, if $\tau^{\nu-\varepsilon}(\omega)\geq t$ for some $\varepsilon>0$ then $\bar{c}_t(\omega)\leq\nu-\varepsilon<\nu$.
Note that for every $\varepsilon>0$ the set $\left\{(\omega,t): \tau^{\nu-\varepsilon}(\omega)\geq t\right\}$ can be rewritten as $$\left\{(\omega,t): \mathbbm{1}_{(\tau^{\nu-\varepsilon}(\omega),\hat{T})}(t)=0\right\}.$$ Since $\tau^{\nu-\varepsilon}$ is a stopping time, the indicator function $\mathbbm{1}_{(\tau^{\nu-\varepsilon},\hat{T})}$ is measurable with respect to the predictable sigma-algebra. Hence, the right-hand side of \eqref{eq:predictability} is a countable union of predictable sets, which implies that the set $\{\bar{c}_t(\omega)<\nu\}$ is predictable for every $\nu>0$ and therefore $\bar{c}$ is a predictable stochastic process. \end{proof} \begin{prop}\label{prop:increasing-dom} The running essential supremum $\bar{c}$ of a non-negative optional process $c$ satisfies for every $\omega\in\Omega$ the inequality $$c_t(\omega)\leq\bar{c}_t(\omega)\quad \text{for Lebesgue almost every } t\in[0,\hat{T}).$$ \end{prop} \begin{proof} Fix a path $\omega\in\Omega$. The statement will follow once we show that for every $\varepsilon>0$ the set $A^\varepsilon:=\{t\in[0,\hat{T}):\bar{c}_t\leq c_t-\varepsilon\}$ has zero Lebesgue measure. Assume that $\vert A^\varepsilon\vert>0$ for some $\varepsilon>0$. Then there exists $t_1\in A^\varepsilon$ such that $\vert A^\varepsilon\cap [0,t_1]\vert>0$. Otherwise, the set $A^\varepsilon\cap [0,\sup A^\varepsilon)\subseteq\cup_{k\geq 1}(A^\varepsilon\cap [0,s_k])$, where $s_k\in A^\varepsilon$ and $s_k\uparrow\sup A^\varepsilon$ as $k\to\infty$, has measure zero, a contradiction. The fact that $t_1\in A^\varepsilon$ and the definition of $\bar{c}_{t_1}$ as the essential supremum imply that the set $$B_1:=\{s\in[0,t_1]: c_s>c_{t_1}-\varepsilon\}$$ has Lebesgue measure zero. Therefore, the set $A^\varepsilon\cap [0,t_1]\setminus B_1$ still has a strictly positive Lebesgue measure, and $c_s\leq c_{t_1}-\varepsilon$ on $[0,t_1]\setminus B_1$. Now we can choose $t_2\in A^\varepsilon\cap [0,t_1]\setminus B_1$ such that $\vert A^\varepsilon\cap [0,t_2]\setminus B_1\vert>0$ and define the set $$B_2:=\{s\in[0,t_2]: c_s>c_{t_2}-\varepsilon\}.$$ In the same way, we obtain that $\vert B_2\vert=0$, $\vert A^\varepsilon\cap [0,t_2]\setminus (B_1\cup B_2)\vert>0$, and $c_s\leq c_{t_2}-\varepsilon\leq c_{t_1}-2\varepsilon$ on $[0,t_2]\setminus (B_1\cup B_2)$. Continuing this procedure, after $n$ steps we obtain $n$ Lebesgue-negligible sets $B_1,...,B_n$ and a number $t_n\in(0,\hat{T})$ such that $$\vert A^\varepsilon\cap [0,t_n]\setminus (B_1\cup ...\cup B_n)\vert>0$$ and $c_s\leq c_{t_1}-n\varepsilon$ on $[0,t_n]\setminus (B_1\cup...\cup B_n)$. This cannot be repeated infinitely many times since the process $c$ is non-negative and $c_{t_1}<\infty$, and the resulting contradiction implies that $\vert A^\varepsilon\vert=0$. \end{proof} By Proposition \ref{prop:run-sup-properties}, the running essential supremum $\bar{c}$ of every non-negative optional process $c$ belongs to $\mathcal{C}_{\text{inc}}$ (see Definition \ref{def:C-0-class}). Note that since two left-continuous stochastic processes on $(\Omega,[0,\hat{T}))$ are equal $\mathbb{P}\times\text{Leb}([0,\hat{T}))-$a.e. if and only if they are indistinguishable, the same is true for any two processes in $\mathcal{C}_{\text{inc}}$.
Moreover, the following property holds for running essential suprema: \begin{prop}\label{prop:mon-of-sup} If $c^1$, $c^2$ are two non-negative optional processes such that $c^1\leq c^2$, $\mathbb{P}\times\text{Leb}([0,\hat{T}))-$a.e., then $$\mathbb{P}\left(\{\omega: \bar{c}^1_t(\omega)\leq\bar{c}^2_t(\omega)\text{ for every }t\in[0,\hat{T})\}\right)=1.$$ In particular, if $c^1=c^2$, $\mathbb{P}\times\text{Leb}([0,\hat{T}))-$a.e., then $\bar{c}^1$ and $\bar{c}^2$ are indistinguishable. \end{prop} \begin{proof} If $\bar{c}^1_t(\omega)>\bar{c}^2_t(\omega)$ for some $t\in(0,\hat{T})$ then $\vert \{s\in [0,\hat{T}): c^1_s(\omega)>c^2_s(\omega)\}\vert>0$. However, by Fubini's theorem, since $c^1\leq c^2$, $\mathbb{P}\times\text{Leb}([0,\hat{T}))-$a.e., $$\mathbb{P}\left(\{\omega: \vert \{s\in[0,\hat{T}): c^1_s(\omega)>c^2_s(\omega)\}\vert>0\}\right)=0,$$ proving the first assertion. The second assertion easily follows from the first. \end{proof} The following proposition states that $\bar{c}$ is the smallest element of $\mathcal{C}_{\text{inc}}$ dominating $c$. \begin{prop}\label{prop:minimality-of-c-bar} If $c$ is a non-negative optional process and $c'\in\mathcal{C}_{\text{inc}}$ is such that $c\leq c'$, $\mathbb{P}\times\text{Leb}([0,\hat{T}))-$a.e., then $\bar{c}_t\leq c'_t$ for all $t\in [0,\hat{T})$, $\mathbb{P}-$a.e. \end{prop} \begin{proof} Assume that $\bar{c}_t(\omega)>c'_t(\omega)$ for some $\omega\in\Omega$ and $t\in(0,\hat{T})$. Then, since $c'$ is non-decreasing and by the definition of running essential supremum, \begin{equation}\label{eq:ineq-a-e} \vert\{s\in[0,\hat{T}): c_s(\omega)>c'_s(\omega)\}\vert>0. \end{equation} Since $c\leq c'$, $\mathbb{P}\times\text{Leb}([0,\hat{T}))-$a.e., by Fubini's theorem, the set of $\omega$'s satisfying \eqref{eq:ineq-a-e} has measure zero. Hence, $\mathbb{P}\left(\{\omega: \bar{c}_t(\omega)\leq c'_t(\omega)\text{ for all }t\in [0,\hat{T})\}\right)=1.$ \end{proof} \subsection{Integration with respect to $c\in\mathcal{C}_{\text{inc}}$} Fix a $[0,\infty)$-valued process $c\in\mathcal{C}_{\text{inc}}$. For every $\omega\in\Omega$, $c$ uniquely defines a non-negative Borel measure $dc_t(\omega)$ on $[0,\hat{T})$ via $$dc_t(\omega)([0,t))=c_{t}(\omega),\quad t\in(0,\hat{T}).$$ Note that if $c(\omega)$ has a jump at $t$, then $dc_t(\omega)(\{t\})=c_{t+}-c_t$. This is the same random measure that comes from the non-decreasing c\`adl\`ag adapted process $$c'_t:=c_{t+}=\lim_{\varepsilon\downarrow 0}c_{t+\varepsilon}\quad\text{for } t\in[0,\hat{T}),\quad\quad c'_{0-}:=0.$$ Integration with respect to non-decreasing c\`adl\`ag adapted processes, being a generalization of the Lebesgue-Stieltjes integration to stochastic processes, is a classical topic in the general theory of stochastic processes (see, for example, \cite{dellacherie-meyerB}, Chapter VI, no. 51--58) and in the next proposition we list several properties of integration with respect to $dc_t=dc'_t$ that will be extensively used in this paper. Throughout the paper, we adopt the following conventions: \begin{enumerate} \item The integral $\int_0^{\hat{T}}$ is understood as $\int_{[0,\hat{T})}$, i.e., $0$ is included in the domain of integration. \item For two $\mathcal{F}\otimes\mathcal{B}([0,\hat{T}))$-measurable processes $D^{1,2}:\Omega\times[0,\hat{T})\to[0,\infty)$ we say that $$D^1\leq D^2\quad \text{up to indistinguishability}$$ if $\mathbb{P}\left(\{\omega: D^1_t(\omega)\leq D^2_t(\omega)\text{ for all }t\in[0,\hat{T})\}\right)=1$.
By the Optional Section Theorem, for optional processes $D^{1,2}$ this condition is equivalent to $D^1_T\mathbbm{1}_{\{T<\hat{T}\}}\leq D^2_T\mathbbm{1}_{\{T<\hat{T}\}}$, $\mathbb{P}-$a.e., for every stopping time $T:\Omega\to[0,\hat{T}]$. \end{enumerate} \begin{prop} Let $c\in\mathcal{C}_{\text{inc}}$ be $[0,\infty)$-valued. Then \begin{enumerate} \item For every $D:\Omega\times[0,\hat{T})\to[0,\infty)$ that is $\mathcal{F}\otimes\mathcal{B}([0,\hat{T}))$-measurable, we have \begin{equation}\label{eq:int-optional-proj} \mathbb{E}\left[\int_0^{\hat{T}} D_tdc_t\right]=\mathbb{E}\left[\int_0^{\hat{T}}{}^oD_tdc_t\right], \end{equation} where ${}^oD$ denotes the optional projection of $D$. \item For every $f:\Omega\times[0,\hat{T})\to[0,\infty)$ that is $\mathcal{F}\otimes\mathcal{B}([0,{\hat{T}}))$-measurable, we have \begin{equation}\label{eq:IBP} \mathbb{E}\left[\int_0^{\hat{T}} f_t c_t dt\right]=\mathbb{E}\left[\int_0^{\hat{T}} \left(\int_s^{\hat{T}} f_tdt\right)dc_s\right]. \end{equation} \item If two optional processes $D^{1,2}:\Omega\times[0,{\hat{T}})\to[0,\infty)$ satisfy $D^1_T\mathbbm{1}_{\{T<\hat{T}\}}\leq D^2_T\mathbbm{1}_{\{T<\hat{T}\}}$, $\mathbb{P}-$a.e., for every stopping time $T\in[0,\hat{T}]$, then \begin{equation}\label{eq:comparison-for-int} \mathbb{E}\left[\int_0^{\hat{T}} D^1_tdc_t\right]\leq\mathbb{E}\left[\int_0^{\hat{T}} D^2_tdc_t\right]. \end{equation} \end{enumerate} \end{prop} \begin{proof} Item 1 is proved in \cite{dellacherie-meyerB}, Chapter VI, no. 57. Item 2 follows from $$\mathbb{E}\left[\int_0^{\hat{T}} f_t c_t dt\right]=\mathbb{E}\left[\int_0^{\hat{T}} f_t \int_{[0,t)}dc_s dt\right]$$ and Fubini's theorem. Item 3 follows since the Optional Section Theorem implies that $D^1\leq D^2$ up to indistinguishability. \end{proof} We will also need the following definition and lemma on several occasions. \begin{defn}\label{def:pt-of-increase} For $c\in\mathcal{C}_{\text{inc}}$ and $\omega\in\Omega$, we write $dc_t(\omega)>0$ (or simply $dc_t>0$) and call $t\in[0,\hat{T})$ a \textit{point of increase} of $c(\omega)$ if $c_{s}(\omega)>c_t(\omega)$ for all $s\in(t,\hat{T})$. \end{defn} \begin{lem}\label{lem:pt-of-increase} For a non-negative optional right-continuous process $D$ and for $c\in\mathcal{C}_\text{inc}$, the following are equivalent: \begin{enumerate} \item $D_t=0$, $\mathbb{P}\times dc_t-$almost everywhere on $\Omega\times[0,{\hat{T}})$. \item $\mathbb{P}-$almost everywhere, $D_t= 0$ for all $t\in[0,{\hat{T}})$ such that $dc_t>0$. \end{enumerate} \end{lem} \begin{proof} $``1.\Rightarrow 2."$ Assume that for some $\omega\in\Omega$ there exists $t\in[0,{\hat{T}})$ such that $D_t(\omega)> 0$ and $dc_t(\omega)>0$. Then $D_s(\omega)>0$ for all $s\in[t,t+\varepsilon]$ for $\varepsilon>0$ small enough by right-continuity of $D$, but also $dc(\omega)([t,t+\varepsilon])>0$. Hence $\int_0^{\hat{T}} D_t(\omega)dc_t(\omega)>0$, which is only possible on a $\mathbb{P}-$nullset by 1. $``2.\Rightarrow 1."$ For $l\geq 0$, define a stopping time $T_l:=\inf\{t\in[0,{\hat{T}}): c_t>l\}$, where the infimum of an empty set is taken to be ${\hat{T}}$. This is indeed a stopping time by the right-continuity of the filtration. Since $c$ is left-continuous and non-decreasing, $c_{T_l}\leq l<c_{s}$ for any $s\in(T_l,{\hat{T}})$, hence $T_l$ is either ${\hat{T}}$, or a point of increase of $c$. Therefore, by 2., $D_{T_l}\mathbbm{1}_{\{T_l<{\hat{T}}\}}=0$ for every $l\geq0$, $\mathbb{P}-$almost surely. 
The time-change formula (55.1) in \cite{dellacherie-meyerB}, Chapter VI, when applied to the c\`adl\`ag version $c'_t$ of $c_t$, states that $$\mathbb{E}\left[\int_0^{\hat{T}} D_tdc_t\right]=\mathbb{E}\left[\int_0^\infty D_{T_l}\mathbbm{1}_{\{T_l<{\hat{T}}\}}dl\right].$$ Since the right-hand side is zero, it follows that $D_t=0$, $\mathbb{P}\times dc_t-$almost everywhere. \end{proof} \section{Domains}\label{sec:domains} \subsection{The primal domain} Recall the definition $\mathcal{C}^\lambda=\mathcal{C}^\lambda(1,0)=\{c\geq 0 \text{ optional}:c\vee\lambda\bar{c}\text{ is 1-admissible}\}$ for $\lambda\in[0,1]$. The following proposition is important for the application of the results from \cite{mostovyi} to our optimization problem \eqref{eq:primal-problem}. \begin{prop}\label{prop:C-bipolar} For $\lambda\in[0,1]$, the set $\mathcal{C}^\lambda$ is convex, solid, and closed with respect to convergence in (finite) measure $\mathbb{P}\times d\kappa$ on $\Omega\times[0,\hat{T})$. \end{prop} \begin{proof} Solidity is obvious. To prove convexity, let $c^1,c^2\in\mathcal{C}^\lambda$. Since $c^1\vee\lambda\bar{c}^1,c^2\vee\lambda\bar{c}^2$ are $1$-admissible, $\theta(c^1\vee\lambda\bar{c}^1)+(1-\theta)(c^2\vee\lambda\bar{c}^2)$ is also $1$-admissible for $\theta\in[0,1]$. The running essential supremum of $\theta c^1+(1-\theta)c^2$ is bounded from above by $\theta\bar{c}^1+(1-\theta)\bar{c}^2$. Hence, both the process $\theta c^1+(1-\theta)c^2$ and $\lambda$ times its running essential supremum are bounded from above by the $1$-admissible process $\theta(c^1\vee\lambda\bar{c}^1)+(1-\theta)(c^2\vee\lambda\bar{c}^2)$, implying that $\theta c^1+(1-\theta)c^2\in\mathcal{C}^\lambda$ and proving convexity. To prove that $\mathcal{C}^\lambda$ is closed, let $(c^n)_{n\geq1}\subseteq\mathcal{C}^\lambda$ converge to $c$ in measure $\mathbb{P}\times d\kappa$; passing to a subsequence, we may assume that the convergence holds $\mathbb{P}\times d\kappa-$almost everywhere. After replacing $c^n$ by $\tilde{c}^n:=\inf_{k\geq n}c^k\in\mathcal{C}^\lambda$ and using Proposition \ref{prop:mon-of-sup}, we can assume without loss of generality that the sequence $c^n$ increases to $c$ pointwise on $\Omega\times[0,\hat{T})$. Then it is easy to check directly from Definition \ref{def:running-max} that $\bar{c}^n\uparrow \bar{c}$ pointwise, therefore $c^n\vee\lambda\bar{c}^n\uparrow c\vee\lambda\bar{c}$ pointwise as well, and, by monotone convergence applied to the characterization \eqref{eq:x-admissible} of $x$-admissibility, $c\vee\lambda\bar{c}$ is $1$-admissible, implying $c\in\mathcal{C}^\lambda$. \end{proof} \subsection{Chronological ordering} The following ordering will be crucial in our definition and characterization of the dual domains. This ordering appears implicitly in \cite{BK} through their definition (11). Borrowing this idea, we solve the singular control problem of the type considered in \cite{BK}, corresponding to $\lambda=1$ and $q=0$ in our framework, in what seems to be a more direct way (i.e., with a more straightforward definition of the dual domain and of the dual optimization problem; see Remark \ref{rem:duality-thm}(iii)) and extend the method to constraints given by parameters $\lambda\in(0,1)$ and $q>0$.
\begin{defn}\label{defn:chron-ord} For a stochastic clock $\kappa$ satisfying Assumption \ref{ass:clock}, we define a \textit{chronological ordering} $\preceq$ on the set of non-negative optional processes on $\Omega\times[0,\hat{T})$ as follows: $\tilde{\delta}\preceq\delta$ if and only if \begin{equation}\label{def:chron-ord} ^o\left(\int_.^{\hat{T}} \tilde{\delta}d\kappa\right)\leq{}^o\left(\int_.^{\hat{T}} \delta d\kappa\right)\quad \text{up to indistinguishability}. \end{equation} \end{defn} \begin{rem}\label{rem:chron-ord-def} \begin{enumerate} \item[(i)] By the Optional Section Theorem and by the definition of the optional projection, condition \eqref{def:chron-ord} is equivalent to saying that for every stopping time $T\in[0,\hat{T}]$, \begin{equation}\label{def:chron-alternative} \mathbb{E}\left[\left(\int_T^{\hat{T}}\tilde\delta d\kappa\right) \mathbbm{1}_{\{T<{\hat{T}}\}}\Big\vert\mathcal{F}_T\right]\leq \mathbb{E}\left[\left(\int_T^{\hat{T}}\delta d\kappa\right) \mathbbm{1}_{\{T<{\hat{T}}\}}\Big\vert\mathcal{F}_T\right],\quad \mathbb{P}-\text{a.e.} \end{equation} For $B\in\mathcal{F}_T$, $T_B(\omega):=\begin{cases}T(\omega)\ \text{if }\omega\in B,\\ {\hat{T}} \text{ otherwise},\end{cases}$ is a stopping time and $\mathbbm{1}_{\{T<{\hat{T}}\}}\cdot\mathbbm{1}_B=\mathbbm{1}_{\{T_B<{\hat{T}}\}}$, therefore, \eqref{def:chron-ord} and \eqref{def:chron-alternative} are equivalent to the following: \begin{equation}\label{eq:chron-ord-characterization} \mathbb{E}\left[\int_T^{\hat{T}}\tilde\delta d\kappa\right]\leq \mathbb{E}\left[\int_T^{\hat{T}}\delta d\kappa\right]\quad\text{for every stopping time } T\in[0,{\hat{T}}] \end{equation} (where we discarded the indicator function $\mathbbm{1}_{\{T<{\hat{T}}\}}$ because $\int_{\hat{T}}^{\hat{T}}\square d\kappa=0$). \item[(ii)] If $\tilde{\delta}\leq\delta$, $\mathbb{P}\times d\kappa-$a.e., then $\left(\int_T^{\hat{T}}\tilde{\delta}d\kappa\right)\mathbbm{1}_{\{T<{\hat{T}}\}}\leq\left(\int_T^{\hat{T}}\delta d\kappa\right)\mathbbm{1}_{\{T<{\hat{T}}\}}$, $\mathbb{P}-$a.e., for every stopping time $T\in[0,{\hat{T}}]$, implying that $\tilde{\delta}\preceq\delta$. \end{enumerate} \end{rem} Intuitively, all the processes obtained from $\delta$ by ``moving some of the mass of $\delta d\kappa$ to earlier times'', possibly removing some of the mass, and taking the optional projection afterwards, are $\preceq$-dominated by $\delta$, hence the name ``chronological''. This is best seen in the deterministic case, i.e., on a trivial filtered probability space, when the definition of $\preceq$ turns into a comparison of the masses of $\tilde\delta d\kappa$ and $\delta d\kappa$ on each of the intervals $(t,{\hat{T}})$ for $t\in[0,{\hat{T}})$. The most subtle application of this intuition of moving mass earlier in time appears in Step 1 of the proof of Lemma \ref{lem:important} below. The following remarkable property of the chronological ordering holds: \begin{prop}\label{prop:remarkable-property} For two non-negative optional processes $\tilde{\delta}$ and $\delta$, \begin{equation}\label{eq:monotonicity-chron-ord} \tilde{\delta}\preceq\delta\quad \Leftrightarrow\quad \langle c,\tilde\delta\rangle\leq\langle c,\delta \rangle\text{ for all }c\in\mathcal{C}_{\text{inc}}.
\end{equation} \end{prop} In the informal language from above, the most important message of this proposition reduces to the obvious statement that ``moving some of the mass of $\delta d\kappa$ to earlier times can only make the integral of an increasing process with respect to $\delta d\kappa$ smaller''. \begin{proof} To show ``$\Leftarrow$'', take $c:=\mathbbm{1}_{(T,{\hat{T}})}$ for every stopping time $T$ and apply the equivalent characterization \eqref{eq:chron-ord-characterization} of $\preceq$. For ``$\Rightarrow$'', when $c$ is finite-valued, we have \begin{equation*} \begin{aligned} \langle c,\tilde\delta\rangle&=\mathbb{E}\left[\int_0^{\hat{T}} c\tilde{\delta} d\kappa\right]\overset{\eqref{eq:IBP}}{=}\mathbb{E}\left[\int_0^{\hat{T}}\left(\int_t^{\hat{T}} \tilde{\delta} d\kappa\right)dc_t\right]\overset{\eqref{eq:int-optional-proj}}{=}\mathbb{E}\left[\int_0^{\hat{T}} {}^o\left(\int_.^{\hat{T}} \tilde{\delta} d\kappa\right)_tdc_t\right]\\ &\overset{\eqref{eq:comparison-for-int}}{\leq}\mathbb{E}\left[\int_0^{\hat{T}} {}^o\left(\int_.^{\hat{T}} \delta d\kappa\right)_tdc_t\right]\overset{\eqref{eq:int-optional-proj},\eqref{eq:IBP}}{=}\mathbb{E}\left[\int_0^{\hat{T}} c\delta d\kappa\right]=\langle c,\delta\rangle. \end{aligned} \end{equation*} An arbitrary $c\in\mathcal{C}_{\text{inc}}$ can be approximated by the finite-valued processes $c\wedge N\in\mathcal{C}_{\text{inc}}$ as $N\to\infty$, and the right-hand side of \eqref{eq:monotonicity-chron-ord} follows by monotone convergence. \end{proof} Now we introduce an ordering $\preceq_\lambda$ that will be used to characterize the dual domains for $\lambda<1$. \begin{defn}\label{def:lambda-ordering} For $\lambda\in[0,1]$ and $\delta,\tilde\delta\geq 0$ optional, we define $\tilde\delta\preceq_\lambda\delta$ to hold if and only if there exists a representation $\delta=\delta^1+\delta^2$, $\tilde\delta=\tilde\delta^1+\tilde\delta^2$ with $\delta^i,\tilde\delta^i\geq 0$ optional, $i=1,2$, such that $$\tilde\delta^1\leq\delta^1,\ \mathbb{P}\times d\kappa-\text{a.e.},\quad\text{and}\quad\tilde\delta^2\preceq\lambda\delta^2.$$ \end{defn} \begin{rem} Note that $\preceq_1$ coincides with $\preceq$ by Remark \ref{rem:chron-ord-def}(ii) and that $\tilde\delta\preceq_0\delta$ if and only if $\tilde\delta\leq\delta$, $\mathbb{P}\times d\kappa-$a.e. Moreover, $\tilde\delta\preceq_{\lambda_1}\delta$ implies $\tilde\delta\preceq_{\lambda_2}\delta$ for $0\leq\lambda_1\leq\lambda_2\leq 1$. Hence, $\preceq_\lambda$ for $\lambda\in[0,1]$ can be seen as a family of orderings obtained by interpolation between ``$\leq$ up to $\mathbb{P}\times d\kappa-$nullsets'' and $\preceq$. \end{rem} Informally, $\tilde\delta\preceq_\lambda\delta$ if, up to taking optional projections, $\tilde\delta$ is obtained from $\delta$ by leaving some mass of $\delta d\kappa$ where it is, ``moving some of its mass to earlier times'' while also multiplying it by $\lambda$ (i.e., a $(1-\lambda)$ fraction of the moved mass gets lost), and removing the rest. An alternative characterization of $\preceq_\lambda$ is given in the following proposition.
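\begin{rem} Before stating it, let us illustrate these orderings in the deterministic case with $d\kappa_t=dt$ and $\hat{T}=2$. For $\delta=\mathbbm{1}_{[1,2)}$ and $\tilde\delta=\mathbbm{1}_{[0,1)}$ we have $\int_t^{2}\tilde\delta\, ds=(1-t)^+\leq \int_t^{2}\delta\, ds$ for every $t\in[0,2)$, so that $\tilde\delta\preceq\delta$ (the mass of $\delta d\kappa$ is moved wholesale to earlier times), while $\delta\preceq\tilde\delta$ fails at $t=1$. Similarly, taking $\delta^1=\tilde\delta^1=0$, $\delta^2=\delta$, and $\tilde\delta^2=\lambda\mathbbm{1}_{[0,1)}$ in Definition \ref{def:lambda-ordering} shows that $\lambda\mathbbm{1}_{[0,1)}\preceq_\lambda\mathbbm{1}_{[1,2)}$: the mass is moved earlier in time and a $(1-\lambda)$ fraction of it is lost along the way. \end{rem}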
\begin{prop}\label{prop:lambda-ord-alternative} For $\lambda\in[0,1]$ and $\delta,\tilde\delta\geq 0$ optional, $$\tilde\delta\preceq_\lambda\delta\quad \Leftrightarrow \quad (\tilde\delta-\delta)\vee 0\preceq \lambda(\delta-\tilde\delta)\vee 0.$$ \end{prop} \begin{proof} If $(\tilde\delta-\delta)\vee 0\preceq \lambda(\delta-\tilde\delta)\vee 0$ then the processes $\delta^1:=\tilde\delta^1:=\tilde\delta\wedge\delta$, $\delta^2:=(\delta-\tilde\delta)\vee 0$, and $\tilde\delta^2:=(\tilde\delta-\delta)\vee 0$ satisfy the conditions of Definition \ref{def:lambda-ordering}. Conversely, for any decomposition as in Definition \ref{def:lambda-ordering}, it holds that $\tilde\delta^1\leq \tilde\delta$ and $\tilde\delta^1\leq\delta^1\leq\delta$, hence $\tilde\delta^1\leq \tilde\delta\wedge\delta$ and $\tilde\delta^2\geq (\tilde\delta-\delta)\vee 0$. We can then write \begin{equation*} \begin{aligned} (\tilde\delta-\delta)\vee 0&=\tilde\delta^2-\left[(\tilde\delta\wedge\delta)-\tilde\delta^1\right]\leq \tilde\delta^2- \lambda\left\{\left[(\tilde\delta\wedge\delta)-\tilde\delta^1\right]\wedge \delta^2\right\}\\ &\preceq \lambda\delta^2- \lambda\left\{\left[(\tilde\delta\wedge\delta)-\tilde\delta^1\right]\wedge \delta^2\right\}=\lambda\left[\delta^2+\tilde\delta^1-(\tilde\delta\wedge\delta)\right]\vee 0\\ &\leq \lambda\left[\delta-(\tilde\delta\wedge\delta)\right]\vee 0=\lambda(\delta-\tilde\delta)\vee 0, \end{aligned} \end{equation*} proving the claim. \end{proof} Hence, $\tilde\delta\preceq_\lambda\delta$ if and only if the conditions of Definition \ref{def:lambda-ordering} hold with $\delta^1=\tilde\delta^1=\tilde\delta\wedge\delta$, i.e., in the case when we leave in place as much of the mass as possible when obtaining $\tilde\delta$ from $\delta$. The following proposition is an extension of Proposition \ref{prop:remarkable-property} to $\preceq_\lambda$. \begin{prop} For $\lambda\in[0,1]$ and $\tilde\delta,\delta\geq 0$ optional, \begin{equation}\label{eq:monotonicity-lambda-ord} \tilde{\delta}\preceq_\lambda\delta\quad \Leftrightarrow\quad \langle c,\tilde{\delta}\rangle\leq\langle c,\delta \rangle\text{ for all }c\text{ satisfying \eqref{cond:ddc}}. \end{equation} \end{prop} \begin{proof} For ``$\Rightarrow$'', let $\delta^i,\tilde\delta^i$, $i=1,2$, be as in Definition \ref{def:lambda-ordering}. Then for all $c$ satisfying \eqref{cond:ddc} we have \begin{equation*} \langle c,\tilde\delta^2 \rangle\leq\langle \bar{c},\tilde\delta^2 \rangle\overset{\eqref{eq:monotonicity-chron-ord}}{\leq}\langle \bar{c},\lambda\delta^2 \rangle=\langle\lambda\bar{c},\delta^2\rangle\leq\langle c,\delta^2 \rangle. \end{equation*} The right-hand side of \eqref{eq:monotonicity-lambda-ord} now follows by adding the above inequality and $\langle c,\tilde\delta^1 \rangle\leq\langle c,\delta^1 \rangle$. To prove ``$\Leftarrow$'', we define $\delta^1=\tilde\delta^1=\tilde\delta\wedge\delta$, $\delta^2=(\delta-\tilde\delta)\vee 0$, and $\tilde\delta^2=(\tilde\delta-\delta)\vee 0$, as in Proposition \ref{prop:lambda-ord-alternative}, and show that $\tilde\delta^2\preceq\lambda\delta^2$ by testing the right-hand side of \eqref{eq:monotonicity-lambda-ord} with suitable $c$'s. The right-hand side of \eqref{eq:monotonicity-lambda-ord} implies that \begin{equation}\label{eq:lambda-ord-aux} \langle c,\tilde\delta^2\rangle\leq\langle c,\delta^2 \rangle\text{ for all }c\text{ satisfying \eqref{cond:ddc}}.
\end{equation} For every stopping time $T\in[0,{\hat{T}}]$, we define $c:=(\mathbbm{1}_{\{\tilde\delta^2>0\}}+\lambda\mathbbm{1}_{\{\tilde\delta^2=0\}})\mathbbm{1}_{(T,{\hat{T}})}$. Since $c\geq\lambda\geq\lambda\bar{c}$ on $(T,{\hat{T}})$, it is easy to see that $c$ satisfies \eqref{cond:ddc}. Therefore, \eqref{eq:lambda-ord-aux} implies that for all stopping times $T\in[0,{\hat{T}}]$, $$\mathbb{E}\left[\int_T^{\hat{T}} \tilde\delta^2d\kappa\right]=\langle c,\tilde\delta^2\rangle\leq \langle c,\delta^2\rangle=\mathbb{E}\left[\int_T^{\hat{T}} \lambda\delta^2 d\kappa\right],$$ where we have used that $\{\delta^2>0\}\subseteq\{\tilde\delta^2=0\}$. By \eqref{eq:chron-ord-characterization}, this means that $\tilde\delta^2\preceq\lambda\delta^2$. \end{proof} \subsection{An important lemma} \begin{lem}\label{lem:important} For all $\lambda\in[0,1]$ and $c\geq 0$ optional, the following holds: \begin{equation}\label{eq:important} \langle c\vee\lambda\bar{c},\delta\rangle=\sup_{\tilde{\delta}\preceq_\lambda \delta}\langle c,\tilde{\delta}\rangle\quad\text{for all}\quad\delta\geq0\text{ optional}. \end{equation} \end{lem} Informally, in order to achieve a value of $\langle c,\tilde{\delta}\rangle$ close to the value on the left-hand side of \eqref{eq:important}, we obtain $\tilde\delta d\kappa$ by splitting the mass of $\delta d\kappa$ into two parts: (i) if $c_t\geq\lambda\bar{c}_t$ then we leave the mass of $\delta d\kappa$ at time $t$ where it is (i.e., the corresponding part of $\delta$ goes to $\delta^1$ of Definition~\ref{def:lambda-ordering}), (ii) if $c_t<\lambda\bar{c}_t$ then we move the mass of $\delta d\kappa$ at time $t$ (i.e., the corresponding part of $\delta$ goes to $\delta^2$ of Definition~\ref{def:lambda-ordering}) to an earlier time where $c$ is close to its current running essential supremum $\bar{c}_t$, at the penalty of losing a $(1-\lambda)$ fraction of the mass moved. \begin{proof} The inequality ``$\geq$'' holds since for every $\tilde{\delta}\preceq_\lambda\delta$, $$\langle c,\tilde\delta \rangle \leq \langle c\vee\lambda\bar{c},\tilde\delta\rangle\leq\langle c\vee\lambda\bar{c},\delta \rangle,$$ where the second inequality holds by \eqref{eq:monotonicity-lambda-ord}, since $c\vee\lambda\bar{c}$ satisfies \eqref{cond:ddc}. The opposite inequality ``$\leq$'' in \eqref{eq:important} will be proved in four steps. \begin{enumerate} \item[\textbf{Step 1.}] \textbf{$\mathbf{\lambda=1}$ and $\mathbf{c=\mathbbm{1}_A}$, where $\mathbf{A\in\mathcal{O}}$.} Since $c\vee\bar{c}=\bar{c}$, $\mathbb{P}\times d\kappa-$a.e., by Proposition~\ref{prop:increasing-dom}, we are proving that $\langle\bar{c},\delta \rangle\leq\sup_{\tilde{\delta}\preceq\delta}\langle c,\tilde{\delta}\rangle$ for every optional process $\delta\geq 0$. Let $\tau_A$ be the essential debut of $A$. Then $\tau_A$ is a stopping time by Proposition~\ref{prop:stopping-time} applied with $l=1$, and $\bar{c}_t=\mathbbm{1}_{(\tau_A,\hat{T})}(t)$ by the definition of $\bar{c}$. Let us fix a non-negative optional process $\delta$. For an arbitrary $\varepsilon>0$ and for $\tau_A^\varepsilon:=(\tau_A+\varepsilon)\wedge\hat{T}$, the $\omega$-section of the set $(\tau_A,\tau_A^\varepsilon)\cap A$ has positive Lebesgue measure, and hence positive $d\kappa$-measure, as long as $\tau_A(\omega)<\hat{T}$.
Therefore, a non-negative optional process $\delta^\varepsilon$ defined as the optional projection of the non-negative jointly measurable process \begin{equation}\label{eq:delta-eps-def} \begin{cases} \frac{\mathbbm{1}_{(\tau_A,\tau_A^\varepsilon)\cap A}}{d\kappa((\tau_A,\tau_A^\varepsilon)\cap A)}\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa,\quad\text{if }\tau_A<\hat{T},\\ 0,\quad\text{otherwise}, \end{cases} \end{equation} is well-defined. Note that \begin{equation*} \mathbb{E}\left[\int_0^{\hat{T}} \mathbbm{1}_A\delta^\varepsilon d\kappa\right]=\mathbb{E}\left[\int_0^{\hat{T}} \mathbbm{1}_A\frac{\mathbbm{1}_{(\tau_A,\tau_A^\varepsilon)\cap A}}{d\kappa((\tau_A,\tau_A^\varepsilon)\cap A)}\left(\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa\right) d\kappa\right]=\mathbb{E}\left[\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa\right]. \end{equation*} As $\varepsilon\downarrow 0$, \begin{equation}\label{eq:eps-conv} \langle c,\delta^\varepsilon \rangle=\mathbb{E}\left[\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa\right]\nearrow\mathbb{E}\left[\int_{\tau_A}^{\hat{T}}\delta d\kappa\right]=\langle \bar{c},\delta \rangle, \end{equation} which proves the desired inequality as soon as we show that $\delta^\varepsilon\preceq\delta$ for all $\varepsilon>0$. As Remark \ref{rem:chron-ord-def}(i) says, it is sufficient to prove that for every stopping time $T$, \begin{equation}\label{eq:delta-epsilon2} \mathbb{E}\left[\int_T^{\hat{T}}\delta^\varepsilon d\kappa\right]\leq \mathbb{E}\left[\int_T^{\hat{T}}\delta d\kappa\right]. \end{equation} Finally, \eqref{eq:delta-epsilon2} follows from \begin{equation*} \begin{aligned} \mathbb{E}\left[\int_T^{\hat{T}}\delta^\varepsilon d\kappa\right]&=\mathbb{E}\left[\int_0^{\hat{T}}\mathbbm{1}_{[T,{\hat{T}})}\frac{\mathbbm{1}_{(\tau_A,\tau_A^\varepsilon)\cap A}}{d\kappa((\tau_A,\tau_A^\varepsilon)\cap A)}\left(\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa\right)d\kappa\right]\\ &=\mathbb{E}\left[\frac{d\kappa((\tau_A,\tau_A^\varepsilon)\cap A\cap[T,{\hat{T}}))}{d\kappa((\tau_A,\tau_A^\varepsilon)\cap A)}\left(\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa\right)\right]\\ &\leq \mathbb{E}\left[0\cdot \mathbbm{1}_{\{T\geq\tau_A^\varepsilon\}}\right]+\mathbb{E}\left[\left(\int_{\tau_A^\varepsilon}^{\hat{T}}\delta d\kappa\right)\mathbbm{1}_{\{T<\tau_A^\varepsilon\}}\right]\leq \mathbb{E}\left[\int_T^{\hat{T}}\delta d\kappa\right].\\ \end{aligned} \end{equation*} \item[\textbf{Step 2.}] \textbf{$\mathbf{\lambda\in[0,1]}$ and $\mathbf{c=\mathbbm{1}_A}$, where $\mathbf{A\in\mathcal{O}}$.} Let $\tau_A$ be the essential debut of $A$. Then $$\bar{c}=\mathbbm{1}_{(\tau_A,{\hat{T}})}\quad \text{and}\quad c\vee\lambda\bar{c}=\mathbbm{1}_{A}+\lambda\mathbbm{1}_{(\tau_A,{\hat{T}})}\mathbbm{1}_{A^c}.$$ We fix a non-negative optional process $\delta$, split it into $\delta^1=\delta\mathbbm{1}_{A}$ and $\delta^2=\delta\mathbbm{1}_{A^c}$, and define $\delta^\varepsilon=\tilde\delta^1+\tilde\delta^{2,\varepsilon}$ for $\varepsilon>0$ as follows: $\tilde\delta^1=\delta\mathbbm{1}_{A}$ and $\tilde\delta^{2,\varepsilon}$ is constructed as in Step~1 as the optional projection of \eqref{eq:delta-eps-def} for $c=\mathbbm{1}_{A}$ and $\delta$ replaced with $\lambda\delta\mathbbm{1}_{A^c}$. 
This ensures that (i) by \eqref{eq:delta-epsilon2}, $\tilde\delta^{2,\varepsilon}\preceq\lambda\delta\mathbbm{1}_{A^c}$, in particular, $\delta^\varepsilon=\tilde\delta^1+\tilde\delta^{2,\varepsilon}\preceq_\lambda \delta^1+\delta^2=\delta$, and (ii) by \eqref{eq:eps-conv}, \begin{equation*} \langle c,\delta^\varepsilon \rangle=\langle c, \delta\mathbbm{1}_{A}+\tilde\delta^{2,\varepsilon} \rangle\nearrow\langle c,\delta\mathbbm{1}_{A}\rangle+\langle\bar{c},\lambda\delta\mathbbm{1}_{A^c} \rangle=\langle \mathbbm{1}_{A}+\mathbbm{1}_{(\tau_A,{\hat{T}})}\lambda\mathbbm{1}_{A^c},\delta \rangle=\langle c\vee\lambda\bar{c},\delta \rangle \end{equation*} as $\varepsilon\downarrow 0$, completing the proof of the inequality ``$\leq$'' in \eqref{eq:important} for this case. \item[\textbf{Step 3.}] \textbf{Sum of two optional processes with disjoint supports.} Let $\lambda\in[0,1]$ and let $c^1,c^2$ be two non-negative optional processes with disjoint supports for which \eqref{eq:important} holds. In this case, it follows directly from the definition of the running essential supremum that $\overline{c^1+c^2}=\overline{(c^1\vee c^2)}=\bar{c}^1\vee\bar{c}^2$, and therefore $$(c^1+c^2)\vee\lambda\overline{(c^1+c^2)}=(c^1\vee c^2)\vee\lambda(\bar{c}^1\vee\bar{c}^2)=(c^1\vee\lambda\bar{c}^1)\vee(c^2\vee\lambda\bar{c}^2).$$ We denote $c^{\lambda,i}:=c^i\vee\lambda\bar{c}^i$ for $i=1,2$ and obtain \begin{equation*} \begin{aligned} \langle(c^1+c^2)\vee\lambda\overline{(c^1+c^2)},\delta\rangle&=\langle c^{\lambda,1}\vee c^{\lambda,2},\delta \rangle=\langle c^{\lambda,1},\delta\mathbbm{1}_{\{c^{\lambda,1}>c^{\lambda,2}\}} \rangle+\langle c^{\lambda,2},\delta\mathbbm{1}_{\{c^{\lambda,1}\leq c^{\lambda,2}\}} \rangle\\ &\overset{\eqref{eq:important}}{=}\sup\left\{\langle c^1,\tilde\delta^1 \rangle:\ \tilde\delta^1\preceq_\lambda \delta\mathbbm{1}_{\left\{c^{\lambda,1}>c^{\lambda,2}\right\}}\right\}\\ &\quad\quad+\sup\left\{\langle c^2,\tilde\delta^2 \rangle:\ \tilde\delta^2\preceq_\lambda \delta\mathbbm{1}_{\left\{c^{\lambda,1}\leq c^{\lambda,2}\right\}}\right\}\\ &\leq\sup_{\tilde\delta^1,\tilde\delta^2}\langle c^1+c^2,\tilde\delta^1+\tilde\delta^2\rangle \leq\sup_{\tilde\delta\preceq_\lambda\delta}\langle c^1+c^2,\tilde\delta \rangle, \end{aligned} \end{equation*} where the last inequality holds by the additive property of $\preceq_\lambda$: $$\tilde\delta^1\preceq_\lambda \delta\mathbbm{1}_{\left\{c^{\lambda,1}>c^{\lambda,2}\right\}}\text{ and }\tilde\delta^2\preceq_\lambda \delta\mathbbm{1}_{\left\{c^{\lambda,1}\leq c^{\lambda,2}\right\}}\quad\Rightarrow\quad \tilde\delta:=\tilde\delta^1+\tilde\delta^2\preceq_\lambda \delta.$$ Comparing the first and the last terms in the above sequence of equalities and inequalities shows that ``$\leq$'' in \eqref{eq:important} holds for $c^1+c^2$. \item[\textbf{Step 4.}] \textbf{Monotone convergence.} Let $(c^n)_{n\geq1}$ be an increasing sequence of non-negative optional processes such that $c^n\uparrow c$ and $$\langle c^n\vee\lambda\bar{c}^n,\delta \rangle\leq\sup_{\tilde{\delta}\preceq_\lambda \delta}\langle c^n,\tilde{\delta}\rangle\quad \text{for every } n.$$ Then since $\bar{c}^n\uparrow \bar{c}$, and therefore $c^n\vee\lambda\bar{c}^n\uparrow c\vee\lambda\bar{c}$, monotone convergence implies \eqref{eq:important}. \end{enumerate} Combining the results of Steps 2 and 3 shows that the conclusion of the lemma holds for all $\lambda\in[0,1]$ in the case when $c$ is a non-negative simple $\mathcal{O}$-measurable function on $\Omega\times[0,\hat{T})$.
Step~4 allows us to conclude the proof for an arbitrary non-negative optional process $c$. \end{proof} \subsection{The dual domain} For $\lambda\in[0,1]$, we define the dual domain $\mathcal{D}^\lambda$ as the polar set of $\mathcal{C}^\lambda$ in $L_+^0(\Omega\times[0,\hat{T}),\mathcal{O},\mathbb{P}\times d\kappa)$ in the sense of \cite{bipolar}: $$\mathcal{D}^\lambda:=(\mathcal{C}^\lambda)^\circ=\left\{\delta\geq 0\text{ optional}: \langle c,\delta \rangle\leq 1\text{ for all }c\in\mathcal{C}^\lambda\right\},$$ and denote $\mathcal{D}^\lambda(y)=y\cdot\mathcal{D}^\lambda$ for $y>0$. By the bipolar theorem (\cite{bipolar}) and Proposition \ref{prop:C-bipolar}, we have $\mathcal{C}^\lambda=(\mathcal{D}^\lambda)^\circ$. As a polar set, $\mathcal{D}^\lambda$ is solid, convex, and closed. Furthermore, $\mathcal{D}^\lambda$ has the following characterization in terms of $\mathcal{Z}$: \begin{prop}\label{prop:min-of-D} The set $\mathcal{D}^\lambda$ is $\preceq_\lambda$-solid, i.e., if $\delta\in\mathcal{D}^\lambda$ and $\tilde{\delta}\preceq_\lambda\delta$ then $\tilde\delta\in\mathcal{D}^\lambda$. Moreover, it is the smallest set in $L_+^0(\Omega\times[0,\hat{T}),\mathcal{O},\mathbb{P}\times d\kappa)$ that is convex, closed, $\preceq_\lambda$-solid, and contains $\mathcal{Z}$. \end{prop} \begin{proof} Let $\delta\in\mathcal{D}^\lambda$ and $\tilde{\delta}\preceq_\lambda\delta$. For every $c\in\mathcal{C}^\lambda$, $c\vee\lambda\bar{c}$ satisfies \eqref{cond:ddc} and belongs to $\mathcal{C}^\lambda$, and therefore $$\langle c,\tilde{\delta} \rangle\leq\langle c\vee\lambda\bar{c},\tilde{\delta} \rangle\overset{\eqref{eq:monotonicity-lambda-ord}}{\leq}\langle c\vee\lambda\bar{c},\delta \rangle\leq 1,$$ implying $\tilde\delta\in(\mathcal{C}^\lambda)^\circ=\mathcal{D}^\lambda$, i.e., $\mathcal{D}^\lambda$ is $\preceq_\lambda$-solid. Let $\mathcal{D}'$ be the smallest set in $L_+^0(\Omega\times[0,\hat{T}),\mathcal{O},\mathbb{P}\times d\kappa)$ that is convex, closed, $\preceq_\lambda$-solid, and contains $\mathcal{Z}$. Clearly, $\mathcal{D}'\subseteq\mathcal{D}^\lambda$. We will show $\mathcal{D}^\lambda\subseteq\mathcal{D}'$ by proving that $(\mathcal{D}')^\circ\subseteq(\mathcal{D}^\lambda)^\circ=\mathcal{C}^\lambda$. Fix $c\in(\mathcal{D}')^\circ$. By Lemma \ref{lem:important} and since $\mathcal{D}'$ is $\preceq_\lambda$-solid and contains $\mathcal{Z}$, we obtain $$\langle c\vee\lambda\bar{c},Z \rangle=\sup_{\tilde\delta\preceq_\lambda Z}\langle c,\tilde\delta \rangle\leq 1,\quad \forall Z\in\mathcal{Z},$$ which means that $c\vee\lambda\bar{c}$ is $1$-admissible and therefore $c\in\mathcal{C}^\lambda$. \end{proof} \begin{rem}[Monotonicity in $\lambda$]\label{rem:mon} By the definition \eqref{def:primal-dom} of the primal domains, $$\mathcal{C}^1\subseteq\mathcal{C}^{\lambda_2}\subseteq\mathcal{C}^{\lambda_1}\subseteq\mathcal{C}^0\quad \text{for}\quad1\geq\lambda_2\geq\lambda_1\geq0,$$ where $\mathcal{C}^0$ is the set of all $1$-admissible consumption plans. This reflects the fact that the drawdown constraint gets more restrictive as $\lambda\in[0,1]$ increases.
The dual domains $\mathcal{D}^\lambda$, being the Brannath--Schachermayer duals of $\mathcal{C}^\lambda$, have the reverse monotonicity property: $$\mathcal{D}^1\supseteq\mathcal{D}^{\lambda_2}\supseteq\mathcal{D}^{\lambda_1}\supseteq\mathcal{D}^0\quad \text{for}\quad1\geq\lambda_2\geq\lambda_1\geq0,$$ where, according to Proposition \ref{prop:min-of-D}, $\mathcal{D}^0$ is the smallest solid, convex, and closed subset of $L_+^0(\Omega\times[0,\hat{T}),\mathcal{O},\mathbb{P}\times d\kappa)$ containing $\mathcal{Z}$. \end{rem} \begin{prop}\label{prop:alpha-for-D} For every $\lambda\in[0,1]$, the following holds: \begin{equation*} \alpha:=\sup_{Z\in\mathcal{Z}}\mathbb{E}\left[\int_0^{\hat{T}} Zd\kappa\right]=\sup_{\delta\in\mathcal{D}^\lambda}\mathbb{E}\left[\int_0^{\hat{T}}\delta d\kappa\right]\in(0,\infty). \end{equation*} \end{prop} \begin{proof} For a fixed $\lambda$, let us define $$\mathcal{D}':=\left\{\delta\in\mathcal{D}^\lambda: \mathbb{E}\left[\int_0^{\hat{T}}\delta d\kappa\right]\leq \alpha\right\}.$$ Clearly, $\mathcal{Z}\subseteq\mathcal{D}'$. On the other hand, the set $\mathcal{D}'$ inherits the properties of $\mathcal{D}^\lambda$: it is convex, it is closed by Fatou's lemma, and it is $\preceq_\lambda$-solid, since it is easy to check that $\mathbb{E}\left[\int_0^{\hat{T}}\tilde\delta d\kappa\right]\leq\mathbb{E}\left[\int_0^{\hat{T}}\delta d\kappa\right]$ for $\tilde\delta\preceq_\lambda\delta$. By the minimality of $\mathcal{D}^\lambda$ established in Proposition \ref{prop:min-of-D}, we must have $\mathcal{D}'=\mathcal{D}^\lambda$ and therefore $\sup_{\delta\in\mathcal{D}^\lambda}\mathbb{E}\left[\int_0^{\hat{T}} \delta d\kappa\right]=\alpha$. Using It\^o's formula for semimartingales and localization, one can show that $\mathbb{E}\left[\int_0^T Zd\kappa\right]=\mathbb{E}[Z_T\kappa_T]\leq A$ for every finite $T\leq\hat{T}$, $Z\in\mathcal{Z}$, and for the constant $A$ from Assumption \ref{ass:clock}. Sending $T\to\infty$ if $\hat{T}=\infty$ and taking $T=\hat{T}$ otherwise, we conclude that $\alpha\leq A<\infty$. \end{proof} \section{The optimization problem}\label{sec:optimization} In this section, we describe the solution of the optimization problem \eqref{eq:primal-problem} using convex duality methods. We fix $\lambda\in(0,1]$ and, for brevity, omit the dependence on $\lambda$ wherever possible. That is, we omit $\lambda$ from the notations $\mathcal{C}^\lambda$, $\mathcal{D}^\lambda$, $\mathcal{C}^\lambda(x)$, $\mathcal{D}^\lambda(y)$, $\mathcal{C}^\lambda(x,q)$. The domain $\mathcal{K}$ for parameters $(x,q)$, the domain $\mathcal{L}$ for dual variables $(y,r)$, the dual domains $\mathcal{D}(y,r)$, as well as the value function $u$ from \eqref{eq:primal-problem}, the dual value function $v$ defined in \eqref{eq:dual-problem}, and the primal and dual optimizers, \textit{all implicitly depend on $\lambda\in(0,1]$}. Note that the definition \eqref{def:primal-dom} of the primal domains $\mathcal{C}(x,q)$ (i.e., $\mathcal{C}^\lambda(x,q)$) can be reformulated in terms of $\mathcal{C}(x)$ (i.e., $\mathcal{C}^\lambda(x)$) as follows: \begin{equation}\label{eq:2nd-def-of-Cxq} \mathcal{C}(x,q)=\{c\geq 0 \text{ optional}: c\vee \lambda q\in\mathcal{C}(x)\}\subseteq \mathcal{C}(x), \end{equation} without any reference to the drawdown constraint, only to the essential lower bound on consumption.
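\begin{rem} As a simple sanity check, suppose (purely for illustration) that the market is trivial, so that $\mathcal{Z}=\{Z\equiv 1\}$, and that $d\kappa_t=dt$ on $[0,\hat{T})$ with $\hat{T}<\infty$. Then the $x$-admissibility condition \eqref{eq:x-admissible} becomes the budget constraint $\int_0^{\hat{T}}c_t\,dt\leq x$, the constant of Proposition \ref{prop:alpha-for-D} is $\alpha=\hat{T}$, and a deterministic plan $c$ belongs to $\mathcal{C}(x,q)$ if and only if $\int_0^{\hat{T}}\big(c_t\vee\lambda(\bar{c}_t\vee q)\big)\,dt\leq x$. In particular, $\mathcal{C}(x,q)\neq\emptyset$ if and only if $\lambda(q\vee 0)\hat{T}\leq x$. \end{rem}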
In fact, all of the statements in this section (except Propositions \ref{prop:D-y-r-solid} and \ref{prop:sufficient-for-Dyr2}) hold for arbitrary subsets $\mathcal{C}$ and $\mathcal{D}$ of $L_+^0(\Omega\times[0,\hat{T}),\mathcal{O},\mathbb{P}\times d\kappa)$, with $\mathcal{C}(x,q)$ defined as in \eqref{eq:2nd-def-of-Cxq}, satisfying the following three conditions: \begin{enumerate} \item The sets $\mathcal{C}$ and $\mathcal{D}$ are polar to each other in the sense of \cite{bipolar}. \item The set $\mathcal{D}$ contains an element that is strictly positive $\mathbb{P}\times d\kappa-$almost everywhere. \item $\alpha:=\sup_{\delta\in\mathcal{D}}\mathbb{E}\left[\int_0^{\hat{T}} \delta d\kappa\right]<\infty$. \end{enumerate} All of these conditions hold for $\mathcal{C}^\lambda$ and $\mathcal{D}^\lambda$, $\lambda\in(0,1]$, the second one being satisfied since $\mathcal{Z}\subseteq\mathcal{D}^\lambda$. Additionally, this means that the arguments of this section can be adapted in order to introduce, with a definition analogous to \eqref{eq:2nd-def-of-Cxq}, an essential lower bound on consumption in the usual unconstrained model of \cite{mostovyi}. \subsection{Dual relations between domains}\label{sec:dual-rel} Due to Proposition \ref{prop:alpha-for-D}, the set $\mathcal{C}(x,q)$ for $x>0$ is non-empty if and only if $\lambda q\leq x/\alpha$. This leads to the definition of two cones in $\mathbb{R}^2$, \begin{equation*} \begin{aligned} \bar{\mathcal{K}}&=\left\{(x,q): x\geq 0 \text{ and } x/(\alpha\lambda)\geq q\right\},\\ \bar{\mathcal{L}}&=\left\{(y,r): xy+qr\geq 0\text{ for all }(x,q)\in\bar{\mathcal{K}}\right\}=\left\{(y,r):y\geq 0 \text{ and } 0\geq r\geq -(\alpha\lambda) y\right\}, \end{aligned} \end{equation*} and their interiors $\mathcal{K}=\text{int}(\bar{\mathcal{K}})$, $\mathcal{L}=\text{int}(\bar{\mathcal{L}})$. The open cone $\mathcal{K}$ is precisely the set of pairs $(x,q)$ such that the optimization problem \eqref{eq:primal-problem} is non-trivial; the closed cone $\bar{\mathcal{L}}$ is the negative of the polar cone of $\mathcal{K}$ in $\mathbb{R}^2$. Further, we define the set $$\mathcal{L}^*:=\mathcal{L}\cup\{(y,r):y>0,r=0\}\subseteq\bar{\mathcal{L}}$$ which turns out to be the appropriate domain for the dual value function. The dual domains are defined as follows (cf. (11) in \cite{hug-kramkov} and (4.4) in \cite{yu}): \begin{equation*} \mathcal{D}(y,r):=\left\{\delta\in\mathcal{D}(y):\langle c,\delta \rangle\leq xy+qr,\ \forall \ (x,q)\in\mathcal{K},\ c\in\mathcal{C}(x,q)\right\}, \quad (y,r)\in\mathcal{L}^*. \end{equation*} Some trivial but useful properties of the domains $(\mathcal{C}(x,q))_{(x,q)\in\mathcal{K}}$ and $(\mathcal{D}(y,r))_{(y,r)\in\mathcal{L}^*}$ are summarized in the following remark. \begin{rem} \begin{enumerate} \item[(i)] We have $\mathcal{C}(x,q)=\mathcal{C}(x)$ for $q\leq 0$. For all $(x,q)\in\mathcal{K}$, the set $\mathcal{C}(x,q)$ is solid and contains the constant process $c\equiv x/\alpha>0$. For $x>0$ and $q_1<q_2<x/(\alpha\lambda)$, we have $\mathcal{C}(x,q_2)\subseteq\mathcal{C}(x,q_1)$, i.e., the sets $\mathcal{C}(x,q)$ decrease as $q$ increases. \item[(ii)] We have $\mathcal{D}(y,0)=\mathcal{D}(y)$ for all $y>0$.
Since $\mathcal{C}(x,q)=\mathcal{C}(x)$ for $q\leq 0$ and since $r\leq0$ for every $(y,r)\in\mathcal{L}^*$, the definition of $\mathcal{D}(y,r)$ is equivalent to \begin{equation}\label{eq:def-d-spaces} \mathcal{D}(y,r)=\{\delta\in\mathcal{D}(y):\langle c,\delta \rangle\leq xy+qr,\ \forall\ q\geq 0, \ (x,q)\in\mathcal{K}, \ c\in\mathcal{C}(x,q)\}. \end{equation} In particular, $\mathcal{D}(y,r_1)\subseteq\mathcal{D}(y,r_2)$ for all $y>0$ and $-(\alpha\lambda) y<r_1<r_2\leq 0$, i.e., the sets $\mathcal{D}(y,r)$ increase as $r$ increases. \end{enumerate} \end{rem} \begin{prop}\label{prop:D-y-r-solid} The set $\mathcal{D}(y,r)$ is $\preceq_\lambda$-solid for every $(y,r)\in\mathcal{L}^*$. \end{prop} \begin{proof} Assume $\delta\in\mathcal{D}(y,r)$ and $\tilde\delta\preceq_\lambda\delta$. Let $(x,q)\in\mathcal{K}$ and $c\in\mathcal{C}(x,q)$. Then, by Proposition \ref{prop:C-is-closed}, $c\vee\lambda\bar{c}\in\mathcal{C}(x,q)$ and satisfies \eqref{cond:ddc}, therefore $$\langle c,\tilde{\delta}\rangle\leq\langle c\vee\lambda\bar{c},\tilde{\delta}\rangle\overset{\eqref{eq:monotonicity-lambda-ord}}{\leq}\langle c\vee\lambda\bar{c},\delta \rangle\leq xy+rq,$$ implying that $\tilde\delta\in\mathcal{D}(y,r)$, i.e., $\mathcal{D}(y,r)$ is $\preceq_\lambda$-solid. \end{proof} The following is a simple sufficient condition for an element of $\mathcal{D}(y)$ to be an element of $\mathcal{D}(y,r)$. It will be used in Section \ref{sec:complete} to characterize the optimizers for \eqref{eq:primal-problem} in a complete market. \begin{prop}\label{prop:sufficient-for-Dyr2} Let $\tilde\delta\preceq_\lambda\delta\in\mathcal{D}(y)$ and suppose that $\tilde\delta$ is strictly positive on a set of positive measure (i.e., we exclude the trivial case $\tilde\delta=0$, $\mathbb{P}\times d\kappa-$a.e.). Then $\tilde\delta\in\mathcal{D}(y,r)$ for \begin{equation*} r:=\mathbb{E}\left[\int_0^{\hat{T}} [(\tilde\delta-\delta)\vee 0-\lambda(\delta-\tilde\delta)\vee0] d\kappa\right]. \end{equation*} \end{prop} \begin{proof} First, since $(\tilde\delta-\delta)\vee 0\preceq\lambda(\delta-\tilde\delta)\vee0$ by Proposition \ref{prop:lambda-ord-alternative}, $r\leq 0$. Second, $$r\geq -\mathbb{E}\left[\int_0^{\hat{T}} [\lambda(\delta-\tilde\delta)\vee0] d\kappa\right]\geq -\mathbb{E}\left[\int_0^{\hat{T}} \lambda\delta d\kappa\right]\geq -(\alpha\lambda)y,$$ and the inequality between the first and the last terms is strict because $\tilde\delta>0$ on a set of positive measure. Hence, $(y,r)\in\mathcal{L}^*$ so that $\mathcal{D}(y,r)$ is well-defined. According to \eqref{eq:def-d-spaces}, it is enough to test the defining property of $\mathcal{D}(y,r)$ only for $(x,q)\in\mathcal{K}$, $c\in\mathcal{C}(x,q)$ with $q\geq 0$. Moreover, replacing $c$ by $c\vee\lambda(\bar{c}\vee q)\in\mathcal{C}(x,q)$ (see Proposition \ref{prop:C-is-closed}) only increases $\langle c,\tilde\delta\rangle$ and leaves $\bar{c}\vee q$ unchanged, so we may also assume that $c$ satisfies \eqref{cond:ddc-with-q}. Since then $\bar{c}\vee q\geq c\geq \lambda(\bar{c}\vee q)$, $\mathbb{P}\times d\kappa-$a.e., \begin{equation*} \begin{aligned} \langle c,\tilde\delta-\delta\rangle&=\langle c,(\tilde\delta-\delta)\vee 0\rangle-\langle c,(\delta-\tilde\delta)\vee 0\rangle\leq \langle \bar{c}\vee q,(\tilde\delta-\delta)\vee 0\rangle-\langle \bar{c}\vee q,\lambda(\delta-\tilde\delta)\vee 0\rangle\\ &=\langle \bar{c}\vee q,(\tilde\delta-\delta)\vee 0-\lambda(\delta-\tilde\delta)\vee0\rangle\leq \langle q, (\tilde\delta-\delta)\vee 0-\lambda(\delta-\tilde\delta)\vee0\rangle=qr, \end{aligned} \end{equation*} where the last inequality follows from $(\bar{c}\vee q)-q\in\mathcal{C}_{\text{inc}}$ and $(\tilde\delta-\delta)\vee 0\preceq\lambda(\delta-\tilde\delta)\vee0$.
Rearranging the terms and using $c\in\mathcal{C}(x)$, $\delta\in\mathcal{D}(y)$, we obtain $\langle c,\tilde\delta\rangle\leq\langle c,\delta\rangle+qr\leq xy+qr$, completing the proof. \end{proof} Now we prove duality relations between the families $(\mathcal{C}(x,q))_{(x,q)\in\mathcal{K}}$ and $(\mathcal{D}(y,r))_{(y,r)\in\mathcal{L}^*}$. The next proposition is an analogue of Proposition 1 in \cite{hug-kramkov} for our setting and is crucial for reducing, similarly to \cite{hug-kramkov}, the two-dimensional optimization problem to one dimension in the proof of Theorem \ref{thm:main-duality} below. \begin{prop}\label{prop:conjugate-rel} \begin{enumerate} \item[(i)] For every $(x,q)\in\mathcal{K}$, $\mathcal{C}(x,q)$ contains a strictly positive process. For a non-negative optional process $c$, the following holds: \begin{equation}\label{eq:C-characterization} c\in\mathcal{C}(x,q)\quad\Leftrightarrow\quad \langle c,\delta \rangle\leq xy+qr\quad \text{for all } \ (y,r)\in\mathcal{L}^*,\ \delta\in\mathcal{D}(y,r). \end{equation} \item[(ii)] For every $(y,r)\in\mathcal{L}^*$, $\mathcal{D}(y,r)$ contains a strictly positive process. For a non-negative optional process $\delta$, the following holds: \begin{equation}\label{eq:D-characterization} \delta\in\mathcal{D}(y,r)\quad\Leftrightarrow\quad \langle c,\delta \rangle\leq xy+qr\quad \text{for all } \ (x,q)\in\mathcal{K},\ c\in\mathcal{C}(x,q). \end{equation} \end{enumerate} \end{prop} \begin{proof} We show (ii) first. The implication ``$\Rightarrow$'' in \eqref{eq:D-characterization} holds by the definition of $\mathcal{D}(y,r)$. Assuming the right-hand side of \eqref{eq:D-characterization} and substituting $q=0$ implies that $\delta\in\mathcal{D}(y)$, therefore the implication ``$\Leftarrow$'' also follows from the definition of $\mathcal{D}(y,r)$. To show that $\mathcal{D}(y,r)$ contains a strictly positive process, we will find an $\varepsilon>0$ such that $\mathcal{D}(\varepsilon)\subseteq\mathcal{D}(y,r)$. Since $\mathcal{Z}\subseteq\mathcal{D}$, the claim then follows from strict positivity of equivalent martingale deflators $Z\in\mathcal{Z}$. A sufficient condition for $\mathcal{D}(\varepsilon)\subseteq\mathcal{D}(y,r)$ is $$\varepsilon x\leq xy+qr,\quad \forall (x,q)\in\mathcal{K},$$ since then for every $\delta\in\mathcal{D}(\varepsilon)$ and $c\in\mathcal{C}(x,q)\subseteq\mathcal{C}(x)$, $\langle c,\delta \rangle\leq \varepsilon x\leq xy+qr$. Equivalently, $\varepsilon\leq y+(q/x)r$ for all $(x,q)\in\mathcal{K}$. By the definition of the open cone $\mathcal{K}$ this means that $\varepsilon\leq y+r/(\alpha\lambda)$. Finally, by the definition of $\mathcal{L}^*$, $y+r/(\alpha\lambda)>0$, therefore such an $\varepsilon>0$ exists. Now we show (i). The constant consumption plan $c\equiv x/\alpha$ belongs to $\mathcal{C}(x,q)$ and is strictly positive. The implication ``$\Rightarrow$'' in \eqref{eq:C-characterization} follows from the definition of $\mathcal{D}(y,r)$. To show ``$\Leftarrow$'' when $q\leq 0$, we conclude that $c\in\mathcal{C}(x)=\mathcal{C}(x,q)$ by testing the right-hand side of \eqref{eq:C-characterization} with $\delta\in\mathcal{D}=\mathcal{D}(1,0)$. It remains to prove ``$\Leftarrow$'' for the case $q>0$. Let us fix $\delta\in\mathcal{D}$.
According to \eqref{eq:2nd-def-of-Cxq}, we want to show that the right-hand side of \eqref{eq:C-characterization} implies $\langle c\vee\lambda q,\delta \rangle\leq x$ or, equivalently, \begin{equation}\label{eq:belong-to-Cx} \langle\lambda q, \delta \rangle+\langle (c-\lambda q)\vee 0,\delta \rangle\leq x. \end{equation} The idea of the proof of this inequality below is to show that $\delta\mathbbm{1}_{\{c\geq\lambda q\}}\in\mathcal{D}(1,r)$ for an appropriate $r$ and then apply the right-hand side of \eqref{eq:C-characterization} to this process. Denote $s:=\mathbb{E}\left[\int_0^{\hat{T}} \delta d\kappa\right]\in[0,\alpha]$. For every $q'\geq 0, (x',q')\in\mathcal{K}$, and $c'\in\mathcal{C}(x',q')$, we have $c'\vee\lambda q'\in\mathcal{C}(x')$. Therefore, \begin{equation}\label{eq:test-delta} \langle (c'-\lambda q')\vee0, \delta\mathbbm{1}_{\{c\geq\lambda q\}}\rangle \leq \langle (c'-\lambda q')\vee 0,\delta \rangle\leq x'-\lambda q's. \end{equation} Denote $p:=\mathbb{E}\left[\int_0^{\hat{T}} \delta\mathbbm{1}_{\{c\geq\lambda q\}} d\kappa\right]\in[0,s]$. If $p=0$, then $\delta=0$ almost everywhere on $\{c\geq\lambda q\}$ and \eqref{eq:belong-to-Cx} is proved, since $\lambda qs\leq \lambda q\alpha<x$. Hence, we can assume that $p\in(0,s]$. By adding $\lambda q'p$ to the inequality \eqref{eq:test-delta}, we obtain \begin{equation}\label{eq:belong-to-D} \langle c'\vee\lambda q', \delta\mathbbm{1}_{\{c\geq\lambda q\}}\rangle \leq x'+\lambda q'(p-s), \end{equation} for all $q'\geq 0$, $(x',q')\in\mathcal{K}$, and $c'\in\mathcal{C}(x',q')$. Since $0\geq (p-s)\lambda>-\alpha\lambda$, $(1,(p-s)\lambda)\in\mathcal{L}^*$ and the inequality \eqref{eq:belong-to-D} means that $\delta\mathbbm{1}_{\{c\geq\lambda q\}}\in\mathcal{D}(1,(p-s)\lambda)$. Therefore, we can test the right-hand side of \eqref{eq:C-characterization} with this process, which yields $$\langle c, \delta\mathbbm{1}_{\{c\geq\lambda q\}}\rangle\leq x+\lambda q(p-s).$$ By subtracting $\lambda qp=\langle\lambda q,\delta\mathbbm{1}_{\{c\geq\lambda q\}} \rangle$ from both sides, we get $$\langle (c-\lambda q)\vee 0,\delta \rangle=\langle c-\lambda q,\delta\mathbbm{1}_{\{c\geq\lambda q\}} \rangle\leq x-\lambda qs.$$ Moving $\lambda qs=\langle\lambda q,\delta \rangle$ to the left yields \eqref{eq:belong-to-Cx}, concluding the proof. \end{proof} \subsection{The main duality result for optimization}\label{sec:duality-sol} Following \cite{mostovyi} (and the standard duality theory), we introduce the conjugate stochastic field $V$ to $U$: $$V(\omega,t,y):=\sup_{x>0}(U(\omega,t,x)-xy),\quad (\omega,t,y)\in\Omega\times[0,\hat{T})\times[0,\infty),$$ and denote by $I(\omega,t,y):=-V'(\omega,t,y)$, the derivative of $-V$ with respect to $y$. It is well-known that $-V$ satisfies Assumption \ref{ass:utility} and $I(\omega,t,\cdot)=(U')^{-1}(\omega,t,\cdot)$. We define the dual optimization problem for \eqref{eq:primal-problem} through the value function \begin{equation}\label{eq:dual-problem} v(y,r):=\inf_{\delta\in\mathcal{D}(y,r)}\mathbb{E}\left[\int_0^{\hat{T}} V(\omega,t,\delta_t)d\kappa_t\right],\quad(y,r)\in\mathcal{L}^*, \end{equation} where the convention $$\mathbb{E}\left[\int_0^{\hat{T}} V(\omega,t,\delta_t)d\kappa_t\right]:=+\infty\quad\text{if}\quad\mathbb{E}\left[\int_0^{\hat{T}} V^+(\omega,t,\delta_t)d\kappa_t\right]=+\infty$$ is used and $W^{+}$ denotes the positive part of a stochastic field $W$. We often omit writing down the dependence of $U$, $U'$, $V$, and $I$ on $\omega$ and $t$ in what follows. 
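As an illustration of the conjugacy just introduced (the specific utilities below are our own choice, made only to fix ideas, and are not part of the model, where $U$ may depend on $(\omega,t)$): for the power utility $U(t,x)=\frac{x^{1-\gamma}}{1-\gamma}$ with $\gamma>0$, $\gamma\neq 1$, the supremum defining $V$ is attained at $x=I(t,y)=y^{-1/\gamma}$, so that $$V(t,y)=U(t,I(t,y))-y\,I(t,y)=\frac{\gamma}{1-\gamma}\,y^{\frac{\gamma-1}{\gamma}},\qquad y>0,$$ while for the logarithmic utility $U(t,x)=\log x$ one gets $I(t,y)=1/y$ and $V(t,y)=-\log y-1$. In both cases one checks directly that $-V$ satisfies Assumption \ref{ass:utility} and that $I=(U')^{-1}$, in line with the general statements above.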
The following theorem describes the duality between \eqref{eq:primal-problem} and \eqref{eq:dual-problem}, and establishes existence and uniqueness of optimizers under Mostovyi's finiteness assumptions for the value functions $u$ and $v$. \begin{thm}\label{thm:main-duality} Suppose that $\lambda\in(0,1]$, Assumptions \ref{ass:clock}, \ref{ass:NA}, \ref{ass:utility} hold, and $$u(x,q)>-\infty\text{ for all }(x,q)\in\mathcal{K}\quad \text{and}\quad v(y,r)<\infty\text{ for all }(y,r)\in\mathcal{L}^*.$$ Then we have \begin{enumerate} \item $u(x,q)<\infty$ for all $(x,q)\in\mathcal{K}$, $v(y,r)>-\infty$ for all $(y,r)\in\mathcal{L}^*$. The functions $u$ and $v$ are conjugate: \begin{equation}\label{eq:conjugacy-rel} \begin{aligned} u(x,q)&=\inf_{(y,r)\in\mathcal{L}^*}\{v(y,r)+xy+qr\},&\quad (x,q)\in\mathcal{K},\\ v(y,r)&=\sup_{(x,q)\in\mathcal{K}}\{u(x,q)-xy-qr\},&\quad (y,r)\in\mathcal{L}^*.\\ \end{aligned} \end{equation} \item The subdifferential of $u$ maps $\mathcal{K}$ into $\mathcal{L}^*$: $$\partial u(x,q)\subseteq\mathcal{L}^*\quad \text{for all}\quad(x,q)\in\mathcal{K}.$$ \item For all $(x,q)\in\mathcal{K}$ and $(y,r)\in\mathcal{L}^*$ the optimal solutions $\hat{c}(x,q)$ to \eqref{eq:primal-problem} and $\hat{\delta}(y,r)$ to \eqref{eq:dual-problem} exist and are unique. Moreover, if $(y,r)\in\partial u(x,q)$ then we have the dual relations \begin{equation}\label{eq:dual-relations-thm} \begin{aligned} \hat{\delta}_t(y,r)&=U'(t,\hat{c}_t(x,q)),\quad \mathbb{P}\times d\kappa-\text{a.e.},\\ \langle\hat{c}(x,q),\hat{\delta}(y,r)\rangle&=xy+qr. \end{aligned} \end{equation} \end{enumerate} \end{thm} \begin{rem}\label{rem:duality-thm} \begin{enumerate} \item[(i)] Since each of the sets $\mathcal{C}(x,q)$, $(x,q)\in\mathcal{K}$, contains the strictly positive constant consumption plan $c\equiv x/\alpha$, the condition that $u(x,q)>-\infty$ for all $(x,q)$ in $\mathcal{K}$ is satisfied if $\mathbb{E}\left[\int_0^{\hat{T}} U^-(t,z)d\kappa_t\right]<\infty$ for all $z>0$. \item[(ii)] It follows from the proof of Proposition \ref{prop:conjugate-rel}(ii) that for all $(y,r)\in\mathcal{L}^*$ the set $\mathcal{D}(y,r)$ contains $\mathcal{D}(\varepsilon)$ for $\varepsilon>0$ small enough. Hence, the condition that $v(y,r)<\infty$ for all $(y,r)\in\mathcal{L}^*$ is equivalent to $v(y,0)<\infty$ for all $y>0$. Furthermore, since $\mathcal{D}^0(\varepsilon)\subseteq\mathcal{D}(\varepsilon)$ by Remark \ref{rem:mon}, we have $$v(y,r)\leq \inf_{\delta\in\mathcal{D}^0(\varepsilon)}\mathbb{E}\left[\int_0^{\hat{T}} V(t,\delta_t)d\kappa_t\right]\quad \text{for some }\varepsilon>0.$$ The infimum on the right-hand side is the dual value function of the unconstrained problem, when both the drawdown and the essential lower bound constraints are dropped ($v(\varepsilon)$, in the notation of \cite{mostovyi}). Therefore, the condition that $v(y,r)<\infty$ for all $(y,r)\in\mathcal{L}^*$ is satisfied as long as the dual value function of the unconstrained problem is finite (from above), in particular, it is always true if the unconstrained problem satisfies the assumptions of \cite{mostovyi}, Theorem 2.3. \item[(iii)] Under Assumptions \ref{ass:clock}, \ref{ass:NA}, \ref{ass:utility} and assuming $u(x,0)>-\infty$ and $v(y,0)<\infty$ for all $x,y>0$ (in particular, under the assumptions of Theorem \ref{thm:main-duality}) we can apply \cite[Theorem 3.2]{mostovyi} to the sets $\mathcal{C}:=\mathcal{C}^\lambda$ and $\mathcal{D}:=\mathcal{D}^\lambda$ for $\lambda\in(0,1]$ and thus solve the initial problem \eqref{eq:primal-problem} in the case $q=0$.
We use this observation in the proof of Proposition \ref{prop:complete-case} below. In particular, for the case $\lambda=1$, Theorem 3.2 in \cite{mostovyi} applied to the sets $\mathcal{C}^1$ and $\mathcal{D}^1$ is an analogue of Theorem 3.2 in \cite{BK}. \end{enumerate} \end{rem} The following two lemmas are analogues of Lemmas 11 and 12 in \cite{hug-kramkov} for our setting and will help us prove Theorem \ref{thm:main-duality}. \begin{lem}\label{lem:sup-over-set} Let $\mathcal{E}\subseteq L_+^0(\Omega\times[0,{\hat{T}}),\mathcal{O},\mathbb{P}\times d\kappa)$ be a convex set. If for every $\varepsilon>0$ there exists $c^\varepsilon\in \varepsilon\mathcal{E}$ such that $$\mathbb{E}\left[\int_0^{\hat{T}} U(t,c^\varepsilon_t)d\kappa_t\right]>-\infty,$$ then, for every $x>0$, $$\sup_{c\in x\mathcal{E}}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]=\sup_{c\in x\text{cl}(\mathcal{E})}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right],$$ where $\text{cl}(\mathcal{E})$ denotes the closure of $\mathcal{E}$ with respect to convergence in measure $\mathbb{P}\times d\kappa$. \end{lem} The proof below follows very closely the proof of Lemma 11 in \cite{hug-kramkov}, except for a small adjustment due to the stochastic utility and working on the product space $\Omega\times[0,{\hat{T}})$. \begin{proof} Denote, for $x>0$, $$\phi(x):=\sup_{c\in x\mathcal{E}}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]\quad\text{and}\quad \psi(x):=\sup_{c\in x\text{cl}(\mathcal{E})}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right].$$ Clearly, $\phi$ and $\psi$ are concave functions and $\psi\geq\phi>-\infty$ on $(0,\infty)$. If $\phi(x)=\infty$ for some $x>0$, then, due to concavity, $\phi$ is infinite on all of $(0,\infty)$ and the assertion of the lemma is trivial. Hereafter we assume that $\phi$ is finite. Fix $x>0$ and $c\in x\text{cl}(\mathcal{E})$. Let $(c^n)_{n\geq 1}$ be a sequence in $x\mathcal{E}$ that converges $\mathbb{P}\times d\kappa-$a.e. to $c$ (such a sequence exists: a sequence converging in measure has an a.e. convergent subsequence). For any $\varepsilon>0$, we have \begin{align*} \mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]&\leq \mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t+c^\varepsilon_t)d\kappa_t\right]\\ &\leq \liminf_{n\to\infty}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c^n_t+c^\varepsilon_t)d\kappa_t\right]\leq\phi(x+\varepsilon), \end{align*} where the first inequality holds true because $U$ is increasing, the second one follows from Fatou's lemma, since $U(t,c^n_t+c^\varepsilon_t)\to U(t,c_t+c^\varepsilon_t)$, $\mathbb{P}\times d\kappa-$a.e., and all the terms are bounded from below by the integrable process $-U^{-}(t, c^\varepsilon_t)$, and the third one follows from the fact that $\mathcal{E}$ is convex and therefore $c^n+c^\varepsilon\in (x+\varepsilon)\mathcal{E}$ for $n\geq 1$. Since $\phi$ is concave, it is continuous. It follows that $$\psi(x)=\sup_{c\in x\text{cl}(\mathcal{E})}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]\leq\lim_{\varepsilon\downarrow 0}\phi(x+\varepsilon)=\phi(x).$$ \end{proof} \begin{lem}\label{lem:limit-in-dual} Let $(y_n,r_n)\in\mathcal{L}^*$ and $\delta^n\in\mathcal{D}(y_n,r_n)$, $n\geq 1$, converge to $(y,r)$ and, $\mathbb{P}\times d\kappa-$a.e., to an optional process $\delta\geq 0$, respectively. If $\delta>0$, $\mathbb{P}\times d\kappa-$a.e., then $(y,r)\in\mathcal{L}^*$ and $\delta\in\mathcal{D}(y,r)$. \end{lem} \begin{proof} Let $(x,q)\in\mathcal{K}$. Since the constant process $x/\alpha$ belongs to $\mathcal{C}(x,q)$, by Proposition~\ref{prop:conjugate-rel}, $\langle x/\alpha,\delta^n\rangle \leq xy_n+qr_n$ for $n\geq1$.
Then, by Fatou's lemma, \begin{equation}\label{eq:bdry-of-K} 0<\langle x/\alpha,\delta\rangle\leq xy+qr. \end{equation} Note that since the second inequality in \eqref{eq:bdry-of-K} holds for all $q<x/(\alpha\lambda)$, it holds for $q=x/(\alpha\lambda)$ by continuity as well. Since $xy+qr>0$ for all $(x,q)\in\mathcal{K}\cup\{(x',q'):x'>0, q'=x'/(\alpha\lambda)\}$, we have $(y,r)\in\mathcal{L}^*.$ Finally, Fatou's lemma and Proposition \ref{prop:conjugate-rel} imply that $\delta\in\mathcal{D}(y,r)$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main-duality}] Since $V(\omega,t,y)+xy\geq U(\omega,t,x)$ for all $x,y>0$ and $(\omega,t)\in\Omega\times[0,\hat{T})$ and since $v(y,r)<\infty$ for some $(y,r)\in\mathcal{L}^*$, we deduce from Proposition \ref{prop:conjugate-rel} that $u(x,q)<\infty$ for all $(x,q)\in\mathcal{K}$. Analogously, since $u(x,q)>-\infty$ for some $(x,q)\in\mathcal{K}$, $v(y,r)>-\infty$ for all $(y,r)\in\mathcal{L}^*$. Hence, $u$ and $v$ are both finite on $\mathcal{K}$ and $\mathcal{L}^*$, respectively. For $(y,r)\in\mathcal{L}^*$, we define the sets \begin{equation*} \begin{aligned} A(y,r)&=\{(x,q)\in\mathcal{K}: xy+qr\leq 1\},\\ \tilde{\mathcal{C}}^{(y,r)}&=\bigcup_{(x,q)\in A(y,r)}\mathcal{C}(x,q), \end{aligned} \end{equation*} and denote by $\mathcal{C}^{(y,r)}$ the closure of $\tilde{\mathcal{C}}^{(y,r)}$ with respect to convergence in measure $\mathbb{P}\times d\kappa$. By Lemma \ref{lem:sup-over-set}, for $z>0$, $$\sup_{c\in z\mathcal{C}^{(y,r)}}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]=\sup_{c\in z\tilde{\mathcal{C}}^{(y,r)}}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]=\sup_{(x,q)\in zA(y,r)} u(x,q)>-\infty.$$ By Proposition \ref{prop:conjugate-rel}, the sets $\mathcal{D}(y,r)$ and $\mathcal{C}^{(y,r)}$ are polar sets of each other. Therefore, they satisfy the assumptions of \cite{mostovyi}, Theorem 3.2. This theorem implies that there exists a unique solution $\hat{\delta}(y,r)$ to \eqref{eq:dual-problem} and the second conjugacy relation in \eqref{eq:conjugacy-rel} holds: \begin{equation}\label{eq:1st-conj-rel} \begin{aligned} v(y,r)&=\sup_{z>0}\left\{\sup_{c\in z\mathcal{C}^{(y,r)}}\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]-z\right\}=\sup_{z>0, (x,q)\in zA(y,r)}\{u(x,q)-z\}\\ &=\sup_{(x,q)\in\mathcal{K},z\geq xy+qr}\{u(x,q)-z\}=\sup_{(x,q)\in\mathcal{K}}\{u(x,q)-xy-qr\}. \end{aligned} \end{equation} The function $u$ is clearly concave on $\mathcal{K}$. The first equation in \eqref{eq:conjugacy-rel} follows from \cite{rockafellar}, Section 12. For $(x,q)\in\mathcal{K}$, we define the sets \begin{equation*} \begin{aligned} B(x,q)&=\{(y,r)\in\mathcal{L}^*: xy+qr\leq 1\},\\ \tilde{\mathcal{D}}^{(x,q)}&=\bigcup_{(y,r)\in B(x,q)}\mathcal{D}(y,r), \end{aligned} \end{equation*} and denote by $\mathcal{D}^{(x,q)}$ the closure of $\tilde{\mathcal{D}}^{(x,q)}$ with respect to convergence in measure $\mathbb{P}\times d\kappa$. Clearly, for $z>0$ we have $$\inf_{\delta\in z\mathcal{D}^{(x,q)}}\mathbb{E}\left[\int_0^{\hat{T}} V(t,\delta_t)d\kappa_t\right]\leq \inf_{\delta\in z\tilde{\mathcal{D}}^{(x,q)}}\mathbb{E}\left[\int_0^{\hat{T}} V(t,\delta_t)d\kappa_t\right]=\inf_{(y,r)\in zB(x,q)}v(y,r)<\infty.$$ By Proposition \ref{prop:conjugate-rel}, the sets $\mathcal{C}(x,q)$ and $\mathcal{D}^{(x,q)}$ are polar sets of each other. Therefore, they satisfy the assumptions of \cite{mostovyi}, Theorem 3.2, and this theorem implies that there exists a unique solution $\hat{c}(x,q)$ to \eqref{eq:primal-problem}. 
Moreover, denoting $$\hat{\delta}_t^{(x,q)}:=U'(t,\hat{c}_t(x,q))\quad\text{and}\quad z:=\langle \hat{c}(x,q),\hat{\delta}^{(x,q)}\rangle,$$ we deduce from \cite{mostovyi}, Theorem 3.2, that $\hat{\delta}^{(x,q)}\in z\mathcal{D}^{(x,q)}$ and that it is the unique solution of the optimization problem $$\mathbb{E}\left[\int_0^{\hat{T}} V(t,\hat{\delta}_t^{(x,q)})d\kappa_t\right]=\inf_{\delta\in z\mathcal{D}^{(x,q)}}\mathbb{E}\left[\int_0^{\hat{T}} V(t,\delta_t)d\kappa_t\right].$$ Since $\hat{\delta}^{(x,q)}>0$, $\mathbb{P}\times d\kappa-$a.e., and since the set $zB(x,q)\subseteq\mathbb{R}^2$ is bounded, Lemma \ref{lem:limit-in-dual} implies the existence of $(y,r)\in zB(x,q)$ such that $\hat{\delta}^{(x,q)}\in\mathcal{D}(y,r)$. Since $\hat{\delta}^{(x,q)}\in\mathcal{D}(y,r)$ is the optimizer over $z\mathcal{D}^{(x,q)}$ and $(y,r)\in zB(x,q)$, $$xy+qr=z=\langle \hat{c}(x,q),\hat{\delta}^{(x,q)}\rangle\quad\text{and}\quad \hat{\delta}_t(y,r)=\hat{\delta}_t^{(x,q)}=U'(t,\hat{c}_t(x,q)).$$ Further, the pointwise equality $U(t,\hat{c}_t(x,q))=V(t,\hat{\delta}_t(y,r))+\hat{c}_t(x,q)\hat{\delta}_t(y,r)$ implies \begin{equation}\label{eq:equality} u(x,q)=v(y,r)+xy+qr, \end{equation} which, according to \cite{rockafellar}, Theorem 23.5, is equivalent to $(y,r)\in\partial u(x,q)$. In particular, $\partial u(x,q)\cap\mathcal{L}^*\neq\emptyset$. Conversely, if $(y,r)\in\partial u(x,q)\cap\mathcal{L}^*$, i.e., if \eqref{eq:equality} holds, then \begin{align*} 0&\leq\mathbb{E}\left[\int_0^{\hat{T}} V(t,\hat{\delta}_t(y,r))+\hat{c}_t(x,q)\hat{\delta}_t(y,r)-U(t,\hat{c}_t(x,q))d\kappa_t\right]\\ &\leq v(y,r)+xy+qr-u(x,q)=0, \end{align*} which immediately implies the relations \eqref{eq:dual-relations-thm}. Finally, to show that $\partial u(x,q)\subseteq\mathcal{L}^*$, let $(y,r)\in\partial u(x,q)$. Since $\partial u(x,q)$ is a closed convex subset of $\bar{\mathcal{L}}$ and since $\partial u(x,q)\cap\mathcal{L}^*\neq\emptyset$, there is a sequence $(y_n,r_n)$ in $\partial u(x,q)\cap\mathcal{L}^*$ that converges to $(y,r)$ (e.g., points on the segment joining an element of $\partial u(x,q)\cap\mathcal{L}^*$ to $(y,r)$; by convexity this segment lies in $\partial u(x,q)$ and, except possibly for its endpoint $(y,r)$, in $\mathcal{L}^*$). Since each of the sets $\mathcal{D}(y_n,r_n)$ contains the strictly positive process $U'(t,\hat{c}_t(x,q))$, Lemma \ref{lem:limit-in-dual} implies that $(y,r)\in\mathcal{L}^*$. \end{proof} \section{Complete market}\label{sec:complete} In this section, we examine the complete market case, when $\mathcal{Z}=\left\{ Z \right\}$ is a singleton, in much greater detail. This becomes possible due to the fact that the description of the dual set $\mathcal{D}^\lambda$ given in Proposition~\ref{prop:min-of-D} simplifies significantly in a complete market: \begin{prop}\label{prop:complete-case-D} If $\mathcal{Z}=\left\{ Z \right\}$ is a singleton then \begin{equation}\label{eq:description-D-complete} \mathcal{D}^\lambda=\{\delta\geq 0\text{ optional}: \delta\preceq_\lambda Z\},\quad \lambda\in[0,1]. \end{equation} In particular, for any consumption plan $c$ satisfying \eqref{cond:ddc}, $\sup_{\delta\in\mathcal{D}^\lambda}\langle c,\delta \rangle=\langle c,Z \rangle$. \end{prop} \begin{proof} Clearly, the set on the right-hand side of \eqref{eq:description-D-complete} is $\preceq_\lambda$-solid, convex, and contains $Z$. By Proposition \ref{prop:min-of-D}, it remains to verify that it is closed with respect to convergence in measure $\mathbb{P}\times d\kappa$. Let $\delta^n\preceq_\lambda Z$, $n\geq 1$, be a sequence converging $\mathbb{P}\times d\kappa-$almost everywhere to an optional process $\delta\geq 0$.
To show that $\delta\preceq_\lambda Z$, we check the right-hand side of \eqref{eq:monotonicity-lambda-ord}: for every $c$ satisfying \eqref{cond:ddc}, by Fatou's lemma and by \eqref{eq:monotonicity-lambda-ord} applied to $\delta^n\preceq_\lambda Z$, $$\langle c, \delta \rangle\leq\liminf_{n\to\infty}\langle c, \delta^n \rangle\leq\langle c, Z\rangle.$$ The last assertion follows from \eqref{eq:monotonicity-lambda-ord} as well. \end{proof} We note that from \eqref{eq:description-D-complete} it is easy to show the strict inclusion $\mathcal{D}^{\lambda_2}\supsetneq\mathcal{D}^{\lambda_1}$ for $1\geq\lambda_2>\lambda_1\geq0$ in a complete market. Namely, it suffices to find a process $\delta$ such that $\delta\preceq_{\lambda_2}Z$ but $\delta\npreceq_{\lambda_1}Z$. One can take a stopping time $T\in(0,\hat{T}]$ such that $\mathbb{P}(T<\hat{T})>0$, define $$\delta_t:=Z_t\mathbbm{1}_{[0,T]}+\delta^2_t,\quad \text{where}\quad\delta^2_t:=\lambda_2\cdot{}^o\left(\frac{\mathbbm{1}_{[0,T]}}{d\kappa([0,T])}\int_{T}^{\hat{T}}Z d\kappa\right)_t,$$ and check the required properties which, by Proposition \ref{prop:lambda-ord-alternative}, are equivalent to $\delta^2\preceq \lambda_2 Z\mathbbm{1}_{(T,\hat{T})}$ and $\delta^2\npreceq \lambda_1 Z\mathbbm{1}_{(T,\hat{T})}$. \subsection{Case $q=0$} First, we consider the case $q=0$, when there is no lower bound on consumption -- only the drawdown constraint. As in Section \ref{sec:optimization}, the results hold for any $\lambda\in(0,1]$, unless specified otherwise, and we often omit mentioning the dependence of domains, value functions, and optimizers on $\lambda$. For brevity, we denote $u(x)=u(x,0)$, $\hat{c}(x)=\hat{c}(x,0)$, $v(y)=v(y,0)$, $\hat{\delta}(y)=\hat{\delta}(y,0)$ for $x,y>0$. \begin{prop}\label{prop:complete-case} Suppose all the assumptions of Theorem \ref{thm:main-duality} hold and the market is complete. Let $x>0$, $y=u'(x)$, and assume a consumption plan $c$ satisfies \eqref{cond:ddc} and $\langle c,Z \rangle=x$. Then $c\in\mathcal{C}(x)$. 
The consumption plan $c$ is the optimizer in $\mathcal{C}(x)$ if and only if for $\hat\delta_t:=U'(t,c_t)$ the following holds: \begin{enumerate} \item $\{yZ_t>\hat\delta_t\}\subseteq\{c_t=\lambda\bar{c}_t\}$, up to $\mathbb{P}\times d\kappa-$nullsets; \item $\{\hat\delta_t>yZ_t\}\subseteq\{c_t=\bar{c}_t\}$, up to $\mathbb{P}\times d\kappa-$nullsets; \item $\mathbb{P}$-almost surely, \begin{equation*} \begin{aligned} ^o\left(\int_.^{\hat{T}} (\hat\delta-yZ)\vee0 d\kappa\right)_t\leq{}^o\left(\int_.^{\hat{T}} \lambda(yZ-\hat\delta)\vee 0 d\kappa\right)_t,\quad\text{for all }t\in[0,\hat{T}),\\ \text{with equality }d\bar{c}_t-\text{almost everywhere.} \end{aligned} \end{equation*} In this case, the optimizer $c$ is related to its running essential supremum $\bar{c}$ in the following way (up to $\mathbb{P}\times d\kappa-$nullsets): \begin{equation}\label{eq:c-through-c-bar} c_t=\lambda\bar{c}_t\vee I(t,yZ_t)\wedge \bar{c}_t,\quad t\in[0,\hat{T}), \end{equation} and, $\mathbb{P}$-almost surely, $\bar{c}$ satisfies \begin{equation}\label{eq:cond-on-c-bar} \begin{aligned} ^o\left(\int_.^{\hat{T}} (U'(\bar{c})-yZ)\vee0 d\kappa\right)_t\leq{}^o\left(\int_.^{\hat{T}} \lambda(yZ-U'(\lambda\bar{c}))\vee 0 d\kappa\right)_t,\quad\text{for all }t\in[0,{\hat{T}}),\\ \text{with equality }d\bar{c}_t-\text{almost everywhere.} \end{aligned} \end{equation} \end{enumerate} \end{prop} \begin{rem}[Interpretation] Let $\lambda\in(0,1)$. We see that only the following three types of behavior are possible for the optimal consumption plan $c=\hat{c}(x)$ (up to $\mathbb{P}\times d\kappa-$nullsets): \begin{enumerate} \item[(i)] $c_t=\lambda\bar{c}_t$. The agent consumes at the minimal level allowed by the drawdown constraint. \item[(ii)] $c_t=\bar{c}_t$. The agent consumes at the current running essential supremum level. \item[(iii)] $c_t=I(t,yZ_t)$. The agent consumes as an unconstrained agent with a different initial wealth $x_0$ given by $u_0'(x_0)=y$, where $u_0$ is the value function for the unconstrained problem. \end{enumerate} How exactly the timeline separates into these three regions is encoded in \eqref{eq:cond-on-c-bar}; this, however, is more difficult to interpret. For the simpler ratchet constraint $\lambda=1$, see Corollary~\ref{cor:complete-env} below and the discussion after it. \end{rem} \begin{rem}\label{rem:pts-of-increase} The jointly measurable process defined by $$D_t:=\int_t^{\hat{T}} \left(\hat\delta-yZ\right)\vee0 -\lambda\left(yZ-\hat\delta\right)\vee 0\,d\kappa,\quad t\in[0,\hat{T}),$$ is continuous. Condition 3.~of Proposition \ref{prop:complete-case} implies $\mathbb{E}[D_0]\leq 0$ and therefore $\mathbb{E}\left[\int_0^{\hat{T}}\hat\delta d\kappa\right]\leq\mathbb{E}\left[\int_0^{\hat{T}} yZ d\kappa\right]$. Hence $\sup_{t\geq 0} \vert D_t\vert\leq \int_0^{\hat{T}} (\hat\delta+yZ) d\kappa$ is integrable and the family $(D_t)_{t\in[0,\hat{T})}$ is uniformly integrable. By \cite{dellacherie-meyerB}, Chapter VI, Theorem no.~47 and Remark no.~50(f), the optional projection ${}^oD_t$ is right-continuous (in fact, c\`adl\`ag). Therefore, by Lemma \ref{lem:pt-of-increase}, condition 3. of Proposition \ref{prop:complete-case} is equivalent to the following: $\mathbb{P}$-almost surely, \begin{equation*} \begin{aligned} ^o\left(\int_.^{\hat{T}} (\hat\delta-yZ)\vee0 d\kappa\right)_t\leq{}^o\left(\int_.^{\hat{T}} \lambda(yZ-\hat\delta)\vee 0 d\kappa\right)_t,\quad\text{for all }t\in[0,{\hat{T}}),\\ \text{with equality when } d\bar{c}_t>0.
\end{aligned} \end{equation*} The same is true for \eqref{eq:cond-on-c-bar}: ``equality $d\bar{c}_t-$almost everywhere" can be replaced with ``equality when $d\bar{c}_t>0$". \end{rem} \begin{proof}[Proof of Proposition \ref{prop:complete-case}] The consumption plan $c$ belongs to $\mathcal{C}(x)$ because it satisfies \eqref{cond:ddc} and is $x$-admissible by Proposition \ref{prop:complete-case-D}: $\sup_{\delta\in\mathcal{D}^\lambda}\langle c,\delta \rangle=\langle c,Z \rangle=x$. ``$\Rightarrow$''. Assume that $c=\hat{c}(x)$, the optimizer in $\mathcal{C}(x)$. By \cite{mostovyi}, Theorem~3.2, under the assumptions of Theorem \ref{thm:main-duality}, \begin{equation}\label{eq:max-by-duality} \langle c,\hat\delta \rangle=xy=\langle c,yZ \rangle \end{equation} and $\hat\delta$ is the optimizer in $\mathcal{D}(y)$. In particular, $\hat\delta\preceq_\lambda yZ$ and, by Proposition \ref{prop:lambda-ord-alternative}, $\hat\delta^2\preceq\lambda\delta^2$ for $\hat\delta^2=(\hat\delta-yZ)\vee 0$ and $\delta^2=(yZ-\hat\delta)\vee 0$. Let $\delta^1=\hat\delta\wedge yZ$. Tracing through the following sequence of inequalities shows that \eqref{eq:max-by-duality} is only possible if conditions 1.-3. are satisfied: \begin{equation}\label{eq:sequence} \begin{aligned} \langle c,\hat\delta\rangle&=\langle c,\delta^1 \rangle+\langle c,\hat\delta^2 \rangle\leq\langle c,\delta^1 \rangle+\langle \bar{c},\hat\delta^2 \rangle=\langle c,\delta^1 \rangle+\mathbb{E}\left[\int_0^{\hat{T}} {}^o\left(\int_.^{\hat{T}} \hat\delta^2 d\kappa\right)_t d\bar{c}_t\right]\\ &\leq \langle c,\delta^1 \rangle+\mathbb{E}\left[\int_0^{\hat{T}} {}^o\left(\int_.^{\hat{T}} \lambda\delta^2 d\kappa\right)_t d\bar{c}_t\right]=\langle c,\delta^1 \rangle+\langle\bar{c},\lambda\delta^2 \rangle\\ &\leq\langle c,\delta^1 \rangle+\langle c,\delta^2 \rangle=\langle c,yZ \rangle=xy, \end{aligned} \end{equation} where for the inequalities we used that $\lambda\bar{c}\leq c\leq\bar{c}$ and $\hat\delta^2\preceq\lambda \delta^2$. ``$\Leftarrow$''. If $\hat\delta$ and $yZ$ satisfy conditions 1.-3. then the above sequence of inequalities \eqref{eq:sequence} holds with equalities everywhere, i.e., \eqref{eq:max-by-duality} holds. Condition 3. implies that $\hat\delta\preceq_\lambda yZ$ and, by \eqref{eq:description-D-complete}, $\hat\delta\in\mathcal{D}(y)$. Using $y=u'(x)$, the conjugacy relations between $u$ (respectively, $U$) and $v$ (respectively, $V$), $c\in\mathcal{C}(x)$, and $\hat\delta\in\mathcal{D}(y)$, we obtain \begin{align*} u(x)-v(y)&=xy\overset{\eqref{eq:max-by-duality}}{=}\langle c,\hat\delta \rangle=\langle c_t,U'(t,c_t) \rangle\\ &=\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t) d\kappa_t\right]-\mathbb{E}\left[\int_0^{\hat{T}} V(t,\hat\delta_t) d\kappa_t\right]\leq u(x)-v(y). \end{align*} Hence, the last inequality is in fact an equality, which is only possible when $c$ and $\hat\delta$ are the optimizers in $\mathcal{C}(x)$ and $\mathcal{D}(y)$, respectively. Finally, we show that the expression \eqref{eq:c-through-c-bar} of $c$ through its running essential supremum and the condition \eqref{eq:cond-on-c-bar} for the latter follow from 1.-3. Since $c\geq\lambda\bar{c}$, we have $\hat\delta=U'(c)\leq U'(\lambda\bar{c})$ and $\{yZ> U'(c)\}\supseteq\{yZ>U'(\lambda\bar{c})\}$. But 1. implies that $\{yZ> U'(c)\}\subseteq\{yZ>U'(\lambda\bar{c})\}$. Hence, $\{yZ> U'(c)\}=\{yZ>U'(\lambda\bar{c})\}$ and on this set $c=\lambda\bar{c}$, $\hat\delta=U'(\lambda\bar{c})$ (everything holds up to $\mathbb{P}\times d\kappa-$nullsets).
Similarly, $c\leq\bar{c}$, therefore $U'(c)\geq U'(\bar{c})$ and $\{yZ< U'(c)\}\supseteq\{yZ<U'(\bar{c})\}$. But 2. implies that $\{yZ< U'(c)\}\subseteq\{yZ<U'(\bar{c})\}$, hence $\{yZ< U'(c)\}=\{yZ<U'(\bar{c})\}$ and on this set $c=\bar{c}$, $\hat\delta=U'(\bar{c})$. On the complement of these two sets, $\hat\delta=yZ$, therefore $c=I(yZ)$. We can summarize this as \begin{equation*} c_t=\left\{\begin{aligned} \lambda\bar{c}_t&\quad\text{on}\quad\{yZ>U'(\lambda\bar{c})\}=\{\lambda\bar{c}>I(yZ)\}, \\ \bar{c}_t&\quad\text{on}\quad\{U'(\bar{c})>yZ\}=\{\bar{c}<I(yZ)\},\\ I(t,yZ_t)&\quad\text{on}\quad \{\bar{c}\geq I(yZ)\geq \lambda\bar{c}\},\\ \end{aligned}\right. \end{equation*} or, equivalently, as \eqref{eq:c-through-c-bar}. The condition \eqref{eq:cond-on-c-bar} follows from 3. since $(\hat\delta-yZ)\vee0=(U'(\bar{c})-yZ)\vee 0$ and $(yZ-\hat\delta)\vee 0=(yZ-U'(\lambda\bar{c}))\vee 0$. \end{proof} For the ratchet constraint, the characterization of the optimizers given in Proposition \ref{prop:complete-case} simplifies significantly. \begin{cor}[$\lambda=1$]\label{cor:complete-env} Suppose all the assumptions of Theorem \ref{thm:main-duality} hold and the market is complete. Let $x>0$, $y=u'(x)$, and $c\in\mathcal{C}_{\text{inc}}$ such that $\langle c,Z \rangle=x$. Then $c\in\mathcal{C}^1(x)$. The consumption plan $c$ is the optimizer in $\mathcal{C}^1(x)$ if and only if, $\mathbb{P}-$almost surely, \begin{equation}\label{eq:env-ratchet} ^o\left(\int_.^{\hat{T}} U'(c)d\kappa\right)_t\leq{}^o\left(\int_.^{\hat{T}} yZ d\kappa\right)_t\quad\text{for all }t\in[0,{\hat{T}}),\ \text{with equality when }dc_t>0. \end{equation} \end{cor} This characterization is closely related to the notion of the \textit{envelope process} introduced in Lemma A.1 in \cite{BK} and to the Representation Theorem of \cite{bank-el-karoui} it is based on. In Lemma \ref{lem:envelope} of the \hyperref[app:envelope]{Appendix}, we slightly modify Lemma~A.1 of \cite{BK} to show the following: if, in addition to Assumption~\ref{ass:utility} on utility $U$, we assume that $\mathbb{E}\left[\int_0^{\hat{T}}U'(\omega,t,x)d\kappa_t\right]<\infty$ for every $x>0$ then for every $y>0$ there exists a unique $c=c^y\in\mathcal{C}_\text{inc}$ for which \eqref{eq:env-ratchet} holds. By Corollary \ref{cor:complete-env}, this consumption plan has to be the optimizer, $c^y=\hat{c}(x)\in\mathcal{C}^1(x)$. In the \hyperref[app:envelope]{Appendix}, we also give a short proof, inspired by the arguments of \cite{riedel} and not involving duality, that, under an additional assumption on utility (Assumption \ref{ass:utility-additional}), the optimal consumption plans have this structure. In the example below, we derive, as a special case of Corollary \ref{cor:complete-env}, the formula of \cite{riedel} for the optimal consumption plans. \begin{exmp} Let $\hat{T}=\infty$ and assume that the stochastic clock is given by $\dot\kappa_s=e^{-rs}$, where $r>0$ is the interest rate (cf.~Remark \ref{rem:on-model}(ii)). We take the utility field given by $U(t,x)=\frac{e^{-\delta t} \mathtt{u}(x)}{e^{-rt}}$, where $\mathtt{u}$ is a strictly concave, increasing, continuously differentiable deterministic function on $(0,\infty)$ satisfying the Inada conditions $\mathtt{u}'(0)=+\infty$, $\mathtt{u}'(+\infty)=0$, and $\delta>0$ is the parameter of exponential time preferences of the agent.
This utility satisfies Assumption \ref{ass:utility} and the expected utility functional in \eqref{eq:primal-problem} becomes $\mathbb{E}\left[\int_0^\infty e^{-\delta t} \mathtt{u}(c_t)dt\right]$. Assume further that $\log(Z_t)$ is a L\'evy process (starting at zero) for the unique equivalent martingale deflator $Z$. With these choices of $\hat{T}$, $\kappa$, $U$, and $Z$, we find ourselves exactly in the framework of \cite{riedel}. The case of the GBM market considered in \cite{dybvig} corresponds to $\log(Z_t)=-\theta B_t-\frac{1}{2}\theta^2 t$, where $B_t$ is the underlying Brownian motion and $\theta$ is the market price of risk. Let $\mathtt{i}=(\mathtt{u}')^{-1}$. We will show that for a suitable constant $K>0$ the consumption plan \begin{equation}\label{eq:riedel-sol} c_t:=\begin{cases}0,& t=0,\\ \mathtt{i}\left(\inf_{s\in[0,t)}KZ_se^{(\delta-r)s}\right),& t>0,\end{cases}\quad\in\mathcal{C}_\text{inc}\end{equation} satisfies \eqref{eq:env-ratchet}. By Lemma \ref{lem:envelope}, this is then the unique process in $\mathcal{C}_\text{inc}$ satisfying \eqref{eq:env-ratchet}, hence by Corollary \ref{cor:complete-env} it is the optimizer $\hat{c}(x)\in\mathcal{C}^1(x)$. The definition \eqref{eq:riedel-sol} corresponds to the formula (3) of \cite{riedel} for the optimal consumption in the case $q=0$; in order to obtain the result analogous to Riedel's for the case $q>0$, we can simply apply Corollary \ref{cor:q-positive} below. With the martingale property of $Z$ and with $\dot\kappa$ being deterministic and exponential, it is easy to check for the right-hand side of \eqref{eq:env-ratchet} that ${}^o\left(\int_.^\infty yZ d\kappa\right)_t=yZ_t\cdot d\kappa([t,\infty))=y\frac{e^{-rt}}{r}Z_t$. With $c$ defined as in \eqref{eq:riedel-sol}, we have \begin{equation}\label{eq:before-opt-proj} \begin{aligned} \int_t^\infty U'(c)d\kappa&=\int_t^\infty e^{-\delta s}\left(\inf_{u\in[0,s)} KZ_u e^{(\delta-r)u}\right) ds\leq \int_t^\infty e^{-\delta s}\left(\inf_{u\in[t,s)} KZ_u e^{(\delta-r)u}\right) ds\\ &=Ke^{-rt}Z_t\cdot\int_0^\infty e^{-\delta s} \left(\inf_{u\in[0,s)} \frac{Z_{t+u}}{Z_t} e^{(\delta-r)u}\right) ds=:Ke^{-rt}Z_t I_t, \end{aligned} \end{equation} where ``$=$" holds in place of ``$\leq$" on $\{dc_t>0\}\in\mathcal{O}$. Due to the L\'evy assumption on $\log(Z_t)$, the integral $I_t$ satisfies $\mathbb{E}[I_t\vert\mathcal{F}_t]=\mathbb{E}\left[\int_0^\infty e^{-\delta s}\inf_{u\in[0,s)} Z_u e^{(\delta-r)u} ds\right]=:I\in(0,\infty)$, hence ${}^oI_t\equiv I$. Taking the optional projection in \eqref{eq:before-opt-proj}, we obtain $$^o\left(\int_.^\infty U'(c)d\kappa\right)_t\leq Ke^{-rt}Z_tI\quad\text{for all }t\geq 0,\ \text{with equality when }dc_t>0.$$ This is precisely \eqref{eq:env-ratchet} if we take $K:=y/(Ir)$. \end{exmp} \subsection{Monotonicity and continuity of optimizers} Next, we establish monotonicity and continuity of the optimizers with respect to the initial wealth. These results will be useful when dealing with the case $q>0$ in the following subsection. \begin{prop}\label{prop:mon-and-cont} Suppose all the assumptions of Theorem \ref{thm:main-duality} hold and the market is complete. For $x_2>x_1>0$, $$\hat{c}(x_1)\leq \hat{c}(x_2),\quad \mathbb{P}\times d\kappa-\text{a.e.},$$ and for $x_n\to x$ with $x,x_n>0$, $\hat{c}(x_n)\to \hat{c}(x)$, $\mathbb{P}\times d\kappa-$almost everywhere. \end{prop} \begin{proof} For fixed $x_2>x_1>0$ we denote for brevity $y_i=u'(x_i)$, $c^i=\hat{c}(x_i)$ and by $\bar{c}^i$ its running essential supremum, $i=1,2$.
By \eqref{eq:c-through-c-bar}, $c^i=\lambda\bar{c}^i\vee I(y_iZ)\wedge \bar{c}^i$ for $i=1,2$. We split the product space $\Omega\times[0,\hat{T})$ into two regions: $$R:=\left\{(\omega,t):\ \bar{c}_t^1\geq I(t,y_1Z_t)\geq \lambda\bar{c}^1_t\text{ and } \bar{c}^2_t\geq I(t,y_2Z_t)\geq \lambda\bar{c}^2_t\right\}\in\mathcal{O}$$ and its complement. On $R$, $c^1_t=I(t,y_1Z_t)<I(t,y_2Z_t)=c^2_t$ since $y_1>y_2$ and $I$ is strictly decreasing. In order to prove $c^1\leq c^2$, it remains to show that $\bar{c}^1\leq\bar{c}^2$ on $R^c$, $\mathbb{P}\times d\kappa-$a.e. Let $f$ be a (time-dependent and stochastic) function defined by $$f(y,\bar{c}):=f(\omega, t,y,\bar{c}):=(U'(t,\bar{c})-yZ_t)\vee0 -\lambda(yZ_t-U'(t,\lambda\bar{c}))\vee 0\quad\text{for}\quad y,\bar{c}>0,$$ and let $$D^i_t:=\int_t^{\hat{T}} f\left(s,y_i,\bar{c}^i_s\right) d\kappa_s=\int_t^{\hat{T}} \left(\hat\delta(y_i)-y_iZ\right)\vee0 -\lambda\left(y_iZ-\hat\delta(y_i)\right)\vee 0\,d\kappa,\quad i=1,2,$$ as in Remark \ref{rem:pts-of-increase} (the second equality holds up to indistinguishability). By Remark \ref{rem:pts-of-increase}, \begin{equation}\label{eq:eq-at-pts-increase} \mathbb{P}-\text{almost surely}:\quad ^oD^i_t\leq 0\quad\text{for all }t\in [0,\hat{T})\text{ with }``="\text{ if } d\bar{c}^i_t>0. \end{equation} For a fixed $l\geq 0$, we define two stopping times ${T}^i_l:=\inf\{t\in[0,\hat{T}): \bar{c}^i_t> l \}$, $i=1,2$, where the infimum of an empty set is taken to be $\hat{T}$. As in the proof of Lemma \ref{lem:pt-of-increase}, ${T}^i_l$ is either $\hat{T}$, or a point of increase of $\bar{c}^i$. Next, let \begin{equation*} S^1_l:=\begin{cases} {T}^1_l,&\quad\text{if }{T}^1_l<{T}^2_l,\\ \hat{T},&\quad\text{otherwise}, \end{cases}\quad\text{and}\quad S^2_l:= \begin{cases} {T}^2_l,&\quad\text{if }{T}^1_l<{T}^2_l,\\ \hat{T},&\quad\text{otherwise}. \end{cases} \end{equation*} Thus, $S^1_l={T}^1_l<{T}^2_l=S^2_l$ on $\{\omega: {T}^1_l(\omega)<{T}^2_l(\omega)\}$ and $S^1_l=S^2_l=\hat{T}$ otherwise. Property \eqref{eq:eq-at-pts-increase} implies \begin{equation*} \mathbb{E} D^1_{S_l^1}=0,\quad \mathbb{E} D^1_{S_l^2}\leq 0,\quad \mathbb{E} D^2_{S_l^1}\leq 0,\quad\text{and}\quad \mathbb{E} D^2_{S_l^2}=0. \end{equation*} Taking into account $S^1_l\leq S^2_l$ and the definition of $D^i$, $i=1,2$, we obtain: \begin{equation}\label{eq:ineq-at-stopping} \mathbb{E}\left[\int_{S^1_l}^{S^2_l}f\left(y_1,\bar{c}^1\right) d\kappa\right]\geq 0\geq \mathbb{E}\left[\int_{S^1_l}^{S^2_l}f\left(y_2,\bar{c}^2\right) d\kappa\right]. \end{equation} On the other hand, $\bar{c}^1>l\geq \bar{c}^2$ on $(S^1_l,S^2_l]$ and $y_1>y_2$, so by the monotonicity of $f$, \begin{equation}\label{eq:mon-of-f} f\left(\omega,t,y_1,\bar{c}^1_t(\omega)\right)\leq f\left(\omega,t,y_2,\bar{c}^2_t(\omega)\right)\quad \text{on }(S^1_l(\omega),S^2_l(\omega)].
\end{equation} Moreover, the inequality in \eqref{eq:mon-of-f} is strict on $R^c$ due to the strict monotonicity of $f$ on $R^c$ for at least one of $\bar{c}^1$ or $\bar{c}^2$: either $\bar{c}_t^1\in \left[I(t,y_1Z_t),\frac{1}{\lambda}I(t,y_1Z_t)\right]^c$ and then $$f\left(\omega,t,y_1,\bar{c}^1_t(\omega)\right)<f\left(\omega,t,y_1,\bar{c}^2_t(\omega)\right)\leq f\left(\omega,t,y_2,\bar{c}^2_t(\omega)\right),$$ or $\bar{c}^2_t\in \left[I(t,y_2Z_t),\frac{1}{\lambda}I(t,y_2Z_t)\right]^c$ and then $$f\left(\omega,t,y_2,\bar{c}^2_t(\omega)\right)> f\left(\omega,t,y_2,\bar{c}^1_t(\omega)\right)\geq f\left(\omega,t,y_1,\bar{c}^1_t(\omega)\right).$$ Therefore, \eqref{eq:ineq-at-stopping} implies that the set $N_l:=\{(\omega,t): S^1_l(\omega)<t\leq S^2_l(\omega)\}\cap R^c$ is a $\mathbb{P}\times d\kappa-$nullset. If $\{(\omega,t):\bar{c}_t^1>\bar{c}_t^2\}\cap R^c$ has positive $\mathbb{P}\times d\kappa-$measure then there exists an $l\in\mathbb{Q}$ such that the set $$\{(\omega,t):\bar{c}_t^1> l\geq \bar{c}_t^2\}\cap R^c=\{(\omega,t):S^1_l(\omega)<t\leq S^2_l(\omega)\}\cap R^c= N_l$$ has positive $\mathbb{P}\times d\kappa-$measure, a contradiction. Therefore, $\bar{c}_t^1\leq\bar{c}_t^2$, $\mathbb{P}\times d\kappa-$a.e. on $R^c$, completing the proof of monotonicity. To prove continuity, we take an increasing sequence $x_n\uparrow x$ (for a decreasing sequence the argument is analogous). Since $\lim_{n\to\infty}\hat{c}(x_n)\leq \hat{c}(x)$, $\mathbb{P}\times d\kappa-$a.e., by the monotonicity just proved, monotone convergence gives $$x=\lim_{n\to\infty} x_n=\lim_{n\to\infty}\langle\hat{c}(x_n),Z\rangle = \langle\lim_{n\to\infty}\hat{c}(x_n),Z\rangle\leq \langle\hat{c}(x),Z\rangle=x,$$ so equality must hold in place of the inequality and, since $Z>0$, $\hat{c}(x)=\lim_{n\to\infty}\hat{c}(x_n)$, $\mathbb{P}\times d\kappa-$a.e. \end{proof} \subsection{Case $q>0$} Now we consider the presence of a lower bound $q>0$ on initial consumption. It turns out that optimizers for $q>0$ can be described in terms of appropriate optimizers for $q=0$, as the following proposition states. Note that in the case of a complete market the constant $\alpha$ defined in Proposition \ref{prop:alpha-for-D} becomes simply $\alpha=\mathbb{E}\left[\int_0^{\hat{T}} Zd\kappa\right]$. \begin{prop} Suppose all the assumptions of Theorem \ref{thm:main-duality} hold and the market is complete. Fix $q>0$. For $x>0$, $y=u'(x)$, let $\bar{c}^{x}$ be the running essential supremum of the optimizer $\hat{c}(x)\in\mathcal{C}(x)$ and define \begin{equation}\label{eq:lower-bound} c_t:=c_t(x,q)=\lambda [\bar{c}_t^{x}\vee q]\vee I(t,yZ_t)\wedge [\bar{c}_t^{x}\vee q]=\hat{c}(x)\vee[\lambda q\vee I(t,yZ_t)\wedge q] \end{equation} for $t\in[0,\hat{T})$ and $\pi(x):=\langle c,Z\rangle$, the price of the consumption plan $c$. Then $\pi(x)\in[(\alpha\lambda)q,\infty)$, and if $\pi(x)>(\alpha\lambda)q$ then $c\in\mathcal{C}(\pi(x),q)$ and $c$ is the optimizer in $\mathcal{C}(\pi(x),q)$. The function $x\mapsto \pi(x)$ is continuous and non-decreasing on $(0,\infty)$ with $\pi(x)\downarrow (\alpha\lambda)q$ when $x\downarrow 0$ and $\pi(x)\uparrow \infty$ when $x\uparrow \infty$. As a consequence, for every $x'>(\alpha\lambda)q$ there exists an $x>0$ such that $\pi(x)=x'$ and hence the optimizer $\hat{c}(x',q)\in\mathcal{C}(x',q)$ is given by \eqref{eq:lower-bound}.
\end{prop} \begin{rem}[Interpretation] In view of Proposition \ref{prop:complete-case}, $\hat{c}(x)=\lambda\bar{c}^x\vee I(yZ)\wedge \bar{c}^x$, therefore, the optimizer in $\mathcal{C}(x',q)$, $(x',q)\in\mathcal{K}$, can be described as follows: for some $x>0$, the optimizers $\hat{c}(x)$ and $\hat{c}(x',q)$ behave identically on $\{\bar{c}^x\geq q\}$, i.e., starting from the time when the running essential supremum of $\hat{c}(x)$ becomes at least $q$. On $\{\bar{c}^x< q\}$, $\hat{c}(x',q)$ behaves as an unconstrained agent's consumption $I(t,yZ_t)$ restricted to stay between $\lambda q$ and $q$: $\hat{c}(x',q)=\lambda q\vee I(t,yZ_t)\wedge q$. This agrees with the observation that the drawdown constraint \eqref{cond:ddc-with-q} with $q>0$ becomes the simple drawdown constraint \eqref{cond:ddc} on the set $\{\bar{c}\geq q\}$ and becomes simply a lower bound $c\geq \lambda q$ on $\{\bar{c}< q\}$. As in the case $q=0$, we have three possible types of behavior: either the agent consumes at the minimal level allowed by the drawdown constraint \eqref{cond:ddc-with-q}, $c_t=\lambda[\bar{c}_t\vee q]=\lambda [\bar{c}_t^{x}\vee q]$, or the agent consumes at the current running essential supremum level $c_t=\bar{c}_t=\bar{c}_t^{x}\vee q$, or the agent consumes as an unconstrained agent, $c_t=I(t,yZ_t)$. This separation into three regions agrees with the result of \cite[Theorem 1]{Arun2012} on the drawdown constraint in a GBM market with CRRA utility, but it does not seem obvious from Arun's result that the optimal consumption can be linked to the optimal consumption of an unconstrained agent. We note that $\pi(x)=(\alpha\lambda)q$ is only possible if $c$ given by \eqref{eq:lower-bound} satisfies $c\equiv\lambda q$, $\mathbb{P}\times d\kappa-$a.e. But this is the only admissible consumption plan satisfying \eqref{cond:ddc-with-q} with initial wealth $(\alpha\lambda)q$ and therefore $c$ given by \eqref{eq:lower-bound} is the optimizer in this trivial case as well. \end{rem} \begin{proof} First, we prove the announced properties of the function $x\mapsto\pi(x)$. Since $x+\alpha q\geq\langle\hat{c}(x)+ q,Z\rangle\geq \pi(x)\geq\langle \lambda q,Z\rangle=(\alpha\lambda)q$, we have $\pi(x)\in[(\alpha\lambda)q,\infty)$. Since $c\geq \hat{c}(x)$, we also obtain $\pi(x)=\langle c, Z\rangle\geq \langle \hat{c}(x),Z\rangle =x$, i.e., $\pi(x)\uparrow\infty$ as $x\uparrow \infty$. The map $x\mapsto\hat{c}(x)$ is continuous and monotone in the sense of Proposition \ref{prop:mon-and-cont}; moreover, by this monotonicity $\hat{c}(x)\downarrow 0$ as $x\downarrow 0$, $\mathbb{P}\times d\kappa-$a.e., since $\langle \hat{c}(x),Z\rangle=x\downarrow 0$ and $Z>0$, $\mathbb{P}\times d\kappa-$a.e. The function $x\mapsto y=u'(x)$ is strictly decreasing with $u'(0)=\infty$ by \cite[Theorem 3.2]{mostovyi}, while the (random, time-dependent) function $I$ is strictly decreasing with $I(\infty)=0$. Therefore, for every $(\omega,t)$ the map $x\mapsto I(t,yZ_t)$ is continuous and strictly increasing with $I(t,yZ_t)\downarrow 0$ as $x\downarrow 0$. These observations and \eqref{eq:lower-bound} imply continuity and monotonicity of $x\mapsto c(x,q)$ in the sense of Proposition \ref{prop:mon-and-cont}. By monotone convergence $x\mapsto \pi(x)=\langle c(x,q),Z\rangle$ is continuous and non-decreasing on $(0,\infty)$, and $\pi(x)\downarrow \langle 0\vee[\lambda q\vee 0\wedge q],Z\rangle=(\alpha\lambda)q$ as $x\downarrow 0$. Now, let $x':=\pi(x)>(\alpha\lambda)q$ for some $x>0$.
The consumption plan $c$ defined by \eqref{eq:lower-bound} satisfies \eqref{cond:ddc} because it is bounded between the non-decreasing processes $\bar{c}_t^{x}\vee q$ and $\lambda[\bar{c}_t^{x}\vee q]$. Hence, $c$ is $x'$-admissible, satisfies \eqref{cond:ddc} and $c\geq\lambda q$, meaning $c\in\mathcal{C}(x',q)$. It remains to show that $c$ is the optimizer in $\mathcal{C}(x',q)$. Define $\hat\delta_t=\hat\delta_t(y)=U'(t,\hat{c}_t(x))$ and \begin{equation}\label{eq:def-of-delta} \tilde\delta_t=U'(t,c_t)=U'(\lambda [\bar{c}_t^{x}\vee q])\wedge yZ_t\vee U'(\bar{c}_t^{x}\vee q). \end{equation} Since $c\geq\hat{c}(x)$, we have $\tilde\delta\leq \hat{\delta}\in\mathcal{D}(y)$ and therefore $\tilde\delta\preceq_\lambda yZ$ by Proposition \ref{prop:complete-case-D}. Furthermore, by Proposition \ref{prop:sufficient-for-Dyr2}, $\tilde\delta\in\mathcal{D}(y,r)$ with $r$ given by \begin{equation*} \begin{aligned} r:&=\mathbb{E}\left[\int_0^{\hat{T}} (\tilde\delta-yZ)\vee 0-\lambda(yZ-\tilde\delta)\vee0 d\kappa\right]\\ &=\mathbb{E}\left[\int_0^{\hat{T}} (U'(\bar{c}_t^{x}\vee q)-yZ)\vee 0-\lambda(yZ-U'(\lambda [\bar{c}_t^{x}\vee q]))\vee0 d\kappa\right]. \end{aligned} \end{equation*} We will finish the proof by showing that $\langle c,\tilde\delta\rangle=x'y+qr$. This equality, $c\in\mathcal{C}(x',q)$, $\tilde\delta\in\mathcal{D}(y,r)$, and the conjugacy relations between $U$ and $V$ then yield $$u(x',q)\geq\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]\overset{\eqref{eq:def-of-delta}}{=}\mathbb{E}\left[\int_0^{\hat{T}} V(t,\tilde\delta_t)d\kappa_t\right]+x'y+qr\geq v(y,r)+x'y+qr,$$ which by \eqref{eq:conjugacy-rel} implies $c=\hat{c}(x',q)$ and $\tilde\delta=\hat\delta(y,r)$. Let $c':=\bar{c}^{x}\vee q-q\in\mathcal{C}_{\text{inc}}$. The equality $\langle c,\tilde\delta\rangle=x'y+qr$ is equivalent to $\langle c, \tilde\delta - yZ\rangle =qr$, which can be checked with the following sequence of equalities: \begin{equation*} \begin{aligned} \langle c, \tilde\delta - yZ\rangle \overset{\eqref{eq:def-of-delta}}{=}& \langle c\cdot\mathbbm{1}_{\{I(yZ)>\bar{c}^{x}\vee q\}},(U'(\bar{c}^{x}\vee q)-yZ)\vee 0\rangle\\ &\quad\quad-\langle c\cdot\mathbbm{1}_{\{I(yZ)<\lambda[\bar{c}^{x}\vee q]\}},(yZ-U'(\lambda [\bar{c}^{x}\vee q]))\vee0\rangle\\ \overset{\eqref{eq:lower-bound}}{=}&\langle \bar{c}^{x}\vee q,(U'(\bar{c}^{x}\vee q)-yZ)\vee 0\rangle-\langle \lambda[\bar{c}^{x}\vee q],(yZ-U'(\lambda [\bar{c}^{x}\vee q]))\vee0\rangle\\ =&\ qr+\langle c', (U'(\bar{c}^{x}\vee q)-yZ)\vee 0-\lambda (yZ-U'(\lambda [\bar{c}^{x}\vee q]))\vee0\rangle\\ =&\ qr+\langle c', (U'(\bar{c}^{x})-yZ)\vee 0-\lambda (yZ-U'(\lambda\bar{c}^{x}))\vee0\rangle\\ =&\ qr + \mathbb{E}\left[\int_0^{\hat{T}}{}^o\left(\int_.^{\hat{T}} (U'(\bar{c}^x)-yZ)\vee0 - \lambda(yZ-U'(\lambda\bar{c}^x))\vee 0 d\kappa\right)_tdc'_t\right]=qr, \end{aligned} \end{equation*} where the expectation on the last line vanishes by \eqref{eq:cond-on-c-bar} due to the fact that the measure $dc'$ is absolutely continuous with respect to $d\bar{c}^x$. \end{proof} \begin{cor}[$\lambda=1$]\label{cor:q-positive} Suppose all the assumptions of Theorem \ref{thm:main-duality} hold and the market is complete. Fix $q>0$. For $x>0$, define $c:=\hat{c}(x)\vee q$ and $\pi(x):=\langle c,Z\rangle$, the price of the consumption plan $c$. Then $\pi(x)\in[\alpha q,\infty)$, and if $\pi(x)>\alpha q$ then $c\in\mathcal{C}^1(\pi(x),q)$ and $c$ is the optimizer in $\mathcal{C}^1(\pi(x),q)$.
Moreover, for every $x'>\alpha q$ there exists an $x>0$ such that $\pi(x)=x'$ and hence the optimizer $\hat{c}(x',q)\in\mathcal{C}^1(x',q)$ is given by $\hat{c}(x)\vee q$. \end{cor} In Proposition \ref{prop:alternative-sol} in the \hyperref[app:envelope]{Appendix}, we achieve a similar conclusion for the ratchet constraint without relying on Theorem \ref{thm:main-duality}, but under additional assumptions on $U$, by using the Bank--Kauppila envelope process. \begin{appendix} \section*{Ratcheting consumption in complete market -- envelope process and alternative solution}\label{app:envelope} The following lemma is a modification of \cite[Lemma A.1]{BK} under less restrictive assumptions on $U$ and a possibly finite time horizon $\hat{T}$. The proof is based on the Representation Theorem of \cite{bank-el-karoui} and follows closely the proof of \cite[Lemma A.1]{BK}. This lemma is used for showing existence and uniqueness of $c\in\mathcal{C}_{\text{inc}}$ satisfying \eqref{eq:env-ratchet} (see the discussion following Corollary \ref{cor:complete-env}), as well as for an alternative solution of the ratchet constraint problem in a complete market given in Proposition \ref{prop:alternative-sol} below. In a complete market, $\mathcal{Z}=\left\{Z\right\}$ is a singleton and we denote $\tilde{Z}_t:=\int_t^{\hat{T}} Zd\kappa$ for $0\leq t\leq \hat{T}$. \begin{lem}\label{lem:envelope} Assume that the stochastic clock satisfies Assumption~\ref{ass:clock}, the utility satisfies Assumption~\ref{ass:utility}, and $\mathbb{E}\left[\int_0^{\hat{T}}U'(\omega,t,x)d\kappa_t\right]<\infty$ for every $x>0$. Then for every $y>0$ there exists a unique process $c^y\in\mathcal{C}_{\text{inc}}$ such that, $\mathbb{P}$-almost surely, \begin{equation}\label{eq:envelope2} ^o\left(\int_.^{\hat{T}} U'(c^y) d\kappa\right)_t\leq y{}^o\tilde{Z}_t\quad\text{for all}\quad t\in[0,\hat{T}),\ \text{with equality when } dc^y_t>0. \end{equation} The process $c^y$ is finite-valued. \end{lem} In the language of \cite{BK}, $^o\left(\int_.^{\hat{T}} U'(c^y) d\kappa\right)$ is called the \textit{envelope process} of $y{}^o\tilde{Z}$, and Lemma \ref{lem:envelope} states its existence and uniqueness under the given assumptions on $U$. \begin{proof} Let $$f(\omega,t,l):=\begin{cases}U'(\omega,t,-1/l),&\quad l<0,\\ -l,&\quad l\geq0.\end{cases}$$ By the properties of $U$, this mapping satisfies: \begin{itemize} \item For every $(\omega,t)\in\Omega\times[0,\hat{T})$, the function $l\mapsto f(\omega,t,l)$ is continuous and strictly decreasing from $+\infty$ to $-\infty$. \item For all $l\in\mathbb{R}$, $(\omega,t)\mapsto f(\omega,t,l)$ is an optional process with $\mathbb{E}\left[\int_0^{\hat{T}}\vert f(\omega,t,l)\vert d\kappa_t\right]<\infty$. \end{itemize} Since $\tilde{Z}$ is a continuous non-increasing jointly measurable process with $\tilde{Z}_{\hat{T}}=0$ and $\mathbb{E}[\tilde{Z}_0]=\alpha<\infty$, the optional projection ${}^o\tilde{Z}$ is a non-negative supermartingale of class (D), continuous in expectation, with $\lim_{t\uparrow \hat{T}}{}^o\tilde{Z}_{t}=0$, $\mathbb{P}-$a.s. By Theorem 3 and Remark 2.1 in \cite{bank-el-karoui} there exists an optional process $L_t$ taking values in $\mathbb{R}\cup\{-\infty\}$ such that for every stopping time $S\leq\hat{T}$, \begin{equation}\label{eq:repr-formula2} \mathbb{E}\left[\int_S^{\hat{T}} f(t,\sup_{s\in[S,t)}L_s)d\kappa_t\Big\vert \mathcal{F}_S\right]=y{}^o\tilde{Z}_S. \end{equation} The process $L_t$ is non-positive on $[0,\hat{T})$ up to indistinguishability.
Assuming otherwise, by the optional section theorem there exists a stopping time $S\leq\hat{T}$ such that $\mathbb{P}(S<\hat{T})>0$ and $L_S>0$ on $\{S<\hat{T}\}$, which implies $$0\leq y{}^o\tilde{Z}_S\leq \mathbb{E}\left[\int_S^{\hat{T}} f(t,L_S)d\kappa_t\Big\vert \mathcal{F}_S\right]=-L_S\mathbb{E}\left[d\kappa([S,\hat{T}))\Big\vert \mathcal{F}_S\right]<0,$$ a contradiction. Furthermore, almost every path of the non-positive left-continuous process $\tilde{L}_t:=\sup_{s\in[0,t)}L_s$ is strictly negative on $(0,\hat{T})$: otherwise, for the stopping times $$S:=\inf\{t\in(0,\hat{T}]: \tilde{L}_t=0\}\quad\text{and}\quad S^n:=\inf\{t\in(0,\hat{T}]:\tilde{L}_t>-1/n\},\ n\geq 1,$$ we have $\mathbb{P}(S<\hat{T})>0$, $S^n\uparrow S$ as $n\uparrow\infty$, $\sup_{s\in[S^n,t)}L_s=\tilde{L}_t$ for every $n\geq 1$, $t>S^n$, and, by monotone convergence, $$0=\lim_{n\to\infty}\mathbb{E}\left[\int_{0}^{\hat{T}} f(t,\tilde{L}_t)\mathbbm{1}_{[S^n,\hat{T})}d\kappa_t\right]=\lim_{n\to\infty}\mathbb{E}\left[y\tilde{Z}_{S^n}\right]=\mathbb{E}\left[y\tilde{Z}_{S}\right]>0.$$ Hence, the process $$c^y_t:=\begin{cases} 0,&\quad t=0,\\ -1/\tilde{L}_t,&\quad 0<t<\hat{T}, \end{cases}$$ belongs to $\mathcal{C}_{\text{inc}}$, is finite-valued, and, by \eqref{eq:repr-formula2}, satisfies \eqref{eq:envelope2}. For uniqueness, assume $c\in\mathcal{C}_{\text{inc}}$ is such that $^o\left(\int_.^{\hat{T}} U'(c) d\kappa\right)_t\leq y{}^o\tilde{Z}_t$ for all $0\leq t<\hat{T}$ with ``=" when $dc_t>0$. For $l>0$, we show below that $T_l:=\inf\{t\geq 0: c_t>l\}$ is the largest stopping time minimizing $\mathbb{E}\left[\int_T^{\hat{T}} (yZ-U'(l))d\kappa\right]$ over all stopping times $T\in[0,\hat{T}]$. This means that the stopping times $T_l$, $l>0$, are uniquely determined, hence, $c$ is uniquely determined, and coincides with $c^y$, up to indistinguishability. Note that $\mathbb{E}\left[\int_0^{\hat{T}}\vert yZ-U'(l)\vert d\kappa\right]<\infty$ due to the assumptions on $\kappa$ and $U'$. For a stopping time $T\in[0,\hat{T}]$, we have $$\mathbb{E}\left[\int_T^{\hat{T}}(yZ-U'(l)) d\kappa\right]\geq \mathbb{E}\left[\int_T^{\hat{T}}(U'(c)-U'(l)) d\kappa\right]\geq\mathbb{E}\left[\int_{T_l}^{\hat{T}}(U'(c)-U'(l)) d\kappa\right],$$ where the first inequality follows from $^o\left(\int_.^{\hat{T}} U'(c) d\kappa\right)_t\leq y{}^o\tilde{Z}_t$, the second inequality follows from the definition of $T_l$ and monotonicity of $U'$, and equality holds in both for $T=T_l$. Thus, $T_l$ is a solution to the optimal stopping problem under consideration. Moreover, any stopping time $T$ such that $\mathbb{P}(T>T_l)>0$ will yield a strict inequality between the second and third terms, making $T_l$ the largest solution to this optimal stopping problem. \end{proof} The solution below for the ratchet constraint in a complete semimartingale market is inspired by the argument of \cite{riedel} for a complete market with pricing kernel generated by a L\'evy process. It turns out that the notion of the envelope process allows for such a generalization of Riedel's result. A similar argument, but again only for the L\'evy market model, can be found in \cite{Watson-Scott} in their use of the Bank--El Karoui Representation Theorem. 
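Before turning to this alternative solution, the following degenerate special case, whose ingredients ($Z\equiv 1$, $d\kappa_t=dt$, a deterministic utility) are chosen by us purely as a sanity check and do not come from the references above, may help the reader verify the envelope condition \eqref{eq:envelope2} by hand. Suppose $\hat{T}<\infty$, $d\kappa_t=dt$ on $[0,\hat{T})$, $U(\omega,t,x)=\mathtt{u}(x)$ is deterministic and time-independent, and $\mathcal{Z}=\{Z\}$ with $Z\equiv 1$. Then ${}^o\tilde{Z}_t=\tilde{Z}_t=\hat{T}-t$, and the plan $c^y$ with $c^y_0=0$ and $c^y_t\equiv\mathtt{i}(y):=(\mathtt{u}')^{-1}(y)$ for $t\in(0,\hat{T})$ satisfies $${}^o\left(\int_t^{\hat{T}}\mathtt{u}'(c^y_s)\,ds\right)=\mathtt{u}'(\mathtt{i}(y))\,(\hat{T}-t)=y\,(\hat{T}-t)=y\,{}^o\tilde{Z}_t,\qquad t\in[0,\hat{T}),$$ i.e., \eqref{eq:envelope2} holds with equality for all $t$, in particular at the single point of increase $t=0$. By the uniqueness part of Lemma \ref{lem:envelope}, this constant plan is exactly the process produced by the lemma: in a deterministic market, the ratcheting agent consumes at a constant rate.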
We denote $\mathbb{U}(c):=\mathbb{E}\left[\int_0^{\hat{T}} U(t,c_t)d\kappa_t\right]$, the expected utility functional for a consumption plan $c$, and make additional assumptions on the utility $U$: \begin{assume}\label{ass:utility-additional} $\mathbb{E}\left[\int_0^{\hat{T}}U^{-}(\omega,t,0)d\kappa_t\right]<\infty$ and $\mathbb{E}\left[\int_0^{\hat{T}}U'(\omega,t,x)d\kappa_t\right]<\infty$ for all $x>0$. \end{assume} \begin{prop}\label{prop:alternative-sol} Let $\kappa$ satisfy Assumption \ref{ass:clock}, $U$ satisfy Assumptions \ref{ass:utility} and \ref{ass:utility-additional}, assume $\mathcal{Z}=\{Z\}$ is a singleton, and let $y>0$, $q\geq 0$. Define the consumption plan $\hat{c}:=c^y\vee q\mathbbm{1}_{(0,\hat{T})}\in\mathcal{C}_\text{inc}$, where $c^y$ is given by Lemma \ref{lem:envelope}, and let $x:=\langle\hat{c},Z\rangle$. If $x<\infty$ and $\mathbb{U}(\hat{c})<\infty$ then $\hat{c}$ is the unique maximizer of $\mathbb{U}$ in $\mathcal{C}^1(x,q)$. \end{prop} \begin{proof} Let $c\in\mathcal{C}_{\text{inc}}$ be $x$-admissible and such that $c_{0+}\geq q$. By the discussion in the last paragraph of Subsection \ref{subsec:domains}, it is enough to show that $\mathbb{U}(\hat{c})\geq \mathbb{U}(c)$ for all such $c$. Denote $\hat{c}'=(\hat{c}-q)\vee 0\in\mathcal{C}_{\text{inc}}$ and $c'=(c-q)\vee 0\in\mathcal{C}_\text{inc}$. By concavity of $U$, $U(\hat{c}_t)-U(c_t)\geq U'(\hat{c}_t)(\hat{c}_t-c_t)$. Integrating this inequality with respect to $\mathbb{P}\times d\kappa$, we obtain \begin{equation}\label{r-positive} \mathbb{U}(\hat{c})-\mathbb{U}(c)\geq \mathbb{E}\left[\int_0^{\hat{T}} U'(\hat{c}_t)(\hat{c}_t-c_t)d\kappa_t\right]=\mathbb{E}\left[\int_0^{\hat{T}} \left(\int_s^{\hat{T}} U'(\hat{c}_t)d\kappa_t\right)(d\hat{c}_s-dc_s)\right]. \end{equation} Since $U(\hat{c}_t)-U(0)\geq U'(\hat{c}_t)\hat{c}_t\geq 0$ and $\mathbb{E}\left[\int_0^{\hat{T}}U^{-}(t,0)d\kappa_t\right]<\infty$, the process $U'(\hat{c}_t)\hat{c}_t$ is $\mathbb{P}\times d\kappa-$integrable. Hence, we can indeed interchange the subtraction and the expectation in \eqref{r-positive}, and the equality in \eqref{r-positive} is justified. Since $\hat{c}_t\geq c^y_t$ and $U'$ is decreasing, $\int_s^{\hat{T}} U'(c^y_t)d\kappa_t\geq \int_s^{\hat{T}} U'(\hat{c}_t)d\kappa_t$ for all $s\geq 0$. Moreover, for $s\geq 0$ such that $d\hat{c}_s'>0$, we have $\hat{c}_t=c^y_t$ for all $t>s$ and, since $d\kappa$ has no atoms, $\int_s^{\hat{T}} U'(c^y_t)d\kappa_t= \int_s^{\hat{T}} U'(\hat{c}_t)d\kappa_t$. As a consequence, \begin{equation}\label{eq:chain} \begin{aligned} \mathbb{E}\left[\int_0^{\hat{T}} \left(\int_s^{\hat{T}} U'(\hat{c}_t)d\kappa_t\right)(d\hat{c}_s-dc_s)\right]&=\mathbb{E}\left[\int_0^{\hat{T}} \left(\int_s^{\hat{T}} U'(\hat{c}_t)d\kappa_t\right)(d\hat{c}'_s-dc'_s)\right]\\ &\geq\mathbb{E}\left[\int_0^{\hat{T}} \left(\int_s^{\hat{T}} U'(c^y_t)d\kappa_t\right)(d\hat{c}'_s-dc'_s)\right]\\ &=\mathbb{E}\left[\int_0^{\hat{T}} {}^o\left(\int_s^{\hat{T}} U'(c^y_t)d\kappa_t\right)(d\hat{c}'_s-dc'_s)\right].
\end{aligned} \end{equation} Finally, combining \eqref{r-positive} and \eqref{eq:chain}, using the defining property \eqref{eq:envelope2} of $c^y$ and the fact that $d\hat{c}'_s>0$ implies $dc^y_s>0$, we obtain \begin{equation*} \begin{aligned} \mathbb{U}(\hat{c})-\mathbb{U}(c)&\geq \mathbb{E}\left[\int_0^{\hat{T}} {}^o\left(\int_s^{\hat{T}} U'(c^y_t)d\kappa_t\right)(d\hat{c}'_s-dc'_s)\right]\geq \mathbb{E}\left[\int_0^{\hat{T}} y{}^o\tilde{Z}_s(d\hat{c}'_s-dc'_s)\right]\\ &=\mathbb{E}\left[\int_0^{\hat{T}} y\tilde{Z}_s(d\hat{c}_s-dc_s)\right]=y\left(\langle\hat{c},Z\rangle-\langle c,Z\rangle\right)\geq 0. \end{aligned} \end{equation*} The uniqueness of the optimizer, up to $\mathbb{P}\times d\kappa$-nullsets of $\Omega\times[0,\hat{T})$, follows by strict concavity of $U$ and finiteness of $\mathbb{U}(\hat{c})$. \end{proof} \end{appendix} \textbf{Acknowledgments.} I thank my advisor Mihai S\^irbu for the many helpful discussions on topics related to this paper. The comments of the anonymous referees led to major improvements of the paper and are greatly appreciated. I would also like to thank Bahman Angoshtari for several insightful discussions on ratchet and drawdown constraints. \textbf{Funding.} The research was supported in part by the National Science Foundation under Grant DMS-1908903. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Part of this research was performed while the author was visiting the Institute for Mathematical and Statistical Innovation (IMSI), which is supported by the National Science Foundation (Grant DMS-1929348).
\section{Introduction} \subsection{Motivation}\ \medskip \noindent If one is given a metric space, it is natural to look at the metric geometry of the space at different scales. For instance, if in a non-discrete metric space we care only about small distances between points and properties that are stable under uniform embeddings, we will refer to this particular geometry of the space as {\it uniform geometry}. Similarly, the {\it coarse geometry} of an unbounded metric space will deal with large distances and properties stable under coarse embeddings. Finally, the {\it Lipschitz geometry} accounts for the metric geometry at all scales and the behavior of Lipschitz embeddings. The Lipschitz geometry of finite metric spaces is a central theme in theoretical computer science and the design of algorithms, and it has been thoroughly investigated since the 1990s by both functional analysts and computer scientists. The uniform and Lipschitz geometry of Banach spaces has been studied in some form or another since the beginning of the 20th century, and this line of research culminated in the publication of the authoritative book of Benyamini and Lindenstrauss \cite{BenyaminiLindenstrauss2000} in 2000. Introduced by Gromov in \cite{Gromov1993}, the coarse geometry of finitely generated groups turns out to be a key concept in noncommutative geometry in connection with the Baum--Connes and Novikov conjectures, as demonstrated by Yu \cite{Yu2000}. It is of great interest to understand which metric spaces (mostly finitely generated groups) can be coarsely embedded into some ``nice'' Banach spaces (e.g.\ Hilbert spaces). Despite the great deal of work that has been done to understand the coarse geometry of the domain spaces, namely groups, we have quite a narrow picture of the coarse geometry of the target spaces, even if they are taken amongst classical Banach spaces. The main motivation of the authors is to initiate a systematic study of the coarse geometry of {\it general metric spaces}. This is clearly an ambitious task, and in this paper we attack it from a specific angle which seems to us a natural place to start. \subsection{Notation and terminology}\ \medskip \noindent Let $(\mathcal M_{1},d_{1})$ and $(\mathcal M_{2},d_{2})$ be two unbounded metric spaces. In nonlinear theory we are interested in knowing whether or not we can find a copy of $\mathcal M_{1}$ inside $\mathcal M_{2}$ in one of the following senses: \medskip $\bullet$ $\mathcal M_{1}$ {\it Lipschitz embeds} into $\mathcal M_{2}$, and we write it $\mathcal M_{1}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \mathcal M_{2}$ for short, if there exists a one-to-one map $f:\mathcal M_{1}\to \mathcal M_{2}$, which for some constants $A, B>0$ satisfies \begin{equation}\label{Lipembedding} A^{-1} d_{1}(x,y)\le d_{2}(f(x),f(y))\le B d_{1}(x,y), \qquad \forall x,y\in \mathcal M_{1}.
\end{equation} Equivalently, a one-to-one map $f:\mathcal M_{1}\to \mathcal M_{2}$ is a Lipschitz embedding if $\omega_f(t)\le B t$ and $\rho_f(t)\ge {t}/{A}$ for some constants $A, B>0$ and all $t>0$, where $$\rho_f(t)=\inf\{d_{2}(f(x),f(y)) : d_{1}(x,y)\geq t\},$$ and $$\omega_f(t)=\sup\{d_{2}(f(x),f(y)) : d_{1}(x,y)\leq t\}.$$ \medskip $\bullet$ $\mathcal M_{1}$ {\it uniformly embeds} into $\mathcal M_{2}$, which is denoted by $\mathcal M_{1}\ensuremath{\underset{unif}{\lhook\joinrel\relbar\joinrel\rightarrow}} \mathcal M_{2}$, if there is an injective, uniformly continuous map $f:\mathcal M_{1}\to \mathcal M_{2}$ whose inverse $f^{-1}:f(\mathcal M_{1})\subset \mathcal M_{2}\to \mathcal M_{1}$ is also uniformly continuous. This amounts to requiring of the injective map $f$ that $\rho_f(t)>0$ for all $t>0$ and $\lim_{t\to 0}\omega_f(t)=0$. A uniform embedding imposes a uniformity on how the map and its inverse change distances locally. Sometimes, e.g., when the domain space and its image under $f$ are metrically convex, this implies uniformity on the changes that the embedding makes to distant points. However, although a Banach space is metrically convex, its image under a uniform embedding may not be so. \medskip $\bullet$ $\mathcal M_{1}$ {\it coarsely embeds} into $\mathcal M_{2}$, denoted by $\mathcal M_{1}\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}} \mathcal M_{2}$, if there is $f:\mathcal M_{1}\to \mathcal M_{2}$ so that $\omega_f(t)<\infty$ for all $t>0$ and $\displaystyle \lim_{t\to \infty}\rho_f(t)=\infty$. It is perhaps worth mentioning that a coarse embedding need not be either injective or continuous, hence this kind of embedding overlooks the structure of a metric space in the neighborhood of a point. Indeed, in contrast to uniform embeddings, coarse embeddings only capture the structure of the space at large scales. \medskip Aside from these, we will write $\mathcal M_{1}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} \mathcal M_{2}$ if there exists an {\it isometric embedding} from $\mathcal M_{1}$ into $\mathcal M_{2}$, and use $\mathcal M_{1}\ensuremath{\underset{1+\epsilon}{\lhook\joinrel\relbar\joinrel\rightarrow}} \mathcal M_{2}$ to denote {\it almost isometric embeddability} of $\mathcal M_{1}$ into $\mathcal M_{2}$, i.e., if for every $\epsilon>0$ there exists a Lipschitz embedding $f_{\epsilon}:\mathcal M_{1}\to \mathcal M_{2}$ so that the distortion of the embedding, measured by the product of the optimal constants $A$, $B$ in \eqref{Lipembedding}, is smaller than $1+\epsilon$. Two metric spaces $\mathcal M_{1}$, $\mathcal M_{2}$ will be referred to as {\it Lipschitz isomorphic} (also, Lipschitz equivalent), {\it uniformly homeomorphic}, or {\it coarsely homeomorphic}, if we can find a bijective embedding $f:\mathcal M_{1}\to \mathcal M_{2}$ of the corresponding kind. For short we will write $\mathcal M_{1} \ensuremath{\underset{Lip}{\sim}} \mathcal M_{2}$, $\mathcal M_{1} \ensuremath{\underset{unif}{\sim}} \mathcal M_{2}$, and $\mathcal M_{1} \ensuremath{\underset{coarse}{\sim}} \mathcal M_{2}$, respectively. \medskip Our main subject of study in this article will be the nonlinear embeddings that occur between any two members of the classical sequence spaces $\ell_p$ and the function spaces $L_p=L_{p}[0,1]$, for the whole range of $0<p<\infty$, when equipped either with their usual distances or their snowflakings.
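As a toy illustration of these moduli (not needed in the sequel), the following Python sketch estimates $\rho_f$ and $\omega_f$ on a finite sample of points; the map chosen, the identity from $(\mathbb R,\vert\cdot\vert)$ into $(\mathbb R,\vert\cdot\vert^{1/2})$, and the sampling grid are our own illustrative choices. One observes $\omega_f(t)\approx\sqrt{t}$, which is compatible with a uniform and a coarse embedding but not with a Lipschitz one.
\begin{verbatim}
import numpy as np

# Empirical compression/expansion moduli of a map f on a finite sample:
#   rho(t)   = inf  d2(f(x), f(y))  over pairs with d1(x, y) >= t,
#   omega(t) = sup  d2(f(x), f(y))  over pairs with d1(x, y) <= t.
def moduli(pts, d1, d2, f, t):
    pairs = [(d1(x, y), d2(f(x), f(y)))
             for i, x in enumerate(pts) for y in pts[i + 1:]]
    rho = min((b for a, b in pairs if a >= t), default=float("inf"))
    omega = max((b for a, b in pairs if a <= t), default=0.0)
    return rho, omega

d = lambda x, y: abs(x - y)               # the real line
d_snow = lambda x, y: abs(x - y) ** 0.5   # its 1/2-snowflaking

pts = list(np.linspace(0.0, 100.0, 401))
for t in (0.01, 1.0, 100.0):
    print(t, moduli(pts, d, d_snow, lambda x: x, t))
# omega(t) grows like sqrt(t): fine for a uniform or coarse embedding,
# but the Lipschitz requirement omega(t) <= B*t fails for small t.
\end{verbatim}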
Recall that for $0<p\le 1$, the standard distances in $\ell_{p}$ and $L_{p}$ are respectively given by $$d_{\ell_p}(x,y)=\sum_{n=1}^\infty \vert x_n-y_n\vert^p,$$ and $$d_{L_p}(f,g)= \int_0^1 \vert f(t)-g(t)\vert^p dt,$$ whereas for values of $p$ on the other side of the spectrum, $p\ge 1$, the distance in the spaces is induced by their norms $\Vert x\Vert_{\ell_p}=(\sum_{n=1}^\infty \vert x_n\vert^p)^{1/p},$ and $\Vert f\Vert_{L_p} = (\int_0^1 \vert f(t)\vert^p dt)^{1/p}.$ Note that for $p=1$ we have $(L_1,\Vert\cdot\Vert_{L_1})=(L_1,d_{L_1})$ and $(\ell_1,\Vert\cdot\Vert_{\ell_1})=(\ell_1,d_{\ell_1})$, hence for those two metric spaces we will drop the distances and simply write, without confusion, $L_1$ or $\ell_1$. For simplicity, we will unify the notation for the four metrics introduced above, e.g., when we write $(L_p,d_p)$ for $0<p<\infty$ it shall be understood that we endow $L_p$ with the metric $d_{L_p}(f,g)$ if $0<p\le 1$ or with the metric $\Vert f-g\Vert_{L_p}$ if $p\ge 1$. \subsection{Organization of the paper}\ \medskip We have divided this article into five more sections, each of which is rendered as self-contained as possible. The flow of the exposition has a deliberate survey flavor that, we hope, will smooth the way for understanding the topics we cover and will help the reader put the new results in the right place. \smallskip When a Lipschitz embedding between two metric spaces is ruled out, it is only natural to determine whether there exist weaker embeddings, e.g.\ coarse, uniform, or quasisymmetric embeddings, to name a few. For $0<s<1$, the $\mathbf s$-{\it snowflaked} version of a metric space $(\mathcal M,d)$ is the metric space $(\mathcal M,d^s)$, sometimes denoted $\mathcal M^{(s)}$. It is clear that $(\mathcal M,d)$ and $(\mathcal M,d^s)$ are coarsely and uniformly equivalent, and it is easy to show that they are quasisymmetrically equivalent. However, they are rarely Lipschitz equivalent. Another important remark is that a Lipschitz embedding of some snowflaking of a metric space induces an embedding of the original metric space which is simultaneously a coarse, uniform, and quasisymmetric embedding. In Section~\ref{Section2} we introduce three new classes of metric spaces in order to study the following general question: \begin{question}\label{snow} Let $0<s_1\neq s_2\le1$. Under what conditions is it possible to Lipschitz embed $(\mathcal M,d^{s_1})$ into $(\mathcal M,d^{s_2})$? \end{question} To tackle this problem we will analyze how some metric invariants such as the Hausdorff dimension, Enflo type, or roundness may thwart the embeddability of snowflakings of general metric spaces. \smallskip In the third section, the theme is embedding snowflakings of the real line into the metric spaces $\ell_{p}$ for $0<p\le 1$. Although our approach is infinite-dimensional in nature, it is inspired by the work of Assouad \cite{Assouad1983} on Lipschitz embeddability of $(\mathbb R, |\cdot|^{p})$ for $0<p\le 1$ into the finite-dimensional space $\ell_{q}^{N}$ for $q\ge 1$ equipped with the standard distance. \smallskip In Section~\ref{Section5} we complete the picture of the Lipschitz embedding theory between the spaces $L_{p}$ and $\ell_{p}$ for $0<p<\infty$ and connect it with the unique Lipschitz subspace structure problems.
\smallskip In the fifth section we discuss the embeddability of snowflakings of the spaces $L_p$ and $\ell_{p}$ for $0<p<\infty$, and show that Mendel and Naor's isometric embeddings between $L_{p}$-spaces obtained in \cite{MendelNaor2004} have an $\ell_{p}$-space counterpart in the Lipschitz category. In light of these results we are able to give partial answers to Question~\ref{snow}. \smallskip We close with a brief section devoted to bridging our work with a few known results on coarse and uniform embeddings between the metric spaces $L_{p}$ and $\ell_{p}$ for $0<p<\infty$, where we also state a few open problems that seem the natural road towards further research in this direction. \section{Embedding snowflakings of metric spaces}\label{Section2} \noindent Let us introduce the following three new classes of metric spaces: \medskip $\bullet$ ${\mathsf S_{\mathsf D}}= \Big\{(\mathcal M,d) : (\mathcal M,d^s)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d)\;\text{for all}\, 0<s<1\ \Big\}$, i.e., the collection of metric spaces $\mathcal M$ that contain a subset Lipschitz equivalent to $\mathcal M^{(s)}$ for all $0<s<1$. \smallskip $\bullet$ $\ensuremath{\mathsf{NS}_{\mathsf{D}}}=\Big\{(\mathcal M,d) : (\mathcal M,d^s)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d)\; \text{for any}\, 0<s<1\Big\}$, that is, the class of metric spaces that cannot be the target space of a Lipschitz embedding from any $\mathcal M^{(s)}$. \smallskip $\bullet$ $\ensuremath{\mathsf{NS}_{\mathsf{T}}}=\Big\{(\mathcal M,d) : (\mathcal M,d)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d^s)\; \textrm{for any}\; 0<s<1\Big\}$, formed by those metric spaces that are not Lipschitz equivalent to any subset of their snowflakings. \smallskip To get started, using the metric invariance of Enflo-type and of Hausdorff dimension, we will be able to exhibit a few first members of the classes $\ensuremath{\mathsf{S}_{\mathsf{D}}},\ensuremath{\mathsf{NS}_{\mathsf{D}}},\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ and give rather general restrictions regarding the existence of a Lipschitz embedding between any two metric spaces. \smallskip First we describe briefly the notion of Enflo type, which was introduced by Enflo in \cite{Enflo1978}, although the term seems to have been coined by Pisier in \cite{Pisier1986b}. An $n$-dimensional {\it cube} in an arbitrary metric space $\mathcal{M}$ is a collection of $2^{n}$ not necessarily distinct points $C=\{x_u\}_{u\in\{-1,1\}^n}$ in $\mathcal{M}$, where each point $x_u$ in $C$ is indexed by a distinct vector $u\in\{-1,1\}^n$. If $C$ is an $n$-dimensional cube in $\mathcal{M}$, a {\it diagonal} in $C$ is an unordered pair of the form $\{x_u,x_{-u}\}$, i.e., a pair of vertices in $C$ whose indexing vectors differ in all their coordinates. The set of all the diagonals in $C$ will be denoted by $D(C)$. An {\it edge} in $C$ is an unordered pair $\{x_u,x_v\}$ such that $u$ and $v$ differ in only one coordinate. The set of all edges of $C$ is denoted by $E(C)$.
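Before stating the Enflo-type inequality, the following small Python sketch may help fix the combinatorics: it enumerates the $2^{n-1}$ diagonals and the $n2^{n-1}$ edges of an $n$-dimensional cube and evaluates the two sums that appear in the inequality below (the Euclidean ambient space and the standard cube are our own illustrative choices).
\begin{verbatim}
import itertools
import numpy as np

# Enumerate the diagonals D(C) and edges E(C) of an n-dimensional cube
# C = {x_u : u in {-1,1}^n} and compute the two sums of the Enflo-type
# inequality, here in Euclidean space (an illustrative ambient choice).
def enflo_sums(x, p):
    n = len(next(iter(x)))                       # keys are sign vectors u
    dist = lambda u, v: np.linalg.norm(x[u] - x[v])
    signs = list(itertools.product((-1, 1), repeat=n))
    diag = sum(dist(u, tuple(-c for c in u)) ** p
               for u in signs if u[0] == 1)      # each diagonal counted once
    edge = sum(dist(u, u[:i] + (-u[i],) + u[i + 1:]) ** p
               for u in signs for i in range(n) if u[i] == 1)
    return diag, edge

n = 4
cube = {u: np.array(u, float) for u in itertools.product((-1, 1), repeat=n)}
for p in (1.0, 2.0, 3.0):
    print(p, enflo_sums(cube, p))
# For p = 2 the two sums coincide on this cube, in line with the fact
# that a Hilbert space has Enflo-type 2 with constant K = 1.
\end{verbatim}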
With this terminology, a metric space $(\mathcal M, d)$ is said to have {\it Enflo-type} ${\mathbf p}\ge 1$ if there exists a constant $K>0$ such that for every $n\in\ensuremath{\mathbb{N}}$ and for every $n$-dimensional cube $C\subset\mathcal{M}$ the sum of the lengths of the $2^{n-1}$ diagonals in $C$ is related to the sum of the lengths of the $n2^{n-1}$ edges in $C$ by the formula \begin{equation}\label{etype} \displaystyle\sum_{\{a,b\}\in D(C)}d(a,b)^p\le K^p\displaystyle\sum_{\{a,b\}\in E(C)}d(a,b)^p.\end{equation} Every metric space has Enflo-type $1$ with constant $K=1$ by the triangle inequality, so we will put $$\textrm{E-type}(\mathcal M)=\sup\{p : \mathcal M\;\; \textrm{has Enflo-type $p$}\}.$$ A metric space $\mathcal{M}$ is said to have {\it finite (supremum) Enflo-type} if $\textrm{E-type}(\mathcal M)<\infty$. \smallskip The first assertion of our next Lemma makes Enflo-type a powerful tool to study Lipschitz embeddability between general metric spaces. In turn, the second assertion will be extremely relevant when dealing with snowflakings. \begin{lemma}\label{enflolemma} Let $(\mathcal M_{1},d_{1})$, $(\mathcal M_{2},d_{2})$ be metric spaces. \begin{enumerate} \item[(i)] If $(\mathcal M_{1},d_{1})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}(\mathcal M_{2},d_{2})$ then $\textrm{E-type}(\mathcal M_{1})\ge \textrm{E-type}(\mathcal M_{2})$. \item[(ii)] Let $0<s<1$. Then $\displaystyle \textrm{E-type}(\mathcal M_{1}^{(s)})=\frac{\textrm{E-type}(\mathcal M_{1})}{s}.$\end{enumerate} \end{lemma} \begin{proof} (i) Assume $(\mathcal M_{2},d_{2})$ satisfies \eqref{etype} for some $p\ge 1$. Let $f\colon \mathcal M_{1}\to \mathcal M_{2}$ and $A,B>0$ such that $${A^{-1}}d_{1}(x,y)\le d_{2}(f(x),f(y))\le Bd_{1}(x,y),\quad \forall x,y\in \mathcal M_{1}.$$ Let $n\in\ensuremath{\mathbb{N}}$ and let $C$ be an $n$-dimensional cube in $\mathcal{M}_1$. Then $f(C)=\{f(x_u): u\in\{-1,1\}^n\}$ is an $n$-dimensional cube in $\mathcal{M}_2$ and \begin{align*} \displaystyle\sum_{\{a,b\}\in D(f(C))}d_2(a,b)^p & \le K^p\displaystyle\sum_{\{a,b\}\in E(f(C))}d_2(a,b)^p \\ &= K^p\displaystyle\sum_{\{a,b\}\in E(C)}d_2(f(a),f(b))^p\\ & \le K^pB^p\displaystyle\sum_{\{a,b\}\in E(C)}d_1(a,b)^p. \end{align*} But \begin{align*} \displaystyle\sum_{\{a,b\}\in D(f(C))}d_2(a,b)^p & = \displaystyle\sum_{\{a,b\}\in D(C)}d_2(f(a),f(b))^p \ge A^{-p}\displaystyle\sum_{\{a,b\}\in D(C)}d_1(a,b)^p, \end{align*} and so \begin{align*} \displaystyle\sum_{\{a,b\}\in D(C)}d_1(a,b)^p & \le A^pK^pB^p \displaystyle\sum_{\{a,b\}\in E(C)}d_1(a,b)^p. \end{align*} That is, $\mathcal M_1$ has Enflo-type $p$ with constant $AKB$, and (i) follows by taking suprema over such $p$. (ii) follows readily from the definition of Enflo-type. \end{proof} In the next straightforward lemma we see that the Hausdorff dimension verifies an analogue of Lemma~\ref{enflolemma} (ii) and the reverse inequality with respect to Lipschitz embeddings. We will not attempt to include here the subtle (and long!) definition of Hausdorff dimension and instead we prefer to refer to \cite{BuragoBuragoIvanov2001} and \cite{Heinonen2001} for an extended discussion of this notion. \begin{lemma} Let $(\mathcal M_1,d_1)$ and $(\mathcal M_2,d_2)$ be metric spaces, and $0<s<1$. \begin{enumerate} \item[(i)] If $(\mathcal M_1,d_1)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}(\mathcal M_2,d_2)$, then $\textrm{dim}_{\mathcal{H}}(\mathcal M_1)\le \textrm{dim}_{\mathcal{H}}(\mathcal M_2)$. \item[(ii)] $\displaystyle \textrm{dim}_{\mathcal{H}}(\mathcal M_1^{(s)})=\displaystyle\frac{\textrm{dim}_{\mathcal{H}}(\mathcal M_1)}{s}$.
\end{enumerate} \end{lemma} The last two lemmas put together lead to the following proposition. \begin{proposition}\label{restriction} Let $(\mathcal M_{1},d_{1})$, $(\mathcal M_{2},d_{2})$ be metric spaces, and let $0<s_1,s_2\le1$. If $(\mathcal M_{1},d_{1}^{s_1})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M_{2},d_{2}^{s_2})$, then $$\frac{\textrm{E-type}(\mathcal M_{1})}{s_1}\ge \frac{\textrm{E-type}(\mathcal M_{2})}{s_2},$$ and $$\frac{{\text dim}_{\mathcal{H}}(\mathcal M_{1})}{s_1}\le \frac{{\text dim}_{\mathcal{H}}(\mathcal M_{2})}{s_2}.$$ \end{proposition} We are now able to partially answer Question \ref{snow} when $0<s_2<1$ and $s_1=1$, and when $s_2=1$ and $0<s_1<1$. \begin{corollary}\label{Enflo} Let $0<s<1$ and let $(\mathcal M,d)$ be a metric space with finite Enflo-type. Then $(\mathcal M,d)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d^s)$. \end{corollary} \begin{proof} Put $p=\textrm{E-type}(\mathcal M)$. We have $1\le p<\infty$, hence if $(\mathcal M,d)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d^s)$ it follows from Proposition \ref{restriction} that $p/s\le p$, which contradicts $0<s<1$. \end{proof} It follows from Corollary \ref{Enflo} that the class $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ is a rather large class of metric spaces. Indeed, any metrically convex space (i.e., every pair of points has metric midpoints), or more generally any metric space containing a line segment, has (supremum) Enflo-type at most $2$, hence finite. In particular, CAT(0)-spaces, for instance metric trees, are in $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$. In Section~\ref{Section4} (\textsection\ref{Section4.2}) we will show that all the $L_p$-spaces and $\ell_{p}$-spaces for $0<p<\infty$ belong to $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$. We can prove the following corollary along the same lines. \begin{corollary}\label{Hausdorff} Let $0<s<1$ and let $(\mathcal M,d)$ be a metric space with finite positive Hausdorff dimension. Then $(\mathcal M,d^s)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d)$. \end{corollary} \begin{remark} Note that any metric space with finite positive Hausdorff dimension, in particular any nontrivial finite-dimensional Banach space (the $n$-dimensional Euclidean space has Hausdorff dimension $n$), is in the class $\ensuremath{\mathsf{NS}_{\mathsf{D}}}$. \end{remark} In the case of almost isometric embeddings we have a more precise description of those metric spaces not embeddable into any of their snowflakings. For this purpose we use the concept of roundness introduced by Enflo in \cite{Enflo1970}, where it is applied to prove that an $L_{p}(\mu)$-space is not uniformly equivalent to $L_{q}(\nu)$ if $1\le p \not=q\le 2$. A metric space $(\mathcal M, d)$ is said to have {\it roundness} $ p\ge 1$ if the following generalization of the triangle law of the distance is fulfilled for every four points $a_{1},a_{2},a_{3},a_{4}\in \mathcal M$, \begin{equation}\label{roundness} d(a_{1},a_{3})^p+d(a_{2},a_{4})^p\le d(a_{1},a_{2})^p+d(a_{2},a_{3})^p+d(a_{3},a_{4})^p+d(a_{4},a_{1})^{p}. \end{equation} Put $$r(\mathcal M)=\sup\{p :\mathcal M\; \textrm{has roundness $p$}\}.$$ Notice that by a classical tensorization argument, roundness $p$ implies Enflo type $p$ with constant $1$. \begin{lemma}\label{miau} Let $(\mathcal M_{1}, d_{1})$, $(\mathcal M_{2}, d_{2})$ be two metric spaces.
\begin{enumerate} \item[(i)] If $\mathcal M_{1}\ensuremath{\underset{1+\epsilon}{\lhook\joinrel\relbar\joinrel\rightarrow}} \mathcal M_{2}$, then $r(\mathcal M_{1})\ge r(\mathcal M_{2})$. \item[(ii)] For $0<s<1$, $\displaystyle r(\mathcal M_{1}^{(s)})=\displaystyle\frac{r(\mathcal M_{1})}{s}.$ \end{enumerate} \end{lemma} \begin{proof} We only prove (i) and leave (ii) as an exercise. Let $a_{1},a_{2},a_{3},a_{4}\in \mathcal M_{1}$ and pick $\epsilon>0$. There exists $f=f_\epsilon\colon \mathcal M_{1}\to \mathcal M_{2}$ and constants $A,B>0$ such that $A^{-1}d_{1}(x,y)\le d_{2}(f(x),f(y))\le B d_{1}(x,y)$ for all $x,y\in \mathcal M_1$, with $AB\le 1+\epsilon$. Assume that $\mathcal M_2$ has roundness $p$, then \begin{align*} &d_{1}(a_{1},a_{3})^p+d_{1}(a_{2},a_{4})^p \le A^p [d_{2}(f(a_{1}),f(a_{3}))^p+d_{2}(f(a_{2}),f(a_{4}))^p] \\ & \le A^p[d_{2}(f(a_{1}),f(a_{2}))^p+d_{2}(f(a_{2}),f(a_{3}))^p +d_{2}(f(a_{3}),f(a_{4}))^p+d_{2}(f(a_{4}),f(a_{1}))^p]\\ &\le A^pB^p[d_{1}(a_{1},a_{2})^p+d_{1}(a_{2},a_{3})^p +d_{1}(a_{3},a_{4})^p+d_{1}(a_{4},a_{1})^p]\\ & \le (1+\epsilon)^p[d_{1}(a_{1},a_{2})^p+d_{1}(a_{2},a_{3})^p +d_{1}(a_{3},a_{4})^p+d_{1}(a_{4},a_{1})^p]. \end{align*} Letting $\epsilon\to 0$ we get the roundness $p$ inequality for $\mathcal M_1$. \end{proof} \begin{proposition} Let $0<s<1$ and let $(\mathcal M,d)$ be a metric space with finite roundness. Then $(\mathcal M,d)\ensuremath{\underset{1+\epsilon}{\lhook\joinrel\relbar\joinrel\not\relbar\joinrel\rightarrow}} (\mathcal M,d^s)$. \end{proposition} \begin{proof} Let $p=r(\mathcal M)$. We have $1\le p<\infty$, hence if $(\mathcal M,d)\ensuremath{\underset{1+\epsilon}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d^s)$ it follows from Lemma~\ref{miau} that $p\ge{p}/{s}$, which contradicts $0<s<1$. \end{proof} Combining this with the characterization of metric spaces with infinite roundness from \cite{Westonandall} gives: \begin{corollary} If $(\mathcal M,d)$ is not an ultrametric space then for every $0<s<1$, $(\mathcal M,d)\ensuremath{\underset{1+\epsilon}{\lhook\joinrel\relbar\joinrel\not\relbar\joinrel\rightarrow}} (\mathcal M,d^s)$. \end{corollary} \section{Embedding snowflakings of the real line}\label{Section3} \noindent Here we consider embeddings of power transforms of the Euclidean real line, namely $\ensuremath{(\mathbb{R},d_p)}$ for $0<p\le 1$ where $d_p$ is the $p$-th power of the absolute value. First, notice that $\ensuremath{(\mathbb{R},d_p)}$ isometrically embeds into $\ensuremath{(\ell_p,d_p)}$. If $0<p,q\le 1$ the identity map between $\ensuremath{(\mathbb{R},d_p)}$ and $\ensuremath{(\mathbb{R},d_q)}$ is simultaneously a coarse and uniform equivalence and therefore $\ensuremath{(\mathbb{R},d_p)}$ is uniformly and coarsely embeddable into $\ensuremath{(\ell_q,d_q)}$. Now, if $0<p<q\le 1$ this can be expressed in terms of snowflakings. Indeed, the identity mapping from $\ensuremath{(\mathbb{R},d_q)}$ into $\ensuremath{(\mathbb{R},d_p)}$ satisfies $d_{p}(x,y)=d_{q}(x,y)^{p/q}$, where $p/q\le 1$. It implies that the $p/q$-snowflaked version of $(\ensuremath{\mathbb{R}},d_{q})$ embeds isometrically into $\ensuremath{(\mathbb{R},d_p)}$, therefore into $\ensuremath{(\ell_p,d_p)}$. If $0<p\neq q\le 1$, it is easy to see that $\ensuremath{(\mathbb{R},d_p)}$ does not admit a Lipschitz copy of $\ensuremath{(\mathbb{R},d_q)}$ using either the Enflo-type or the Hausdorff dimension argument. Indeed, the Euclidean real line has (supremum) Enflo-type 2 and Hausdorff dimension 1. Actually we can prove a much stronger result.
\begin{proposition}\label{nolabel} If $0<p<q\le 1$, there is no nonconstant Lipschitz map from $\ensuremath{(\mathbb{R},d_q)}$ into $\ensuremath{(\ell_p,d_p)}$ and, consequently, there is no Lipschitz embedding from $\ensuremath{(\mathbb{R},d_q)}$ into $\ensuremath{(\ell_p,d_p)}$. \end{proposition} \begin{proof} Let $0<p<q\le 1$ and suppose there is $\varphi: \mathbb R\to \ell_{p}$ that satisfies a Lipschitz condition $$ \Vert \varphi(s)-\varphi(t)\Vert_{p}^{p}\le K \vert s-t\vert^{q}, \qquad \forall s,t \in \mathbb R, $$ for some $K>0$. Without loss of generality we assume $\varphi(0)=0$. We compose $\varphi$ with $x^{\ast}\in \ell_{p}^{\ast}$ of norm $\Vert x^{\ast}\Vert \le 1$ to obtain $$ |x^{\ast}\circ\varphi(s)-x^{\ast}\circ\varphi(t)|\le \Vert x^{\ast}\Vert \Vert \varphi(s)-\varphi(t)\Vert_{p} \le K^{1/p} \vert s-t\vert^{q/p}, \qquad \forall s,t\in \mathbb R. $$ Since $(\mathbb R, |\cdot|)$ is a metrically convex space and $q/p>1$ we deduce that $x^{\ast}\circ\varphi(t)=0$ for all $t\in \mathbb R$. But $\ell_{p}^{\ast}$ separates the points of $\ell_{p}$, which forces $\varphi$ to be $0$. \end{proof} \begin{remark} The Enflo-type argument is inconclusive in this situation since $\ensuremath{(\mathbb{R},d_q)}$ has Enflo-type $2/q$ and $\ensuremath{(\ell_p,d_p)}$ has Enflo-type $1$. \end{remark} For doubling metric spaces (in particular the real line) Assouad \cite{Assouad1983} proved the following deep result: \begin{theorem}[Assouad] Let $0<s<1$ and $(\mathcal{M},d)$ be a doubling space. Then $(\mathcal{M},d^s)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ell_2^N=(\ensuremath{\mathbb{R}}^N,\Vert\cdot\Vert_2)$ for some $N\ge 2$. \end{theorem} The proof of Assouad's theorem is subtle, and special attention is given to estimating the dimension of the target space (see \cite{Kahane1981}, \cite{Talagrand1992}, \cite{NaorNeiman}, \cite{AbrahamBartalNeiman2011} for refinements and discussion of this result). In this paper we deal mainly with infinite-dimensional target spaces and we do not need the full power of Assouad's embedding. The proof of the next theorem makes use of a simplification of the Assouad-type embedding since we allow infinitely many maps, hence infinitely many coordinates. \begin{theorem}\label{ellpsnow} For $0<p<q$ there exist real-valued maps $(\psi_{j,k})_{(j,k)\in \ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}}}$ and positive constants $A_{p,q}, B_{p,q}$ such that \begin{equation}\label{longproof} A_{p,q}\vert x-y\vert^p\le \sum_{k\in\ensuremath{\mathbb{Z}}}\sum_{j\in\ensuremath{\mathbb{Z}}}\vert \psi_{j,k}(x)-\psi_{j,k}(y)\vert^q\le B_{p,q}\vert x-y\vert^p, \end{equation} for all $x,y\in \ensuremath{\mathbb{R}}.$ \end{theorem} We will present in Section~\ref{Section4} a nice application of Theorem \ref{ellpsnow} to the embeddability of snowflakings of the spaces $\ell_{p}$, which in certain cases can also be derived from Assouad's theorem (see Remark \ref{remark}). We wrote the proof of Theorem \ref{ellpsnow} with a ``wavelet flavor'': even though we do not know whether there are further connections with approximation theory, it seems that the maps obtained, or some modification of them, might be used to derive other interesting embeddings. \begin{proof} Let $\psi:\ensuremath{\mathbb{R}} \to \ensuremath{\mathbb{R}}_{+}$ be given by $\displaystyle \psi(t) = \begin{cases} t+2 &\text{if}\; -2\le t\le 0,\\ 2-t &\text{if}\; 0\le t\le 2,\\ 0 & \text{otherwise}. \end{cases}$ \noindent Let $\beta\in\ensuremath{\mathbb{R}}$ be a parameter to be chosen later.
Define the functions $$\psi_{j,k}(t)=2^{k\beta-1}\psi(2^k t-j),\quad \text{for } j,k\in\ensuremath{\mathbb{Z}}.$$ That is, $$ \psi_{j,k}(t) =\begin{cases} 2^{k\beta}2^{k-1}\left(t-\displaystyle\frac{j-2}{2^k}\right)&\text{if}\;\; \displaystyle\frac{j-2}{2^k}\le t<\frac{j}{2^k},\\ -2^{k\beta}2^{k-1}\left(t-\displaystyle\frac{j+2}{2^k}\right)&\text{if}\;\; \displaystyle\frac{j}{2^k}\le t<\frac{j+2}{2^k},\\ 0 & \text{otherwise}. \end{cases}$$ Each $\psi_{j,k}$ is supported on the interval $\Big[\frac{j-2}{2^k},\frac{j+2}{2^k}\Big]$, and fulfills these two estimates: \begin{equation}\label{FACT1} \psi_{j,k}(t)\le 2^{k\beta},\, \qquad \forall\, t\in \ensuremath{\mathbb{R}}, \end{equation} and \begin{equation}\label{FACT2} \vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert\le 2^{k-1}2^{k\beta}\vert s-t\vert,\qquad \forall\, s,t\in \ensuremath{\mathbb{R}}. \end{equation} Inequality \eqref{FACT1} follows directly from the definition of the maps $(\psi_{j,k})_{j,k}$. We remark that $\psi$ is $1$-Lipschitz, and therefore $\psi_{j,k}(t)=2^{k\beta-1} \psi(2^k t -j)$ is $2^{k\beta-1} 2^k $-Lipschitz. This proves \eqref{FACT2}. \medskip Now we prove two types of upper-bound estimates using \eqref{FACT1} and \eqref{FACT2}. Given $s<t$ pick $K\in\ensuremath{\mathbb{Z}}$ such that $2^{-(K+1)}\le \vert s-t\vert\le 2^{-K}$. \medskip - Upper bound estimate of first type (based on \eqref{FACT1}): \begin{align*} \vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q & \le (2^{k\beta}+2^{k\beta})^q\\ & \le 2^q\cdot2^{kq\beta}\\ & \le 2^q\cdot2^{kq\beta}\cdot2^{(K+1)p}\cdot2^{-(K+1)p}\\ & \le 2^q\cdot2^{kq\beta}\cdot2^{(K+1)p}\vert s-t\vert^p\\ & \le 2^{p+q}\cdot2^{kq\beta}\cdot2^{Kp}\vert s-t\vert^p. \end{align*} - Upper bound estimate of second type (based on \eqref{FACT2}): \begin{align*} \vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q & \le 2^{q(k-1)}\cdot2^{qk\beta}\vert s-t\vert^q\\ & \le 2^{q(k-1)}\cdot2^{qk\beta}\vert s-t\vert^{q-p}\vert s-t\vert^p\\ & \le 2^{q(k-1)}\cdot2^{qk\beta}2^{-K(q-p)}\vert s-t\vert^p\\ & \le 2^{-q}\cdot2^{kq(1+\beta)}\cdot2^{K(p-q)}\vert s-t\vert^p. \end{align*} Armed with the above inequalities we are ready to substantiate \eqref{longproof}. \medskip \noindent {\sc The lower-bound estimate in \eqref{longproof}}. Note that the support of the function $\psi_{j,K+1}$ is of size exactly $2^{-(K-1)}$. Therefore we can find $j$ such that $s,t\in \text{supp}(\psi_{j,K+1})$. We pick the largest such $j$, which we denote by $J$. By our choice of $J$, we force $s$ to belong to $\left[\frac{J-2}{2^{K+1}},\frac{J-1}{2^{K+1}}\right]$ and $t$ to lie in $[\frac{J-1}{2^{K+1}},\frac{J+1}{2^{K+1}}]$. We consider two cases: \medskip - If $\displaystyle t\in\Big[\frac{J-1}{2^{K+1}},\frac{J}{2^{K+1}}\Big]$, \begin{align*} \vert \psi_{J,K+1}(s)-\psi_{J,K+1}(t)\vert^q & = 2^{qK}\cdot2^{q(K+1)\beta}\vert s-t\vert^q\\ & = 2^{qK}\cdot2^{q(K+1)\beta}\vert s-t\vert^{q-p}\vert s-t\vert^p\\ & \ge 2^{qK}\cdot2^{q(K+1)\beta}\cdot2^{-(K+1)(q-p)}\vert s-t\vert^p\\ & \ge 2^{q(\beta-1)+p}\cdot2^{K(q\beta+p)}\vert s-t\vert^p.
\end{align*} - If $\displaystyle t\in\Big[\frac{J}{2^{K+1}},\frac{J+1}{2^{K+1}}\Big]$, then $s\notin \text{supp}(\psi_{J+1,K+1})$, hence \begin{align*} \vert \psi_{J+1,K+1}(s)-\psi_{J+1,K+1}(t)\vert^q & = \vert \psi_{J+1,K+1}(t)\vert^q\\ & = 2^{qK}\cdot2^{q(K+1)\beta}\left\vert t-\frac{J-1}{2^{K+1}}\right\vert^q\\ & \ge 2^{qK}\cdot2^{q(K+1)\beta}\cdot2^{-(K+1)q}\\ & \ge 2^{qK}\cdot2^{q(K+1)\beta}\left(\frac{\vert s-t\vert}{2}\right)^q\\ & \ge 2^{-q}\cdot2^{qK}\cdot2^{q(K+1)\beta}\vert s-t\vert^{q-p}\vert s-t\vert^p\\ & \ge 2^{-q}\cdot2^{qK}\cdot2^{q(K+1)\beta}\cdot2^{-(K+1)(q-p)}\vert s-t\vert^p\\ & \ge 2^{q(\beta-2)+p}\cdot2^{K(q\beta+p)}\vert s-t\vert^p. \end{align*} It becomes pretty clear that if we want a Lipschitz lower estimate we are forced to choose $\beta=-\frac{p}{q}$, which we do from now on. We remark that $A_{p,q}=2^{-2q}$. \smallskip \noindent {\sc The upper-bound estimate in \eqref{longproof}}. Taking $\beta=-p/q$ the upper estimates of first and second type become $$\vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q \le 2^{p+q}\cdot2^{p(K-k)}\vert s-t\vert^p,$$ and $$\vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q \le 2^{-q}\cdot2^{(q-p)(k-K)}\vert s-t\vert^p.$$ Notice that for fixed $k$, there are at most 8 values of $j$ for which $s$ or $t$ belongs to the support of $\psi_{j,k}$. All the other contributions in the sum over $j$ are zero. Hence, \begin{align*} \sum_{k\in\ensuremath{\mathbb{Z}}}\sum_{j\in\ensuremath{\mathbb{Z}}}\vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q &= \sum_{k\le K}\sum_{j\in\ensuremath{\mathbb{Z}}}\vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q+\sum_{k>K}\sum_{j\in\ensuremath{\mathbb{Z}}}\vert \psi_{j,k}(s)-\psi_{j,k}(t)\vert^q\\ & \le 8\left( \sum_{k\le K}2^{-q}\cdot2^{(q-p)(k-K)}+ \sum_{k>K}2^{p+q}\cdot2^{p(K-k)}\right)\vert s-t\vert^p\\ & \le 8\left(2^{-q}\sum_{N\ge 0}2^{-(q-p)N}+2^{p+q}\sum_{N>0}2^{-Np}\right)\vert s-t\vert^p\\ & \le 8\left(\displaystyle 2^{-q}\frac{2^{q-p}}{2^{q-p}-1}+2^{p+q}\displaystyle\frac{1}{2^{p}-1}\right)\vert s-t\vert^p. \end{align*} We thus get $B_{p,q}=\displaystyle 8\left(\frac{1}{2^q-2^p}+\displaystyle\frac{2^{p+q}}{2^p-1}\right)$, and the proof is complete. \end{proof} \begin{remark} An immediate consequence of Theorem \ref{ellpsnow} is that for $0<p<q\le1$, $\ensuremath{(\mathbb{R},d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$. The embedding $x\mapsto (\psi_{\varphi(n)}(x)-\psi_{\varphi(n)}(0))_{n\in\ensuremath{\mathbb{N}}}$, where $\varphi$ is any enumeration of $\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}}$, does the job. \medskip \end{remark} \section{Lipschitz embeddings between $L_p$-spaces and $\ell_{p}$-spaces for $0<p<\infty$}\label{Section5} \noindent A Lipschitz function from a separable Banach space $X$ to a Banach space $Y$ with the Radon-Nikod\'{y}m property, (RNP) for short, is G\^ateaux differentiable at least at one point. This important theorem, proved independently by Aronszajn, Christensen, and Mankiewicz \cite{Aronszajn1976, Christensen1973, Mankiewicz1973}, in combination with the simple fact that if a Lipschitz embedding between Banach spaces is differentiable at some point then its derivative at this point is a linear into isomorphism, proves the impossibility of certain Lipschitz embeddings. Thus, for Banach spaces the linear theory (cf.\ \cite{AlbiacKalton2006}) yields:\newline \noindent (i) If $1\le p,q<\infty$ with $p\not=q$, then $\ell_{p}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} \ell_{q}$.
\noindent (ii) Unless $p=q=2$, $L_{q}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} \ell_{p}$. \noindent (iii) If $1\le p,q<\infty$ then $L_{p}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_{q}$ unless $1\le q\le p\le 2$ or $p=q$. \noindent Note that the case of $L_{1}$ as a potential target space of a Lipschitz embedding is special because $L_{1}$ does not have (RNP). Nevertheless, it still holds that $L_{1}$ does not contain any subset Lipschitz equivalent to $L_{p}$ for $p>2$ because of a cotype obstruction: if $L_{p}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_{1}$ then $L_{p}$ would be isomorphic to a subspace of $L_{1}^{\ast\ast}$, which has cotype 2, whereas the best cotype of $L_{p}$ for $p>2$ is only $p$. \medskip For future reference and the convenience of the reader we summarize some facts that follow immediately from the above and that will be used repeatedly from now on. \begin{proposition}\label{differentiationargument} Let $\mathcal M$ be a metric space. \begin{enumerate} \item[(i)] If $\mathcal M$ contains a Lipschitz copy of $L_1$ then $\mathcal M$ cannot be Lipschitz embedded into a Banach space with (RNP). \item[(ii)] If $\mathcal M$ contains a Lipschitz copy of a Banach space $X$ and $\mathcal M$ admits a Lipschitz embedding into a Banach space $Y$ with (RNP), then $X$ embeds isomorphically into $Y$. \end{enumerate} \end{proposition} Throughout this section it will also be helpful to be aware of the following recent embedding results, which can be found in \cite{Albiac2008}. \begin{theorem}\label{embeddings} Let $0<p<q\le 1$. Then: \begin{enumerate} \item[(i)] $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ensuremath{(\ell_q,d_q)}$. \item[(ii)] $\ensuremath{(\ell_q,d_q)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} \ensuremath{(\ell_p,d_p)}$. \item[(iii)] $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ensuremath{(L_p,d_p)}$.\\ \item[(iv)] $\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} (L_1,d_1)=L_1$ and $L_1\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$. \item[(v)] $\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_q,d_q)}$ when $0<p,q\le 1$. \end{enumerate} \end{theorem} Here and subsequently, if $X$ and $Y$ are Banach spaces, $X\ensuremath{\underset{\approx}{\lhook\joinrel\relbar\joinrel\rightarrow}} Y$ will denote the existence of a (linear) isomorphic embedding from $X$ into $Y$, and $X\ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} Y$ will stand for a linear isometric embedding. We will write $X\equiv Y$ if there exists a linear onto isometry between them, and $X\approx Y$ if they are linearly isomorphic. \subsection{Lipschitz nonembeddability of $L_p$ into $\ell_q$}\ \medskip \noindent The first question we tackle is the embeddability of $L_p$ into $\ell_q$ for $0<p,q<\infty$. The outcome is crystal clear since no embedding is possible. \begin{proposition} Let $0<p,q<\infty$, $p,q\neq 2$.
Then, endowed with the metrics described above, $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ell_q.$ \end{proposition} \begin{proof} In view of the introductory background of the section, there remain two possible scenarios to examine. (a) The mixed regime, i.e., $0<p<1\le q$. From Theorem~\ref{embeddings}, $L_1\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$, hence if $\ensuremath{(L_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_q$ then we would have $L_1\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_q$, which is in contradiction with Proposition~\ref{differentiationargument} (i). If $0<q<1\le p$ it suffices to know that $\ensuremath{(\ell_q,d_q)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_1$, hence if $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$ then $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_1$, a contradiction. \smallskip (b) The $\mathsf F$-space regime, i.e., $0<p,q<1$. Since $L_1\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$, if $\ensuremath{(L_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$ then $L_1\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_1$, which is impossible as we know from linear theory. \end{proof} \subsection{Lipschitz embeddability of $\ell_p$ into $\ell_q$}\ \medskip \noindent The following proposition tells us that a Lipschitz embedding of $\ell_p$ into $\ell_q$ is only possible in the $\mathsf F$-space regime, under some restriction on the values of $p$ and $q$. \begin{proposition}\label{Proplpintolq}\ \begin{enumerate} \item[(i)] If $1\le p,q<\infty$ and $p\neq q$, then $\ell_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ell_q$.\\ \item[(ii)] If $0<p<1<q$, then $\ell_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_p,d_p)}$ and $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ell_q$.\\ \item[(iii)] If $0<p<q\le 1$, then $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$ but $\ensuremath{(\ell_q,d_q)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_p,d_p)}$. \end{enumerate} \end{proposition} \begin{proof} Only (ii) requires a proof. Note that $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_1$, hence if $\ell_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_p,d_p)}$ then we would have $\ell_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_1$, a contradiction. For the other statement, if $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_q$ then passing to ultraproducts we have that $(\ensuremath{(\ell_p,d_p)})_{\mathcal{U}}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q(\nu)$ for some measure $\nu$. But it follows from Naor's master's thesis \cite{Naorthesis} that $(\ensuremath{(\ell_p,d_p)})_{\mathcal{U}}$ contains a Lipschitz copy of $L_1$, therefore so does $L_q(\nu)$. Contradiction since $q>1$.
\end{proof} \subsection{Lipschitz embeddability of $\ell_p$ into $L_q$}\label{Section4.3}\ \medskip \noindent This is the state of affairs. \begin{proposition}\label{FlorentProposition5.4}\ \begin{enumerate} \item[(i)] Suppose $1\le p,q<\infty$. \begin{itemize} \item[(a)] If $2<p\neq q$, then $\ell_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$; \item[(b)] If $1\le p<q\le 2$, then $\ell_q\ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_p$ but $\ell_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$; \item[(c)] If $1\le p<2<q$, then $\ell_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$ and $\ell_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_p$. \end{itemize} \item[(ii)] Suppose $0<p<1\le q$. Then, $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$ and \begin{itemize} \item[(a)] if $0<p<1\le q\le 2$, $\ell_q\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$; \item[(b)] if $0<p<1$ and $q>2$, $\ell_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$. \end{itemize} \item[(iii)] If $0<p,q\le 1$, then $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_q,d_q)}$. \end{enumerate} \end{proposition} \begin{proof} (i) follows from the facts relating the Lipschitz and the linear structure of $L_{p}$ for $p\ge 1$ recalled at the beginning of this section. For $0<p<1$, the fact that $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$ when $q\ge 1$ can be proved using an ultraproduct argument as in the proof of Proposition~\ref{Proplpintolq} (ii). To see (ii) (a), it suffices to recall that $\ell_q\ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q \ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1$ for the range $1\le q\le 2$ and use the embedding $L_1\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$ of Theorem~\ref{embeddings} (iv). For (ii) (b) notice that $\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_2$, hence if $\ell_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$ then $\ell_q\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_2$. But this is impossible by a metric cotype argument \cite{MendelNaor2008} or by the result of Johnson and Randrianarivony in \cite{JohnsonRandrianarivony2006}. Part (iii) follows from the diagram $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_q,d_q)}.$ \end{proof} \subsection{Lipschitz embeddability of $L_p$ into $L_q$}\ \medskip \noindent The situation is exactly the same as in \textsection\ref{Section4.3}. The proof of the following proposition goes along the same lines as the proof of Proposition~\ref{FlorentProposition5.4}, and so we omit the details.
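Before stating it, let us note that the stable-variable constructions underlying the classical isometric embeddings $\ell_q\ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q\ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1$ used in the proof above lend themselves to a quick numerical sanity check: if $(\theta_i)$ are i.i.d.\ symmetric $q$-stable random variables, then $\sum_i a_i\theta_i$ has the same distribution as $(\sum_i\vert a_i\vert^q)^{1/q}\theta_1$, so for $p<q$ its $L_p$-norm over the underlying probability space is a constant multiple of $\Vert a\Vert_{\ell_q}$. A crude Monte Carlo sketch in Python (the exponents, vectors, and sample size are our own choices; SciPy's stable sampler is assumed):
\begin{verbatim}
import numpy as np
from scipy.stats import levy_stable

# Monte Carlo check that E[|sum_i a_i theta_i|^p]^(1/p) is proportional
# to the l_q norm of a, for i.i.d. symmetric q-stable theta_i and p < q.
rng = np.random.default_rng(1)
p, q, N = 1.0, 1.5, 200_000

def lp_norm_of_stable_sum(a):
    theta = levy_stable.rvs(alpha=q, beta=0.0, size=(N, len(a)),
                            random_state=rng)
    s = theta @ np.asarray(a)          # N samples of  sum_i a_i * theta_i
    return (np.abs(s) ** p).mean() ** (1.0 / p)

for a in ([1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 2.0, 3.0]):
    lq = sum(abs(t) ** q for t in a) ** (1.0 / q)
    print(a, lp_norm_of_stable_sum(a) / lq)   # ratios agree up to MC error
\end{verbatim}
The printed ratios should all be close to the constant $(\mathbb E\vert\theta_1\vert^p)^{1/p}$ of the embedding, up to Monte Carlo error.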
\begin{proposition}\label{usedtobeProp5.5}\ \\ \begin{enumerate} \item[(i)] Suppose $1\le p,q<\infty$. Then, \begin{itemize} \item[(a)] if $2<p\neq q$, $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$; \item[(b)] if $1\le p<q\le 2$, $L_q\ensuremath{\underset{\cong}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_p$ but $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$; \item[(c)] if $1\le p<2<q$, $L_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_p$ and $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$. \end{itemize} \item[(ii)] Suppose $0<p<1<q$. Then, $\ensuremath{(L_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q$ and \begin{itemize} \item if $0<p<1<q\le 2$, $L_q\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$; \item if $0<p<1$ and $q>2$, $L_q\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$. \end{itemize} \item[(iii)] If $0<p,q\le 1$, then $\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_q,d_q)}$. \end{enumerate} \end{proposition} \subsection{Application to unique Lipschitz subspace structure problems.}\ \medskip \noindent Let $X$ and $Y$ be $\mathsf F$-spaces. The space $X$ is said to have a {\it unique Lipschitz $\mathsf F$-subspace structure} if the following equivalence holds: \begin{equation}\label{LipsubsProb}Y\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} X \Longleftrightarrow Y\ensuremath{\underset{\approx}{\lhook\joinrel\relbar\joinrel\rightarrow}} X.\end{equation} \smallskip If we only allow $Y$ to be a Banach space we will refer to this problem as the {\it unique Lipschitz subspace structure problem for $\mathsf F$-spaces}. The {\it unique Lipschitz subspace structure problem for Banach spaces} is classical and has been thoroughly investigated. We refer to \cite{BenyaminiLindenstrauss2000} for a detailed account. A linear isomorphic embedding between Banach spaces is automatically a Lipschitz embedding, hence, in that case, one implication in \eqref{LipsubsProb} is trivial. This is no longer true for $\mathsf F$-spaces and both implications have to be independently checked. In the category of Banach spaces the unique Lipschitz subspace structure problem is still widely open. In fact, there are separable Banach spaces (such as $c_{0}$) with a unique Lipschitz structure that fail to have a unique Lipschitz subspace structure. As already mentioned in the introduction of Section~\ref{Section5}, the Radon-Nikod\'{y}m property was clearly identified as a sufficient condition for a separable Banach space to have a unique Lipschitz subspace structure. It is doubtful that a similar strategy could be carried out in the $\mathsf F$-space framework. We will now place the results of this section in the context just described and we will analyze and compare the various relevant versions of the unique Lipschitz subspace structure for the spaces $L_p$ or $\ell_p$ for the entire range $0<p<\infty$. We first discuss the Lipschitz $\mathsf F$-subspace structure problem for the classical function and sequence Banach spaces. \medskip \noindent $\bullet$ The reflexive spaces $L_p$ and $\ell_p$ for $p>1$ have a unique Lipschitz subspace structure.
The theory is consistent if we consider the classical $\mathsf F$-subspaces $L_q$ and $\ell_q$ when $0<q<1$. Indeed, it follows from the results of this section (respectively, \cite{KaltonPeckRoberts1984}) that none of the spaces $L_p$ or $\ell_p$ for $p>1$ contains a Lipschitz (respectively, isomorphic) copy of $L_q$ or $\ell_q$ when $0<q<1$. \medskip \noindent $\bullet$ As a separable dual, the space $\ell_1$ has a unique Lipschitz subspace structure. However, the fact that $\ensuremath{(\ell_q,d_q)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ell_1$ but $\ell_q\ensuremath{\underset{\approx}{\lhook\joinrel\relbar\joinrel\not\rightarrow}} \ell_1$ for all $q<1$ shows that $\ell_1$ does not have a unique Lipschitz $\mathsf F$-subspace structure. \medskip \noindent $\bullet$ The unique Lipschitz subspace structure problem for $L_1$ is still open. As it happens, the space $L_1$ does not have a unique Lipschitz $\mathsf F$-subspace structure since $\ensuremath{(L_q,d_q)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1$ for all $0<q<1$ but $L_q\ensuremath{\underset{\approx}{\lhook\joinrel\relbar\joinrel\not\rightarrow}} L_1$. \medskip The case of $\ell_{1}$ shows that there can be Banach spaces with (RNP) that do not have a unique Lipschitz $\mathsf F$-subspace structure, but so far the following question is still unanswered. \begin{question} Does a reflexive Banach space have a unique Lipschitz $\mathsf F$-subspace structure? \end{question} Let us turn to the Lipschitz subspace structure problem for the classical function and sequence $\mathsf F$-spaces. \medskip \noindent $\bullet$ The situation for the spaces $\ell_q$ when $q\in(0,1)$ is simple. They contain neither a Lipschitz nor an isomorphic copy of any Banach space. This follows from \cite[Corollary 2.8]{KaltonPeckRoberts1984} for the isomorphic embedding and from Proposition~\ref{nolabel} for the Lipschitz one. Thus they trivially have a unique Lipschitz subspace structure. \medskip \noindent $\bullet$ The case of $L_q$ for $q\in(0,1)$ is more subtle. Recall that every Banach space has cotype $\ge 2$. If $Y$ is a Banach space with cotype {\it strictly} greater than $2$, then there is neither a Lipschitz nor an isomorphic embedding of $Y$ into $L_q$. Indeed, assume there is an isomorphic embedding $T:Y\to L_{q}$. Since the topologies induced in $L_{q}$ by the distance and the quasi-norm are uniformly equivalent, $T$ is a linear map satisfying an inequality of the form $$\Vert x-y\Vert_Y\lesssim \Vert Tx-Ty\Vert_q\lesssim \Vert x-y\Vert_ Y.$$ After raising to the $q$th power we obtain that $(Y,\Vert\cdot\Vert_Y^q)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ensuremath{(L_q,d_q)}$. This gives a coarse embedding of $(Y,\Vert\cdot\Vert_Y)$ into $\ensuremath{(L_q,d_q)}$. But, as already seen in this section, $\ensuremath{(L_q,d_q)}$ embeds isometrically into $L_1$, which in turn coarsely embeds into a Hilbert space. We thus reach a contradiction since a Banach space with cotype {\it strictly} greater than $2$ cannot coarsely embed into a Hilbert space. We can skip the first step in the proof to get the same conclusion if $T$ is just a Lipschitz embedding. Now if $Y$ is one of the spaces $L_p$ for some $p\in[1,2]$, it has cotype $2$. Using $p$-stable random variables we can construct an explicit isomorphic embedding of $Y$ into $L_q$.
Composing the isometric embedding of $Y$ into $L_1$ (provided again by $p$-stable random variables) with the isometric embedding of $L_1$ into $\ensuremath{(L_q,d_q)}$ of the present section, we get an isometric embedding of $Y$ into $\ensuremath{(L_q,d_q)}$. Unfortunately, we do not know how to handle the other Banach spaces $Y$ that have cotype $2$ (e.g., the dual of the James space). This suggests: \begin{question} Do the classical $\mathsf F$-spaces $L_q$ for $0<q<1$ have a unique Lipschitz subspace structure? \end{question} Finally, let us mention that none of the classical $\mathsf F$-spaces $L_q$ or $\ell_q$ with $0<q<1$ has a unique Lipschitz $\mathsf F$-subspace structure. Indeed, each of them contains a Lipschitz copy of an $\mathsf F$-space of the same family that does not embed isomorphically. \section{Embedding snowflakings of $L_p$ and $\ell_{p}$ for $0<p<\infty$}\label{Section4} \noindent We have been able to track down essentially two results of this kind in the literature. The first one is due to Bretagnolle et al., who in \cite{Bretagnolleetal1965} proved that $(L_p,\Vert\cdot\Vert^{p/q})$ is isometric to a subset of $L_q$ for $1\le p<q\le 2$. Later, Mendel and Naor \cite{MendelNaor2004} generalized this result. Indeed, they observed that, since the complex space $L_{q}$ embeds isometrically as a real space into the real space $L_{q}$, in order to embed $L_{p}$ in $L_{q}$ for $p<q$ it suffices to embed $L_{p}(\mathbb R)$ into the complex space $L_{q}(\mathbb R\times \mathbb R;\mathbb C)$. This is accomplished via the map \[ T: L_p(\ensuremath{\mathbb{R}}) \to L_q(\ensuremath{\mathbb{R}}\times\ensuremath{\mathbb{R}};\mathbb C),\qquad f \mapsto T(f)(s,t)=\displaystyle c\frac{1-e^{itf(s)}}{\vert t\vert^{(p+1)/q}}, \] where $$ c^{-q}=2^{q/2}\left(\int_{-\infty}^{\infty}\frac{(1-\cos(u))^{q/2}}{\vert u\vert^{p+1}}\,du\right). $$ Therefore for $L_{p}$-spaces we have: \begin{theorem}[Mendel and Naor \cite{MendelNaor2004}]\label{MendelNaooor}\ \begin{enumerate} \item[(i)] If $0< p<q\le 1$, then $\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ensuremath{(L_q,d_q)}$. \item[(ii)] If $0< p<1<q$, then $(L_p,d_p^{1/q})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$. \item[(iii)] If $1\le p\le q$, then $(L_p,\Vert\cdot\Vert_p^{p/q})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$. \end{enumerate} \end{theorem} \subsection{Embedding snowflakings of $\ell_p$-spaces}\ \medskip \noindent The aim of this section is to utilize techniques from Section~\ref{Section3} to give a simple explicit Lipschitz embedding between the $\ell_{p}$-spaces and some of their snowflakings. Unfortunately, unlike the case of $L_p$-spaces, we do not know whether there is an isometric version of the following proposition: \begin{proposition}\label{sellp}\ \begin{enumerate} \item[(i)] If $0<p<q\le1$, then $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$. \item[(ii)] If $0< p\le1<q$, then $(\ell_p,d_p^{1/q})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ell_q$. \item[(iii)] If $1\le p\le q$, then $(\ell_p,\Vert\cdot\Vert_p^{p/q})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ell_q$.
\end{enumerate} \end{proposition} \begin{proof} Let $$\ell_p(\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}})=\Big\{ (z_{i,j,k})_{(i,j,k)\in\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}}}\in \ensuremath{\mathbb{R}}^{\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}}}:\ \sum_{i\in\ensuremath{\mathbb{N}}}\sum_{j\in\ensuremath{\mathbb{Z}}}\sum_{k\in\ensuremath{\mathbb{Z}}}\vert z_{i,j,k}\vert^p<\infty\Big\},$$ and define the mapping \[ f:\ell_p(\ensuremath{\mathbb{N}},\ensuremath{\mathbb{R}}) \to \ell_q(\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}),\quad (x_i)_{i\in\ensuremath{\mathbb{N}}} \mapsto (\psi_{j,k}(x_i)-\psi_{j,k}(0))_{(i,j,k)\in\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}}}. \] Applying Theorem \ref{ellpsnow} coordinate-wise yields that for all $(x_i),(y_i)\in \ell_p(\ensuremath{\mathbb{N}},\ensuremath{\mathbb{R}})$, \begin{equation}\label{eqpag11.1} A_{p,q}\sum_{i\in\ensuremath{\mathbb{N}}}\vert x_i-y_i\vert^p\le \sum_{i\in\ensuremath{\mathbb{N}}}\sum_{j\in\ensuremath{\mathbb{Z}}}\sum_{k\in\ensuremath{\mathbb{Z}}}\vert \psi_{j,k}(x_i)-\psi_{j,k}(y_i)\vert^q\le B_{p,q}\sum_{i\in\ensuremath{\mathbb{N}}}\vert x_i-y_i\vert^p.\end{equation} Then: \noindent$\circ$ If $0<p<q\le 1$, inequality \eqref{eqpag11.1} tells us exactly that $$\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}(\ell_q(\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}}),d_q)\equiv\ensuremath{(\ell_q,d_q)}.$$ \noindent $\circ$ If $0<p\le 1<q$, raising to the power $1/q$ we get $$(\ell_p,d_p^{1/q})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_q(\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}})\equiv\ell_q.$$ \noindent $\circ$ If $1\le p<q$, raising to the power $1/q$ and writing $1/q=\frac{1}{p}\cdot \frac{p}{q}$ we obtain $$(\ell_p,\Vert\cdot\Vert_p^{p/q})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_q(\ensuremath{\mathbb{N}}\times\ensuremath{\mathbb{Z}}\times\ensuremath{\mathbb{Z}},\ensuremath{\mathbb{R}})\equiv\ell_q.$$ \end{proof} \begin{remark}\label{remark}\ \\ \noindent $\bullet$ One can apply Assouad's theorem almost as a black box to prove Proposition \ref{sellp} in the case where $0<p<q$ and $q\ge 1$ as follows: the real line $(\ensuremath{\mathbb{R}}, \vert\cdot\vert)$ is a doubling space, and therefore for every $0<s<1$, in particular for $s=p/q$, the space $(\ensuremath{\mathbb{R}}, \vert\cdot\vert^s)$ embeds bi-Lipschitzly into $\ell_q^N$ for some dimension $N$ and any $1\le q<\infty$, by an appeal to the equivalence of finite-dimensional norms. Then taking $\ell_q$-sums, where the $\ell_q$-sum of a sequence of metric linear spaces is defined in the obvious way, we get the desired embeddings. \medskip \noindent $\bullet$ The distortion of the embeddings in Proposition \ref{sellp} blows up as $q$ approaches $p$, and we do not know whether one can find embeddings without this a priori unexpected behavior. \medskip \noindent$\bullet$ For the sake of completeness we include another well-known isometric embedding of the $1/2$-snowflaked version of $L_1$ into a Hilbert space.
\[ T:L_1(\ensuremath{\mathbb{R}},\ensuremath{\mathbb{R}}) \to L_2(\ensuremath{\mathbb{R}}\times\ensuremath{\mathbb{R}},\ensuremath{\mathbb{R}}),\quad f \mapsto T(f)(s,t)= \begin{cases} 1 &\textrm{ if }\; 0\le t\le f(s),\\ -1&\textrm{ if }\; f(s)<t<0,\\ 0&\textrm{ otherwise}. \end{cases} \] \end{remark} \subsection{On the membership of $L_p$ and $\ell_{p}$ in the classes $\ensuremath{\mathsf{S}_{\mathsf{D}}}$ and $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$}\label{Section4.2}\ \medskip \noindent It follows easily from Aharoni \cite{Aharoni1974} that any snowflaked version of $c_0$ Lipschitz embeds into $c_0$ itself. Similarly, it can be easily derived from the work of Schoenberg \cite{Schoenberg1938} that any snowflaking of a Hilbert space isometrically embeds into a Hilbert space. In other words, $c_0$ and $\ell_2$ are in $\ensuremath{\mathsf{S}_{\mathsf{D}}}$. In the next proposition we show that certain $L_p(\mu)$-spaces belong to $\ensuremath{\mathsf{S}_{\mathsf{D}}}$ as well. \begin{proposition}\ \begin{enumerate} \item[(i)] Let $1\le p< 2$ and ${p}/{2}\le s< 1$. Then $(L_p,\Vert\cdot\Vert^{s})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_p$. \item[(ii)] $(L_1,\Vert\cdot\Vert^{s})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1$, for all $0<s<1$. \item[(iii)] For $0<p<1$, $(L_p,d_p^{s})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(L_p,d_p)}$, for all $0<s<1$. \end{enumerate} \end{proposition} \begin{proof} (i) Fix $1\le p< 2$, let $p< q\le 2$ and put $s={p}/{q}$. Then, $$(L_p,\Vert\cdot\Vert^{s})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_p.$$ We proved (ii) already for ${1}/{2}\le s<1$ in (i). For $0< s<{1}/{2}$ we proceed as follows. Let $0<\lambda<1$. Then $N(x,y)=\Vert x-y\Vert_1^\lambda$ is a negative definite kernel. Hence, by Schoenberg's theorem there is $T\colon L_1\to L_2$ such that $\Vert Tx-Ty\Vert_2^2=\Vert x-y\Vert_1^\lambda$ for all $x,y\in L_{1}$. Therefore $(L_1,\Vert\cdot\Vert^{\frac{\lambda}{2}})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_2\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1$, for all $0<\lambda<1$. (iii) Since $\ensuremath{(L_p,d_p)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ L_1$ we have $(L_p,d_p^s)\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1^{(s)}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_1\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} \ensuremath{(L_p,d_p)}$. \end{proof} Now we will investigate which spaces $L_{p}$ and $\ell_{p}$ belong to the class $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ defined in Section~\ref{Section2}. In other words, we want to know whether a given space $L_{p}$ or $\ell_{p}$ can be Lipschitz embedded into one of its snowflakings. It turns out that this question has a negative answer for the entire range $0<p<\infty$. Thanks to the deep results of Naor and Schechtman \cite{NaorSchechtman2002}, UMD Banach spaces are in $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$.
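\medskip \noindent Before recalling the definition of UMD spaces, let us include, for the reader's convenience, the verification of the isometric embedding of $L_1^{(1/2)}$ into a Hilbert space displayed in the remark above (the computation is standard and we only sketch it): for fixed $s$, the function $T(f)(s,\cdot)-T(g)(s,\cdot)$ takes values in $\{-1,0,1\}$, and its square is almost everywhere the indicator function of the interval between $f(s)$ and $g(s)$. Hence $$\Vert T(f)-T(g)\Vert_2^2=\int_{\ensuremath{\mathbb{R}}}\int_{\ensuremath{\mathbb{R}}}\vert T(f)(s,t)-T(g)(s,t)\vert^2\,dt\,ds=\int_{\ensuremath{\mathbb{R}}}\vert f(s)-g(s)\vert\,ds=\Vert f-g\Vert_1,$$ that is, $\Vert T(f)-T(g)\Vert_2=\Vert f-g\Vert_1^{1/2}$, so $T$ is indeed an isometric embedding of $(L_1,\Vert\cdot\Vert^{1/2})$ into $L_2$. \medskip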
Recall that a Banach space $X$ is called a {\it UMD $p$-space} ($1<p<\infty$) if there exists a constant $\gamma_{p,X}$ such that for every finite $L_{p}$-martingale difference sequence $(d_{j})_{j=1}^{n}$ with values in $X$ and every $\{-1,1\}$-valued sequence $(\varepsilon_{j})_{j=1}^{n}$ we have \[ \left(\mathbb E\Big\Vert\sum_{j=1}^{n}\varepsilon_{j}d_{j}\Big\Vert^{p} \right)^{1/p}\le \gamma_{p,X}\left( \mathbb E\Big\Vert\sum_{j=1}^{n}d_{j}\Big\Vert^{p}\right)^{1/p}. \] It can be shown using Burkholder's good $\lambda$-inequalities that if $X$ is a UMD $p$-space for some $1<p<\infty$, then it is a UMD $p$-space for all $1<p<\infty$; hence a space with this property will simply be called a UMD space. Basic examples of UMD spaces are all Hilbert spaces and the spaces $L_{p}(\mu)$ for $1<p<\infty$, where $\mu$ is a $\sigma$-finite measure. Amongst other things, Naor and Schechtman proved that for UMD Banach spaces the notions of Enflo-type and of Rademacher-type coincide. Using this powerful result, we can state the following theorem. \begin{theorem}\label{snowtarget} Let $0<p,q<\infty$ and $0<\alpha,\beta\le1$. Let us consider the function $$ \tau(p) =\begin{cases} 1 & \text{if}\;\; 0< p\le 1,\\ p & \text{if}\;\;1\le p\le 2,\\ 2 & \text{if}\;\; p\ge 2. \end{cases}$$ If $ {\tau(p)}/{\tau(q)} >{\alpha}/{\beta},$ then $\ell_p^{(\alpha)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_q^{(\beta)}$. In particular, the spaces $L_{p}$ and $\ell_{p}$ belong to the class $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ for all $0<p<\infty$. \end{theorem} \begin{proof} The Enflo-type of $L_p(\mu)$-spaces is given by the function $\tau$, and we apply Proposition \ref{restriction}. \end{proof} We next include an alternative way to show that every space $\ell_{p}$ and $L_p$ belongs to $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$. This proof does not require an Enflo-type argument but relies instead on Mendel-Naor's embedding, our analogue for $\ell_p$-spaces, and the Lipschitz structure of the spaces $\ell_{p}$ and $L_p$ that is described in Section~\ref{Section5}. \begin{proposition}\label{snowlptarget} Let $0<s<1$. Then, \begin{enumerate} \item[(i)] $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} L_p^{(s)}$ for any $0<p<\infty$. \item[(ii)] $\ell_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} \ell_p^{(s)}$ for any $0<p<\infty$. \end{enumerate} \end{proposition} \begin{proof} (i) Consider first the case $1\le p<\infty$. Given $0<s<1$, we can write $s={p}/{q}$ for some $p<q<\infty$. Using Theorem~\ref{MendelNaooor} (iii), $L_p^{({p}/{q})}\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$; hence if $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_p^{(s)}$, it would follow that $L_p\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$, an absurdity. Suppose now that $0<p<1$ and let $0<s<1$. We can write $s=1/q$ for some $1<q<\infty$. Another appeal to Theorem~\ref{MendelNaooor} (ii) gives $(L_p,d_p^{1/q})\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$; therefore if $(L_p,d_p)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_p^{(s)}$, then we would obtain $(L_p,d_p)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$, thus contradicting Proposition~\ref{usedtobeProp5.5} (ii).
To show (ii), we argue exactly as in (i), replacing Theorem~\ref{MendelNaooor} with Proposition~\ref{sellp} in combination with the linear structure of the $\ell_p$-spaces. \end{proof} It was pointed out in \cite{Albiac2008} that the quasi-Banach space $\ell_p$ (respectively, $L_p$) does not Lipschitz embed into the quasi-Banach space $\ell_q$ (respectively, $L_q$) for $0< p<q\le1$. Moreover, the author was able to prove a stronger result for $L_p$-spaces, namely, that every Lipschitz map from the quasi-Banach space $\ell_p$ (respectively, $L_p$) into the quasi-Banach space $\ell_q$ (respectively, $L_q$) is constant. The restatement of these results in the metric context has the following form. \begin{theorem}\label{Albiac} Suppose $0< p<q\le1$. Then, \begin{enumerate} \item[(i)] $\ensuremath{(\ell_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} (\ell_q, d_q^{p/q})$. \item[(ii)] $\ensuremath{(L_p,d_p)}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} (L_q, d_q^{p/q})$. \end{enumerate} \end{theorem} Notice that Theorem \ref{snowtarget} extends Theorem~\ref{Albiac}. We end this section with an alternative proof of the following strengthening of the first assertion in Theorem~\ref{Albiac}. \begin{proposition}\label{snowfrechettarget} Suppose $0< p,q\le1$. Then $\ell_{p}\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}} \ell_q^{(s)}$ for any $0<s<1$. \end{proposition} \begin{proof} If $0< q\le p\le 1$, there is in fact no nonconstant Lipschitz map. If $0< p\le q\le1$, suppose there is $f:\ell_p\to \ell_q$ so that for some constant $K>0$, $$K^{-1}\Vert x-y\Vert_p^{p/s}\le \Vert f(x)-f(y)\Vert_q^{q}\le K\Vert x-y\Vert_p^{p/s},\quad\forall x,y\in \ell_{p}.$$ Given any $\{x_i\}_{i=1}^n\subset\ell_p$, denote $z_k=x_1+\dots+x_k$ and put $z_0=0$. Then, \begin{align*} \Big\Vert\sum_{i=1}^n x_i\Big\Vert_p^{p/s}&=\Vert z_n-z_0\Vert_p^{p/s}\\ &\le K\Vert f(z_n)-f(z_0)\Vert_q^q\\ & \le K\sum_{k=1}^n\Vert f(z_k)-f(z_{k-1})\Vert_q^q\\ & \le K^{2}\sum_{k=1}^n\Vert z_k-z_{k-1}\Vert_p^ {p/s}\\ &=K^{2}\sum_{k=1}^n\Vert x_k\Vert_p^{p/s}. \end{align*} By the Aoki-Rolewicz theorem this would imply that the space $\ell_{p}$, seen as a quasi-Banach space, can be equipped with an equivalent ${p}/{s}$-quasi-norm, which is impossible because $p/s>p$ (see \cite[Lemma 2.7]{KaltonPeckRoberts1984}). \end{proof} \newpage{} \section{Concluding remarks and open questions} The following tables, to be read clockwise, summarize the Lipschitz embeddability status of the classical Lebesgue spaces endowed with their ad-hoc metrics or their snowflakings.
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|}\hline & & \\ $\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}$ & $\ell_q$ & $L_q$\\ & & \\ \hline & & \\ $\ell_p$ & if and only if $\left\{\begin{array}{l} 0<p<q\le 1\\ p=q\\ \end{array}\right.$ & if and only if $\left\{\begin{array}{l} 0<p<q\le 1\\ 0<q\le p\le 2\\ p=q\\ p=2, q>0\\ \end{array}\right.$ \\ & & \\ & & (isometric) \\ & & \\ \hline & & \\ $L_p$ & Never unless $p=q=2$ & if and only if $\left\{\begin{array}{l} 0<p,q\le 1\\ 0<q\le p\le 2\\ p=q\\ p=2, q>0\\ \end{array}\right.$ \\ & & \\ & (isometric) & (isometric) \\ & & \\ \hline & & \\ $\ell_p^{(s)}$& if $\left\{\begin{array}{l} 1\le p\le q \textrm{ and }s=p/q\\ 0< p\le 1\le q \textrm{ and }s=1/q\\ p=q=2 \textrm{ and }0<s\le1\\ \end{array}\right.$ & if $\left\{\begin{array}{l} 1\le p\le q \textrm{ and }s=p/q\\ 0< p\le 1\le q \textrm{ and }s=1/q\\ p=q=2 \textrm{ and }0<s\le1\\ \end{array}\right.$\\ & & \\ \hline & & \\ $\ensuremath{\underset{=}{\lhook\joinrel\relbar\joinrel\rightarrow}}$ & $\ell_q$ & $L_q$\\ & & \\ \hline & & \\ $L_p^{(s)}$& if $p=q=2 \textrm{ and }0<s\le1$ & if $\left\{\begin{array}{l} 1\le p\le q \textrm{ and }s=p/q\\ 0< p\le 1\le q \textrm{ and }s=1/q\\ 0<p=q\le 1 \textrm{ and }0<s\le1\\ p=q=2 \textrm{ and }0<s\le1\\ 1\le p=q<2 \textrm{ and }p/2\le s\le1\\ 0< q\le 1\le p\le 2 \textrm{ and }s=q\\ \end{array}\right.$\\ & & \\ \hline \end{tabular} \end{center} \end{table} \newpage{} \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|} \hline $\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\not\joinrel\relbar\joinrel\rightarrow}}$ & $\ell_q^{(t)}$ & $L_q^{(t)}$\\ \hline & & \\ $\ell_p^{(s)}$& $\begin{array}{l} \textrm{ if } 0<p,q\le 1 \textrm{ with }\\ \\ s=1 \textrm{ and } 0<t<1\\ \end{array}$ & if $\left\{\begin{array}{l} 0<p,q\le 1 \textrm{ and }s<t\\ p,q\ge 1 \textrm{ and }\displaystyle\frac{s}{t}<\displaystyle\frac{\min\{p,2\}}{\min\{q,2\}}\\ 0<p\le 1\le q \textrm{ and }\displaystyle\frac{s}{t}<\displaystyle\frac{1}{\min\{q,2\}}\\ 0<q\le 1\le p \textrm{ and }\displaystyle\frac{s}{t}<\min\{p,2\}\\ \end{array}\right.$\\ & & \\ \hline \end{tabular} \end{center} \end{table} \noindent Our work leaves a myriad of open questions and sets out what we think is a different, alternative approach to the classical study of the nonlinear geometry of Banach spaces. We will introduce several parameters and highlight several questions that we found particularly interesting. $\bullet$ If one wants to measure quantitatively how close one can possibly be to a Lipschitz embedding between two different $\ell_p$-spaces, it is natural to define the following parameter for $0<p\neq q<\infty$: $$s_{p\to q}=\sup\{s\le 1: (\ell_p, d_p^s)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\ell_q, d_q)\},$$ \noindent with the convention that $s_{p\to q}=0$ if the latter set is empty. In this paper we obtained tight estimates for $s_{p\to q}$ in several situations. However, when $1\le p\le 2< q$ we proved that $\frac{p}{q}\le s_{p\to q}\le\frac{p}{2}$, but we do not know whether we can close this gap. Similarly, for $2\le p< q$ we have $\frac{p}{q}\le s_{p\to q}\le 1$, and $1$ cannot be attained. Rather frustrating is the fact that we have no information on the parameter $s_{p\to q}$ when $0<q<p\le 1$. A weaker question would be: \begin{question}\label{q1} If $0< q<p\le 2$, do we have $(\ell_p, d_p^{s})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$ for some $0<s<1$?
\end{question} $\bullet$ Another natural parameter attached to a metric space $(\mathcal M,d)$ is $$\sigma_{\mathcal M}=\sup\{t\ge 0:(\mathcal M, d^{1-t})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M, d)\}.$$ $\sigma_{\mathcal M}$ is related to the classes $\ensuremath{\mathsf{S}_{\mathsf{D}}}$ and $\ensuremath{\mathsf{NS}_{\mathsf{D}}}$ introduced in Section~\ref{Section2}. For instance, $\ensuremath{\mathsf{S}_{\mathsf{D}}}$ is the class of metric spaces such that $\sigma_{\mathcal M}=1$. We obtained that $\frac{2-p}{2}\le\sigma_{L_p}\le 1$ if $1\le p<2$ and that $\sigma_{L_p}=1$ for $0<p\le 1$, but we have no estimate whatsoever for $\sigma_{L_p}$ when $p>2$ or for $\sigma_{\ell_p}$ when $0<p<\infty$. A related question is: \begin{question}\label{q2} If $0< p<q<1$, do we have $(\ell_p, d_p^{s})\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ensuremath{(\ell_q,d_q)}$ for some $0<s<1$? \end{question} We can answer Question \ref{q2} affirmatively if $\sigma_{\ell_p}$ is nontrivial, i.e., $\sigma_{\ell_p}>0$. \smallskip $\bullet$ We now introduce a last parameter $$\beta_{\mathcal M}=\sup\{t\ge 0:(\mathcal M, d)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M, d^{1-t})\}.$$ This parameter is related to the class $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ and its complement, the class $$\complement{\ensuremath{\mathsf{NS}_{\mathsf{T}}}}=\Big\{(\mathcal M,d): (\mathcal M,d)\ensuremath{\underset{Lip}{\lhook\joinrel\relbar\joinrel\rightarrow}} (\mathcal M,d^s)\;\text{for some}\,0<s<1 \Big\},$$ formed by the collection of metric spaces $\mathcal M$ that Lipschitz embed in their own snowflaked versions $\mathcal M^{(s)}$ for some $0<s<1$. Indeed, a metric space belongs to $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ if and only if $\beta_{\mathcal M}=0$. \smallskip It is quite clear from the sections above that the class $\ensuremath{\mathsf{NS}_{\mathsf{T}}}$ is very large, but what can be said about its complement? Possible candidates have to be found, for instance, amongst the metric spaces with infinite Enflo-type. Unfortunately, it can be rather complicated to compute or even estimate the Enflo-type of a given metric space. However, we know one family in $\complement{\ensuremath{\mathsf{NS}_{\mathsf{T}}}}$, namely, ultrametric spaces. \begin{question}\label{q3} Can one describe the class $\complement{\ensuremath{\mathsf{NS}_{\mathsf{T}}}}$? Are there metric spaces $\mathcal M$ other than ultrametrics such that $\beta_{\mathcal M}>0$? \end{question} $\bullet$ As a byproduct of our work we obtain a new and more direct path to prove that $\ell_q\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_p$ for $1\le p<q\le 2$, by composing the embedding of the $q/2$-snowflaked version of $\ell_q$ into $\ell_2$ and the coarse embedding from $\ell_2$ into $\ell_p$ of Nowak \cite{Nowak2006}, based on the Dadarlat--Guentner criterion from \cite{DadarlatGuentner2003}. As we already mentioned, this is not possible if $2\le p<q$. On the other hand, we know that $L_p\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}} L_q$ if $0<p< q<\infty$, but the following question remains elusive: \begin{question}\label{q4} If $2< p< q<\infty$, does $L_p\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_q$?
\end{question} The answer to this question would be affirmative if we could answer the following stronger question in the positive: \begin{question}\label{q5} If $2< p<\infty$, does $L_p\ensuremath{\underset{coarse}{\lhook\joinrel\relbar\joinrel\rightarrow}}\ell_p$? \end{question} \medskip \noindent{\it Acknowledgements.} The second author is very grateful to Bill Johnson and Gideon Schechtman for various enlightening discussions on the topics covered in this article. \begin{bibsection} \begin{biblist} \bib{AbrahamBartalNeiman2011}{article}{ author={Abraham, I.}, author={Bartal, Y.}, author={Neiman, O.}, title={Advances in metric embedding theory}, journal={Adv. Math.}, volume={228}, date={2011}, number={6}, pages={3026--3126}, } \bib{Aharoni1974}{article}{ author={Aharoni, I.}, title={Every separable metric space is Lipschitz equivalent to a subset of $c\sp {+}\sb {0}$}, journal={Israel J. Math.}, volume={19}, date={1974}, pages={284--291}, } \bib{Albiac2008}{article}{ author={Albiac, F.}, title={Nonlinear structure of some classical quasi-Banach spaces and F-spaces}, journal={J. Math. Anal. Appl.}, volume={340}, date={2008}, pages={1312--1325}, } \bib{AlbiacKalton2006}{book}{ author={Albiac, F.}, author={Kalton, N. J.}, title={Topics in Banach space theory}, series={Graduate Texts in Mathematics}, volume={233}, publisher={Springer}, place={New York}, date={2006}, pages={xii+373}, } \bib{Aronszajn1976}{article}{ author={Aronszajn, N.}, title={Differentiability of Lipschitzian mappings between Banach spaces}, journal={Studia Math.}, volume={57}, date={1976}, pages={147--190}, } \bib{Assouad1983}{article}{ author={Assouad, P.}, title={Plongements lipschitziens dans ${\it R}\sp {n}$}, language={French, with English summary}, journal={Bull. Soc. Math. France}, volume={111}, date={1983}, pages={429--448}, } \bib{BenyaminiLindenstrauss2000}{book}{ author={Benyamini, Y.}, author={Lindenstrauss, J.}, title={Geometric nonlinear functional analysis. Vol. 1}, series={American Mathematical Society Colloquium Publications}, volume={48}, publisher={American Mathematical Society}, place={Providence, RI}, date={2000}, } \bib{Bretagnolleetal1965}{article}{ author={Bretagnolle, J.}, author={Dacunha-Castelle, D.}, author={Krivine, J.-L.}, title={Fonctions de type positif sur les espaces $L\sp {p}$}, language={French}, journal={C. R. Acad. Sci. Paris}, volume={261}, date={1965}, pages={2153--2156}, } \bib{BuragoBuragoIvanov2001}{book}{ author={Burago, D.}, author={Burago, Y.}, author={Ivanov, S.}, title={A course in metric geometry}, series={Graduate Studies in Mathematics}, volume={33}, publisher={American Mathematical Society}, place={Providence, RI}, date={2001}, pages={xiv+415}, } \bib{Christensen1973}{article}{ author={Christensen, J. P. R.}, title={Measure theoretic zero sets in infinite dimensional spaces and applications to differentiability of Lipschitz mappings}, note={Actes du Deuxi\`eme Colloque d'Analyse Fonctionnelle de Bordeaux (Univ. Bordeaux, 1973), I, pp. 29--39}, journal={Publ. D\'ep. Math. (Lyon)}, volume={10}, date={1973}, pages={29--39}, } \bib{DadarlatGuentner2003}{article}{ author={Dadarlat, M.}, author={Guentner, E.}, title={Constructions preserving Hilbert space uniform embeddability of discrete groups}, journal={Trans. Amer. Math. Soc.}, volume={355}, date={2003}, number={8}, pages={3253--3275 (electronic)}, }
\bib{Enflo1970}{article}{ author={Enflo, P.}, title={Uniform structures and square roots in topological groups. I, II}, journal={Israel J. Math.}, volume={8}, date={1970}, pages={230--252; 253--272}, } \bib{Enflo1978}{article}{ author={Enflo, P.}, title={On infinite-dimensional topological groups}, conference={ title={S\'eminaire sur la G\'eom\'etrie des Espaces de Banach (1977--1978)}, }, book={ publisher={\'Ecole Polytech.}, place={Palaiseau}, }, date={1978}, pages={Exp. No. 10--11, 11}, } \bib{Gromov1993}{article}{ author={Gromov, M.}, title={Asymptotic invariants of infinite groups}, conference={ title={Geometric group theory, Vol.\ 2}, address={Sussex}, date={1991}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={182}, publisher={Cambridge Univ. Press}, place={Cambridge}, }, date={1993}, pages={1--295}, } \bib{Heinonen2001}{book}{ author={Heinonen, J.}, title={Lectures on analysis on metric spaces}, series={Universitext}, publisher={Springer-Verlag}, place={New York}, date={2001}, } \bib{JohnsonRandrianarivony2006}{article}{ author={Johnson, W. B.}, author={Randrianarivony, N. L.}, title={$l\sb p\ (p>2)$ does not coarsely embed into a Hilbert space}, journal={Proc. Amer. Math. Soc.}, volume={134}, date={2006}, pages={1045--1050 (electronic)}, } \bib{Kahane1981}{article}{ author={Kahane, J. P.}, title={H\'elices et quasi-h\'elices}, booktitle={Mathematical analysis and applications, {P}art {B}}, series={Adv. in Math. Suppl. Stud.}, volume={7}, pages={417--433}, publisher={Academic Press}, place={New York}, date={1981}, } \bib{KaltonPeckRoberts1984}{book}{ author={Kalton, N. J.}, author={Peck, N. T.}, author={Roberts, J. W.}, title={An $F$-space sampler}, series={London Mathematical Society Lecture Note Series}, volume={89}, publisher={Cambridge University Press}, place={Cambridge}, date={1984}, } \bib{Mankiewicz1973}{article}{ author={Mankiewicz, P.}, title={On the differentiability of Lipschitz mappings in Fr\'echet spaces}, journal={Studia Math.}, volume={45}, date={1973}, pages={15--29}, } \bib{MendelNaor2004}{article}{ author={Mendel, M.}, author={Naor, A.}, title={Euclidean quotients of finite metric spaces}, journal={Adv. Math.}, volume={189}, date={2004}, pages={451--494}, } \bib{MendelNaor2008}{article}{ author={Mendel, M.}, author={Naor, A.}, title={Metric cotype}, journal={Ann. of Math. (2)}, volume={168}, date={2008}, pages={247--298}, } \bib{Naorthesis}{article}{ author={Naor, A.}, title={Master Thesis}, date={1998}, } \bib{NaorNeiman}{article}{ author={Naor, A.}, author={Neiman, O.}, title={Assouad's theorem with dimension independent of the snowflaking}, journal={arXiv:1012.2307}, } \bib{NaorSchechtman2002}{article}{ author={Naor, A.}, author={Schechtman, G.}, title={Remarks on non linear type and Pisier's inequality}, journal={J. Reine Angew. Math.}, volume={552}, date={2002}, pages={213--236}, } \bib{Nowak2006}{article}{ author={Nowak, P. W.}, title={On coarse embeddability into {$l_p$}-spaces and a conjecture of {D}ranishnikov}, journal={Fund. Math.}, volume={189}, date={2006}, number={2}, pages={111--116}, }
\bib{Pisier1986b}{article}{ author={Pisier, G.}, title={Probabilistic methods in the geometry of Banach spaces}, booktitle={Probability and analysis (Varenna, 1985)}, series={Lecture Notes in Math.}, volume={1206}, pages={167--241}, publisher={Springer}, place={Berlin}, date={1986}, } \bib{Schoenberg1938}{article}{ author={Schoenberg, I. J.}, title={Metric spaces and positive definite functions}, journal={Trans. Amer. Math. Soc.}, volume={44}, date={1938}, pages={522--536}, } \bib{Talagrand1992}{article}{ author={Talagrand, M.}, title={Approximating a helix in finitely many dimensions}, journal={Ann. Inst. H. Poincar\'e Probab. Statist.}, volume={28}, date={1992}, number={3}, pages={355--363}, } \bib{Westonandall}{article}{ author={Faver, T.}, author={Kochalski, K.}, author={Murugan, M.}, author={Verheggen, H.}, author={Wesson, E.}, author={Weston, A.}, title={Classifications of ultrametric spaces according to roundness}, journal={arXiv:1201.6669v2}, } \bib{Yu2000}{article}{ author={Yu, G.}, title={The coarse Baum-Connes conjecture for spaces which admit a uniform embedding into Hilbert space}, journal={Invent. Math.}, volume={139}, date={2000}, pages={201--240}, } \end{biblist} \end{bibsection} Recall that an $\mathsf F$-space is a metric linear space. It is a classical result that it can be endowed with an equivalent metric which is translation invariant. A (linear) isomorphic embedding between two $\mathsf F$-spaces is a one-to-one, linear and continuous map with a linear and continuous inverse, where continuity is with respect to the topology induced by the metrics. Classical examples of $\mathsf F$-spaces are $\ensuremath{(L_q,d_q)}$ and $\ensuremath{(\ell_q,d_q)}$ when $q\in(0,1)$. They are also quasi-Banach spaces. However, the topologies as a quasi-Banach space and as an $\mathsf F$-space are uniformly equivalent. Therefore, an isomorphic embedding with respect to the metric is an isomorphic embedding with respect to the quasi-norm and vice-versa. A natural generalization to $\mathsf F$-spaces of a classical problem in Banach space theory can be stated as follows: \end{document}
\section{Introduction} Gaussian measures on topological vector spaces are a central object of study in probability theory. The primary examples of Gaussian measures on topological vector spaces are continuous Gaussian processes realized as Gaussian measures on the space of continuous functions $\mathcal C([0,T],\mathbb R^n)$ under the sup norm --- the canonical example of a continuous Gaussian process being Brownian motion. Gaussian measures have since been generalized to Banach spaces and furthermore to more general topological vector spaces. See \cite{Bogachevbook} for a general introduction to infinite dimensional Gaussian measure theory. It is interesting to see what properties of the canonical example of classical Wiener measure generalize to Gaussian measures on more general spaces. In \cite{Baldi}, the author recalled that there is an ``intermediate space'' for Brownian motion. Let $W_0^{1,2}([0,T])$ denote the Cameron-Martin or reproducing kernel Hilbert space of absolutely continuous functions vanishing at $0$ with weak derivative in $L^2$. Let $\mathcal C_s^{\alpha}([0,T])$ denote the set of ``small'' $\alpha$-H\"{o}lder functions $f$, i.e.\ those whose modulus of continuity \[\omega(\delta):=\sup_{0\leq s\leq t\leq T, |t-s|\leq \delta} |f(t)-f(s)|\] satisfies \[\lim_{\delta \to 0^+} \frac{\omega(\delta)}{\delta^\alpha}=0.\] Denote by $\mathcal C_0([0,T])$ the space of continuous functions starting at $0$, with the sup norm and the classical Wiener measure. The author of \cite{Baldi} recalled that for $0< \alpha <\frac{1}{2}$, there is the sequence of compact embeddings \[W_0^{1,2}\hookrightarrow \mathcal C_s^{\alpha}\hookrightarrow \mathcal C_0, \] with $\mu(\mathcal C_s^{\alpha})=1$. He then showed that this is a general phenomenon for Gaussian measures on separable Banach spaces. More precisely, he showed the following theorem. \begin{theorem}[{\cite[Thm.~1.1]{Baldi}}]\label{t:Baldi_intermediate} Let $E$ be a separable Banach space, $\mu$ a centered Gaussian measure on $E$, and $\RKHS$ the corresponding RKHS. Then there exists a Banach space $\tilde{E}$, separable and such that $\mu(\tilde{E}) = 1$ and the embeddings $E \hookleftarrow \tilde{E} \hookleftarrow \RKHS$ are compact. \end{theorem} In this article, we present a generalization to the case of separable Fr\'echet spaces. We furthermore characterize the full measure intermediate spaces through ``shape functions''. More precisely, we have the following theorem. \begin{theorem}\label{t:main} Let $\mu$ be a centered Gaussian measure on a separable infinite dimensional Fr\'echet space $X$ with Cameron-Martin space $\RKHS$. Then there exists a linear subspace $E$ with a norm $\|\cdot\|_E$, lower semicontinuous with respect to the metric, such that \[\RKHS\hookrightarrow E \hookrightarrow X,\] with the embeddings compact. In particular, there exists $E$ such that $\mu(E) = 1$. Conversely, any space $E$ (of full measure or not) with this property can be generated by a ``shape function'' $\phi$. \end{theorem} See Prop.~\ref{p:full_measure_existence} along with Prop.~\ref{p:converse} for full statements and proofs. One application of Theorem \ref{t:main} is exponential tightness of Gaussian measures, as noted in \cite{Baldi}. \section{Preliminaries} In this section, we collect some useful results from functional analysis and from infinite dimensional Gaussian measure theory. \subsection{Gaussian measures and Cameron-Martin spaces} In this subsection, we collect some results on Gaussian measures on topological vector spaces.
For a comprehensive introduction, see \cite{Bogachevbook}. \begin{notation}\label{d:X} Let $X$ be an infinite dimensional separable \F{} space over $\mathbb{R}$ equipped with a centered Gaussian measure $\mu$. \end{notation} Since $X$ is a separable completely metrizable space, $\mu$ is Radon (\cite[17.11]{KechrisBook}). The following definitions are taken from \cite{Bogachevbook}; we reproduce them here for convenience. \begin{definition}[Gaussian measure, centered] Let $\mathcal{E}(X)$ be the minimal $\sigma$-algebra with respect to which all continuous linear functionals on $X$ are measurable. A probability measure $\mu$ defined on the $\sigma$-algebra $\mathcal{E}(X)$ generated by $X^*$ is \emph{Gaussian} if, for any $f \in X^*$, the induced measure $\mu \circ f^{-1}$ is a Gaussian measure on $\mathbb{R}$. The measure $\mu$ is \emph{centered} if all measures $\mu \circ f^{-1}$, $f \in X^*$, are centered. \end{definition} Without loss of generality, we assume that the support of $\mu$ is $X$, so that $\mu$ is strictly positive on $X$. Under this assumption, separability turns out to be equivalent to $\mu$ being Radon (Thm.~\ref{t:Radon_RKHS_separable}, \ref{t:RKHS_dense}). \begin{definition}[Covariance operator] Let $(\cdot)^*$ denote the continuous dual and $(\cdot)'$ the algebraic dual. The operator $R_\mu : X^* \to (X^*)'$, \begin{equation} R_\mu : f \mapsto \left(g \mapsto \int_X (f(x) - \overline{f}) (g(x) - \overline{g})\,\mu({\rm d}x)\right), \end{equation} is called the \emph{covariance operator} of $\mu$, where the overline denotes the $\mu$-mean, defined as \begin{equation} \overline{f} = \int f(x)\,\mu({\rm d}x). \end{equation} \end{definition} Hereafter we consider, without loss of generality, only centered Gaussian measures, for which $\overline{f} = \overline{g} = 0$ and \begin{equation} R_\mu(f)(g) = \int_X f(x) g(x)\,\mu({\rm d}x). \end{equation} The $L^2(\mu)$ closure of $X^*$, denoted by $X_\mu^*$, is the reproducing kernel Hilbert space (RKHS) of $(X, \mu)$; for a centered measure, $R_\mu$ can be easily extended to $X_\mu^*$. \begin{definition}[Cameron-Martin space]\label{d:CM-space} In light of the Riesz representation theorem, the space $\{h \in X : h = R_\mu(g), g \in X_\mu^*\}$ is known as the Cameron-Martin space, denoted by $\RKHS$. The topology on $\RKHS$ is characterized through $h = R_\mu(g)$; that is, $\langle h_1, h_2 \rangle_{\RKHS} = \langle g_1, g_2 \rangle_{X_\mu^*}$ for $h_i = R_\mu(g_i)$. \end{definition} This definition is equivalent to $\RKHS = \{h \in X : \sup\{l(h) : l \in X^*, R_\mu(l)(l) \leq 1\} < \infty\}$. $\RKHS$ will be used from now on to denote the Cameron-Martin space of $X$. The primary reason we care about the Cameron-Martin space $\RKHS$ is the celebrated Cameron-Martin theorem, which states that a Gaussian measure is quasi-invariant under translation by $x \in X$ if and only if $x \in \RKHS$. We include some theorems on properties of the Cameron-Martin space that are used in the proofs below. The theorems may have been specialized to our setting, including making the notation consistent. \begin{theorem}[{\cite[3.2.2]{Bogachevbook}}]\label{t:RKHS_invariant} Let $\gamma$ be a Radon Gaussian measure on a locally convex space $X$ which is continuously and linearly embedded into a locally convex space $Y$. Then the set $H(\gamma)$ is independent of whether $\gamma$ is considered on $X$ or on $Y$. \end{theorem} \begin{theorem}[{\cite[3.2.4]{Bogachevbook}}]\label{t:H_ball_compact} Let $\mu$ be a Radon Gaussian measure on $X$. Then the closed unit ball $U_H$ from $\RKHS$ is compact in $X$.
\end{theorem} \begin{theorem}[{\cite[3.2.7]{Bogachevbook}}]\label{t:Radon_RKHS_separable} Let $\mu$ be a Radon Gaussian measure on $X$. Then $\RKHS$ is separable. \end{theorem} \begin{theorem}[{\cite[2.5.5, see also 2.5.2]{Bogachevbook}}]\label{t:zero_one} Let $\mu$ be a Radon Gaussian measure on $X$ and let $L$ be a $\mu$-measurable affine subspace in $X$. Then either $\mu(L) = 0$ or $\mu(L) = 1$. \end{theorem} \begin{theorem}[{\cite[3.6.1]{Bogachevbook}}]\label{t:RKHS_dense} Let $\mu$ be a centered Radon Gaussian measure on $X$. Then the topological support of $\mu$ coincides with $\overline{\RKHS}$ (closure in $X$). In particular, $\overline{\RKHS}$ is separable. \end{theorem} \subsection{Topological vector spaces} Before we proceed, we make some remarks on the topology of metric spaces and of topological vector spaces in general that we have found useful for the proofs. See \cite{Rudinbook} for more information. \begin{notation} Let $X$ be a topological vector space and $U \subset X$. Denote the closure of $U$ in $X$ by $\cl_X U$. Denote the convex hull of $U$ by $\convhull(U)$. Then the closed convex hull of $U$ is $\cl_X \convhull(U)$. \end{notation} \begin{definition}[Bounded, topologically] Let $S \subset X$. $S$ is \emph{bounded} if and only if for every neighborhood $U$ of $0$ there exists some $r \in \mathbb{R}$ such that $S \subset r U$. \end{definition} \begin{theorem}[{\cite[3.1.12]{NBbook}}]\label{t:completeness} Let $\mathcal{T}_s$ and $\mathcal{T}_w$ be Hausdorff group topologies for a group $X$. Let $V_s(0)$ denote the filter of $\mathcal{T}_s$-neighborhoods of $0$. If $\mathcal{T}_s$ is stronger than $\mathcal{T}_w$ and there exists a base $\mathcal{B}_w$ of $\mathcal{T}_w$-complete sets for $V_s(0)$, then $X$ is $\mathcal{T}_s$-complete. \end{theorem} A ``neighborhood'' in \cite{NBbook} refers to a set that contains an open set. We do not use this convention; except for this one case, ``neighborhood'' always refers to an open neighborhood. \begin{theorem}[{\cite[3.20bc]{Rudinbook}}]\label{t:convex_hull_preserves_compactness} If $X$ is a locally convex topological vector space and $E \subset X$ is totally bounded, then $\convhull(E)$ is totally bounded. If $X$ is a \F{} space and $K \subset X$ is compact, then $\cl_X \convhull(K)$ is compact. \end{theorem} \subsection{Objective and some relevant prior work} \begin{definition}\label{d:intermediate_space} A linear subspace $E \subset X$ is an \emph{intermediate space} if and only if $\RKHS \hookrightarrow E \hookrightarrow X$ and both embeddings are compact. \end{definition} The objective of this paper is to construct intermediate spaces with additional desirable properties, namely being normed, complete, and of full measure. The following two theorems give similar results, which we present here for comparison. \begin{theorem}[{\cite[3.6.5]{Bogachevbook}}]\label{t:Bogachev_intermediate} Let $\mu$ be a Radon measure on a \F{} space $X$. Then there exists a linear subspace $E \subset X$ with the following properties: \begin{enumerate} \setlength\parskip{0pt} \item There is a norm $\|\cdot\|_E$ on $E$ with respect to which $E$ is a reflexive separable Banach space such that the closed unit ball in this norm is compact in $X$; \item $|\mu|(X \setminus E) = 0$. \end{enumerate} \end{theorem} { \renewcommand{\thetheorem}{\ref{t:Baldi_intermediate}} \begin{theorem}[{\cite[Thm.~1.1]{Baldi}}] Let $E$ be a separable Banach space, $\mu$ a centered Gaussian probability measure on $E$, and $\RKHS$ the corresponding RKHS.
Then there exists a Banach space $\tilde{E}$, separable and such that $\mu(\tilde{E}) = 1$ and the embeddings $E \hookleftarrow \tilde{E} \hookleftarrow \RKHS$ are compact. \end{theorem} } The result we will show is an extension of Thm.~\ref{t:Baldi_intermediate} to \F{} spaces. It is also mostly implied by Thm.~\ref{t:Bogachev_intermediate}, since $|\mu|(X \setminus E) = 0$ means that $E$ has full measure. Given that $E$ has full measure, the Cameron-Martin space of $X$ is also that of $E$; then $\RKHS$ embeds compactly into $E$ by Thm.~\ref{t:H_ball_compact}. However, the proof we will present defines the intermediate space by manipulating the ``shape'' of the unit ball using Definition \ref{d:phi}, through which it provides a local characterization of Banach intermediate spaces. \section{Results} We divide the construction process into three steps (and correspondingly into three subsections). We first look for spaces that are ``small'' enough to embed into $X$ compactly; then for spaces that are ``large'' enough into which $\RKHS$ can embed compactly; and finally we investigate some properties and flexibilities of the spaces thus constructed. For the first two steps we only use the topological property of $\RKHS$ that its closed unit ball is compact in $X$. Thus, the construction up to that point is valid for finding intermediate spaces between two arbitrary spaces, one of which embeds compactly into the other. Later, we construct intermediate spaces with other properties, such as being of full measure. \subsection{Embedding into \texorpdfstring{$X$}{X} compactly} Recall the definition of $X$ in Def.~\ref{d:X} and that a compact operator is one that maps bounded sets to relatively compact (totally bounded) sets. For convenience in the construction, we only look for those that map closed balls to compact sets. Consider the closed unit ball of the intermediate space. \begin{notation}\label{n:K} Let $K \subset X$ denote a symmetric convex compact set throughout this section. \end{notation} A symmetric convex set is balanced, and therefore $K$ is absolutely convex. \begin{definition}\label{d:E} Let $E$ be the linear span of $K$; that is, $E = \{ r x : x \in K, r \in \mathbb{R} \} = \bigcup_{r > 0} r K$. \end{definition} By construction, $K$ is absorbing in $E$. The Minkowski functional $p_K$ defines a seminorm on $E$. Let $M_K = \sup \{ d(0, x) : x \in K \}$; since $K$ is compact, $M_K$ is finite. Without loss of generality, assume $M_K = 1$. Note that then $p_K(x) \geq d(0, x)$ for all $x$, which shows that the $p_K$-topology on $E$ is finer than its subspace topology inherited from $X$. Since $K$ is compact, $K$ is bounded in $X$ ($K$ does not contain any nontrivial vector subspace), therefore $p_K$ separates points, and it is in fact a norm. By Thm.~\ref{t:completeness} (with $\mathcal{B}_w = \{r K : r \in \mathbb{R}\}$), $E$ is complete under $p_K$; $E$ is hence a Banach space. \begin{lemma}\label{l:proper_containment} Let $A$ and $X$ be F-spaces, with $X$ infinite dimensional. If there exists a compact linear map $\Lambda : A \to X$, then $\Lambda$ is not surjective. \end{lemma} \begin{proof} Since compact maps are bounded, $\Lambda$ is continuous. Since $\Lambda$ is compact, there exists some $A$-neighborhood $U$ of $0$ such that $\Lambda(U) \subset K$, where $K$ is compact in $X$. Since both $A$ and $X$ are F-spaces, if $\Lambda$ is surjective, then it is open by the open mapping theorem. In particular, $\Lambda(U)$ is open and relatively compact. This contradicts $X$ being infinite dimensional and hence not locally compact.
\end{proof} Let $E$ be an intermediate space. Letting i) $A := E$, $X := X$, ii) $A := \RKHS$, $X := E$, and $\Lambda = \ident$ be the identity map, the lemma shows that $\RKHS \subsetneq E \subsetneq X$. Since the norm topology is no weaker than the metric subspace topology and $K$ is compact in $X$, $K = \cl_E B^K = \cl_X B^K$; $K$ is the closed unit ball of $E$. The norm as a function is not continuous in the metric topology, because a Minkowski functional is continuous if and only if the underlying set is a neighborhood of $0$. However, $\|\cdot\| : E \to \mathbb{R}$ is lower semicontinuous under the metric, because the norm closed balls $a K$ are closed in the metric as well for all $a \in \mathbb{R}$. The reader may refer to \cite[Ch.\ IV \S 6.2]{BourbakiBook} for more information about semicontinuity. \subsection{Embedding \texorpdfstring{$\RKHS$}{H} compactly} \begin{notation} Define $\|x\| : x \mapsto p_K(x)$ for all $x \in E$. \end{notation} \begin{notation} Unless otherwise specified, the topologies on $X$, $E$, $\RKHS$ are the metric, norm, and inner product topology, respectively. Let $B^\RKHS$, $B^K$, $B^d$ be the open unit balls of $\RKHS$, $E$, and $X$ (centered at $0$), respectively. Let $S^\RKHS$, $S^K$, $S^d$ be the unit spheres of the corresponding spaces. The notation $x + B^\cdot_r$ denotes a ball centered at $x$ with radius $r$. \end{notation} In particular, $B^\cdot_r$ is identical to $r B^\cdot$ for normed spaces, but not for general metrizable spaces. For our convenience, we fix a complete and translation-invariant metric $d$ such that $\sup_{x \in B^\RKHS} d(0, x) = 1$. A ball in the following proofs always has positive radius and never degenerates to a singleton or the empty set. We now define ``shape functions'', which are used to ensure that $\RKHS$ is embedded compactly into $E$. \begin{definition}[Shape function]\label{d:phi} A shape function is any function $\phi : S^\RKHS \to \cl_X(B^d) \cap \RKHS \setminus B^\RKHS$ such that, when its domain and codomain are considered under the metric topology of $X$: \begin{enumerate}\setlength{\parskip}{0pt} \item[a)] for all $x$ there exists some $k(x)\in \mathbb R_{>0}$ such that $\phi(x) = k(x) x$ and $\phi(-x) = -\phi(x)$; denote $|\phi|(x) = k(x)$; \item[b)] there exists some compact $T \subset X$ such that for every neighborhood $U \subset X$ of $T$, $\phi^{-1}\{U\} \supset V_U \cap S^\RKHS$, where $V_U$ is a metric neighborhood of $0$; \item[c)] $\phi(S^\RKHS \setminus B^d_\epsilon) \subset \RKHS$ is (inner product) bounded for all $\epsilon > 0$ (cf.\ d); \item[d)] $\lim_{x \to 0} |\phi|(x) = \infty$. \end{enumerate} \end{definition} An example of a shape function is $\phi : x \mapsto \lfloor d(0, x)^\alpha \rfloor x$ for any $\alpha \in (-1, 0)$ (with $T = \{0\}$ in b). \begin{notation} The symbol $\phi$ will denote a shape function from now on. When convenient, $\phi$ may also denote its homogeneous extension to $\RKHS$, that is, $\phi(x) = \sqrt{\langle x, x \rangle}\phi(\sqrt{\langle x, x \rangle}^{-1} x)$ for all $x \neq 0$ and $\phi(0) = 0$. \end{notation} \begin{definition}[Generated space]\label{d:construction} Let $S = \phi(S^\RKHS)$ and $K = \cl_X \convhull (S)$. Let $\tilde{E}$ be the span of $K$ (cf.\ Def.~\ref{d:E}); the closed linear span of $\RKHS$ therein is called the \emph{generated space}, denoted by $E$. \end{definition} Specifically, by ``generated space'' we mean the space $E$ equipped with the induced norm $p_K$. We will verify that this agrees with Notation \ref{n:K} in the following proof.
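\begin{remark} As a sanity check (this verification is ours and entirely routine), the example $\phi(x) = \lfloor d(0, x)^\alpha \rfloor x$, $\alpha \in (-1, 0)$, above does satisfy Def.~\ref{d:phi}. Since $d$ is translation invariant, $d(0, -x) = d(0, x)$, which gives a). Since $d(0, x) \le 1$ for $x \in S^\RKHS$ and $\alpha < 0$, we have $|\phi|(x) = \lfloor d(0, x)^\alpha \rfloor \ge 1$, and $|\phi|(x) \to \infty$ as $d(0, x) \to 0$, which gives d). For c), if $d(0, x) \ge \epsilon$, then $\Vert \phi(x) \Vert_\RKHS = |\phi|(x) \le \epsilon^\alpha$. Finally, for b) with $T = \{0\}$, translation invariance yields $d(0, n x) \le n\, d(0, x)$ for $n \in \mathbb{N}$, whence \[ d(0, \phi(x)) \le \lfloor d(0, x)^\alpha \rfloor\, d(0, x) \le d(0, x)^{1 + \alpha} \to 0 \quad \text{as } d(0, x) \to 0, \] since $1 + \alpha > 0$; this is where the floor function (which makes $|\phi|(x)$ an integer) and the restriction $\alpha > -1$ enter. \end{remark}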
\begin{proposition}\label{t:existence_of_K} The $K$ in Def.~\ref{d:construction} agrees with Notation~\ref{n:K}, and the generated space $E$ is an intermediate space. \end{proposition} \begin{proof} We claim that $S$ (the range of $\phi$) is totally bounded in $X$, by showing that it has a finite cover consisting of metric $r$-balls for every $r > 0$. Since $T$ is compact, the cover $\{ t + B^d_r \}_{t \in T}$ has a finite subcover, whose union $U$ is a (finite) union of open sets and therefore open. By Def.~\ref{d:phi}b, $\phi^{-1}\{U\}$ contains $B^d_{\epsilon_r} \cap S^\RKHS$ for some $\epsilon_r > 0$. Then by Def.~\ref{d:phi}c, $\phi(S^\RKHS \setminus B^d_{\epsilon_r}) \subset B^\RKHS_R$ for some $R$, and the latter is totally bounded in $X$ by Thm.~\ref{t:H_ball_compact}. Hence $S \subset U \cup B^\RKHS_R$, and $S$ has a finite cover consisting of metric balls of radius $r$ for every $r > 0$. By Thm.~\ref{t:convex_hull_preserves_compactness} and property a), $K$ is compact and symmetric, respectively; Def.~\ref{d:construction} thus agrees with Notation \ref{n:K}. We see that the norm unit ball $B^K$ is totally bounded in $X$. ($\RKHS \hookrightarrow E$ compactly.) We show that $B^\RKHS$ is relatively compact in $E$; consider sequential compactness. Since $B^\RKHS$ is relatively compact in $X$ (Thm.~\ref{t:H_ball_compact}), every sequence taking values therein has a convergent (therefore Cauchy) subsequence $\{x_i\}_{i \in \mathbb{N}}$ in $X$. We show that $\{x_i\}$ is norm-Cauchy. By Def.~\ref{d:phi}d, $\lim_{N \to \infty} \inf\{ |\phi|(x_i - x_j) : i, j > N\} = \infty$. Then there exists some unbounded $\chi : \mathbb{N} \to \mathbb{R}$ such that $\chi(N) (x_i - x_j) \in K$ for all $i, j > N$. By homogeneity of the norm, $\lim_{N \to \infty} \sup\{ \|x_i - x_j\| : i, j > N \} \leq \lim_{N \to \infty} 2 \chi(N)^{-1} = 0$; the factor $2$ is due to $B^\RKHS - B^\RKHS = 2 B^\RKHS$ and the homogeneous extension of $\phi$. Hence the sequence is norm-Cauchy. \end{proof} \subsection{Constructing intermediate spaces to specifications} \begin{remark}[Denseness] We note that $\RKHS$ may not be dense in the space $E$ thus constructed: although $K = \cl_X \convhull S$ with $S \subset \RKHS$, it may happen that $\cl_E \convhull S \subsetneq K$, as the norm topology is finer. When $\mu(E) = 1$, by Thm.~\ref{t:RKHS_invariant}, $\RKHS$ is also the Cameron-Martin space of $E$. Then by Thm.~\ref{t:RKHS_dense} (and the definition of topological support), $\mu(\cl_E \RKHS) = 1$. This shows that every full measure intermediate space admits a subspace with full measure in which $\RKHS$ is dense in norm. \end{remark} $K$ can also be chosen such that $E$ is a full measure subspace, by the following results: \begin{lemma}\label{l:enlargement} Given any intermediate space with closed unit ball $K$, any (absolutely convex, compact) $K' \supset K$ also generates an intermediate space. \end{lemma} \begin{proof} Since $K' \supset K$, we have $E_{K'} \supset E_K$. In particular, on $E_K$ the topology induced by $K$ is no coarser than the subspace topology inherited from $E_{K'}$. Hence, passing from $E_K$ to $E_{K'}$, the topology is either coarsened or unchanged. In either case, compactness is preserved. \end{proof} Note that Lemma~\ref{l:enlargement} does not imply that $E_{K'}$ is distinct from $E_K$. \begin{proposition}\label{p:full_measure_existence} Every infinite dimensional separable \F{} space $X$ admits an intermediate space with full measure. \end{proposition} \begin{proof} By Thm.~\ref{t:RKHS_dense}, $\mu$ is strictly positive (supported everywhere) and $\mu(B^d) > 0$.
Since $\mu$ is Radon, there exists some compact $K' \subset B^d$ such that $\mu(K') > 0$. Define a new $K$ to be the closed convex hull of the symmetric hull of $K' \cup K$, and generate a new $E$. Since $K' \subset K \subset E$, $\mu(E) \geq \mu(K') > 0$. By Thm.~\ref{t:zero_one}, $\mu(E) = 1$. \end{proof} It remains unclear to the authors if explicit requirements can be put on shape functions such that $\mu(K)$ or $\mu(E)$ is positive for general $X$. \begin{remark}[Reflexivity] After applying Prop.~\ref{p:full_measure_existence}, the same approach used in the proof of Thm.~\ref{t:Bogachev_intermediate} (citing \cite[Ch.~5~\S4~Thm.~1]{DiestelNotes}) is still valid if a reflexive space is desired, although then $\phi$ apparently loses control over the unit ball. We note that $E$ (as in Def.~\ref{d:construction}) is weakly compactly generated (by the unit ball of $\RKHS$), and \cite{DiestelNotes} contains many properties and characterizations of such spaces. \end{remark} \begin{proposition} If $\RKHS$ is dense in an intermediate space $E$, then $E$ is separable. \end{proposition} \begin{proof} Since $\mu$ is a Radon Gaussian measure, $\RKHS$ is separable by Thm.~\ref{t:Radon_RKHS_separable}. Since $\RKHS$ is dense in $E$, for every $r > 0$ the set $\RKHS + B^K_r$ covers $E$. Since $\RKHS$ is separable, there exists some countable dense subset $S \subset \RKHS$ such that $S + B^\RKHS_r$ covers $\RKHS$. Since $S + B^\RKHS_r \subset S + B^K_r$, the latter also covers $\RKHS$. We see that $(S + B^K_r) + B^K_r$ covers $E$ for every $r > 0$. Since $B^K$ is convex \cite[p.~38]{Rudinbook}, we have $B^K_r + B^K_r = B^K_{2 r}$. Given that the initial choice of $r$ was arbitrary, we conclude that $S$ is a countable dense subset of $E$ as well. \end{proof} Finally, we show a converse to Prop.~\ref{t:existence_of_K}. \begin{proposition}\label{p:converse} Given any (complete) normed intermediate space $E$ where the norm is lower semicontinuous with respect to the topology on $X$, there exists a shape function $\phi$ that generates it as described in Def.~\ref{d:construction}. (Completeness is redundant by Thm.~\ref{t:completeness}.) \end{proposition} \begin{proof} Define $|\phi| : f \mapsto \|f\|^{-1}$ and $\phi : f \mapsto |\phi|(f)\, f$ (then homogeneously extended). By slight abuse of notation, we will continue to use $B^K$ to denote the open unit ball of the given $E$. Def.~\ref{d:phi}a is satisfied by construction. Choose $T = \cl_X B^K$, which is compact since $E \hookrightarrow X$ compactly. Def.~\ref{d:phi}b is then satisfied because $T$ contains the image of $\phi$. Def.~\ref{d:phi}c is true because $|\phi|(f) = \|f\|^{-1} \leq d(0, f)^{-1}$ up to a constant ($B^K$ is totally bounded in $X$); the latter meets the requirement. (Def.~\ref{d:phi}d.) Let $\{x_i\}_{i \in \mathbb{N}} \subset S^\RKHS$ be a sequence that converges to $0$ in metric. We claim that it also converges to $0$ in norm. To see this, assume not. Then there exists some $r$ such that $x_i \notin B^K_r$ for infinitely many $i$; such $x_i$ form a subsequence $\{x_i\}_{i \in I}$ ($I \subset \mathbb{N}$). Since $S^\RKHS$ is relatively compact in $E$, $\{x_i\}_{i \in I} \subset S^\RKHS$ has a convergent subsequence in $E$. But since both the metric topology and the norm topology are Hausdorff, $\{x_i\}_{i \in \mathbb{N}}$ and $\{x_i\}_{i \in I}$ must have the same limit, namely $0$, contradicting $x_i \notin B^K_r$ for $i \in I$. This shows that every sequence taking values in $S^\RKHS$ converges to $0$ in norm if (and only if) it does so in metric.
By homogeneity of the norm, $\lim_{x \to 0} |\phi|(x) \geq \lim_{i \to \infty} \|x_i\|^{-1} = \lim_{x \to 0} \|x\|^{-1} = \infty$. Since $\cl_E B^K = \cl_X B^K$ is itself convex, the norm closed unit ball coincides with its closed convex hull in $X$. Therefore, the space generated by $\phi$ is $E$. \end{proof} \section{Example: H\"older spaces and the Wiener space} In this section, we demonstrate the classical case of an intermediate space first noted in \cite{Baldi}. Recall that $\mathcal C_0([0,1],\mathbb R)$ is the space of all real valued continuous functions on $[0,1]$ with initial value $0$ (i.e.\ $f(0) = 0$ for all $f \in \mathcal C_0$); we work on the unit interval throughout this section. Let $\mathcal C_0$ carry the $\sup$-norm, which is equivalent to $\sup_{a, b \in [0, 1]} |f(a) - f(b)|$. Let $W_0^{1, 2}$ denote the space of all absolutely continuous real valued functions whose weak derivative is square integrable and whose initial value is $0$. Let $\mathcal C_0^{0, \alpha}$ denote the space of $\alpha$-H\"older functions with initial value $0$. It is well known that $W_0^{1, 2} \hookrightarrow \mathcal C_0^{0, \alpha} \hookrightarrow \mathcal C_0$ compactly for $\alpha \in (0, \frac{1}{2})$. We show that there is a corresponding $\phi$ that generates a subspace of the H\"older space $\mathcal C_0^{0, \alpha}$ satisfying the four properties required in Def.~\ref{d:phi}, without resorting to Proposition \ref{p:converse}. Denote $\|f\|_\alpha = \sup_{a, b \in [0, 1], a \neq b} \frac{|f(a) - f(b)|}{|a - b|^\alpha}$, which is a (separating) norm on $\mathcal C_0^{0, \alpha}$. Define $|\phi| : f \mapsto \|f\|_\alpha^{-1}$ ($0 < \alpha < \frac{1}{2}$) to be the reciprocal of the H\"older constant and $\phi : f \mapsto |\phi|(f)\, f$ (both defined on $S^{W_0^{1, 2}}$). By construction, $\phi$ satisfies Def.~\ref{d:phi}a. Def.~\ref{d:phi}b is satisfied by simply choosing $T$ to be the closed unit ball of $\mathcal C_0^{0, \alpha}$. We note that for the small H\"older space $\mathcal C_s^\alpha$, we may choose $T = \{0\}$. The other two properties are verified below. \begin{proposition}[Property c] If $\sup |f| > \epsilon > 0$, then there exists some $M$ such that $|\phi|(f) < M$. \end{proposition} \begin{proof} \begin{equation} |\phi|(f)^{-1} = \sup_{a, b \in [0, 1], a \neq b} \frac{|f(a) - f(b)|}{|a - b|^\alpha} \geq \frac{\sup |f|}{1^\alpha} = \sup |f| > \epsilon. \end{equation} Choose $M = \epsilon^{-1}$. \end{proof} \begin{proposition}[Property d] For any $M > 0$, there exists some $\epsilon > 0$ such that for all $f \in W_0^{1, 2}$, if $\int f'(x)^2\,{\rm d}x = 1$ and $\sup |f| < \epsilon$, then $|\phi|(f) > M$. \end{proposition} \begin{proof} By the Cauchy--Schwarz inequality and the fundamental theorem of calculus, for any $a \le b$, \begin{equation} f(b) - f(a) = \int_a^b 1 \cdot f'(x)\,{\rm d}x \leq \sqrt{\int_a^b 1^2\,{\rm d}x} \sqrt{\int_a^b f'(x)^2\,{\rm d}x} \leq \sqrt{b - a}. \end{equation} Hence for any $a, b$, $|f(a) - f(b)| \leq \min\{\sqrt{|b - a|}, 2 \epsilon\} \leq (2 \epsilon)^{1 - 2 \alpha} |a - b|^\alpha$ (the latter bound is obtained by solving $M (2 \epsilon)^{2 \alpha} = 2 \epsilon$ for $M$). Since $\alpha \in (0, \frac{1}{2})$, $\lim_{\epsilon \to 0} (2 \epsilon)^{1 - 2 \alpha} = 0$. Therefore, an appropriate $\epsilon$ can always be found to make the H\"older constant sufficiently small and thereby $|\phi|$ sufficiently large. \end{proof} Now we carry out the procedure of Def.~\ref{d:construction} and find $K$.
$K = \cl_{\mathcal{C}_0} \convhull(\phi(S^{W_0^{1, 2}}))$, and $\phi(S^{W_0^{1, 2}})$ is the $\alpha$-H\"older unit sphere of $W_0^{1, 2}$, whose convex hull is $\{f \in W_0^{1, 2} : \|f\|_\alpha \le 1\}$ (split any $f$ with $\|f\|_\alpha < 1$, along a line, as a convex combination of two elements of $W_0^{1, 2}$ of H\"older norm one); hence $K = \cl_{\mathcal{C}_0} \{f \in W_0^{1, 2} : \|f\|_\alpha \le 1\}$. Since the H\"older norm is lower semicontinuous in the $\sup$-norm, the $\alpha$-H\"older closed unit ball is closed in $\mathcal{C}_0$; this shows \underline{$K$ is contained in the $\alpha$-H\"older closed unit ball}. Consider the closure of $W_0^{1, 2}$ in the $\alpha$-H\"older space, that is, the small $\alpha$-H\"older space $\mathcal{C}_s^\alpha$. Since $W_0^{1, 2}$ is dense in $\mathcal{C}_s^\alpha$ and $B^{\mathcal{C}_0^{0, \alpha}}$, being the open unit ball, has nonempty interior, $K \supseteq \cl_{\mathcal{C}_0^{0, \alpha}} (W_0^{1, 2} \cap B^{\mathcal{C}_0^{0, \alpha}}) \supseteq \mathcal{C}_s^\alpha \cap B^{\mathcal{C}_0^{0, \alpha}}$. That is, \underline{$K$ contains the open unit ball of $\mathcal{C}_s^\alpha$}. We conclude that $E$ is, as a set, squeezed between $\mathcal{C}_s^\alpha$ and $\mathcal{C}_0^{0, \alpha}$, with a norm equivalent to the $\alpha$-H\"older norm. In particular, $\mu(E) \geq \mu(\mathcal{C}_s^\alpha) = 1$. \section{Conclusions} In this article, we showed that any centered Gaussian measure on a separable Fr\'echet space admits a full measure Banach intermediate space. We conclude with two questions. \begin{question} Can $\phi$ be chosen so that the generated space has full measure for non-normable spaces, without appealing to inner regularity? \end{question} \begin{question} Is there a Banach intermediate space $E$ of some \F{} space $X$ where the norm is not lower semicontinuous in $X$, or equivalently, where the closed unit ball of $E$ is not closed in $X$? \end{question} \bibliographystyle{plain}
\section{} \centerline{\Large\bf Model structures, categorial quotients and } \centerline{\Large\bf representations of super commutative Hopf algebras II} \centerline{\Large\bf The case $\bf Gl(m\vert n)$} \bigskip\noindent \centerline{R.~Weissauer} \goodbreak \bigskip\noindent \bigskip\noindent \section{Introduction} \bigskip\noindent Let $k^{m\vert n}$ denote the $\mathbb{Z}/2\mathbb{Z}$-graded vector space $k^m \oplus k^n$ over a field $k$ of characteristic zero, with even part $k^m$ and odd part $k^n$. The super linear group $G=Gl(m\vert n)$ contains the classical reductive algebraic $k$-group $Gl(m)\times Gl(n)$. The Lie super algebra of $G$ is $Lie(G)=End_k(k^m\oplus k^n)$, which decomposes into the subspaces of endomorphisms which preserve, respectively reverse, the $\mathbb{Z}/2\mathbb{Z}$-grading. The even part of $Lie(G)$ can be identified with $Lie(Gl(m)\times Gl(n))$. The Lie super bracket on $End_k(k^m\oplus k^n)$ is defined by $[X,Y] = X\circ Y - (-1)^{\vert X\vert \vert Y\vert} Y\circ X$ for graded endomorphisms $X,Y$ in $End_k(k^m\oplus k^n)$. We suppose $m\geq n$. \bigskip\noindent An algebraic representation of $G$ over $k$ is a homomorphism $$\rho: Gl(m)\times Gl(n) \longrightarrow Gl(V)$$ of algebraic groups over $k$, where $V=V_+ \oplus V_-$ is a finite dimensional $\mathbb{Z}/2\mathbb{Z}$-graded $k$-vector space, together with a $k$-linear map $$ Lie(\rho): Lie(G) \longrightarrow End(V) $$ so that \begin{enumerate} \item The parity on $V$ is defined by the eigenspaces of $\rho(E_m,-E_n)$, \item $Lie(\rho)$ is parity preserving for the natural $\mathbb{Z}/2\mathbb{Z}$-grading on $End_k(V)$ induced by $V=V_+\oplus V_-$, \item $Lie(\rho)$ is $\rho$-equivariant and coincides with the Lie derivative of $\rho$ on the even part $Lie(Gl(m)\times Gl(n))$ of $Lie(G)$, \item $Lie(\rho)$ respects the Lie super bracket. \end{enumerate} A one dimensional representation $\rho$ of $G$, which is the analog of the determinant, is the Berezin $Ber_{m\vert n}$, defined by $\rho(g_1\times g_2)= det(g_1)/det(g_2)$, so that $Lie(\rho)$ is the super trace on $End(k^{m\vert n})$. The representation space of the Berezin is $k^{1\vert 0}$ or $k^{0\vert 1}$ depending on $n$ modulo two. \bigskip\noindent Let $\cal T$ denote the abelian $k$-linear tensor category of algebraic representations of $G$. As a $k$-linear abelian category $\cal T$ decomposes into a direct sum of blocks $\Lambda$. We show that there exists a purely transcendental field extension $K/k$ of transcendence degree $n$ and a $K$-linear weakly exact tensor functor (see section \ref{semisim}) $$ \varphi:\ {\cal T}\otimes_k K \ \longrightarrow \ sRep_K(H) \quad , \quad H=Gl(m-n) $$ from the $K$-linear scalar extension of $\cal T$ to the semisimple category of finite dimensional algebraic super representations of the reductive algebraic $K$-group $H=Gl(m-n)$ defined over $K$. The simple objects of $sRep_K(H)$ are the $\rho[i]$ for the irreducible algebraic representations $\rho$ of $H$, up to a parity shift of the grading ($i=0$ or $i=1$). We prove in corollary \ref{Wakimoto} that the image of a simple object $V$ in $\cal T$ becomes zero under the functor $\varphi$ if and only if $V$ is not maximal atypical (see section \ref{weights}).
On the other hand, in the main theorem of section \ref{mainth} we prove that for maximal atypical simple objects the image $\varphi(V)$ is an isotypic representation $m(V) \cdot \rho(V)[p(V)]$ in $sRep_K(H)$, where the multiplicity $m(V)$ is $>0$ and where $\rho(V)$ is an irreducible representation of $H$ which only depends on the block $\Lambda$ of $V$. The parity shift $p(V)$ can be easily computed. The computation of the multiplicity $m(V)$ is more subtle. We show $$1 \ \leq \ m(V)\ \leq \ n! \ .$$ If $V$ is simple with highest weight $\mu$, we prove in section \ref{Weyl} the recursion formula $$ m(\mu) \ = \ { n \choose n_1,\ldots,n_r } \cdot\ \prod_{i=1}^r \ m(\mu_i) $$ for $m(V)=m(\mu)$, which allows one to express $m(\mu)$ in terms of multinomial coefficients and multiplicities $m(\mu_i)$ for smaller $n$. Via $$ sdim_k(V) \ = \ (-1)^{p(V)} \cdot m(V) \cdot dim_k(\rho(V)) $$ this formula for $m(\mu)$ gives a rather explicit formula for the super dimension of a maximal atypical irreducible representation $V$ with weight $\mu$, since the classical Weyl dimension formula for $Gl(m-n)$ computes $dim_k(\rho(V))$. \bigskip\noindent \section{Weights} \label{weights} \bigskip\noindent Let $k$ be a field of characteristic zero, and let $G= Gl(m\vert n)$ denote the superlinear group over $k$. In the following we always assume $m\geq n$. \bigskip\noindent In this section we review some fundamental facts on highest weights. For more details see [BS1] and [BS4]. Let ${\cal T}$ denote the $k$-linear rigid tensor category $Rep_k(\mu,G)$ of $k$-linear algebraic super representations $\rho$ of $G$ on $\mathbb{Z}/2\mathbb{Z}$-graded finite dimensional $k$-super vector spaces, such that $\rho(id_m,-id_n)$ induces the parity automorphism of the underlying $\mathbb{Z}/2\mathbb{Z}$-grading of the representation space of $\rho$. Here $\mu: {\mathbb{Z}}/2{\mathbb{Z}} \to Gl(m)\times Gl(n) \subset Gl(m\vert n)$ sends the generator to $(id_m,-id_n)$, as in [BS4]. The category $\cal T$ admits a $k$-linear anti-involution ${}^*\! : {\cal T}\to {\cal T}$ so that $V^* \cong V$ holds for all simple objects $V$ and all projective objects $V$ of $\cal T$. The intrinsic dimension $\chi(V)$ in a rigid tensor category $\cal T$ with $End_{\cal T}({\mathbf 1}) \cong k$ is $\chi(V)= eval_V\circ coeval_V$. It is preserved by tensor functors. In our case $\chi(V)$ is the super dimension $sdim_k(V) = dim_k(V_+) - dim_k(V_-)$ of the underlying super vectorspace $V=V_+\oplus V_-$. This is easily seen using the forgetful functor ${\cal T}\to svec_k$ to the category $svec_k$ of finite dimensional $k$-super vector spaces. \bigskip\noindent The isomorphism classes $X^+$ of the irreducible finite dimensional representations of $Gl(m\vert n)$ are indexed by their highest weights $\lambda=(\lambda_1,\ldots,\lambda_m; \lambda_{m+1},\ldots,\lambda_{m+n})$. Here $\lambda_1 \geq \ldots \geq \lambda_m$ and $\lambda_{m+1} \geq \ldots \geq \lambda_{m+n}$ are integers, and every $\lambda\in {\mathbb{Z}}^{m+n}$ satisfying these inequalities occurs as the highest weight of an irreducible representation $L(\lambda)$. The trivial representation $\mathbf 1$ corresponds to $\lambda =0$. Assigned to each highest weight $\lambda\in X^+$ are two subsets of the numberline $\mathbb{Z}$, namely the set $$ I_\times(\lambda)\ =\ \{ \lambda_1 , \lambda_2 - 1, \ldots , \lambda_m - m +1 \} $$ of cardinality $m$, respectively the set of cardinality $n$ $$ I_\circ(\lambda)\ = \ \{ 1 - m - \lambda_{m+1} , 2 - m - \lambda_{m+2} , \ldots
, n-m - \lambda_{m+n} \} \ .$$ Following the notations of [BS4] the integers in $ I_\times(\lambda) \cap I_\circ(\lambda) $ are labeled by $\vee$, the remaining ones in $I_\times(\lambda)$ resp. $I_\circ(\lambda)$ are labeled by $\times$ resp. $\circ$. All other integers are then labeled by $\wedge$. This labeling of the numberline $\mathbb{Z}$ uniquely characterizes the weight vector $\lambda$. If the label $\vee$ occurs $r$ times in the labeling, then $r$ is called the degree of atypicality of $\lambda$. Notice that $0 \leq r \leq n$, and $\lambda$ is called maximal atypical if $r=n$. Let ${\cal A} \subset {\cal T}$ denote the full abelian subcategory generated by the representations $L(\lambda)$ for maximal atypical weights $\lambda$. In the following we usually identify $X^+$ with the set of all labelings of the numberline, such that $\vee$ occurs $r$ times for some $0\leq r\leq n$ and $\times$ respectively $\circ$ occurs $m-r$ respectively $n-r$ times. There are two natural orderings on $X^+$, the Bruhat ordering and the coarser weight ordering. For $\lambda\in X^+$ let ${\cal T}^{\leq \lambda}$ respectively ${\cal T}^{< \lambda}$ denote the full subcategories of $\cal T$ generated by objects all of whose Jordan-H\"older constituents are simple modules $L(\mu)$ with highest weights $\mu\leq \lambda$ (resp. $\mu < \lambda$) with respect to the weight ordering. \bigskip\noindent The abelian category $\cal T$ decomposes into blocks $\Lambda$, defined by the eigenvalues of certain elements in the center of the universal enveloping algebra of the Lie superalgebra $gl(m\vert n)$. Two irreducible representations $L(\lambda)$ and $L(\mu)$ are in the same block if and only if the weights $\lambda$ and $\mu$ define labelings with the same position of the labels $\times$ and $\circ$. The degree of atypicality is a block invariant, and the blocks $\Lambda$ of atypicality $r$ are in 1-1 correspondence with pairs of disjoint subsets of $\mathbb{Z}$ of cardinality $m-r$ resp. $n-r$. The irreducible representations of each block $\Lambda$ are in 1-1 correspondence with the subsets of cardinality $r$ in the numberline with the subset of all $\times$ and all $\circ$ removed. \bigskip\noindent {\it Example}. The Berezin representation $Ber=Ber_{m\vert n}$ of $Gl(m\vert n)$ has highest weight $\lambda=(1,..,1;-1,...,-1)$ with $m$ digits $1$ and $n$ digits $-1$. Its superdimension is $sdim_k(Ber_{m\vert n}) = (-1)^n$ and its dimension is $1$. All powers $Ber^k$ for $k\in\mathbb{Z}$ of the Berezin are maximal atypical, and $L(\mu)\in \cal A$ iff $L(\mu) \otimes Ber^k \in \cal A$. \bigskip\noindent \section{Maximal atypical blocks $\Lambda$} \bigskip\noindent A block $\Lambda$ is maximal atypical if and only if it does not contain any label $\circ$. Assume $\Lambda$ is maximal atypical. Let $j$ then be the minimum of the set of crosses $\times$ defined by $\Lambda$, or $j=1$ if there is no cross. The uniquely defined weight $\lambda$, where the labels $\vee$ are at the positions $j-1,...,j-n$, is called the {\it ground state} of the block. More generally there are the higher ground states $\lambda_N$ for $N=0,1,2,3,\ldots$, where the labels $\vee$ are at the positions $j-1-N,...,j-N-n$. The corresponding weight vectors of these ground states are $$ \lambda_N\ =\ (\lambda_1,...,\lambda_{m-n},\lambda_{m-n}-N,...,\lambda_{m-n}-N; -\lambda_{m-n}+N,...,-\lambda_{m-n}+N) \ $$ where $\{ \lambda_1,\lambda_2 -1,...,\lambda_{m-n}+1-m+n \}$ gives the positions of the labels $\times$.
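\bigskip\noindent {\it Example}. The following sample computation is not taken from [BS1]; it merely spells out the definitions above and may serve as a consistency check. For $Gl(2\vert 1)$ and the weight $\lambda=0$ one finds $I_\times(0)=\{0,-1\}$ and $I_\circ(0)=\{-1\}$, so the labeling has a single $\vee$ at the position $-1$ and a cross $\times$ at the position $0$; in particular $\lambda=0$ is maximal atypical and $j=0$ for its block $\Lambda$. The higher ground states of this block are $\lambda_N=(0,-N;N)$ with the single label $\vee$ at the position $-1-N$. Indeed, for $N=1$ the formula above gives $\lambda_1=(0,-1;1)$, and one checks $I_\times(\lambda_1)=\{0,-2\}$ and $I_\circ(\lambda_1)=\{-2\}$, so the label $\vee$ has moved to the position $-2=j-1-N$, as it should.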
For $m=n$ these are the powers $Ber^{-N}$ of the Berezin, and the ground state is the trivial representation $\mathbf 1$. All ground states define irreducible representations $L(\lambda_N)$ in the given maximal atypical block $\Lambda$. For $\mu_i=\lambda_i - \lambda_{m-n}$, the representation $$ L(\lambda_0) \otimes Ber^{-\lambda_{m-n}} \ =\ L(\mu_1,...,\mu_{m-n},0,...,0;0,...,0) \ ,$$ is again an irreducible maximal atypical representation (usually in another block). It is covariant in the following sense: \bigskip\noindent {\it Covariant representations}. Let $\lambda_1 \geq \lambda_2 \geq ... $ be any partition $\lambda$ of some natural number $N=deg(\lambda):=\sum_{\nu} \lambda_\nu$. Associated to this partition is the covariant representation $$\{ \lambda \}\ := \ Schur_{\lambda}(k^{m\vert n})\ $$ as a direct summand, defined by a Schur projector, of the $N$-fold tensor product $X^{\otimes N}$ of the standard representation $X$ of $Gl(m\vert n)$ on $k^{m\vert n}$. The representation $\{ \lambda \}$ so defined is zero iff $\lambda_{m+1} > n$, and is nontrivial and irreducible otherwise ([BR]). If the {\it hook condition} $\lambda_{m+1}\leq n$ is satisfied, we may visualize this by considering the Young diagram attached to $\lambda$ with first column $\lambda_1$, second column $\lambda_2$ and so on. Let $\beta$ denote the intersection of the Young diagram with the box $\{ (x,y) \vert x \leq m, y \leq n \}$. Then $\lambda$ has the following shape with subpartitions $\alpha,\beta,\gamma$ obtained by intersection with the three hook sectors $$ \lambda\ =\ \begin{matrix} \alpha & \cr \beta & \gamma \cr & \cr\end{matrix} \ .$$ The transposed $\gamma^*$ of the partition $\gamma$ is again a partition with $(\gamma^*)_i=0$ for $i>n$. We quote from [BR] and [JHKTM] the assertion of the next lemma. \bigskip\noindent \begin{Lemma} \label{cov}{\it If $\{ \lambda \}$ is not zero, then $\{ \lambda \} \cong L(\mu)$ is irreducible with highest weight $\mu$ defined by $\mu_i = \lambda_i$ for $i=1,...,m$, and $\mu_{m+i} = max(0,(\lambda^*)_i-m)=(\gamma^*)_i$ for $i=1,...,n$. In other words $$ \Biggl\{ \begin{matrix} \alpha & \cr \beta & \gamma \cr \end{matrix}\Biggr\} \ \cong \ L\biggl( \begin{matrix} \alpha \cr \beta \cr \end{matrix} \ ; \begin{matrix} \cr\gamma^* \cr \end{matrix} \biggr) \ .$$} \end{Lemma} \bigskip\noindent This implies \begin{Lemma} \label{atyp} A covariant representation $\{ \lambda\}$ attached to a partition $\lambda$ satisfying the hook condition is maximal atypical if and only if $\lambda_{m-n+1}=0$, and then $\mu=\lambda$ holds and $$ \{ \lambda\}\ =\ L(\lambda_1,...,\lambda_{m-n},0,..,0;0,..,0) \ .$$ \end{Lemma} \bigskip\noindent {\it Proof}. One direction is clear. For $\lambda_{m-n+1}=0$ the representation $\{\lambda\}$ corresponds to the ground state of some maximal atypical block as explained above. For the converse assertion notice that $I_\times(\mu) \subset [1-m,\infty)$ for the highest weight $\mu$ of the representation $\{\lambda\}$ by lemma \ref{cov}. Hence $1-m-\mu_{m+1}$ is in $I_\circ(\mu)$, but not in $I_\times(\mu)$ if $\mu_{m+1} = (\gamma^*)_1 >0$. Hence, if $L(\mu)$ is maximal atypical, we conclude $\gamma^*=0$ and hence $\gamma=0$. So we can assume $m>n$. Then $I_\circ(\mu) = \{1-m,2-m,...,n-m\}$, since $\gamma^*=0$. Therefore none of the $\mu_1, \mu_2-1,..,\mu_{m-n}+n-m+1$ is in $I_\circ(\mu)$, since $\mu_i = \lambda_i \geq 0$ for $i=1,..,m$. If $L(\mu)$ is maximal atypical, then all remaining $\mu_i + 1- i $ for $i=m-n+1,..,m$ must be contained in $I_\circ(\mu)$.
Since $\lambda_{m-n+1}=\mu_{m-n+1}$ is $\geq 0$, this implies $\lambda_{m-n+1} + n- m \in I_\circ(\mu)$ and therefore $\lambda_{m-n+1}=0$. QED \bigskip\noindent Consider $$ \Pi\ =\ \Lambda^{m-n}(X) \otimes Ber_{m\vert n}^{-1}\ =\ L(0,..,0,-1,..,-1;1,..,1) $$ with $n$ digits $1$ and $-1$, the first higher ground state in the $\mathbf 1$-block. \begin{Lemma} \label{ground} Let $\Lambda$ be a maximal atypical block. For the ground states $L(\lambda_N)$ of order $N=0,1,2,..$ in this block $\Lambda$ we obtain $$ L(\lambda_{N}) \otimes \Pi \ \cong \ L(\lambda_{N+1}) \oplus R_N $$ for certain $R_N$, whose projections to all maximal atypical blocks of $\cal T$ are zero. \end{Lemma} \bigskip\noindent {\it Proof}. By a twist with a suitable power of the Berezin we can easily reduce to the case $N=0$ and $\lambda_{m-n+1}=0$. Then $\lambda_0=\lambda$ for $ \lambda=(\lambda_1,...,\lambda_{m-n},0,..,0; 0,..,0) $ defines a covariant representation $L(\lambda_0)=\{\lambda\}$. By the well known properties of Schur projectors, $$ L(\lambda_0) \otimes \Lambda^{m-n}(X) \ = \ \bigoplus_{\rho} \ [\rho: \lambda,\Lambda^{m-n}] \cdot \{ \rho \}$$ holds with the Littlewood-Richardson coefficients $[\rho: \lambda,\Lambda^{m-n}]$. It is well known that $[\rho: \lambda,\Lambda^{m-n}]\neq 0$ implies $\rho_{m-n+1}>0$ unless $\rho=\lambda+\Lambda^{m-n}$. Hence by lemma \ref{atyp} all summands $\{\rho \}$ are not maximal atypical, except for $\{\rho\} = \{\lambda + \Lambda^{m-n}\}$. By twisting with $Ber^{-1}$ our claim follows. QED \bigskip\noindent \bigskip\noindent \section{\bf The stable category $\cal K$} \bigskip\noindent The abelian category ${\cal T=\cal T}_{m\vert n}$ is a Frobenius category, i.e. it has enough projective objects and the injective and projective objects coincide. Let ${\cal K=\cal K}_{m\vert n}$ be the quotient category with the same objects as $\cal T$, but with $Hom_{{\cal K}}(A,B)$ defined as the quotient of $Hom_{{\cal T}}(A,B)$ by the $k$-subvectorspace of all homomorphisms which factor through a projective module. The natural functor $$\alpha: {\cal T}\longrightarrow {\cal K}$$ is a $k$-linear tensor functor. The category ${\cal K}$ is a triangulated category with a suspension functor $S(A)=A[1]$, and the quotient functor $\alpha$ associates to exact sequences in $\cal T$ distinguished triangles in ${\cal K}$. Furthermore $$ Ext_{\cal T}^i(A,B)\ \cong\ Hom_{{\cal K}}(A,B[i]) \quad , \quad \forall \ i> 0 \ .$$ For a simple object $M$ in $\cal T$ either $M$ is projective in $\cal T$ and hence zero in ${\cal K}$, or $M$ is not zero in ${\cal K}$ and $$ Hom_{{\cal K}}(M,M) \ = \ k \cdot id_M \ $$ is one dimensional. Notice that $A[1] \cong I/A$ for an embedding $A \hookrightarrow I$ into a projective object $I$, and similarly $A[-1] \cong Ker(P \to A)$ for a projective resolution $P\to A$. Since $P^* \cong P$ and $\cal T$ is a Frobenius category, this implies that ${}^*$ induces an involution of the stable category ${\cal K}$ such that $(A[n])^* \cong A^*[-n]$. \bigskip\noindent \begin{Theorem} \label{ext} ([BS2] corollary 5.15). $dim_k(Ext_{\cal T}^i(L(\lambda), L(\mu)))$ is equal to $$\sum_{j+k=i}\ \sum_{\nu} \ \ dim_k(Ext_{\cal T}^j(V(\nu), L(\lambda))) \cdot dim_k(Ext_{\cal T}^k(V(\nu), L(\mu))) \ .$$ \end{Theorem} \bigskip\noindent \begin{Theorem} ([BS2], corollary 5.5). $dim_k(Ext_{\cal T}^i(V(\lambda), L(\mu)))=0$ unless $\lambda\leq \mu$ and $i \equiv l(\lambda,\mu)$ modulo 2.
\end{Theorem} \bigskip\noindent Here $l(\lambda,\mu)$ denotes the minimum number of transpositions of neighbouring $\vee\!\wedge$ pairs needed to get from $\lambda$ to $\mu$, where neighbouring means separated only by $\circ$'s and $\times$'s. Put $p(\lambda)=\sum_{i=1}^n \lambda_{m+i}$. If $\lambda$ and $\mu$ are maximal atypical, then $p(\lambda)\equiv p(\mu) + l(\lambda,\mu)$ modulo two. Indeed it suffices to show this in the case of neighbours $\lambda$ and $\mu$ where $l(\lambda,\mu)=1$. For maximal atypical weights $I_\times \cap I_\circ = I_\circ$, and a single transposition modifies $\sum_{j \in I_\times\cap I_\circ} j = -\sum_{i=1}^{n} \lambda_{m+i} + \sum_{i=1}^{n} (i-m) $ by one. Hence the last two theorems imply \bigskip\noindent \begin{Lemma} \label{parity} For $L(\lambda)$ and $L(\mu)$ in $\cal A$ and $i \geq 0$ we have $Hom_{{\cal K}}(L(\lambda), L(\mu)[i])=0$ unless $p(\lambda) \equiv p(\mu) + i$ modulo two. \end{Lemma} \begin{Lemma} \label{trivial} Let $\cal C$ be a $k$-linear category and let $A$ and $B$ be objects of $\cal C$ such that $End_{\cal C}(A)\cong k$ and $End_{\cal C}(B) \cong k$. Let $\varphi: B\to A$ and $\psi:A\to B$ be morphisms in $\cal C$. Then, either $\varphi$ and $\psi$ are isomorphisms, or $\varphi\circ\psi=0$ and $\psi\circ\varphi =0$. \end{Lemma} \bigskip\noindent {\it Proof}. Suppose $\varphi\circ\psi\neq 0$. Then by a rescaling we can assume $\varphi \circ \psi = id_A$. Then $\varphi \circ \psi \circ \varphi = \varphi$. Since $\psi \circ \varphi = \lambda \cdot id_B$ for some $\lambda\in k$ and since $\varphi\neq 0$ by our assumption, we conclude $\lambda =1$. Hence $\varphi$ and $\psi$ are isomorphisms. The same conclusions hold if $\psi\circ\varphi \neq 0$. QED \bigskip\noindent \section{Kostant weights}\label{Kosw} \bigskip\noindent By [BS2], lemma 7.2 a weight $\mu$ is a Kostant weight, i.e. satisfies for every $\nu\in \Lambda$ $$ \sum_{i=0}^\infty \ dim_k( Ext^i_{\cal T}(V(\nu),L(\mu))) \ \leq \ 1 \ , $$ if and only if no subsequence of type $\vee\! \wedge\! \vee\! \wedge$ occurs in its labeling. All ground states $\lambda_N$ of the maximal atypical blocks are Kostant weights. In particular there is at most one index $i\! =\! i(\nu)$, depending on $\lambda_N$ and $\nu$, such that $Ext^i_{\cal T}(V(\nu),L(\lambda_N)) \neq 0$, and in this case $dim_k(Ext^{i(\nu)}_{\cal T}(V(\nu),L(\lambda_N))) = 1$. \bigskip\noindent By [BS2] corollary 5.5 and the complementary formula (5.3) for $p_{\nu,\mu}$ in loc. cit., $ Ext^i_{\cal T}(V(\nu),L(\mu))=0$ holds unless $\nu$ and $\mu$ are in the same block $\Lambda$, $\nu \leq \mu$ holds in the Bruhat ordering and $i \leq l(\nu,\mu)$. Suppose $\nu,\mu$ are in the same block $\Lambda$, and suppose $\nu\leq \mu$ holds in the Bruhat ordering. Then for $i=l(\nu,\mu)$ we have $Ext_{\cal T}^i(V(\nu),L(\mu))\neq 0$ by inspection of the formula (5.3) in [BS2]. Indeed in the set $D(\nu,\mu)$ of labeled cap diagrams $C$ defined in loc. cit., there exists at least one cap diagram with $\vert C\vert =0$, since $\nu\leq \mu$ in the Bruhat ordering implies $l_i(\nu,\mu)\geq 0$ for all $i\in I(\Lambda)$ in the notations of loc. cit. Since the leftmost vertex of a small cap is always $\vee$ and hence is contained in $I(\Lambda)$, there exists some $C\in D(\nu,\mu)$ with $\vert C\vert=0$.
This implies \begin{Lemma} \label{Kostant} For $L=L(\mu)$ and a Kostant weight $\mu$ the $k$-vectorspace $Ext^i_{\cal T}(V(\nu),L)$ is one dimensional, if $\nu,\mu$ are in the same block $\Lambda$ and $\nu\leq \mu$ holds in the Bruhat ordering and $i=l(\nu,\mu)$, and it is zero otherwise. \end{Lemma} \bigskip\noindent If $\mu=\lambda_N$ is one of the ground state weights of the block $\Lambda$, then the conditions $\nu\in \Lambda$, $\nu\leq \mu$ and $i=l(\nu,\mu)$ only depend on the relative positions of the labels $\vee$ in the numberline after the crosses defined by the block $\Lambda$ are removed. So they do not depend on the block, but only on the number $n$ of labels $\vee$ of the Kostant weight. This implies \begin{Corollary} \label{dimen} For any block $\Lambda$ of $\cal T$ and ground state representation $L=L(\lambda_N)$ of this block we have for all $j\geq 0$ $$ dim_k(Ext^j_{\cal T}(L,L))\ =\ dim_k(Ext^j_{\cal T}({\mathbf 1},{\mathbf 1})) \ .$$ \end{Corollary} \bigskip\noindent {\it Proof}. By theorem \ref{ext} we get $dim_k(Ext_{\cal T}^j(L,L)) = \sum_\nu dim_k(Ext_{\cal T}^{i(\nu)}(V(\nu),L))^2 $ with summation over all $\nu$ such that $j=2i(\nu)$. Since by lemma \ref{Kostant} the summation conditions and the dimensions in this sum only depend on $j$ and not on the chosen ground state $L$ or block $\Lambda$, we may replace $L$ by the ground state ${\mathbf 1}$ of the trivial block. QED \bigskip\noindent \bigskip\noindent \section{\bf The localization $\cal B$ of the tensor category} \bigskip\noindent For the moment let $\cal K$ be any $k$-linear triangulated tensor category (meaning symmetric monoidal). Then there exists a triangulated tensor functor ${\cal K \to \cal K}^\sharp$ of idemcompletion (see [BS]). \bigskip\noindent Let $k_{\cal K}=End_{\cal K}({\mathbf 1})$ be the central $k$-algebra of $\cal K$. Let $u\in \cal K$ be an invertible object (we use this for $u={\mathbf 1}[1]$). The monoidal symmetry $\sigma_u: u\otimes u \cong u\otimes u$ is given by multiplication with an element $\epsilon_u \in k_{\cal K}^*$ with $(\epsilon_u)^2=1$. Furthermore $$R^\bullet_{\cal K} \ = \ \bigoplus_{i\in\mathbb{Z}} \ Hom_{\cal K}({\mathbf 1},u^{\otimes i} )$$ becomes a supercommutative ring with the parity $\epsilon_u$, i.e. $f g = (\epsilon_u)^{ij} g f$ for homogeneous elements of degree $i$ and $j$. See [Ba], Prop. 3.3. Let $R \subset R^\bullet_{\cal K}$ be the graded subring generated by the elements of degree $\geq 0$. Let $S \subset R^\bullet_{\cal K}$ be a multiplicative (and even, if $\epsilon_u \neq 1$) subset; then the ring localization $S^{-1}R^\bullet_{\cal K}$ is defined. Define a new category by the degree zero elements $$ Hom_{S^{-1}\cal K}(A,B) : = (S^{-1} Hom_{\cal K}^\bullet(A,B))^0 $$ of the localization of the graded $R^\bullet_{\cal K}$ module $$Hom_{\cal K}^\bullet(A,B) \ =\ \bigoplus_{i\in\mathbb{Z}} \ Hom_{\cal K}(A,u^{\otimes i} \otimes B)\ .$$ For $M\in \cal K$ the annihilator of $Hom_{\cal K}^\bullet(M,M)$ in $R$ is a graded ideal, which defines the support variety $V(M)$ as the spectrum of the quotient ring. \bigskip\noindent \begin{Theorem}([Ba], thm. 3.6). {\it As a tensor category $S^{-1}\cal K$ is equivalent to the Verdier quotient of the tensor category $\cal K$ divided by the thick triangulated tensor ideal generated by the cones of the morphisms in $S$. The quotient category $\cal B$ is an $S^{-1}R^{\bullet}_{\cal K}$-linear category. The quotient functor is a $k$-linear triangulated tensor functor}.
\end{Theorem} \bigskip\noindent In the following we only apply this for the stable category $\cal K$ of the representation category $\cal T$ for $u={\mathbf 1}[1]$. In this case $\epsilon_u=-1$. Then we have \bigskip\noindent \begin{Proposition}(\mbox{[BKN1], 8.11}).\label{grad} {\it The graded ring $R$ of the Lie super algebra $psl(n\vert n)$ is isomorphic to a graded polynomial ring $k[\zeta_2,\zeta_4,...,\zeta_{2n-2},\xi_n,\eta_n]$ of transcendence degree $n+1$. } \end{Proposition} \bigskip\noindent By using restriction from $Gl(m\vert n)$ for $m>n$ one finds that for $Gl(n\vert n)$ the elements $\zeta_2,..,\zeta_{2n-2}$ also exist. Now suppose $m=n$. Then by proposition \ref{grad} there exists a power $\cal L$ of the Berezin such that $\xi_n: {\mathbf 1} \to {\cal L}[n]$ and $\eta_n: {\mathbf 1} \to {\cal L}^{-1}[n]$. The product $\zeta_{2n}=\eta_n\xi_n$ is in $R$. In the appendix \ref{class} we show ${\cal L} = Ber_{n\vert n}$. \bigskip\noindent \begin{Proposition}([BKN2], p.23). \label{pol}{\it The graded ring $R$ of the category $Gl(m\vert n)$ is isomorphic to a graded polynomial ring $S= k[\zeta_2,\zeta_4,...,\zeta_{2n}]$ of transcendence degree $n$ for all $m\geq n$.} \end{Proposition} \bigskip\noindent For the support variety $V(M)$ of an object $M\in \cal K$, defined as above, we quote from [BKN1], section 7.2, p. 29 and [BKN2], 4.8.1 \bigskip\noindent \begin{Theorem}([BKN2], thm. 4.8.1). \label{1}{\it For $Gl(m\vert n)$ the dimension of the support variety of a simple object $L(\lambda)$ is equal to the degree of atypicality of $L(\lambda)$.} \end{Theorem} \bigskip\noindent \begin{Theorem}([BKN2], thm. 4.5.1 and 4.8.1). \label{2} {\it For $Gl(m\vert n)$ the support variety of a simple maximal atypical object $L(\lambda)$ of $\cal T$ is $Spec(R)$.} \end{Theorem} \bigskip\noindent Now fix the supergroup $Gl(m\vert n)$. Put $K=Quot(R)=k(\zeta_{2},...,\zeta_{2n})$, and put $S=R \setminus \{ 0\}$. Notice that, as required, $S$ only contains even elements, by proposition \ref{pol} above. Furthermore, since $R$ is an integral domain, $S^{-1}R$ is isomorphic to the extension field $K$ of $k$. Let ${\cal B} =S^{-1}{\cal K}$ be the corresponding localized category. If $\cal B$ is not idemcomplete, we replace it by its idemcompletion $\cal B^\sharp$ from now on. ${\cal B}$ is a $K$-linear category. It is obvious that the functor ${}^*$ respects $R$, hence induces a corresponding involution ${}^*$ of ${\cal B}$. The natural quotient functor $$ \beta: {\cal K} \ \longrightarrow \ {\cal B} \ $$ is a $k$-linear triangulated tensor functor. \bigskip\noindent \section{The homotopy category ${\cal H}$} \bigskip\noindent In the last section we defined the quotient category $\cal B$ of the stable category ${\cal K}$. We now define another Verdier quotient category $\cal H$ of ${\cal K}$, which is obtained by dividing ${\cal K}$ by the thick triangulated $\otimes$-ideal of ${\cal K}$ generated by the anti-Kac modules. Recall that for each highest weight $\lambda \in X^+$ there exists a cell module $V(\lambda)$ (or Kac module) in the sense of [BS4] in the category $\cal T$. We define the anti-Kac modules as the $V(\lambda)^*$, via the anti-involution ${}^*$. \bigskip\noindent \begin{Lemma}{\it For (anti-)Kac modules $V$ and $\zeta_{2i}\in R$ the space $\bigoplus_{n\in \mathbb{Z}} Hom_{\cal K}(V,V[n])$ is annihilated by a sufficiently high power of $\zeta_{2i}$.} \end{Lemma} \bigskip\noindent {\it Proof}. By [BKN2], thm.
3.2.1 the groups $Ext_{\cal T}^j(V,M)$ vanish for fixed $M\in\cal T$ if $j\gg 0$. Hence a high enough power of $\zeta_{2i}$ annihilates $Hom_{\cal K}(V,V[n])$. By applying the functor ${}^*$ this carries over to $V^*$. QED \bigskip\noindent Since $\zeta_{2i}$ becomes invertible in $\cal B$, this implies $Hom_{\cal B}(V,V)=0$ for each Kac module $V$. Hence \bigskip\noindent \begin{Lemma}\label{cell} {\it The image of an (anti-)Kac module $V$ is zero in $\cal B$.} \end{Lemma} \bigskip\noindent Recall that in [W] we defined the homotopy category ${\cal H}$, which is equivalent as a $k$-linear tensor category to the Verdier quotient of the category ${\cal K}$ divided by the thick tensor ideal generated by all anti-Kac modules. By the last lemma we obtain \bigskip\noindent \begin{Corollary} {\it The quotient functor $\beta: {\cal K} \to {\cal B}$ factorizes over the homotopy quotient functor $\gamma: {\cal K} \to {\cal H}$.} \end{Corollary} \bigskip\noindent We quote from [W] the following \begin{Theorem} \label{hot} In the category $\cal H$ for simple objects $M$ and $N$ in $\cal T$ with highest weights $\mu$ and $\lambda$ the following holds: \begin{enumerate} \item $End_{\cal H}(M) = k \cdot id_M$, \item $Hom_{\cal H}(M,N)$ is a finite dimensional $k$-vectorspace, \item $Hom_{\cal H}(V,N) \cong Hom_{\cal T}(V,N)$ for every cell module $V=V(\mu)$, \item $Hom_{\cal H}(M,N) =0$ if $\mu < \lambda$ holds with respect to the weight ordering, \item Let ${\cal H}^{\leq \lambda}$ denote the full subcategory quasi equivalent to the image of ${\cal T}^{\leq \lambda}$ and similarly ${\cal H}^{< \lambda}$ for ${\cal T}^{< \lambda}$. Then suspension induces a functor $$ [1]: \ {\cal H}^{\leq \lambda}\ \longrightarrow \ {\cal H}^{< \lambda} \ .$$ \end{enumerate} \end{Theorem} \bigskip\noindent \begin{Lemma} \label{unit} $End_{\cal B}({\mathbf 1}) = K$. \end{Lemma} \bigskip\noindent {\it Proof}. By [Ba], prop 3.3 we have $End_{\cal B}({\mathbf 1}) = S^{-1}R^\bullet_{\cal K}$. Hence it suffices to show for $n>0$ that all morphisms $$ \psi: {\mathbf 1} \longrightarrow {\mathbf 1}[-n] $$ are annihilated by a power of the element $\zeta_{2} \in R$. Suppose $n=2i-1$ is odd. Then this is obvious, since $(\zeta_2)^i \circ \psi: {\mathbf 1}\to u={\mathbf 1}[1]$ vanishes by parity reasons. Indeed $Hom_{{\cal K}}({\mathbf 1},{\mathbf 1}[1])=0$ by lemma \ref{parity}. Now suppose $n=2i$ is even and consider $(\zeta_2)^i\cdot \psi$ in $End_{\cal K}({\mathbf 1})=k$. By the trivial lemma \ref{trivial} the morphism $\psi$ is an isomorphism unless $(\zeta_2)^i \cdot \psi =0$. If $\psi$ were an isomorphism in $\cal K$, it would remain an isomorphism in $\cal H$ and in $\cal B$. However $Hom_{\cal H}({\mathbf 1},{\mathbf 1}[-n])=0$ holds for all $n>0$ by the assertions 4) and 5) of theorem \ref{hot}. QED \bigskip\noindent The same argument shows \bigskip\noindent \begin{Lemma}\label{nonvan}{\it For simple objects $M$ in $\cal T$ the image of $Hom_{\cal K}(M,M[-n])$ under the natural map $$ Hom_{\cal K}(M,M[-n]) \longrightarrow S^{-1} Hom^\bullet_{\cal K}(M,M)^0 = Hom_{\cal B}(M,M) $$ is zero for $n>0$.} \end{Lemma} \bigskip\noindent {\it Proof}. $Hom_{\cal H}(M,M[-n])$ vanishes for simple $M$ and all $n>0$ by weight reasons; use parts 4) and 5) of theorem \ref{hot}.
QED \bigskip\noindent By lemma \ref{nonvan}, for simple objects $M$ the endomorphism ring $Hom_{\cal B}(M,M)$ is obtained from quotients $f/r$ with $f\in Hom_{\cal K}(M,M[i])$ and $r\in Hom_{\cal K}({\mathbf 1},{\mathbf 1}[i])$ only for degrees $i\geq 0$, of course modulo the usual equivalence defined by the localization $S^{-1}$. We may suppose that the simple object $M$ is not projective, since otherwise $M$ vanishes in $\cal K$ and hence in $\cal B$. Then $M\neq 0$ in $\cal K$, and $Hom_{\cal T}(M,M) \cong Hom_{\cal K}(M,M) \cong k$. Hence $Ann_R(Ext^\bullet_{\cal T}(M,M)) = Ann_R(\bigoplus_{i=0}^\infty Hom_{\cal K}(M,M[i]))$. Hence $r\in Ann_R(Ext^\bullet_{\cal T}(M,M))$ iff $r \cdot f =0$ for all $f\in Ext^\bullet_{\cal T}(M,M)$. This is related to the support variety $V(M)$ of the simple object $M$ $$V(M) = Spec(R/Ann_R(Ext^\bullet_{\cal T}(M,M))) \ $$ as follows. There exists an $r\neq 0$ in $R$ that annihilates $\bigoplus_{i=0}^\infty Hom_{\cal K}(M,M[i])$ iff the support variety $V(M)$ is not equal to $Spec(R)$. The first statement is equivalent to $Hom_{\cal B}(M,M)=0$. Hence \begin{Corollary} Let $M$ be a simple object of $\cal T$. Then $M$ vanishes in $\cal B$ if and only if the support variety $V(M)$ is a proper subset of $Spec(R)$. \end{Corollary} \bigskip\noindent \begin{Corollary} \label{Wakimoto} {\it A simple object $M$ of $\cal T$ vanishes in $\cal B$ iff $M$ is not maximal atypical.} \end{Corollary} \bigskip\noindent {\it Proof}. Use theorems \ref{1} and \ref{2}. QED \bigskip\noindent \begin{Lemma}{\it For simple objects $M$ and $N$ in $\cal T$ the space $Hom_{\cal B}(M,N)$ vanishes unless $M$ and $N$ have the same parity in the sense of lemma \ref{parity}.} \end{Lemma} \bigskip\noindent {\it Proof}. As explained above, any morphism in $\cal B$ between $M$ and $N$ is of the form $f/r$ for $f:M\to N[i]$ in $\cal K$, $r\in R$ for some even $i\geq 0$. Hence $f$ corresponds to an element in $Ext_{\cal T}^i(M,N)$. We can assume that $M$ and $N$ are maximal atypical, since otherwise $M$ and $N$ are zero in $\cal B$. Then $Ext_{\cal T}^i(M,N)$ vanishes by lemma \ref{parity} unless $M$ and $N$ have the same parity. QED \bigskip\noindent We remark that $\zeta_{2}$ becomes an isomorphism in $\cal B$. Hence \bigskip\noindent \begin{Lemma} ${\mathbf 1}[2] \cong {\mathbf 1}$ in $\cal B$. \end{Lemma} \bigskip\noindent \section{The ground state categories ${\cal Z}_\Lambda$}\label{groundcat} \bigskip\noindent {\bf Definition}. Let $\Lambda$ be a maximal atypical block of $\cal T$ and let $L$ be the ground state representation of this block. Let ${\cal Z}_\Lambda$ denote the full subcategory of $\cal B$ of all objects isomorphic to a finite direct sum of copies of $L$ and $L[1]$. For the unit block where $L={\mathbf 1}$ we simply write $\cal Z$. \bigskip\noindent \begin{Lemma} \label{thick} ${\cal Z}_\Lambda$ is a thick idemcomplete triangulated subcategory of $\cal B$, i.e. it is closed under retracts, extensions and the shift functor. \end{Lemma} \bigskip\noindent {\it Proof}. First suppose $L={\mathbf 1}$. Notice $$ Hom_{\cal B}(a\cdot {\mathbf 1} \oplus b\cdot {\mathbf 1}[1], c\cdot {\mathbf 1} \oplus d\cdot {\mathbf 1}[1]) \ \cong \ Hom_{K}(K^a,K^c) \oplus Hom_K(K^b,K^d)$$ by lemma \ref{parity} and lemma \ref{unit}. Hence by the usual properties of matrix rings the category $\cal Z$ is an idempotent split category. By the same reasons $\cal Z$ is closed under retracts as well as under cones.
Since ${\mathbf 1}[2]\cong {\mathbf 1}$, shifts preserve $\cal Z$. Hence $\cal Z$ is a thick idempotent split triangulated subcategory of $\cal B$. The same carries over to ${\cal Z}_\Lambda$ by the next lemma. QED \begin{Lemma} Suppose $L$ is a ground state of a maximal atypical block $\Lambda$ in $\cal T$. Then $Ext^\bullet_{\cal T}(L,L)$ is a graded free module over $R=Ext^\bullet_{\cal T}({\mathbf 1},{\mathbf 1})$, hence in particular $$End_{\cal B}(L)\ \cong\ K \cdot id_L\ .$$ \end{Lemma} \bigskip\noindent {\it Proof}. By theorem \ref{2} the annihilator of $Ext_{\cal T}^\bullet(L,L)$ in $R$ is trivial. Hence the graded $R$-homomorphism $R\cdot id_L \to Ext_{\cal T}^\bullet(L,L)$ is injective. By corollary \ref{dimen} the comparison of $k$-dimensions shows that it is an isomorphism. QED \bigskip\noindent \begin{Lemma} \label{trivi} Suppose the image of an irreducible maximal atypical highest weight representation $L(\lambda)$ in $\cal B$ is contained in $\cal Z$. Then $$ sdim_k(L(\lambda)) = (-1)^{p(\lambda)} \cdot m(\lambda) \ $$ for some integer $m(\lambda)>0$. \end{Lemma} \bigskip\noindent {\it Proof}. By corollary \ref{Wakimoto} the object $L(\lambda)$ is not zero in $\cal B$. Lemma \ref{parity} together with the assumption $ L(\lambda) \in \cal Z$ hence implies $L(\lambda) \cong c \cdot {\mathbf 1}[p'(\lambda)]$ for some integer $c> 0$, where $p'(\lambda)\in\{0,1\}$ is uniquely defined. Then $sdim_k(L(\lambda)) = (-1)^{p'(\lambda)} c$. Two remarks. First $\chi_{\cal H}({\mathbf 1}[1]) = \chi_{\cal B}({\mathbf 1}[1])=-1$ for $u={\mathbf 1}[1]$, since $\epsilon_u = -1$. Notice that this holds in the homotopy category $\cal H$ by [W] and hence in $\cal B$. Secondly, for all $L(\lambda)$ in $\cal Z$ we have $p'(\lambda) = p(\lambda) + const$, where $const \in \mathbb{Z}$ is independent of $L(\lambda)$ in $\cal Z$ by lemma \ref{parity}. For $\lambda =0$ and $L(\lambda)={\mathbf 1}$ this shows $const \in 2\mathbb{Z}$. Therefore $(-1)^{p'(\lambda)}=(-1)^{p(\lambda)}$. QED \bigskip\noindent Lemma \ref{ground} and corollary \ref{Wakimoto} imply for the atypical representations $L(\lambda_N), L(\lambda_{N+1})$ and $\Pi$ the following relation in $\cal B$ \begin{Lemma} \label{N-state} In $\cal B$ we have $L(\lambda_N) \otimes \Pi \cong L(\lambda_{N+1})$ for all $N\geq 0$. \end{Lemma} \section{The reductive group $Gl(m-n)$}\label{mainth} \bigskip\noindent Consider the tensor category ${\cal T}={\cal T}_{m\vert n}$ as before. Let $H=Gl(m-n)$ denote the linear group over $k$ and let $Rep_k(H)$ denote the $k$-linear rigid semisimple tensor category of all algebraic representations of $Gl(m-n)$ on finite dimensional $k$-vectorspaces. \bigskip\noindent Embedded in $Gl(m\vert n)$ in the obvious way is the subgroup $H \times Gl(n\vert n)$. Restriction of a super representation of $Gl(m\vert n)$ on a finite dimensional $k$-super vectorspace to the subgroup $H \times Gl(n\vert n)$ defines a $k$-linear exact tensor functor $$Res: {\cal T}_{m\vert n} \longrightarrow Rep_k(H) \otimes_k {\cal T}_{n\vert n}\ .$$ The restriction of a projective representation $P$ in ${\cal T}_{m\vert n}$ decomposes into a direct sum of isotypic representations $P=\bigoplus_\rho P_\rho$ with respect to the action of the reductive group $H$. Each $P_\rho$ is a $Gl(n\vert n)$ representation, which is projective as a direct summand of the projective object $P$ viewed as a super representation in ${\cal T}_{n\vert n}$. Hence $Res({\cal P}) \subset Rep_k(H) \otimes_k {\cal P}$ holds for the subcategory ${\cal P}$ of projective objects.
Similarly a morphism in ${\cal T}_{m\vert n}$, which is stably equivalent to zero, restricts to a direct sum of morphisms $f_\rho$ with respect to the action of $H$, such that each of the morphisms $f_\rho$ is stably equivalent to zero in ${\cal T}_{n\vert n}$. Hence the restriction induces a tensor functor $$ res: \ {\cal K}_{m\vert n} \ \longrightarrow \ Rep_k(Gl(m-n)) \otimes_k {\cal K}_{n\vert n} \ .$$ The suspension $(.)[1]$ thereby maps to the suspension $id_{Rep_k(H)} \otimes_k (.)[1]$, since an embedding $X \hookrightarrow I $ decomposes into $Res(X)=\bigoplus_\rho Res(X)_\rho \hookrightarrow \bigoplus_\rho Res(I)_\rho$. Thus $res$ becomes a triangulated $k$-linear tensor functor. The triangulated structure on $Rep_k(Gl(m-n)) \otimes_k {\cal K}_{n\vert n} $ is induced by the triangulated structure on ${\cal K}_{n\vert n} $ in an obvious way, noticing that $Rep_k(H)$ is semisimple. \bigskip\noindent Using detecting subalgebras as in [BKN] one can show that $Ext^n_{{\cal T}_{m\vert n}}(k,k)$ restricts properly and surjectively to $Ext^n_{{\cal T}_{n\vert n}}(k,k)$. By the universal property of the Verdier quotient categories the functor $res$ induces a functor on the Verdier quotient category ${\cal B}_{m\vert n}$ of ${\cal T}_{m\vert n}$ $$ \gamma: {\cal B}_{m\vert n} \ \longrightarrow \ Rep_k(H) \otimes_k {\cal B}_{n\vert n} \ .$$ This functor is a $K$-linear triangulated tensor functor. \bigskip\noindent Now use \begin{Theorem} \label{Z} For $m \geq n$ the image of the block of the trivial representation in the category ${\cal B}_{m\vert n}$ is equivalent as a $K$-linear triangulated tensor subcategory of ${\cal B}_{m\vert n}$ to the $K$-linear triangulated tensor category ${\cal Z} \sim svec_K$ of finite dimensional $K$-super vectorspaces. \end{Theorem} \bigskip\noindent Taking this theorem for granted at the moment we proceed as follows: We apply the last theorem for $m=n$, which allows us to consider $\gamma$ as a $K$-linear triangulated tensor functor $$ \gamma: {\cal B}_{m\vert n} \ \longrightarrow \ Rep_k(H) \otimes_k svec_K \ .$$ The right side can be viewed as the category of finite dimensional $K$-algebraic super representations of the reductive group $Gl(m-n)$ over $K$. Up to twist by powers of the determinant representation of $Gl(m-n)$ a basis of simple objects is given by the representations $Schur_\mu(K^{m-n})$ and $Schur_\mu(K^{m-n})[1]$, where $\mu=(\mu_1 \geq \mu_2 \geq \ldots \geq \mu_{m-n}\geq 0)$ runs over the partitions of length $\leq m-n$. \bigskip\noindent Let us look at what the functor $\gamma$ does with the image in ${\cal B}_{m\vert n}$ of the standard representation $X_{m\vert n} = k^{m\vert n}$ of $Gl(m\vert n)$. This standard representation restricts to $Res(X_{m\vert n})= (k^{m-n} \otimes_k {\mathbf 1}) \bigoplus ({\mathbf 1} \otimes_k X_{n\vert n}) $. $X_{n\vert n}$ becomes zero in ${\cal B}_{n\vert n}$ by corollary \ref{Wakimoto}, since it is not maximal atypical. Hence $$ \gamma(X_{m\vert n}) \ \cong\ K^{m-n} $$ is the standard $K$-linear representation of $Gl(m-n,K)$ on $K^{m-n}$. Now we use the following stronger version of the last theorem \begin{Theorem} \label{aZ} As a $K$-linear triangulated category the full image of each block $\Lambda$ of ${\cal T}_{m\vert n}$ in ${\cal B}_{m\vert n}$ is isomorphic to the $K$-linear triangulated category $svec_K$ spanned by the ground state $L(\lambda_0)$ of the block $\Lambda$. \end{Theorem} \bigskip\noindent Theorem \ref{aZ} immediately implies theorem \ref{Z}.
Since $\gamma$ is a $K$-linear triangulated tensor functor, theorem \ref{aZ} also implies that $\gamma$ is an exact $K$-linear equivalence of $K$-linear triangulated abelian tensor categories once we know \begin{Lemma} \label{ss} The category ${\cal B}_{m\vert n}$ is semisimple and hence abelian. \end{Lemma} \bigskip\noindent Indeed the lemma implies exactness of the functor $\gamma$, and then corollary \ref{Wakimoto} and theorem \ref{aZ} imply faithfulness. Hence $\gamma$ induces a faithful embedding of categories. That $\gamma$ is full then is an immediate consequence. Put $\varphi= \gamma\circ \beta\circ \alpha$. Then theorem \ref{aZ} and lemma \ref{ss} imply the following generalization of theorem \ref{Z} \bigskip\noindent {\bf Main Theorem}. {\it As a rigid $K$-linear triangulated tensor category ${\cal B}_{m\vert n}$ is semisimple and hence abelian, and as a $K$-linear abelian tensor category ${\cal B}_{m\vert n}$ is equivalent to the category of $K$-algebraic finite dimensional $K$-linear super representations of the reductive $K$-group $Gl(m-n)$. Each simple maximal atypical object $M=L(\lambda)$ maps to $$ \varphi(M) \ =\ m(\lambda) \cdot L(\lambda_0)\,[p(\lambda)]\ ,$$ where the multiplicity $m(\lambda)$ is an integer $>0$ and $L(\lambda_0)$ denotes the ground state in the block $\Lambda$ defined by $L(\lambda)$.} \bigskip\noindent {\it Remark}. In particular this confirms the conjecture of Kac and Wakimoto in the case of the superlinear groups $Gl(m\vert n)$. \bigskip\noindent {\it Remark}. For the object $\Pi_{m\vert n} = \Lambda^{m-n}(X) \otimes Ber^{-1}$ we have $$ \varphi(\Pi_{m\vert n}) \ =\ {\mathbf 1}_{m-n} \otimes (Ber_{n\vert n})^{-1} \ $$ in $Rep_k(H) \otimes_k {\cal B}_{n\vert n}$. By the main theorem ${\cal B}_{m\vert n} \cong Rep_k(H) \otimes_k {\cal B}_{n\vert n}$. Hence $\Pi_{m\vert n}$ is invertible in the tensor category ${\cal B}_{m\vert n}$. Since $Ber_{n\vert n}$ is invertible in ${\cal B}_{n\vert n} \cong svec_K$ it follows that $Ber_{n\vert n} \cong {\mathbf 1}[p(Ber_{n\vert n})] = {\mathbf 1}[n]$ holds in ${\cal B}_{n\vert n}$. Therefore theorem \ref{aZ} and lemma \ref{ss} imply \begin{Corollary} \label{invertible} The object $\Pi=\Pi_{m\vert n} = \Lambda^{m-n}(X) \otimes (Ber_{m\vert n})^{-1}$, where $X=k^{m\vert n}$ is the standard representation, becomes isomorphic to ${\mathbf 1}[n]$ in the triangulated tensor category ${\cal B}_{m\vert n}$ $$ \Pi_{m\vert n} \ \cong \ {\mathbf 1}[n] \ .$$ \end{Corollary} \bigskip\noindent By lemma \ref{N-state} this in turn implies \begin{Corollary} \label{leftshift} $L(\lambda_N)\ \cong\ L(\lambda_{N+1})[n]\ $ in $\cal B$. \end{Corollary} \bigskip\noindent {\it Proof of lemma \ref{ss} using theorem \ref{aZ}}. This lemma follows from corollary \ref{abelian} of section \ref{semisim}, since the conditions for this corollary are provided by the parity lemma \ref{parity} and theorem \ref{aZ}, which will be proved in the next sections \ref{moves}, \ref{three} and \ref{Proof}. \bigskip\noindent \bigskip\noindent \section{\bf Basic moves} \label{moves} \bigskip\noindent We consider blocks $\Lambda$ for the group $Gl(m\vert n)$ of maximal atypical type. As explained in section \ref{weights} they are described by an associated set of $m-n$ crosses $\times$ on the numberline $\mathbb{Z}$. The weight $\lambda$ in this block is uniquely described by $n$ labels $\vee$, which are at positions different from the crosses.
Attached to a weight $\lambda$ is its cup diagram $\underline{\lambda}$ and the oriented cup diagram $\underline{\lambda}\lambda$. \bigskip\noindent {\it Some simplification}. In many of the arguments with the cup diagrams of [BS1] the crosses $\times$ do not play a role. This is also true for our discussion below. Hence, for simplicity of exposition, we often assume $m=n$ in this section, although all statements hold for $m\geq n$ without changes. So assume $m=n$. Then ${\cal B}={\cal B}_{n\vert n}$, so that there are no crosses for maximal atypical weights. The $n$ labels $\vee$ attached to a maximal atypical weight define a subset $J=\{x_1,..,x_n\}$ of the numberline $\mathbb{Z}$. We order the integers such that $x_1> ... >x_n$ and put $\lambda_j = x_j + j-1$. Then $\lambda=(\lambda_1,..,\lambda_n;-\lambda_n,..,-\lambda_1)$ gives the associated weight vector of a maximal atypical simple object $L(\lambda)$. \bigskip\noindent {\it Sectors and segments}. Every cup diagram for a weight with $n$ labels $\vee$ contains $n$ lower cups. Some of them may be nested. If we remove all inner parts of the nested cups there remains a cup diagram defined by the (remaining) outer cups. We enumerate these cups from left to right. The starting point of the $j$-th lower cup is denoted $a_j$, its endpoint is denoted $b_j$. Then there is a label $\vee$ at the position $a_j$ and a label $\wedge$ at position $b_j$. The interval $[a_j,b_j]$ of the numberline will be called the $j$-th sector of the cup diagram. Adjacent sectors, i.e. with $b_j=a_{j+1} -1$, will be grouped together into segments. The segments again define intervals in the numberline. Let $s_j$ be the starting point of the $j$-th segment and $t_j$ the endpoint of the $j$-th segment. Between any two segments there is a gap of at least one integer. The interior $I^0$ of a sector, which is obtained by removing the start and end point of the sector, always is a segment. Hence sectors, and therefore also segments, have even length. \bigskip\noindent {\it Example $n=2$}. For the weight $$ \mu \quad ... \ \wedge\! \wedge\! \wedge\! \wedge\! \vee\! \vee\! \wedge\! \wedge\! \wedge\! \wedge \ ... \ \ ,$$ with labels $\vee$ at the positions $j,j+1$ and all other labels equal to $\wedge$, the cup diagram $\underline{\mu}$ is described by one segment (which is a single sector) $$ [j,j+1,j+2,j+3] \ .$$ Graphically it corresponds to a nested pair of cups, one from $j+1$ to $j+2$, and one below from $j$ to $j+3$. \bigskip\noindent Now we fix some weight, which we denote $\lambda_{\vee\wedge} =(\lambda_1,..,\lambda_n;-\lambda_n,..,-\lambda_1)$ for reasons to become clear immediately. For the weight $\lambda_{\vee\wedge}$ we pick one of the labels $x_j \in J$ at the position $i:=x_j$ such that $i+1$ is not contained in the set of labels $J$ of the weight $\lambda_{\vee\wedge}$. Equivalently this means $\lambda_j < \lambda_{j-1}$ in terms of the weight vector (for $j=1$ the condition holds automatically). We define a new weight $\lambda$ (which is in another block, and in particular is not maximal atypical) by replacing in $\lambda_{\vee\wedge}$ the label $\vee$ at the position $i$ by a cross $\times$, and the label $\wedge$ at position $i+1$ by a circle $\circ$. Attached to this new weight $\lambda$ is an irreducible, but not maximal atypical representation $L(\lambda)$.
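\bigskip\noindent {\it Example}. As a sample illustration, not taken from [BS1] or [BS4] and only meant to fix the notation: for $m=n=2$ take $\lambda_{\vee\wedge}=Ber_{2\vert 2}=(1,1;-1,-1)$, whose labels $\vee$ sit at the positions $J=\{0,1\}$. Since $1\in J$, the only admissible choice is $i=1$. The associated weight $\lambda$ then carries a cross $\times$ at the position $1$ and a circle $\circ$ at the position $2$, while the remaining label $\vee$ stays at the position $0$; its degree of atypicality is $1<n$, so $L(\lambda)$ is indeed not maximal atypical.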
\bigskip\noindent Now consider the functor $F_i$ defined in [BS4] on p.6ff and p.10ff, which is attached to the admissible matching diagram $t$ $$ \xymatrix{...& \bullet \ar@{-}[d] & \bullet\ar@{-}[d] & \times & \circ & \bullet \ar@{-}[d] & \bullet\ar@{-}[d] & ... \cr ... & \bullet &\bullet & \bullet \ar@/^7mm/[r] & \bullet & \bullet & \bullet & ...\cr } $$ with $\times$ at position $i$ and $\circ$ at position $i+1$, and the maximal atypical object $$ F_i(L(\lambda)) \ = \ {\bf F}_{\lambda} \ .$$ According to [BS2], lemma 4.11 this object is indecomposable and maximal atypical with irreducible socle and cosocle isomorphic to $L(\lambda_{\vee\wedge})$. \bigskip\noindent \begin{Lemma}\label{Loewy}{\it The Loewy diagram of ${\bf F}_{\lambda}$ looks like $$ \xymatrix{ L(\lambda_{\vee\wedge}) \ar@{-}[d]\cr {F}_{\lambda} \ar@{-}[d]\cr L(\lambda_{\vee\wedge}) \cr} $$ with a semisimple object ${F}_{\lambda} $ in the middle.} \end{Lemma} \bigskip\noindent For the proof we give a description of the simple constituents of ${F}_{\lambda} $ below using [BS4] case (v), subcase (b), which shows that all of these constituents have the same parity (different from the parity of $\lambda_{\vee\wedge}$). This suffices to show the claim that $F_\lambda$ is semisimple, using lemma \ref{parity}. QED \bigskip\noindent Next we quote from [BS4] formula (2.13) and corollary 2.9 (of course for arbitrary $m\geq n$) \bigskip\noindent \begin{Lemma} {\it ${\bf F}_{\lambda}$ is a direct summand of the representation $L(\lambda)\otimes X$, where $X$ denotes the standard representation on $k^{m\vert n}$}. \end{Lemma} \bigskip\noindent Since $L(\lambda)$ is not maximal atypical, it becomes zero in $\cal B$ by corollary \ref{Wakimoto}. Hence the same holds for the tensor product $L(\lambda)\otimes X$, and any of its direct summands. \bigskip\noindent \begin{Corollary} \label{null} {\it ${\bf F}_\lambda \cong 0$ in $\cal B$. } \end{Corollary} \begin{Corollary} \label{sum} {\it $F_\lambda[1] \cong L(\lambda_{\vee\wedge}) \oplus L(\lambda_{\vee\wedge}) = 2 \cdot L(\lambda_{\vee\wedge})$ in $\cal B$. } \end{Corollary} \bigskip\noindent {\it Proof}. Corollary \ref{null} gives a distinguished triangle in $\cal B$ $$ L(\lambda_{\vee\wedge})[-1] \to F_\lambda \to L(\lambda_{\vee\wedge})[1] \to L(\lambda_{\vee\wedge}) \ $$ whose last arrow vanishes by lemma \ref{parity}. This proves the claim, since ${\mathbf 1}[-1] \cong {\mathbf 1}[1]$ holds in $\cal B$. QED \bigskip\noindent {\it The rules of [BS2], theorem 4.11}. The constituents of ${\bf F}_\lambda$ correspond to the maximal atypical weights $\mu$ with defect $n$ such that \begin{enumerate} \item The (unoriented) cup diagram $\underline\lambda$ is a lower reduction of the oriented cup diagram $\underline\mu t$ for our specified matching diagram $t$. \item The rays in each "lower line" in the oriented diagram $\underline{\mu}\mu t$ are oriented so that exactly one arrow is $\vee$ and one arrow is $\wedge$ in each such line. \item $\mu$ appears with the multiplicity $2^{n(\mu)}$ as a constituent of ${\bf F}_\lambda$, where $n(\mu)$ is the number of "lower circles" in $\mu t$. \end{enumerate} We remark that the lower reduction (for more details see [BS2], p.5ff) is obtained by removing all "lower lines" and all "lower circles" of the diagram $\mu t$, i.e. those which do not cross the upper horizontal numberline. \bigskip\noindent Let $I$ be the set of labels $\vee$ defining the maximal atypical weight $\lambda_{\vee\wedge}$. Then $i\in I$.
To evaluate these conditions in more detail consider the segment $J$ of $I$ containing $i \in I$. Then also $i+1\in J$. Notice that $J$ is an interval. This segment decomposes into a disjoint union of sectors, which completely cover the interval $J$. We distinguish two cases. \bigskip\noindent {\it The unencapsulated case}. Here the interval $[i,i+1]$ is one of the sectors of $J$. We write $J=[a+1,...,i,i+1,...,b-1]$ for the segment and call $a$ and $b$ the left and right boundary lines of the segment. Then the label of $\lambda$ at $a$ and $b$ must be $\wedge$ by definition. We write $I=[a,..,b]$. \bigskip\noindent {\it The encapsulated case}. By definition this means that the interval $[i,i+1]$ lies nested inside one of the sectors of $J$. Hence there exists a maximal $a < i$ defining a left starting point of a cup within the cup diagram of $\lambda$, that has right end point $b$ such that $i+1 < b$. We write $I=[a,...,i,i+1,...,b]$ for this subinterval of $J$ and call $a$ and $b$ the left and right boundary of $I$. The label of $\lambda$ at $a$ is $\vee$ and the label at $b$ is $\wedge$ by definition. \bigskip\noindent In both cases consider the sectors within $I^0=[a+1,...,b-1]$. By the maximality of $a$, $[i,i+1]$ is one of the sectors of $I^0$. The remaining sectors to the left of $[i,i+1]$ and to the right of $[i,i+1]$ will be called the lower and upper internal sectors. Let $a_j$ denote the left starting point and $b_j$ the right ending point of the $j$-th internal sector. The labels at the points $a_j$ are $\vee$, and the labels at the points $b_j$ are $\wedge$. There may be no such internal upper or lower sectors. If there are, then to each of them corresponds an irreducible summand $L(\mu)$ of ${\bf F}_\lambda$, which, as we will see, is uniquely described by the corresponding internal sector. \bigskip\noindent We summarize. In both cases the interval $I^0$ is completely filled out by the disjoint union of the internal sectors, and one of these internal sectors is $[i,i+1]$. \bigskip\noindent {\bf List of summands of ${\bf F}_\lambda$.} \bigskip\noindent \begin{itemize} \item {\it Socle and cosocle}. They are defined by $L(\mu)$ for $\mu=\lambda_{\vee\wedge}$. \item {\it The upward move}. It corresponds to the weight $\mu = \lambda_{\wedge\vee}$ which is obtained from $\lambda_{\vee\wedge}$ by switching $\vee$ and $\wedge$ at the places $i$ and $i+1$. It is of type $\lambda_{\wedge\vee}$. \item {\it The nonencapsulated boundary move}. It only occurs in the nonencapsulated case. It moves the $\vee$ in $\lambda_{\vee\wedge}$ from position $i$ to the left boundary position $a$. The resulting weight $\mu$ is of type $\lambda_{\wedge\wedge}$. \item {\it The internal upper sector moves}. For every internal upper sector $[a_j,b_j]$ (i.e. to the right of $[i,i+1]$) there is a summand whose weight is obtained from $\lambda_{\vee\wedge}$ by moving the label $\vee$ at $a_j$ to the position $i+1$. These moves define new weights $\mu$ of type $\lambda_{\vee\vee}$. \item {\it The internal lower sector moves}. For every internal lower sector $[a_j,b_j]$ (i.e. to the left of $[i,i+1]$) there is a summand whose weight is obtained from $\lambda_{\vee\wedge}$ by moving the label $\vee$ from the position $i$ to the position $b_j$. These moves define new weights $\mu$ of type $\lambda_{\wedge\wedge}$. \end{itemize} \bigskip\noindent {\it Proof of lemma \ref{Loewy}}.
Except for the first item in the list of summands of ${\bf F}_\lambda$, the moves of this list define the weights $\mu$ of the constituents $L(\mu)$ of $F_\lambda$. The parity of these weights $\mu$ is always different from the parity of $\lambda_{\vee\wedge}$. The reason for this is that sectors always have even length. The unique label $\vee$ changing its position during the move is moved by an odd number of steps. As already explained, this suffices to prove lemma \ref{Loewy}. QED \bigskip\noindent {\it Remark 1}. All upper and lower internal sector moves change the weight $\lambda_{\vee\wedge}$ into weights $\mu$, whose cup diagram restricted to $I^0$ has a strictly smaller number of sectors. Hence in the nonencapsulated case, the full cup diagram of any of these $\mu$ has a strictly smaller number of sectors than the cup diagram of $\lambda_{\vee\wedge}$. \bigskip\noindent {\it Remark 2}. Similarly, the nonencapsulated boundary move changes the starting weight $\lambda_{\vee\wedge}$ into a weight $\mu$, whose cup diagram has a strictly smaller number of sectors except for the case $a=i-1$ (where $a=i-1$ is equivalent to $\lambda_{j-1} < \lambda_j$). \bigskip\noindent {\it Remark 3}. Except for the first item in the list of summands of ${\bf F}_\lambda$, all other moves belong to diagrams without "lower circles". Hence $n(\mu)=0$ and the multiplicity $2^{n(\mu)}=1$ in these cases. \bigskip\noindent {\it Remark 4}. In the encapsulated case the diagrams $\underline{\mu} t$ do not contain "lower lines". \bigskip\noindent \section{Three algorithms} \label{three} \bigskip\noindent For $Gl(m\vert n)$ we discuss now three algorithms, which can be successively applied to a cup diagram of some maximal atypical weight within a block $\Lambda$ to reduce this weight to a collection of the ground state weights of this block $\Lambda$, which have the form $(\lambda_1,..,\lambda_{m-n},-N,...,-N;N,...,N)$ for certain large integers $N \geq 0$. Notice that the integers $\lambda_1,..,\lambda_{m-n}$ are fixed and describe the given block $\Lambda$. \bigskip\noindent In fact, since these algorithms apply within a fixed maximal atypical block $\Lambda$, it suffices to describe them in the case $m=n$. This simplifies the exposition. For this purpose assume $m=n$. \bigskip\noindent {\bf Algorithm I}. The first algorithm deals with a union of different segments. The aim is to move all labels $\vee$ to the left in order to eventually reduce everything to a single segment. For a given maximal atypical weight $\lambda$ let $S_j=[s_j,t_j]$ from left to right denote the segments of its cup diagram $\underline\lambda$. Let $0\leq c_j =\# S_j \leq n$ denote their cardinalities and let $-\infty \leq d_j = 1 - \vert s_{j+1} - t_j\vert \leq 0$ denote the negative distance between two neighbouring segments. We endow the set $C$ of pairs of integers $\gamma=(c,d)$ with the lexicographic ordering. Next we endow the set $C^n$ of all $((c_1,d_1),(c_2,d_2),.....) = (\gamma_1,\gamma_2,.....)$ with the corresponding induced lexicographical ordering. A cup diagram defines a maximal element in this ordering if and only if it contains a single segment, in which case $\gamma_1=(n,-\infty)$. \bigskip\noindent {\bf Claim}. Moving the starting point of the second segment to the left increases the ordering. To be more precise: Suppose there exist at least two segments in the cup diagram. Put $i=s_2-1$ and $i+1=s_2$, and let $\lambda_{\vee\wedge}$ be the weight obtained from the given weight $\lambda_{\wedge\vee}$ defining the cup diagram $c$ by interchange at $i$ and $i+1$.
Then $[i,i+1]$ is a sector of the new cup diagram $c'_0$ obtained in this way, attached to $\lambda_{\vee\wedge}$. Let $[a_j,b_j]$ denote the sectors of the second segment $S_2$ with $a_1=s_2$. There are two possibilities: \begin{itemize} \item $[i,i+1][a_{1}+1,b_{1}-1]$ is a segment of $c'_0$ (namely the second segment, whose first sector is $[i,i+1]$). This is the case if and only if $d_1 < -1$; \item or $[s_1,..,t_1][i,i+1][s_2+1,..,b_1 -1]$ is the first segment of $c'_0$. This is the case if and only if $d_1=-1$. \end{itemize} In the first case $c'_1 = c_1$ but $d'_1 > d_1$. In the second case $c'_1 > c_1$. Hence $c'_0$ is larger than $c$ with respect to our ordering. \bigskip\noindent Now we consider the (unencapsulated) move centered at $[i,i+1]$ for the cup diagram $c'_0$ attached to the weight $\lambda_{\vee\wedge}$. Moving up gives the cup diagram $c$ we started from. The down move, corresponding to the left boundary move, either gives as second segment $[i-1,i]$ with unchanged first segment, or, if $d_1=-1$, it increases the cardinality of the first segment to $c_1+1$. The same holds for all internal lower sector moves. Finally consider the internal upper sector moves. All these moves give cup diagrams of the following type: with second segment $[i,i+1][a_2+1,.., ]..[.. b_2-1]$ if $d_1 < -1$, or with first segment $[s_1,..,t_1][i,i+1][a_2+1,.., ]..[.. b_2-1]$ if $d_1 = -1$. Indeed they all have the same segment structure as $c'_0$, but a different sector structure. In any case we see that algorithm I relates the given cup diagram $c$ to a finite number of cup diagrams $c'$ such that $c' > c$ in our lexicographic ordering. \bigskip\noindent {\bf Algorithm II}. Decreasing the number of sectors within a segment. Suppose $c$ is a maximal atypical cup diagram attached to a weight $\lambda_{\wedge\vee}$ with only one segment. Let $[a_j,b_j]$ for $j=1,..,r$ denote its sectors, counted from left to right. Assume there are at least two sectors, i.e. assume $r>1$. Put $i=b_j$ and $i+1=a_{j+1}$ for some $1\leq j < r$. For this recall that any sector starts with a $\vee$ and ends with a $\wedge$. Define $\lambda_{\vee\wedge}$ by exchanging the position of $\vee$ and $\wedge$ in $\lambda_{\wedge\vee}$ at $i$ and $i+1$. This gives a new cup diagram $c'_0$. It has only one segment, the same as the segment of $c$. However the $j$-th and the $j+1$-th sectors have been merged in $c'_0$ into one single sector. The other sectors remain unchanged. So the number of sectors in the segment decreases by one. Now consider the (encapsulated) move at $[i,i+1]$ starting from the cup diagram $c'_0$. Its move up gives the cup diagram $c$ we started from. All internal lower and upper moves occur within the sector $[a_j,..,b_{j+1}]$, i.e. the lower bound is $a\geq a_j$ and the upper bound is $b\leq b_{j+1}$. None of these moves changes the cup starting from $a_j$ and ending in $b_{j+1}$. Hence the internal moves all yield cup diagrams with the same sector structure as $c'_0$. Hence algorithm II relates the given cup diagram $c$ (with one segment $S$ and $r$ sectors) to a finite number of cup diagrams $c'$, each of them with the same segment $S$ but with $r-1$ sectors. \bigskip\noindent {\bf Algorithm III}. Now assume $c$ is a cup diagram with one segment, which consists of a single sector $[a,...,b]$. The sector cup from $a$ to $b$ encloses an internal cup diagram with $n-1$ labels $\vee$. This internal cup diagram necessarily defines one segment, namely the segment $[a+1,..,b-1]$.
We now apply algorithm II to this internal segment. This finally ends up in some Kostant weights (see [BS2], lemma 7.2, and section \ref{Kosw}). \bigskip\noindent {\it Further iteration}. We remark that we can start all over again and move the left starting point of the sector of a Kostant weight further to the left using algorithm I, and then repeat the whole procedure of applying algorithms I, II and III. In the end this allows us to replace the given Kostant weight by other Kostant weights shifted further to the left on the number line (with all crosses $\times$ removed in case $m\geq n$). If we repeat this down shift of Kostant weights sufficiently often, we end up with a collection of Kostant weights that are ground states of the block, i.e. whose associated irreducible representations are among the ground state representations $L(\lambda_N)$ for large $N$. \bigskip\noindent \section{Proof of theorem \ref{aZ}} \label{Proof} \bigskip\noindent To prove theorem \ref{aZ} we now fix a maximal atypical block $\Lambda$ of $\cal T$ and its ground state representation $L=L(\lambda_0)$. Consider the thick triangulated subcategory ${\cal Z}_{\Lambda}$ of $\cal B$ associated to $L$ as defined in section \ref{groundcat}. To show that a given simple maximal atypical representation $L(\mu)$ of $\Lambda$ has image in ${\cal Z}_\Lambda $ it suffices to show that it is zero in ${\cal B}/{\cal Z}_\Lambda$. If this holds for all simple objects of the block $\Lambda$, then it also holds for all objects of the block $\Lambda$. \bigskip\noindent An object $A$ will be called a virtual ground state object if there exists an isomorphism in $\cal B$ of the form $A \oplus A' \cong A''$ where $A'$ and $A''$ are isomorphic to finite direct sums of higher ground state objects $L_N$ of the block $\Lambda$. We can apply the algorithms I, II and III and corollary \ref{sum} to show by induction that there exist virtual ground state objects $Y$ and $Y'$ and an isomorphism in $\cal B$ $$ L(\mu) \oplus Y \ \cong \ Y' \ .$$ This immediately implies that also $L(\mu)$ is a virtual ground state object. \bigskip\noindent In lemma \ref{down} we show that all higher ground states $L_N$ of $\Lambda$ (for all $N\geq 0$) are in ${\cal Z}_\Lambda$. For this we use the following \bigskip\noindent {\bf Algorithm IV}. Let $\lambda$ be a Kostant weight. By [BS2], lemma 7.2 this means that the labels $\vee$ of $\lambda$ define an interval $[a,..,a+n-1]$ after the crosses have been removed. Starting from the weight $\lambda$ we successively move the rightmost label $\vee$ to the right; call the weight obtained after $i$ such steps $S^i$, so that $S^0$ is the Kostant weight we started from, and so on. Put $i=a+n-1$. Make a first move with $[i,i+1]=[a+n-1,a+n]$. This move is encapsulated and gives the Loewy diagram $$ \xymatrix@C-9mm{ S^0 \cr S^1\ar@{-}[u]\cr S^0 \ar@{-}[u] \cr} $$ The next move for $[i+1,i+2]$ gives the Loewy diagram with four irreducible constituents $$ \xymatrix{ & S^1 & \cr S^2 & \oplus \ar@{-}[u] & S^0 \cr & S^1\ar@{-}[u] & \cr} $$ and so on until the first move which is not encapsulated. Here we end up with a Loewy diagram of type $$ \xymatrix@C-9mm{ & S^{n-1} & \cr S^n \ \ \ \oplus & \Pi \ar@{-}[u] & \oplus \ \ \ S^{n-2}\cr & S^{n-1}\ar@{-}[u] & \cr} $$ with an additional fifth constituent $\Pi$, where $\Pi$ is of Kostant type in the given block such that, compared to the Kostant weight $\lambda$ we started from, all labels $\vee$ have been shifted to the left by one, and hence are at the positions $[a-1,...,a+n-2]$.
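\bigskip\noindent {\it Illustration}. For readers who prefer to experiment, the cup, sector and segment combinatorics underlying algorithms I--III is easy to implement. The following minimal Python sketch is not part of the formal development; the encoding of the labels $\vee,\wedge$ by the characters {\tt v},{\tt x} on a finite window of the number line is a hypothetical convention chosen only for this illustration. It computes the cups of a weight by crossingless matching, extracts the sectors as the outermost cups, groups adjacent sectors into segments, and performs the merging move of algorithm II at the interface of two adjacent sectors.

\begin{verbatim}
def cups(word):
    # match each 'v' with a later 'x' (= wedge) in the
    # crossingless, parenthesis-like fashion of cup diagrams
    stack, pairs = [], []
    for i, c in enumerate(word):
        if c == 'v':
            stack.append(i)
        elif c == 'x' and stack:
            pairs.append((stack.pop(), i))
    return sorted(pairs)

def sectors(word):
    # sectors = outermost cups, i.e. cups not nested in others
    ps = cups(word)
    return [(a, b) for (a, b) in ps
            if not any(c < a and b < d for (c, d) in ps)]

def segments(word):
    # a segment is a maximal chain of adjacent sectors
    segs = []
    for (a, b) in sectors(word):
        if segs and segs[-1][1] + 1 == a:
            segs[-1] = (segs[-1][0], b)
        else:
            segs.append((a, b))
    return segs

def merge_move(word, i):
    # algorithm II move: exchange the wedge at position i
    # (= end of a sector) with the vee at position i+1
    # (= start of the next sector), melting the two sectors
    assert word[i] == 'x' and word[i + 1] == 'v'
    return word[:i] + 'vx' + word[i + 2:]
\end{verbatim}

\noindent For instance the word {\tt vxvx} has the two sectors $[0,1]$ and $[2,3]$, forming a single segment, and the move at $i=1$ produces {\tt vvxx}, a single completely nested sector, in accordance with algorithm II.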
\bigskip\noindent \begin{Lemma} \label{down}{\it Suppose $L(\lambda_N) \in {\cal Z}_\Lambda$, then also $L(\lambda_{N+1})\in {\cal Z}_\Lambda$.} \end{Lemma} \bigskip\noindent {\it Proof}. Use that ${\cal Z}_\Lambda$ is a thick triangulated subcategory of $\cal B$ by lemma \ref{thick}. Hence it suffices to show that $S^0 =0$ in ${\cal B}/{\cal Z}_\Lambda$ implies $S^i=0$ and hence $\Pi=0$ in ${\cal B}/{\cal Z}_\Lambda$; this obviously follows from the Loewy diagrams displayed above. Since $\Pi = L(\lambda_{N+1})$ if $S^0=L(\lambda_N)$, we are done. QED \bigskip\noindent This shows that for every simple object $L(\mu)$ there exist $A$ and $A'$ in ${\cal Z}_\Lambda$ such that $L(\mu) \oplus A \cong A'$ in $\cal B$. Hence $L(\mu) =0$ in ${\cal B}/{\cal Z}_\Lambda$. Therefore $L(\mu)\in {\cal Z}_\Lambda$. For parity reasons therefore $L(\mu) \cong m(\mu) \cdot L[p(\mu)]$ for the uniquely defined parity $p(\mu)$ (which is easily computed from the Bruhat distance from the ground state $\lambda_0$). This proves theorem \ref{aZ}. \bigskip\noindent \section{Odds and ends} \bigskip\noindent Consider a maximal atypical block of $\cal T$. Since $Ber_{m\vert n}$ is invertible, the tensor product with $Ber_{m\vert n}$ defines equivalences between maximal atypical blocks and their twisted images. Hence, using a twist by a power of the Berezin, we may assume without loss of generality that the block $\Lambda$ contains a ground state weight vector of the special form $\lambda_0 =(\lambda_1,...,\lambda_{m-n}\ ,\ 0,...,0\ ;\ 0,...,0)$ where $\lambda_1 \geq \lambda_2 \geq ... \geq \lambda_{m-n}=0$. For this see the remarks following lemma \ref{ground}. Hence we may assume that the ground state is a covariant representation $L\cong \{ \lambda \}$ associated to the partition $\lambda_1 + \lambda_2 + \ldots + \lambda_{m-n-1}$. We say that $\Lambda$ is a block with normalized ground state. \bigskip\noindent Consider the $K$-linear triangulated tensor functor $$ \varphi: {\cal T}_{m\vert n} \longrightarrow Rep_k(H) \otimes_k {\cal B}_{n\vert n} \ $$ and let $L=\{ \lambda \} = Schur_\lambda(X)$. Since $\varphi(X) = k^{m-n} \otimes_k {\mathbf 1}$ (the standard representation is not maximal atypical for $m=n$; see corollary \ref{Wakimoto}), we conclude $$\varphi(L) \ \cong\ Schur_\lambda(k^{m-n}) \otimes_k {\mathbf 1}\ .$$ Since $\mathbf 1$ is the ground state of the unique maximal atypical block of ${\cal T}_{n\vert n}$, this implies \begin{Lemma} \label{groundtoground} For blocks with normalized ground states the functor $\varphi$ maps ground states to ground states. \end{Lemma} \begin{Lemma} \label{Ber} $\varphi(Ber_{m\vert n}) = det \otimes_k Ber_{n\vert n} = det \otimes {\mathbf 1}[n]$. \end{Lemma} \bigskip\noindent {\it Proof}. Obvious. \bigskip\noindent \section{Multiplicities}\label{Weyl} \bigskip\noindent Fix a maximal atypical block $\Lambda$. Let the ground state vector of $\Lambda$ be $$\lambda = (\lambda_1,...,\lambda_{m-n},M,..,M;-M,...,-M)$$ for $M=\lambda_{m-n}$. The block $\Lambda$ is characterized by $(\lambda_1,..,\lambda_{m-n})$, respectively by the corresponding irreducible representation $\rho= Schur_{\lambda_1,..,\lambda_{m-n}}(k^{m-n})$ in $Rep_k(H)$, with the following convention. For $M=\lambda_{m-n} < 0$ we define $Schur_{\lambda_1,..,\lambda_{m-n}}(k^{m-n}) := Schur_{\lambda_1-M,..,\lambda_{m-n}-M}(k^{m-n}) \otimes det^M $ by abuse of notation.
By Lemma \ref{Ber} this notation behaves nicely with respect to the triangulated tensor functor $$ \varphi: {\cal T}_{m\vert n}\ \longrightarrow\ Rep_k(H) \otimes_k {\cal B}_{n\vert n} \ .$$ Indeed, $\varphi(L(\lambda)) = \varphi( Ber_{m\vert n}^M \otimes Schur_{\lambda_1-M,..,\lambda_{m-n}-M}(X)) $ for $X=k^{m\vert n}$ coincides with $ det^{M} \otimes Schur_{\lambda_1-M,..,\lambda_{m-n}-M}(\varphi(X)) \otimes_k Ber_{n\vert n}^M$. Since $\varphi(X) \cong k^{m-n} \otimes_k 1$ in $\cal B$, this gives, with the convention above, $\varphi(L(\lambda)) = Schur_{\lambda_1,..,\lambda_{m-n}}(k^{m-n}) \otimes_k Ber_{n\vert n}^M$. Recall ${\cal B}_{n\vert n} = {\cal Z}\cong svec_K$. Using corollary \ref{leftshift}, an obvious reexamination of the proof of theorem \ref{aZ} now shows \bigskip\noindent \begin{Theorem} \label{indep} {\it For each weight $\mu$ in the fixed maximal atypical block $\Lambda$ we have $$ \varphi(L(\mu)) \ \cong \ m(\mu) \cdot Schur_{\lambda_1,..,\lambda_{m-n}}(k^{m-n}) \otimes_k {\mathbf 1}[p(\mu)] $$ in $Rep_k(H) \otimes {\cal Z}$ for some integral multiplicity $m(\mu) \geq 1$, which only depends on the relative position of the weight $\mu$ with respect to the ground state weight, considered on the number line $\mathbb{Z}$ with all crosses $\times$ removed. For the ground states the multiplicity is one.}\end{Theorem} \bigskip\noindent In particular, this theorem shows that the computation of the multiplicities $m(\mu)$ can be reduced to the case $m\! =\! n$. The computation of the parity $p(\mu)=\sum_{i=1}^n \mu_{m+i}$ is reduced to the case of the ground state. By lemma \ref{Ber} one can reduce to the case of a block with a normalized ground state, where the parity is even by the computation preceding the theorem. \bigskip\noindent {\it The multiplicities $m(\mu)$}. As already explained, as a consequence of theorem \ref{indep} we may assume $m\! =\! n$ for the computation of the multiplicities. These multiplicities are numbers attached to cup diagrams $\underline\mu$ with $n$ cups (and without lines). We have already shown that the multiplicity $m(\mu)$ is one for the ground state $\mu=\mathbf 1$. The same holds for all powers $Ber^N$ of the Berezin by lemma \ref{Ber}. Hence for any completely nested cup diagram the multiplicity $m(\mu)$ is one. To deal with a general maximal atypical weight our strategy is the following. We consider cup diagrams for various $n$ with the aim to reduce the computation of $m(\mu)$ for a cup diagram with $n$ cups to the case of cup diagrams with $<n$ cups. \bigskip\noindent For a completely nested cup diagram the multiplicity $m(\mu)$ is one. In general let $\mu$ have the sectors $S_1,..,S_r$ with lengths $2n_1,..,2n_r$ and corresponding partial cup diagrams $\underline{\mu_1},...,\underline{\mu_r}$. Notice $n=n_1+...+n_r$. Each $S_i$ defines a number interval $[a_i,b_i]$. Using the Berezin we see that the multiplicity does not change under a translation of the cup diagram. Now the algorithms II and III applied to the cup diagram of $\mu$ show that all nested cups $\underline{\mu_i}$ can be reordered to become completely nested without destroying the sector structure of the original cup diagram $\mu$. In this way the cup diagram can be rearranged so that all nested cups are completely nested cups.
This process proves the formula $$ (*) \quad \quad m(\mu) \ = \ m(\nu) \cdot \prod_{i=1}^r \ m(\mu_i) \ ,$$ where $\nu$ is the cup diagram with the same sectors as $\mu$, but such that each sector $S_i$ defines a completely nested cup diagram with labels $\vee$ at the positions $a_i,a_i+1,...,a_i + n_i-1$. \bigskip\noindent \begin{Lemma} \label{mult} Suppose $\nu$ is a maximal atypical weight with $r$ sectors. If all sectors of the cup diagram of $\nu$ have completely nested cup diagrams of lengths say $2n_1,..,2n_r$, then $$m(\nu) = { n \choose n_1,\ldots ,n_r } \quad \mbox{(multinomial coefficient)} \ .$$ \end{Lemma} \begin{Lemma} For maximal atypical weights $\mu$ with $n$ labels $\vee$ the multiplicity $$ m(\mu) \ = \ { n \choose n_1,\ldots,n_r } \cdot\ \prod_{i=1}^r \ m(\mu_i) $$ satisfies the inequality $$1 \leq m(\mu) \leq n! \ .$$ Equality on the right holds if and only if the cup diagram of $\mu$ is completely unnested (i.e. all sectors have length 2). Equality on the left holds if and only if the cup diagram has only one sector, which is a completely nested sector (translates of the ground state). \end{Lemma} \bigskip\noindent {\it Proofs}. Induction on $n$ using formula (*), theorem \ref{indep} and lemma \ref{mult}. \bigskip\noindent \section{Proof of lemma \ref{mult}} \bigskip\noindent {\it Case of one segment}. Suppose $\nu$ has only one segment. Then $m(\nu)=m(n_1,..,n_r)$ depends only on the sector lengths $n_1,..,n_r$. Algorithm II applied to the first two sectors, combined with formula (*) from above, gives the recursion formula $$ 2 \cdot m(u\! +\! v, n_3,..,n_r) m(u\! -\! 1,1,v\! -\! 1) \ - \ m(u,v,n_3,..,n_r) \ = \ $$ $$ m(u\! +\! v,n_3,..,n_r)m(u\! -\! 1,v)m(1,v\! -\! 2)\ +\ m(u\! +\! v,n_3,..,n_r) m(u,v\! -\! 1) m(u\! -\! 2,1) $$ for $m(u,v,n_3,..,n_r)$ in $u$ and $v$. All terms except $m(u,v,n_3,..,n_r)$ involve either fewer variables or fewer labels $\vee$. This allows us to verify the claim by induction on the number $\sum_{i=1}^r n_i$ of labels $\vee$ and then on the number $r$ of sectors. The verification of the induction start $r=1$ is obvious by definition. So it suffices to show that the multinomial coefficient $m(n_1,..,n_r)=(\sum_{i=1}^r n_i)!/\prod_{i=1}^r (n_i)!$ satisfies the recursion relation of algorithm II. The trivial property $$m(n_1,n_2,n_3,..,n_r)\ =\ m(n_1,n_2) \cdot m(n_1+n_2,n_3,..,n_r)$$ of multinomial coefficients allows us to assume $r=2$. The recursion formula then boils down to the identity $2uv= (u+v) + v(u-1) + u(v-1)$. This proves the assertion if there is only one segment. \bigskip\noindent {\it The case of more than one segment}. Now suppose $\nu$ is a maximal atypical totally nested weight with $s >1$ segments and with a total number of $r$ sectors of lengths $2n_1,..,2n_r$. Notice that all segments are sectors by our assumption on $\nu$. We then symbolically write $$ \nu \ = \ ...\ \wedge\!\wedge\ (S_1 \!\wedge ... \ q\ ... \wedge\! S_2)\ \wedge ...\ \ \mbox{rest with higher segments} \ $$ for the segment diagram, where $q$ denotes the distance between the first and second segment. To show that the multiplicity formula of lemma \ref{mult} also holds in general we now use algorithm I to increase the size of the first sector.
We assume by induction that the formula holds for maximal atypical totally nested weights with $<s$ segments, or for such weights with $\geq s$ segments and more than $2n_1$ elements in the first sector, or with $\geq s$ segments and $2n_1$ elements in the first sector but smaller distance $q$ between the first and second sector. The start of the induction is the case of one segment already considered. \bigskip\noindent {\it First case}. Suppose the distance $q=1$. \bigskip\noindent a) Then for completely nested sectors $S_1$ and $S_2$ of length $2n_1$ and $2n_2$ $$ \nu \ = \ ... \ \wedge (\wedge \ S_1 \wedge S_2) \wedge ...\ \ \mbox{rest with higher segments} \ . $$ b) Let $\lambda_{\vee\wedge}$ denote the weight obtained by moving the starting point of the second sector $S_2$ one step down so that it touches the end of the first sector $S_1$. This new weight $\lambda_{\vee\wedge}$ has $s-1$ segments with segment structure $$ \lambda_{\vee\wedge} \ = \ ... \ \wedge (\wedge\ T_1\ \wedge) \wedge ...\ \ \mbox{rest with higher segments} $$ whose first segment $T_1$ has length $2(n_1+n_2)$ with three completely nested sectors of lengths $2n_1,2,2(n_2-1)$ respectively. Algorithm I gives three further weights: \bigskip\noindent c) The boundary move weight with segment diagram $$ ... \ \wedge (S'_1 \wedge S'_2 \ \wedge) \wedge ...\ \ \mbox{rest with higher segments} \ . $$ where the first and second segments $S'_1$ and $S'_2$ are completely nested sectors of lengths $2(n_1+1)$ and $2(n_2-1)$. \bigskip\noindent d) The internal lower sector move gives a weight with $s-1$ segments and diagram $$ \ ...\ \wedge (\wedge\ T'_1\ \wedge) \wedge ...\ \ \mbox{rest with higher segments} $$ where the segment $T'_1$ of length $2(n_1+n_2)$ has two sectors. The second sector is completely nested of length $2(n_2-1)$. The first sector $I$ has length $2(n_1+1)$ and its interior segment $I^0$ decomposes into two completely nested sectors of lengths $2(n_1-1)$ and $2$ respectively. \bigskip\noindent e) The internal upper sector move gives a weight with $s-1$ segments and diagram $$ ...\ \wedge (\wedge\ T''_1\ \wedge) \wedge ...\ \ \mbox{rest with higher segments} $$ where the first segment $T''_1$ has length $2(n_1+n_2)$ with two sectors. The first sector is completely nested of length $2n_1$. The second sector $I$ has length $2n_2$ and its interior segment $I^0$ decomposes into two completely nested sectors of lengths $2$ and $2(n_2-2)$ respectively. \bigskip\noindent Again we show that the multinomial coefficient $m(n_1,..,n_r)$ satisfies the recursion relation of algorithm I. This suffices to prove our assertions. Again the trivial property $m(n_1,n_2,n_3,..,n_r)\ =\ m(n_1,n_2) \cdot m(n_1+n_2,n_3,..,n_r)$ of multinomial coefficients allows us to assume $r\! =\! 2$. The desired recursion equation of algorithm I, that the sum of the multiplicities of a), c), d) and e) is twice the multiplicity of b), then boils down to the binomial identity $$2 \!\cdot\! { n_1 + n_2 \choose n_1,1,n_2\! -\! 1 } = { n_1\! +\! n_2 \choose n_1 } + { n_1\! +\! n_2 \choose n_1\! +\! 1 } + { n_1\! +\! n_2 \choose n_1\! +\!1 } { n_1 \choose 1 } + { n_1\! +\! n_2 \choose n_1 } { n_2\! -\! 1 \choose 1} \ .$$ \bigskip\noindent {\it Second case}. Now suppose $q\geq 2$ for the distance $q$ between the first and the second sector. \bigskip\noindent a) Then $ \nu \ = \ ...\ \wedge (S_1\! \wedge ... \ q\ ... \wedge\! S_2) \wedge ...\ \ \mbox{rest with higher segments} $.
The first and second segments $S_1,S_2$ are completely nested sectors of lengths $2n_1$ and $2n_2$ respectively. \bigskip\noindent b) Consider the weight $\lambda_{\vee\wedge}$ which is obtained by moving the starting point of the second sector $S_2$ one step to the left. Since $q>1$ it does not touch the end of the first sector $S_1$. The new weight $\lambda_{\vee\wedge}$ still has $s$ segments, but now with the segment structure $$ \lambda_{\vee\wedge} \ = \ ...\ \wedge (S_1\! \wedge ...\ q-1\ ... \wedge\! S'_2\ \wedge ) \wedge ... \ \ \mbox{rest with higher segments} $$ where the second segment $S'_2$ of length $2n_2$ has two completely nested sectors of lengths $2$ and $2(n_2-1)$. Algorithm I gives two further weights: \bigskip\noindent c) If $q>2$, the boundary move weight gives a diagram with $s+1$ segments $$ ...\ \wedge (S_1 \!\wedge ...\ q-2\ ... \wedge\! T_{23}\ \wedge ) \wedge ...\ \ \mbox{rest with higher segments} \ $$ where $T_{23} = [\vee\wedge]\wedge T'_2$ has two completely nested segments $T'_1=[\vee\wedge]$ and $T'_2$ of lengths $2$ and $2(n_2-1)$ respectively. \bigskip\noindent If $q=2$ we get a diagram with $s$ segments $$ ...\ \wedge (S'_1 \wedge T'_2 \ \wedge) \wedge ...\ \ \mbox{rest with higher segments} \ $$ where the first segment $S'_1 $ has two completely nested sectors of lengths $2n_1$ and $2$ respectively, and the second segment $T'_2$ is a completely nested sector of length $2(n_2-1)$, at distance $1$ from the first segment. \bigskip\noindent e) The internal upper sector move gives a weight with $s$ segments and diagram $$ ...\ \wedge (S_1 \!\wedge ...\ q-1\ ...\wedge\! T_2\ \wedge) \wedge ...\ \ \mbox{rest with higher segments} $$ where the second segment $T_2$ is a sector of length $2n_2$, whose interior decomposes into two completely nested sectors of lengths $2$ and $2(n_2-2)$. \bigskip\noindent The recursion relation of algorithm I, that twice the multiplicity of b) is the sum of the multiplicities of a), c) and e), holds for the multinomial coefficient. This amounts to the binomial identity $$ 2\!\cdot\! {n_1\! +\! n_2 \choose n_1,1,n_2\! -\! 1} = {n_1\! +\! n_2 \choose n_1} + {n_1\! +\! n_2 \choose n_1} {n_2\! -\! 1 \choose 1} + {n_1\! +\! n_2 \choose n_1,1,n_2\! -\! 1} \ .$$ This finally completes the proof of lemma \ref{mult}, using induction on the distance $q$. \bigskip\noindent \section{Appendix: The class $\xi_n$} \label{class} \bigskip\noindent Consider the exact BGG complex ([BS2], thm. 7.3) for the Kostant weight $\mu=0$ given by $ ... \to V_{2} \to V_{1} \to V_0 \to {\mathbf 1} \to 0 $ with $$ V_{j} \ = \ \bigoplus_{\lambda \leq 0, \ l(\lambda,0)=j} \ V(\lambda) \ .$$ \bigskip\noindent \begin{Proposition}\label{xi} {\it For $Gl(n\vert n)$ there is a nontrivial morphism $\xi_n: {\mathbf 1}\to Ber_{n\vert n}[n]$ in ${\cal K}$ which becomes an isomorphism in $\cal B$. Hence $Ber$ is contained in $\cal Z$.} \end{Proposition} \bigskip\noindent {\it Proof}. Applying the anti-involution ${}^*$ we get $$ 0\to {\mathbf 1} \to V_0^*\to V_1^* \to \cdots \to V_{n-1}^* \to V_n^* \to \ ,$$ which defines a Yoneda extension class $\xi \in Ext^n(Q,1)$ $$ 0\to {\mathbf 1} \to V_0^*\to V_1^* \to \cdots \to V_{n-1}^* \to Q \to 0 $$ for $ Q \ \cong\ im(d^*: V_{n-1}^* \to V_n^*) \ \hookrightarrow \ V_n^*$. \bigskip\noindent Now $V_i^*=0$ in $\cal H$, since $V([\lambda])^*=0$ holds in ${\cal H}$ for all cell modules $V([\lambda])$ by the definition of $\cal H$.
Hence in $\cal H$, and therefore in $\cal B$, we get $$Q \cong {\mathbf 1}[n] \ .$$ We will now construct a map $i: Ber^{-1} \hookrightarrow Q$ in $\cal T$, which defines a nontrivial morphism in $\cal B$. Then $\xi$ is nontrivial in ${\cal K}$, hence $i^*(\xi)$ defines a nontrivial extension in $Ext^n_{\cal T}(Ber^{-1},1)$. \bigskip\noindent {\it Nontriviality}. To show that a given morphism $i: Ber^{-1} \hookrightarrow Q$ is nontrivial in $\cal B$ is equivalent to showing that the transposed morphism $i^*: Q^* \to Ber^{-1}$ is nontrivial in $\cal B$. We first show that $i^*: Q^* \to Ber^{-1}$ is nonzero in $\cal H$. Then $i^*$ remains nonzero in $\cal B$, using $Q^* \cong {\mathbf 1}[-n]\cong {\mathbf 1}[n]$ in $\cal H$ and using that after restriction to $psl(n,n)$ the morphism $i^*$ is in the central graded ring $R^\bullet_{\cal K}$ and of positive degree; hence by proposition \ref{grad} the morphism $i^*$ cannot become a zero divisor for the localization $\cal B$ and thus is a nonzero morphism in $\cal B$. \bigskip\noindent To show that $i^*\neq 0$ in $\cal H$ we argue as follows: If $i^*=0$ in $\cal H$, then the composite morphism $V_n \twoheadrightarrow Q^* \twoheadrightarrow Ber^{-1}$, defined in $\cal T$, becomes zero in $\cal H$. Since $Ber^{-1}$ is simple and $V_n$ is a cell object, we can apply theorem \ref{hot} to obtain $Hom_{\cal H}(V_n,Ber^{-1}) = Hom_{\cal K}(V_n,Ber^{-1})= Hom_{\cal T}(V_n,Ber^{-1})$. Since $V_n \twoheadrightarrow Ber^{-1}$ is an epimorphism in $\cal T$, the composite map is nonzero in $\cal T$ and therefore nonzero in $\cal H$, a contradiction. This completes the proof that $i^*$ is not zero in $\cal H$. Therefore, once we have constructed an epimorphism $i^*: Q^* \to Ber^{-1}$ in $\cal T$, this proves the proposition. \bigskip\noindent {\it Existence}. To define $i^*$ recall that the boundary morphisms $d^*$ are dual to the morphisms $d$ defined in [BS2]: with respect to the decomposition $$ V_n \ = \bigoplus_{\mu \leq 0\ ,\ l(\mu,0)=n} V(\mu) \ ,$$ the morphisms $d$ are defined as sums of morphisms $f_{\lambda\mu}$. See [BS2], lemma 7.1. These $f_{\lambda\mu}$ are obtained as follows: there exist a projective $P=P(\lambda)$, an endomorphism $f:P\to P$ and a filtration $N \subset M \subset P$ with $P/M =V(\lambda)$ and $M/N = V(\mu)$ such that $f(P)\subset M$ and $f(M)\subset N$, so that $f$ induces the morphism $f_{\lambda\mu}:P/M=V(\lambda) \to M/N=V(\mu)$. \bigskip\noindent To define an epimorphism $$ i^*: Q^* \cong Im\bigl(d:V_n \to V_{n-1}\bigr) \ \twoheadrightarrow Ber^{-1} \ $$ notice that $d:V_n \to V_{n-1}$ is $\sum d_\mu$ for $d_\mu=d\vert_{V(\mu)}: \ V(\mu) \to V_{n-1}$ where $\mu < 0$ and $l(\mu,0)=n$. The cosocle of $V(\mu)$ is the simple object $L(\mu)$. Hence $V(Ber^{-1})$ is the unique summand of $V_n$ with cosocle $Ber^{-1}$. Since none of the morphisms $d_\mu$ is trivial in $\cal T$ (see [BS2]), the summand $Ber^{-1}$ in the cosocle of $V_n$ maps nontrivially to the cosocle of its image $Q^*$ in $V_{n-1}$. The only indecomposable summand $V(\mu)$ of $V_n$ containing $Ber^{-1}$ in its cosocle is $V(Ber^{-1})$. The cosocle of $V(Ber^{-1})$ therefore injects into the cosocle of $Q^*$ by the definition of the morphism $d$; this yields the desired epimorphism $i^*: Q^* \twoheadrightarrow Ber^{-1}$ and completes the proof of proposition \ref{xi}.
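\bigskip\noindent {\it Remark}. The two binomial identities to which the recursion relations in the proof of lemma \ref{mult} were reduced are elementary, but easy to misstate. The following minimal Python sketch (only an informal check over a hypothetical sample range; it is not part of the formal argument) verifies both of them, writing {\tt t(n1,n2)} for the trinomial coefficient ${n_1+n_2 \choose n_1,\,1,\,n_2-1}$.

\begin{verbatim}
from math import comb, factorial

def t(n1, n2):
    # trinomial (n1+n2)! / (n1! * 1! * (n2-1)!)
    return factorial(n1 + n2) // (factorial(n1) * factorial(n2 - 1))

for n1 in range(1, 12):
    for n2 in range(1, 12):
        lhs = 2 * t(n1, n2)
        # identity from the first case (q = 1)
        rhs1 = (comb(n1 + n2, n1) + comb(n1 + n2, n1 + 1)
                + comb(n1 + n2, n1 + 1) * comb(n1, 1)
                + comb(n1 + n2, n1) * comb(n2 - 1, 1))
        # identity from the second case (q >= 2)
        rhs2 = (comb(n1 + n2, n1)
                + comb(n1 + n2, n1) * comb(n2 - 1, 1)
                + t(n1, n2))
        assert lhs == rhs1 == rhs2

print("both identities verified on the sample range")
\end{verbatim}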
\bigskip\noindent \section{Appendix: Semisimplicity}\label{semisim} \bigskip\noindent We consider a triangulated category\footnote{As pointed out by Heidersdorf, this is a special case of a triangulated category $\cal B$ with a cluster tilting cotorsion pair $({\cal U},{\cal V})$ (see [Na]).} $\cal B$, such that there are strictly full additive subcategories ${\cal B}_0$ and ${\cal B}_1$ with the following properties \begin{enumerate} \item ${\cal B} = {\cal B}_0 \oplus {\cal B}_1 $ \item ${\cal B}_0[1] = {\cal B}_1$ \item ${\cal B}_1[1] = {\cal B}_0$ \item $Hom_{\cal B}({\cal B}_0,{\cal B}_1)=0$. \end{enumerate} \bigskip\noindent By properties 1. and 4. it easily follows that the decomposition 1. is functorial, so that there are adjoint functors $\tau_0:{\cal B} \to {\cal B}_0$ and $\tau_1: {\cal B}\to {\cal B}_1$ with functorial distinguished triangles $(\tau_0(X),X,\tau_1(X))$ such that $Hom_{\cal B}(A,X)= Hom_{\cal B}(A,\tau_0(X))$ for $A\in {\cal B}_0$ and similarly $Hom_{\cal B}(A,X)= Hom_{\cal B}(A,\tau_1(X))$ for $A\in {\cal B}_1$. The next lemma immediately follows from the long exact $Hom$-sequences attached to distinguished triangles. \bigskip\noindent {\bf Extension Lemma}. {\it If $A,C\in {\cal B}_0$ and $(A,B,C)$ is a distinguished triangle, then $B\in {\cal B}_0$.} \bigskip\noindent \begin{Lemma} {\it ${\cal B}_0$ is an abelian category.} \end{Lemma} \bigskip\noindent {\it Proof}. For a morphism $f:X\to Y$ in ${\cal B}_0$ let $Z_f$ be a cone in $\cal B$. Put $Ker_f = \tau_0(Z_f[-1])$ and $Koker_f = \tau_0(Z_f)$. Then $Ker_f,Koker_f$ are in ${\cal B}_0$ and represent the kernel resp. kokernel of $f$ in ${\cal B}_0$. This is an immediate consequence of the long exact $Hom$-sequences and property 4. Furthermore $f$ factorizes as $f=b\circ a$ with $a:X\to Z$ and $b:Z\to Y$ by the octahedral axiom, where $Z$ is a cone of the composed morphism $i$ defined by $Ker_f \to Z_f[-1] \to X$. By the octahedral axiom there exists a distinguished triangle $(Koker_f,Z,Y)$. Hence by the extension lemma $Z\in {\cal B}_0$. The octahedral axiom provides the morphisms $a$ and $b$ and proves $a_*: Z \cong Koker_i$ and similarly $b_*: Z \cong Ker_\pi$ for the morphism $\pi: Y \to Koker_f$. Since ${\cal B}_0$ is obviously an additive subcategory by the functoriality of the decomposition 1., this implies that ${\cal B}_0$ is an abelian category. QED \bigskip\noindent By construction the exact sequences $0\to A\to B\to C\to 0$ in ${\cal B}_0$ correspond to the distinguished triangles $(A,B,C)$ in ${\cal B}_0$. \begin{Lemma} \label{sem}{\it The abelian category ${\cal B}_0$ is semisimple.} \end{Lemma} \bigskip\noindent {\it Proof}. For a short exact sequence the corresponding triangle $(A,B,C)$ splits, since the morphism $C\to A[1]$ vanishes by property 4. QED \begin{Corollary}\label{abelian} {\it ${\cal B}$ is a semisimple abelian category.} \end{Corollary} \bigskip\noindent Suppose $(X,Y,Z)$ is a distinguished triangle in $\cal B$. Then for $Z\in {\cal B}_0$ there exists an exact sequence $$ 0 \longrightarrow \tau_0(X) \longrightarrow \tau_0(Y) \longrightarrow \tau_0(Z) $$ in ${\cal B}_0$. Similarly, if $X\in {\cal B}_0$ there exists an exact sequence $$ \tau_0(X) \longrightarrow \tau_0(Y) \longrightarrow \tau_0(Z) \longrightarrow 0 \ . $$ These statements follow immediately from the long exact $Hom$-sequences and the assumptions 2. and 3. Using the argument of [KW], thm. 4.4 this implies \begin{Lemma} {\it Put $H^i(X)=\tau_0(X[i])$.
Then for a distinguished triangle $(X,Y,Z)$ in $\cal B$ there exists a long exact cohomology sequence in ${\cal B}_0$} $$ ... \to H^{-1}(Z) \to H^0(X) \to H^0(Y) \to H^0(Z) \to H^1(X) \to ... \ .$$ \end{Lemma} \bigskip\noindent In our case ${\cal B}={\cal B}_{m\vert n}$ satisfies these properties 1.-4., and indeed the suspension functor is induced by the parity shift functor $\Pi$ on $svec_K$ via the equivalence ${\cal B} \sim Rep_k(H) \otimes_k svec_K$ of categories. Using $A[2] \cong A$ defined by $id_A \otimes \zeta_2^{-1}$ and the identification ${\cal B} = sRep_K(H)$, we may identify the suspension functor with the parity shift functor, with functorial isomorphisms $H^{2i}(A)\cong A_+$ and $H^{2i+1}(A) \cong A_-$ for $A=A_+ \oplus A_-$ in $sRep_K(H)$. Hence the long exact sequence of the cohomology becomes a hexagon. ${\cal B}_{m\vert n}$ is a $\Pi$-category, a triangulated category enhanced by a super space structure, in the following sense: \bigskip\noindent By definition a $\Pi$-category is a triangulated category with the properties 1.-4. from above such that there exist functorial isomorphisms $\Pi^2(A)\cong A$ for the suspension $\Pi(A) := A[1]$. For a functor $F:{\cal A} \to {\cal B}$ we then write $F(A) \cong F(A)_+ \oplus F(A)_-$, where $H^0F(A)=F(A)_+$ and $H^1F(A)=F(A)_-$. An additive functor $F$ from an abelian category $\cal A$ to a $\Pi$-category $\cal B$ will be called weakly exact if $F(A)_+$ and $F(A)_-$ transform short exact sequences $0\to A \to B\to C \to 0$ into exact hexagons in ${\cal B}_0$ $$ \xymatrix{ & F(B)_+ \ar[r]& F(C)_+ \ar[dr]& \cr F(A)_+ \ar[ur]& & & F(A)_- \ar[dl]\cr & F(C)_- \ar[ul]& F(B)_- \ar[l]& \cr} $$ From the definition of the functor $\varphi = \gamma\circ \beta \circ \alpha$ the following is then obvious \bigskip\noindent \begin{Lemma} The functor $\varphi: {\cal T}_{m\vert n} \to sRep_K(H)$ is weakly exact. \end{Lemma} \newpage \centerline{\bf References} \bigskip\noindent [AK] Andr\'e Y., Kahn B., Nilpotence, radicaux et structures mono\"{\i}dales, arXiv:math/0203273 (2002) \bigskip\noindent [BR] Berele A., Regev A., Hook Young diagrams with applications to combinatorics and to representation theory of Lie superalgebras, Adv. in Math. 64 (1987), 118--175 \bigskip\noindent [BKN1] Boe B.D., Kujawa J.R., Nakano D.K., Cohomology and support varieties for Lie superalgebras (2008) \bigskip\noindent [BKN2] Boe B.D., Kujawa J.R., Nakano D.K., Cohomology and support varieties for Lie superalgebras II (2008) \bigskip\noindent [Ba] Balmer P., Spectra, spectra, spectra -- Tensor triangular spectra versus Zariski spectra of endomorphism rings, preprint \bigskip\noindent [Br] Brundan J., Kazhdan-Lusztig polynomials and character formulae for the Lie superalgebra $gl(m\vert n)$, Journal of the AMS, vol. 16, no. 1, p. 185--231 (2002) \bigskip\noindent [BS] Balmer P., Schlichting M., Idempotent completion of triangulated categories, J.
Algebra, 236(2) (2001), 819--834 \bigskip\noindent [BS1] Brundan J., Stroppel C., Highest weight categories arising from Khovanov's diagram algebra I: Cellularity, arXiv (2009) \bigskip\noindent [BS2] Brundan J., Stroppel C., Highest weight categories arising from Khovanov's diagram algebra II: Koszulity, arXiv (2009) \bigskip\noindent [BS3] Brundan J., Stroppel C., Highest weight categories arising from Khovanov's diagram algebra III: Category $\cal O$, arXiv (2010) \bigskip\noindent [BS4] Brundan J., Stroppel C., Highest weight categories arising from Khovanov's diagram algebra IV: The general linear supergroup, arXiv (2010) \bigskip\noindent [D] Deligne P., Cat\'egories tensorielles, Moscow Mathematical Journal, vol. 2, number 2 (2002), 227--248 \bigskip\noindent [D2] Deligne P., Cat\'egories tannakiennes, The Grothendieck Festschrift, vol. II, Progress in Mathematics 87, Birkh\"auser 1990 \bigskip\noindent [DM] Deligne P., Milne J.S., Tannakian Categories, in Hodge Cycles, Motives, and Shimura Varieties, Springer Lecture Notes 900 (1982) \bigskip\noindent [DS] Duflo M., Serganova V., On associated variety for Lie superalgebras (2008) \bigskip\noindent [DSp] Dwyer W.G., Spalinski J., Homotopy theories and model categories, Handbook of algebraic topology, ed. I.M. James, Elsevier (1995) \bigskip\noindent [G] Germoni J., Indecomposable representations of special linear Lie superalgebras, J. of Algebra 209, 367--401 (1998) \bigskip\noindent [GM] Gelfand S.I., Manin Y.I., Methods of Homological Algebra, Springer Monographs in Mathematics \bigskip\noindent [He1] Heidersdorf T., Semisimple quotients of representation categories of Lie superalgebras and the case of sl(2,1), preprint (2010) \bigskip\noindent [He2] Heidersdorf T., Representations of the Lie superalgebra osp(2,2n) and the Lie algebra sp(2n-2), preprint (2010) \bigskip\noindent [Ha] Happel D., Triangulated categories in the representation theory of finite dimensional algebras, London Math. Society Lecture Note Series 119, Cambridge University Press (1988) \bigskip\noindent [Hi] Hirschhorn P.S., Model categories and their localizations (2003), ISBN 0-8218-3279-4 \bigskip\noindent [HPS] Hovey M., Palmieri J.H., Strickland N.P., Axiomatic stable homotopy theory, Mem. Amer. Math. Soc. 128 (1997), no. 610 \bigskip\noindent [H] Hovey M., Model categories, Mathematical Surveys and Monographs vol. 63, AMS \bigskip\noindent [JHKTM] Van der Jeugt J., Hughes J.W.B., King R.C., Thierry-Mieg J., Character formulas for irreducible modules of the Lie superalgebra $sl(m\vert n)$, J. Math. Phys. 31 (1990), no. 9, 2278--2304 \bigskip\noindent [JHKTM2] Van der Jeugt J., Hughes J.W.B., King R.C., Thierry-Mieg J., A character formula for singly atypical modules of the Lie superalgebra $sl(m\vert n)$, Comm. Algebra 18 (1990), no. 10, 3453--3480 \bigskip\noindent [Ke] Keller B., Derived categories and tilting, in: Handbook of tilting theory, 49--104, London Math. Society Lect.
series 332, Cambridge University Press (2007) \bigskip\noindent [Ke2] Keller B., Derived categories and their uses, preprint \bigskip\noindent [Li] Littlewood D.E., The theory of group characters, Oxford University Press, Oxford 1950 \bigskip\noindent [Na] Nakaoka H., General heart construction on a triangulated category (I): Unifying t-structures and cluster tilting subcategories, arXiv:0907.2080v6 \bigskip\noindent [N] Neeman A., Triangulated categories, Annals of Mathematics Studies 148, Princeton University Press 2001 \bigskip\noindent [Sch] Scheunert M., The Theory of Lie Superalgebras, Lecture Notes in Mathematics 716, Springer 1979 \bigskip\noindent [Se] Sergeev A.N., Tensor algebra of the identity representation as a module over the Lie superalgebras $Gl(n,m)$ and $Q(n)$, Mat. Sb. 123 (165) (1984), no. 3, 422--430 \bigskip\noindent [V] Verdier J.L., Cat\'egories d\'eriv\'ees, in SGA 4$\frac{1}{2}$, Springer Lecture Notes 569, Cohomologie \'Etale \bigskip\noindent [W] Weissauer R., Model structures, categorial quotients and representations of super commutative Hopf algebras I, preprint 2010 \bigskip\noindent [W1] Weissauer R., Semisimple algebraic tensor categories, arXiv:0909.1793v2 \end{document}
\section{Introduction} \label{S-1} The topic of boundary value problems for elliptic operators in the upper half-space is a venerable subject which has received much attention throughout the years. While there is a wealth of results in which the smoothness of solutions and boundary data are measured on the scales of Sobolev, Besov, and Triebel-Lizorkin spaces (cf., e.g., \cite{ADNI}, \cite{ADNII}, \cite{FrRu}, \cite{Joh}, \cite{KMR1}, \cite{KMR2}, \cite{LionsMagenes}, \cite{Lop}, \cite{MaMiSh}, \cite{MazShap}, \cite{RuSi96}, \cite{Shap}, \cite{Sol1}, \cite{Sol2}, \cite{Taylor}, \cite{Tr83}, \cite{Tr95}, \cite{WRL} and the literature cited therein), the scenario in which the boundary traces are taken in a nontangential pointwise sense and the size of the solutions is measured using the nontangential maximal operator is considerably less understood. A notable exception is the case when the differential operator involved is the Laplacian, a situation dealt with at length in a number of monographs (cf., e.g., \cite{ABR}, \cite{GCRF85}, \cite{St70}, \cite{Stein93}, \cite{SW}). However, such undertakings always seem to employ rather specialized properties of harmonic functions, so new ideas are required when dealing with more general second-order elliptic systems in place of the Laplacian. In a sequence of recent works (cf. \cite{Madrid}, \cite{Holder-MMM}, \cite{H-MMMM}, \cite{K-MMMM}, \cite{S-MMMM}, \cite{BMO-MMMM}, \cite{SCGC}, \cite{B-MMMM}) the authors have systematically studied Fatou-type theorems and boundary value problems in the upper half-space for second-order elliptic homogeneous constant complex coefficient systems, in a formulation which emphasizes the nontangential pointwise behavior of the solutions on the boundary. The goal of this paper is to present for the first time a coherent, inclusive account of the progress registered so far in \cite{Madrid}-\cite{B-MMMM}. In \S\ref{S-2}, much attention is devoted to the topics of Poisson kernel and Fatou-type theorem. Complex problems typically call for a structured approach, and this is the path we follow vis-\`a-vis the notion of Poisson kernel. Its original development is typically associated with the names of Agmon, Douglis, Nirenberg, Lopatinski\u{i}, Shapiro, Solonnikov, among others (cf. \cite{ADNI}-\cite{ADNII}, \cite{Lop}, \cite{Shap}-\cite{Sol2}), and here we further contribute to the study of Poisson kernels associated with second-order elliptic systems from the point of view of harmonic analysis. As regards the second topic of interest in \S\ref{S-2} mentioned earlier, recall that the trademark blueprint of a Fatou-type theorem is that certain size and integrability properties of a null-solution of an elliptic equation in a certain domain (often formulated in terms of the nontangential maximal operator) imply the a.e. existence of the pointwise nontangential boundary trace of the said function. Our Fatou-type theorems follow this design and are also quantitative in nature since the boundary trace does not just simply exist but encodes significant information regarding the size of the original function. In \S\ref{S-3} such results are used as tools for proving that a variety of boundary value problems for elliptic systems in the upper half-space are well-posed.
In particular, here we monitor how the format of the problem changes as the space of boundary data morphs from the Lebesgue scale $L^p$ with $1<p<\infty$, to the space of essentially bounded functions, to the space of functions of bounded mean oscillations and, further, to the space of H\"older continuous functions (or, more generally, the space of functions with sublinear growth). A significant number of results are new, and particular care is paid to understanding the extent to which the emerging theory is optimal. Along the way, a large number of relevant open problems are singled out for further study. We proceed to describe the class of systems employed in this work. Throughout, fix $n\in{\mathbb{N}}$ satisfying $n\geq 2$, along with $M\in{\mathbb{N}}$. Consider a second-order, homogeneous, $M\times M$ system, with constant complex coefficients, written (with the usual convention of summation over repeated indices always in place, unless otherwise mentioned) as \begin{equation}\label{L-def} Lu:=\Bigl(\partial_r(a^{\alpha\beta}_{rs}\partial_s u_\beta)\Bigr)_{1\leq\alpha\leq M}, \end{equation} when acting on $u=(u_\beta)_{1\leq\beta\leq M}$ whose components are distributions in an open subset of ${\mathbb{R}}^n$. Assume that $L$ is elliptic in the sense that there exists some $c\in(0,\infty)$ such that \begin{equation}\label{L-ell.X} \begin{array}{c} {\rm Re}\,\bigl[a^{\alpha\beta}_{rs}\xi_r\xi_s\overline{\eta_\alpha} \eta_\beta\,\bigr]\geq c|\xi|^2|\eta|^2\,\,\mbox{ for every} \\[8pt] \xi=(\xi_r)_{1\leq r\leq n}\in{\mathbb{R}}^n\,\,\mbox{ and }\,\, \eta=(\eta_\alpha)_{1\leq\alpha\leq M}\in{\mathbb{C}}^M. \end{array} \end{equation} Examples include scalar operators, such as the Laplacian $\Delta=\sum\limits_{j=1}^n\partial_j^2$ or, more generally, operators of the form ${\rm div}A\nabla$ with $A=(a_{rs})_{1\leq r,s\leq n}$ an $n\times n$ matrix with complex entries satisfying the ellipticity condition \begin{equation}\label{YUjhv-753} \inf_{\xi\in S^{n-1}}{\rm Re}\,\big[a_{rs}\xi_r\xi_s\bigr]>0, \end{equation} (where $S^{n-1}$ denotes the unit sphere in ${\mathbb{R}}^n$), as well as complex versions of the Lam\'e system of elasticity \begin{equation}\label{TYd-YG-76g} \begin{array}{c} L:=\mu\Delta+(\lambda+\mu)\nabla{\rm div}\,\,\text{ where the Lam\'e moduli }\,\,\lambda,\mu\in{\mathbb{C}} \\[6pt] \text{satisfy }\,\,{\rm Re}\,\mu>0\,\,\mbox{ and }\,\,{\rm Re}\,(2\mu+\lambda)>0. \end{array} \end{equation} The last condition above is equivalent to the demand that the Lam\'e system \eqref{TYd-YG-76g} is Legendre-Hadamard elliptic (in the sense of \eqref{L-ell.X}). While the Lam\'e system is symmetric, we stress that the results in this paper require no symmetry for the systems involved. We shall work in the upper half-space \begin{equation}\label{RRR-UpHs} {\mathbb{R}}^{n}_{+}:=\big\{x=(x',x_n)\in {\mathbb{R}}^{n}={\mathbb{R}}^{n-1}\times{\mathbb{R}}:\,x_n>0\big\} \end{equation} whose topological boundary we shall henceforth identify with the horizontal hyperplane ${\mathbb{R}}^{n-1}$ via $\partial{\mathbb{R}}^{n}_{+}\ni(x',0)\equiv x'\in{\mathbb{R}}^{n-1}$. The origin in ${\mathbb{R}}^{n-1}$ is denoted by $0'$, and we agree to let $B_{n-1}(x',r):=\{y'\in{\mathbb{R}}^{n-1}:\,|x'-y'|<r\}$ stand for the $(n-1)$-dimensional ball centered at $x'\in{\mathbb{R}}^{n-1}$ and of radius $r>0$. We shall also let ${\mathbb{N}}_0$ stand for the collection of all non-negative integers. 
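The ellipticity condition \eqref{L-ell.X} is straightforward to test numerically. For the Lam\'e system \eqref{TYd-YG-76g}, one admissible writing of the coefficient tensor is $a^{\alpha\beta}_{rs}=\mu\,\delta_{\alpha\beta}\delta_{rs}+(\lambda+\mu)\,\delta_{r\alpha}\delta_{s\beta}$, for which the quadratic form in \eqref{L-ell.X} becomes $\mu|\xi|^2|\eta|^2+(\lambda+\mu)|\xi\cdot\eta|^2$. The following minimal Python sketch (offered only as an informal illustration; the choice of moduli and the sample size are hypothetical) samples the real part of this form on unit vectors and exhibits the lower bound $\min\{{\rm Re}\,\mu,\,{\rm Re}\,(2\mu+\lambda)\}$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# hypothetical Lame moduli with Re(mu) > 0 and Re(2*mu + lam) > 0
lam, mu = -1.0 + 0.3j, 1.0 + 2.0j
n = 3

def re_form(xi, eta):
    # Re[a^{ab}_{rs} xi_r xi_s conj(eta_a) eta_b] for the choice
    # a^{ab}_{rs} = mu d_{ab} d_{rs} + (lam + mu) d_{ra} d_{sb},
    # which equals mu |xi|^2 |eta|^2 + (lam + mu) |xi . eta|^2
    return (mu * (xi @ xi) * np.vdot(eta, eta)
            + (lam + mu) * abs(xi @ eta) ** 2).real

vals = []
for _ in range(10000):
    xi = rng.standard_normal(n)
    xi /= np.linalg.norm(xi)
    eta = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    eta /= np.linalg.norm(eta)
    vals.append(re_form(xi, eta))

# the sampled minimum stays near min(Re(mu), Re(lam + 2*mu)) = 1
print(min(vals))
\end{verbatim}

Rerunning the sketch with moduli violating \eqref{TYd-YG-76g} (say, with ${\rm Re}\,(2\mu+\lambda)<0$) typically produces negative samples, in line with the equivalence mentioned above.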
Finally, we will adopt the standard custom of allowing the letter $C$ to denote constants which may vary from one occurrence to another. \section{Poisson Kernels and General Fatou-Type Results} \label{S-2} Poisson kernels for elliptic operators in a half-space have a long history (see, e.g., \cite{ADNI}, \cite{ADNII}, \cite{Lop}, \cite{Shap}, \cite{Sol1}, \cite{Sol2}). In the theorem below we single out the most essential features which identify these objects uniquely. \begin{theorem}\label{thm:Poisson} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Then there exists a matrix-valued function \begin{equation}\label{PP-OO} P^L=\big(P^L_{\alpha\beta}\big)_{1\leq\alpha,\beta\leq M}: \mathbb{R}^{n-1}\longrightarrow\mathbb{C}^{M\times M} \end{equation} {\rm (}called the Poisson kernel for $L$ in $\mathbb{R}^{n}_{+}${\rm )} satisfying the following properties: \begin{list}{$(\theenumi)$}{\usecounter{enumi}\leftmargin=.8cm \labelwidth=.8cm\itemsep=0.2cm\topsep=.1cm \renewcommand{\theenumi}{\alph{enumi}}} \item There exists $C\in(0,\infty)$ such that \begin{equation}\label{eq:IG6gy} |P^L(x')|\leq\frac{C}{(1+|x'|^2)^{\frac{n}2}}\quad\mbox{for each }\,\,x'\in\mathbb{R}^{n-1}. \end{equation} \item The function $P^L$ is Lebesgue measurable and \begin{equation}\label{eq:IG6gy.2} \int_{\mathbb{R}^{n-1}}P^L(x')\,dx'=I_{M\times M}, \end{equation} where $I_{M\times M}$ denotes the $M\times M$ identity matrix. \item If one sets \begin{equation}\label{eq:Gvav7g5} \begin{array}{c} K^L(x',t):=P^L_t(x')=t^{1-n}P^L(x'/t) \\[6pt] \mbox{for each }\,\,x'\in\mathbb{R}^{n-1}\,\,\,\mbox{ and }\,\,t>0, \end{array} \end{equation} then the $\mathbb{C}^{M\times M}$-valued function $K^L$ satisfies {\rm (}with $L$ acting on the columns of $K^L$ in the sense of distributions{\rm )} \begin{equation}\label{uahgab-UBVCX} LK^L=0\cdot I_{M\times M}\,\,\text{ in }\,\,\big[{\mathcal{D}}'(\mathbb{R}^{n}_{+})\big]^{M\times M}. \end{equation} \item The Poisson kernel $P^L$ is unique in the class of $\mathbb{C}^{M\times M}$-valued functions defined in ${\mathbb{R}}^{n-1}$ and satisfying $(a)$-$(c)$ above. \end{list} \end{theorem} Concerning Theorem~\ref{thm:Poisson}, we note that the existence part follows from the classical work of S.\,Agmon, A.\,Douglis, and L.\,Nirenberg in \cite{ADNII} (cf. also \cite{Lop}, \cite{Shap}-\cite{Sol2}). The uniqueness property has been recently proved in \cite{K-MMMM}. The Poisson kernel introduced above is the basic tool used to construct solutions for the Dirichlet problem for the system $L$ in the upper half-space. This is most apparent from Theorem~\ref{thm:Poisson.II} stated a little further below. For now, we proceed to define the nontangential maximal operator and the nontangential boundary trace. Specifically, having fixed some aperture parameter $\kappa>0$, at each point $x'\in\partial{\mathbb{R}}^{n}_{+}\equiv{\mathbb{R}}^{n-1}$ we define the conical nontangential approach region with vertex at $x'$ as \begin{equation}\label{NT-1} \Gamma_\kappa(x'):=\big\{y=(y',t)\in{\mathbb{R}}^{n}_{+}:\,|x'-y'|<\kappa\,t\big\}. \end{equation} Given a continuous vector-valued function $u:{\mathbb{R}}^{n}_{+}\to{\mathbb{C}}^M$, we then define the nontangential maximal operator acting on $u$ by setting \begin{equation}\label{NT-Fct} \big({\mathcal{N}}_\kappa u\big)(x'):=\sup\big\{|u(y)|:\,y\in\Gamma_\kappa(x')\big\},\qquad x'\in{\mathbb{R}}^{n-1}. 
\end{equation} We shall also need a version of the nontangential maximal operator in which the supremum is now taken over cones truncated near the vertex. Specifically, given a continuous vector-valued function $u:{\mathbb{R}}^{n}_{+}\to{\mathbb{C}}^M$, for each $\varepsilon>0$ define \begin{equation}\label{NT-Fct-EP} \big({\mathcal{N}}^{(\varepsilon)}_\kappa u\big)(x'):=\sup\big\{|u(y)|:\, y=(y',t)\in\Gamma_\kappa(x')\,\text{ with }\,t>\varepsilon\big\} \end{equation} at each $x'\in{\mathbb{R}}^{n-1}$. Whenever meaningful, the $\kappa$-nontangential pointwise boundary trace of a continuous vector-valued function $u:{\mathbb{R}}^{n}_{+}\to{\mathbb{C}}^M$ is given by \begin{equation}\label{nkc-EE-2} \Big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}\Big)(x') :=\lim_{\Gamma_{\kappa}(x')\ni y\to (x',0)}u(y) \,\,\mbox{ for }\,\,x'\in\partial{\mathbb{R}}^{n}_{+}\equiv{\mathbb{R}}^{n-1}. \end{equation} It is then clear from definitions that for any continuous vector-valued function $u:{\mathbb{R}}^{n}_{+}\to{\mathbb{C}}^M$ and any $\varepsilon,\kappa>0$ we have \begin{equation}\label{NT-Fct-EP.anB} \begin{array}{c} {\mathcal{N}}^{(\varepsilon)}_\kappa u,\,\,\,{\mathcal{N}}_\kappa u \,\,\text{ are lower semicontinuous}, \\[6pt] 0\leq{\mathcal{N}}^{(\varepsilon)}_\kappa u\leq{\mathcal{N}}_\kappa u \,\,\text{ on }\,\,\partial{\mathbb{R}}^{n}_{+}\equiv{\mathbb{R}}^{n-1}. \end{array} \end{equation} In addition, for each such function $u$ we have \begin{equation}\label{6543} \|u\|_{[L^\infty(\mathbb{R}^n_{+})]^M}=\|\mathcal{N}_\kappa u\|_{L^\infty(\mathbb{R}^{n-1})}. \end{equation} Finally, whenever the nontangential boundary trace exists, we have \begin{equation}\label{nkc-EE-4} \begin{array}{c} u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ is a Lebesgue measurable function} \\[6pt] \text{and }\,\,\left|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\right|\leq{\mathcal{N}}_\kappa u \,\,\text{ on }\,\,\partial{\mathbb{R}}^{n}_{+}\equiv{\mathbb{R}}^{n-1}. \end{array} \end{equation} Prior to stating our next result, we make some comments further clarifying notation and terminology. Throughout, we agree to denote by $\mathcal{M}$ the Hardy-Littlewood maximal operator on $\mathbb{R}^{n-1}$. This acts on each vector-valued function $f$ with components in $L^1_{\rm loc}({\mathbb{R}}^{n-1})$ according to \begin{equation}\label{MMax} \big(\mathcal{M}f\big)(x'):=\sup_{Q\ni x'}\frac{1}{{\mathcal{L}}^{n-1}(Q)} \int_Q|f|\,d{\mathcal{L}}^{n-1},\qquad\forall\,x'\in\mathbb{R}^{n-1}, \end{equation} where the supremum runs over all cubes $Q$ in $\mathbb{R}^{n-1}$ containing $x'$, and where ${\mathcal{L}}^{n-1}$ denotes the $(n-1)$-dimensional Lebesgue measure in ${\mathbb{R}}^{n-1}$. Next, pick some integrability exponent $q\in(1,\infty)$ (whose actual choice is ultimately immaterial), and fix an arbitrary $p\in\big(\tfrac{n-1}{n}\,,\,1\big]$. Recall that a Lebesgue measurable function $a:\mathbb{R}^{n-1}\rightarrow\mathbb{C}$ is said to be a $(p,q)$-atom if for some cube $Q\subset\mathbb{R}^{n-1}$ one has \begin{equation}\label{defi-atom} {\rm supp}\,a\subset Q,\quad\|a\|_{L^q(\mathbb{R}^{n-1})}\leq{\mathcal{L}}^{n-1}(Q)^{1/q-1/p}, \quad\int_{\mathbb{R}^{n-1}}a\,d{\mathcal{L}}^{n-1}=0.
\end{equation} One may then define the Hardy space $H^p(\mathbb{R}^{n-1})$ as the collection of all tempered distributions $f\in{\mathcal{S}}'(\mathbb{R}^{n-1})$ which may be written as \begin{equation}\label{66tt} f=\sum_{j\in{\mathbb{N}}}\lambda_j\,a_j\,\,\text{ in }\,\,{\mathcal{S}}'(\mathbb{R}^{n-1}) \end{equation} for some sequence $\{a_j\}_{j\in{\mathbb{N}}}$ of $(p,q)$-atoms and a sequence $\{\lambda_j\}_{j\in{\mathbb{N}}}\in\ell^p$. For each $f\in H^p(\mathbb{R}^{n-1})$ we then set $\|f\|_{H^p(\mathbb{R}^{n-1})}:=\inf\Big(\sum_{j\in{\mathbb{N}}}|\lambda_j|^p\Big)^{1/p}$ with the infimum taken over all atomic decompositions of $f$ as $\sum_{j\in{\mathbb{N}}}\lambda_j\,a_j$. In relation to this we wish to make three comments. First, the very definition of the quasi-norm $\|\cdot\|_{H^p(\mathbb{R}^{n-1})}$ implies that whenever $f\in H^p(\mathbb{R}^{n-1})$ is written as in \eqref{66tt} then the series actually converges in $H^p(\mathbb{R}^{n-1})$. Second, from the definition of $\|\cdot\|_{H^p(\mathbb{R}^{n-1})}$ we also see that each $f\in H^p(\mathbb{R}^{n-1})$ has a quasi-optimal atomic decomposition, i.e., $f$ may be written as in \eqref{66tt} with \begin{equation}\label{66tt.222} \frac{1}{2}\Big(\sum_{j\in{\mathbb{N}}}|\lambda_j|^p\Big)^{1/p} \leq\|f\|_{H^p(\mathbb{R}^{n-1})}\leq\Big(\sum_{j\in{\mathbb{N}}}|\lambda_j|^p\Big)^{1/p}. \end{equation} Third, consider the vector case, i.e., the space $\big[H^p({\mathbb{R}}^{n-1})\big]^M$. In such a setting we find it convenient to work with $\mathbb{C}^M$-valued $(p,q)$-atoms. Specifically, these are functions \begin{equation}\label{defi-atom-CM} \begin{array}{c} a\in\big[L^q(\mathbb{R}^{n-1})\big]^M\,\,\text{ such that for some cube $Q\subset\mathbb{R}^{n-1}$ one has} \\[6pt] {\rm supp}\,a\subset Q,\quad\|a\|_{[L^q(\mathbb{R}^{n-1})]^M}\leq{\mathcal{L}}^{n-1}(Q)^{1/q-1/p}, \\[6pt] \text{and }\displaystyle\int_{\mathbb{R}^{n-1}}a\,d{\mathcal{L}}^{n-1}=0\in{\mathbb{C}}^M. \end{array} \end{equation} Suppose now that some $f=(f_\beta)_{1\leq\beta\leq M}\in\big[H^p({\mathbb{R}}^{n-1})\big]^M$ has been given. Then each $f_\beta$ has an atomic decomposition $f_\beta=\sum_{j=1}^\infty\lambda_{\beta j}a_{\beta j}$ (no summation on $\beta$ here) where each $a_{\beta j}$ is a $(p,q)$-atom and $\{\lambda_{\beta j}\}_{j\in{\mathbb{N}}}\in\ell^p$, which is quasi-optimal, hence \begin{align}\label{exist:u-123.Wer.1aaa} \|f_\beta\|_{H^p({\mathbb{R}}^{n-1})}\approx\big(\sum_{j=1}^\infty|\lambda_{\beta j}|^p\big)^{1/p} \,\,\text{ for each }\,\,\beta\in\{1,\dots,M\}. \end{align} Using the Kronecker symbol formalism, introduce ${\mathbf{e}}_\beta:=(\delta_{\gamma\beta})_{1\leq\gamma\leq M}\in{\mathbb{C}}^M$ for each index $\beta\in\{1,\dots,M\}$, then write \begin{align}\label{exist:u-123.Wer.1} f=\sum_{\beta=1}^Mf_\beta{\mathbf{e}}_\beta =\sum_{\beta=1}^M\sum_{j=1}^\infty\lambda_{\beta j}a_{\beta j}{\mathbf{e}}_\beta =\sum_{\beta=1}^M\sum_{j=1}^\infty\lambda_{\beta j}A_{\beta j} \end{align} with convergence in $\big[H^p({\mathbb{R}}^{n-1})\big]^M$, where \begin{align}\label{exist:u-123.Wer.2} A_{\beta j}:=a_{\beta j}{\mathbf{e}}_\beta\,\,\text{ for each }\,\, \beta\in\{1,\dots,M\}\,\,\text{ and }\,\,j\in{\mathbb{N}} \end{align} are ${\mathbb{C}}^M$-valued functions as in \eqref{defi-atom-CM}, hence $\mathbb{C}^M$-valued $(p,q)$-atoms. 
If we then relabel the sequences $\big\{A_{\beta j}\big\}_{\substack{1\leq\beta\leq M\\ j\in{\mathbb{N}}}}$ and $\big\{\lambda_{\beta j}\big\}_{\substack{1\leq\beta\leq M\\ j\in{\mathbb{N}}}}$ simply as $\big\{a_j\big\}_{j\in{\mathbb{N}}}$ and $\big\{\lambda_j\big\}_{j\in{\mathbb{N}}}$, respectively, we may re-cast \eqref{exist:u-123.Wer.1aaa}-\eqref{exist:u-123.Wer.2} as \begin{equation}\label{exist:u-123.Wer.3} \begin{array}{c} f=\sum_{j=1}^\infty\lambda_ja_j\,\,\text{ with convergence in }\,\,\big[H^p({\mathbb{R}}^{n-1})\big]^M, \\[6pt] \text{where each $a_j$ is a $\mathbb{C}^M$-valued $(p,q)$-atom (cf. \eqref{defi-atom-CM})}, \\[6pt] \text{and }\,\,\|f\|_{[H^p({\mathbb{R}}^{n-1})]^M}\approx\big(\sum_{j=1}^\infty|\lambda_j|^p\big)^{1/p}. \end{array} \end{equation} Since the Poisson kernel $P^L$ and the kernel function $K^L$ from Theorem~\ref{thm:Poisson} are of fundamental importance to the work described in this paper, a more in-depth analysis of their main properties is in order. Before stating our theorem addressing this analysis, for each real number $m$ we agree to denote by $L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{m}}\big)$ the space of ${\mathbb{C}}$-valued Lebesgue measurable functions which are absolutely integrable in ${\mathbb{R}}^{n-1}$ with respect to the weighted Lebesgue measure $\tfrac{dx'}{1+|x'|^{m}}$. \begin{theorem}\label{thm:Poisson.II} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Then the Agmon-Douglis-Nirenberg Poisson kernel $P^L$ and the kernel function $K^L$ from Theorem~\ref{thm:Poisson} satisfy the following properties: \begin{list}{$(\theenumi)$}{\usecounter{enumi}\leftmargin=.8cm \labelwidth=.8cm\itemsep=0.2cm\topsep=.1cm \renewcommand{\theenumi}{\alph{enumi}}} \item The function $P^L$ belongs to $\big[{\mathcal{C}}^\infty(\mathbb{R}^{n-1})\big]^{M\times M}$ and satisfies the following non-dege\-neracy property: \begin{equation}\label{eq:IG6gy.2-Lambda} \parbox{6.90cm}{for each $a\in{\mathbb{C}}^M\setminus\{0\}$ one may find some $\lambda>0$ such that $\int_{S^{n-2}}\big|P^L(\lambda\omega)a\big|\,d\omega>0$.} \end{equation} One may extend $K^L$ to a function belonging to $\big[{\mathcal{C}}^\infty\big(\overline{{\mathbb{R}}^n_{+}}\setminus\{0\}\big)\big]^{M\times M}$. Consequently, formula \eqref{uahgab-UBVCX} also holds in a pointwise sense in ${\mathbb{R}}^n_{+}$. Moreover, there exists some constant $C\in(0,\infty)$ such that \begin{equation}\label{Uddcv} |K^L(x',t)|\leq Ct/(t^2+|x'|^2)^{n/2}\,\,\text{ for each } \,\,(x',t)\in{\mathbb{R}}^n_{+}, \end{equation} an estimate which further implies $K^L(x',0)=0$ for each $x'\in{\mathbb{R}}^{n-1}\setminus\{0'\}$. In addition, one has $\int_{\mathbb{R}^{n-1}}K^L(x'-y',t)\,dy'=I_{M\times M}$ for all $(x',t)\in{\mathbb{R}}^n_{+}$, as well as $K^L(\lambda x)=\lambda^{1-n}K^L(x)$ for all $x\in{\mathbb{R}}^n_{+}$ and $\lambda>0$. In particular, for each multi-index $\alpha\in{\mathbb{N}}_0^n$ there exists $C_\alpha\in(0,\infty)$ with the property that \begin{equation}\label{eq:Kest} \big|(\partial^\alpha K^L)(x)\big|\leq C_\alpha\,|x|^{1-n-|\alpha|},\qquad \forall\,x\in{\overline{{\mathbb{R}}^n_{+}}}\setminus\{0\}. \end{equation} \item The following semi-group property holds: \begin{equation}\label{u7tffa} P^L_{t_0+t_1}=P^L_{t_0}\ast P^L_{t_1}\,\,\text{ for all }\,\,t_0,t_1>0. 
\end{equation} \item Given a Lebesgue measurable function $f=(f_\beta)_{1\leq\beta\leq M}:\mathbb{R}^{n-1}\rightarrow\mathbb{C}^M$ satisfying \begin{equation}\label{exist:f} \int_{\mathbb{R}^{n-1}}\frac{|f(x')|}{1+|x'|^n}\,dx'<\infty, \end{equation} at each point $(x',t)\in{\mathbb{R}}^n_{+}$ set \begin{align}\label{exist:u} u(x',t) &:=(P^L_t\ast f)(x') \nonumber\\[6pt] &:=\Bigg(\int_{{\mathbb{R}}^{n-1}}t^{1-n}P^L_{\alpha\beta}\big((x'-y')/t\big)f_\beta(y')\,dy'\Bigg)_{1\leq\alpha\leq M}. \end{align} Then $u:\mathbb{R}^n_{+}\to\mathbb{C}^M$ is meaningfully defined via an absolutely convergent integral, satisfies {\rm (}for each given aperture parameter $\kappa>0${\rm )} \begin{equation}\label{exist:u2} \begin{array}{c} u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[8pt] \text{and }\,\, u\big|_{\partial\mathbb{R}^{n}_{+}}^{{}^{\kappa-{\rm n.t.}}}=f \,\,\mbox{ at each Lebesgue point of $f$} \\[6pt] {\rm (}\text{hence, in particular, at ${\mathcal{L}}^{n-1}$-a.e. point in $\mathbb{R}^{n-1}$}{\rm )}, \end{array} \end{equation} and there exists a constant $C=C(L,\kappa)\in(0,\infty)$ with the property that \begin{equation}\label{exist:Nu-Mf} \big(\mathcal{N}_\kappa u\big)(x')\leq C\,\mathcal{M} f(x')\,\,\text{ for each }\,\,\,x'\in\mathbb{R}^{n-1}. \end{equation} Also, if $L=\Delta$, the Laplacian in ${\mathbb{R}}^n$, then the opposite inequality in \eqref{exist:Nu-Mf} is true as well. Furthermore, the following unrestricted convergence result holds \begin{equation}\label{exist:Nu-Mf-LIM} \begin{array}{c} \lim\limits_{{\mathbb{R}}^n_{+}\ni x\to (x'_0,0)}u(x)=f(x'_0) \\[6pt] \text{if $x'_0\in{\mathbb{R}}^{n-1}$ is a continuity point for $f$}. \end{array} \end{equation} In other words, $u$ given by \eqref{exist:u} extends by continuity to ${\mathbb{R}}^n_{+}\cup\{(x'_0,0)\}$ whenever $x'_0\in{\mathbb{R}}^{n-1}$ is a continuity point for $f$. In particular, \begin{equation}\label{exist:Nu-Mf-LIM.222} \parbox{10.00cm}{whenever $f\in\Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\Big)\cap{\mathcal{C}}^0({\mathbb{R}}^{n-1})\Big]^M$ and $u$ is as in \eqref{exist:u}, then $u$ extends uniquely to a function in $\big[\mathcal{C}^0(\overline{\mathbb{R}^n_{+}})\big]^M$.} \end{equation} \item For each $p\in\big(\tfrac{n-1}{n}\,,\,1\big]$ and each $\alpha\in{\mathbb{N}}_0^n$ with $|\alpha|>0$, the kernel function $K^L$ satisfies \begin{equation}\label{grefr} \begin{array}{c} (\partial^\alpha K^L)(x'-\cdot,t)\in\big[H^p(\mathbb{R}^{n-1})\big]^{M\times M} \,\text{ for all }\,(x',t)\in{\mathbb{R}}^n_{+},\,\text{ and} \\[6pt] \sup\limits_{(x',t)\in{\mathbb{R}}^n_{+}} \big\|t^{|\alpha|-(n-1)(\frac1p-1)}(\partial^\alpha K^L)(x'-\cdot,t)\big\|_{[H^p(\mathbb{R}^{n-1})]^{M\times M}}<\infty. \end{array} \end{equation} In fact, for each $p\in\big(\tfrac{n-1}{n}\,,\,1\big]$, each $q\in[1,\infty]$ with $q>p$, each $\alpha\in{\mathbb{N}}_0^n$ with $|\alpha|>0$, and each $(x',t)\in{\mathbb{R}}^n_{+}$, the function \begin{equation}\label{lY54dvb} m^\alpha_{x',t}:=t^{|\alpha|-(n-1)(\frac1p-1)}(\partial^\alpha K^L)(x'-\cdot,t) \end{equation} is, up to multiplication by some fixed constant $C\in(0,\infty)$ {\rm (}which depends exclusively on $L,p,q,n,\alpha${\rm )}, a ${\mathbb{C}}^{M\times M}$-valued $L^q$-normalized molecule relative to the ball $B_{n-1}(x',t)$ for the Hardy space $\big[H^p(\mathbb{R}^{n-1})\big]^{M\times M}$. 
More precisely, \begin{equation}\label{RWWQD-bb} \int_{\mathbb{R}^{n-1}}m^\alpha_{x',t}\,d{\mathcal{L}}^{n-1}=0\cdot I_{M\times M}, \end{equation} and there exists $C\in(0,\infty)$ such that one has \begin{equation}\label{RWWQD-cc} \big\|m^\alpha_{x',t}\big\|_{[L^q(B_{n-1}(x',t))]^{M\times M}} \leq C{\mathcal{L}}^{n-1}\big(B_{n-1}(x',t)\big)^{\frac1q-\frac1p}, \end{equation} and, using the abbreviation $\varepsilon:=|\alpha|/(n-1)$, for each $k\in{\mathbb{N}}$ one also has \begin{align}\label{RWWQD-cc.222} &\hskip -0.30in \big\|m^\alpha_{x',t}\big\|_{[L^q(B_{n-1}(x',2^{k}t)\setminus B_{n-1}(x',2^{k-1}t))]^{M\times M}} \nonumber\\[6pt] &\hskip 0.80in \leq C2^{k(n-1)\big(\frac1q-1-\varepsilon\big)}{\mathcal{L}}^{n-1}\big(B_{n-1}(x',t)\big)^{\frac1q-\frac1p}. \end{align} \item Given any $\alpha\in{\mathbb{N}}_0^n$, for each fixed $t>0$ the function $(\partial^\alpha K^L)(\cdot,t)$ belongs to the H\"older space $\big[{\mathcal{C}}^\theta({\mathbb{R}}^{n-1})\big]^{M\times M}$ for each exponent $\theta\in(0,1)$. As a consequence of this, given an arbitrary $f=(f_\beta)_{1\leq\beta\leq M}\in\big[H^p(\mathbb{R}^{n-1})\big]^{M}$ with $p\in\big(\tfrac{n-1}{n}\,,\,1\big]$, one may meaningfully define \begin{align}\label{exist:u-123} u(x',t) &:=(P^L_t\ast f)(x') \\[6pt] &:=\Bigg\{\Big\langle f_\beta\,,\,\big[K^L_{\alpha\beta}(x'-\cdot,t)\big]\Big\rangle\Bigg\}_{1\leq\alpha\leq M} \,\text{ for }\,(x',t)\in{\mathbb{R}}^n_{+}, \nonumber \end{align} where $\langle\cdot,\cdot\rangle$ is the pairing between distributions belonging to the Hardy space $H^p(\mathbb{R}^{n-1})$ and equivalence classes {\rm (}modulo constants{\rm )} of functions belonging to the homogeneous H\"older space $\dot{\mathcal{C}}^{(n-1)(1/p-1)}({\mathbb{R}}^{n-1})$ if $p<1$, and to ${\rm BMO}({\mathbb{R}}^{n-1})$ if $p=1$ {\rm (}cf., e.g., \cite[Theorem~5.30, p.307]{GCRF85}{\rm )}. Then \begin{equation}\label{exist:u2-Hp} \begin{array}{c} u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[8pt] \text{and }\,\,u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial\mathbb{R}^{n}_{+}} \,\,\mbox{ exists ${\mathcal{L}}^{n-1}$-a.e. in $\mathbb{R}^{n-1}$}. \end{array} \end{equation} Moreover, there exists a constant $C=C(L,\kappa,p)\in(0,\infty)$ with the property that \begin{equation}\label{exist:Nu-Mf-Hp} \begin{array}{c} \big\|\mathcal{N}_\kappa u\big\|_{L^p(\mathbb{R}^{n-1})} \leq C\|f\|_{[H^p(\mathbb{R}^{n-1})]^M}\,\,\text{ whenever} \\[6pt] f\in\big[H^p(\mathbb{R}^{n-1})\big]^{M}\,\,\text{ and $u$ is as in \eqref{exist:u-123}.} \end{array} \end{equation} \end{list} \end{theorem} We wish to note that, in sharp contrast with \eqref{exist:u2}, in the context of \eqref{exist:u2-Hp} we no longer expect the nontangential pointwise trace $u\big|_{\partial\mathbb{R}^{n}_{+}}^{{}^{\kappa-{\rm n.t.}}}$ to be directly related to $f\in\big[H^p(\mathbb{R}^{n-1})\big]^M$ (which, generally speaking, is merely a tempered distribution). Indeed, in \eqref{FCT-TR.nnn}-\eqref{rt-vba-jg} we present an example in which the said trace vanishes at ${\mathcal{L}}^{n-1}$-a.e. point in $\mathbb{R}^{n-1}$ even though $f\not=0$. \begin{proof}[Proof of Theorem~\ref{thm:Poisson.II}] For items {\it (a)}-{\it (c)} see \cite{K-MMMM}, \cite{S-MMMM}, \cite{SCGC}, \cite{B-MMMM}. To deal with the claims in item {\it (d)}, fix $p,q,\alpha,x',t$ as in the statement.
Then \begin{align}\label{est-theta-vanish} \int_{\mathbb{R}^{n-1}}m^\alpha_{x',t}(y')\,dy' =t^{|\alpha|-(n-1)(\frac{1}{p}-1)}\,\partial^\alpha\int_{\mathbb{R}^{n-1}}K^L(x'-y',t)\,dy'=0 \end{align} since $\int_{\mathbb{R}^{n-1}}K^L(x'-y',t)\,dy'=I_{M\times M}$ and $|\alpha|>0$. This proves \eqref{RWWQD-bb}. Also, based on \eqref{eq:Kest} we may estimate \begin{align}\label{RWWQD-dd} \int_{B_{n-1}(x',t)}\big|m^\alpha_{x',t}(y')\big|^q\,dy' &\leq C\,\int_{B_{n-1}(x',t)}\frac{t^{q|\alpha|-q(n-1)(\frac{1}{p}-1)}}{(t+|x'-y'|)^{q(n-1+|\alpha|)}}\,dy' \nonumber\\[4pt] &\leq C\,\int_{B_{n-1}(x',t)}\frac{t^{q|\alpha|-q(n-1)(\frac{1}{p}-1)}}{t^{q(n-1+|\alpha|)}}\,dy' \nonumber\\[4pt] &=C\Big[{\mathcal{L}}^{n-1}\big(B_{n-1}(x',t)\big)^{\frac1q-\frac1p}\Big]^q, \end{align} and, if $\varepsilon:=|\alpha|/(n-1)$, for every $k\in{\mathbb{N}}$ we may write \begin{align}\label{RWWQD-ee} \int_{B_{n-1}(x',2^kt)\setminus B_{n-1}(x',2^{k-1}t)} &\big|m^\alpha_{x',t}(y')\big|^q\,dy' \nonumber\\[4pt] &\hskip -0.60in \leq C\,\int_{2^{k-1}\,t<|x'-y'|<2^k\,t}\frac{t^{q|\alpha|-q(n-1)(\frac{1}{p}-1)}}{(t+|x'-y'|)^{q(n-1+|\alpha|)}}\,dy' \nonumber\\[4pt] &\hskip -0.60in \leq C\int_{B_{n-1}(x',2^kt)}\frac{t^{q|\alpha|-q(n-1)(\frac{1}{p}-1)}}{(2^k\,t)^{q(n-1+|\alpha|)}}\,dy' \nonumber\\[4pt] &\hskip -0.60in =C\Big[2^{k(n-1)\big(\frac1q-1-\varepsilon\big)}{\mathcal{L}}^{n-1}\big(B_{n-1}(x',t)\big)^{\frac1q-\frac1p}\Big]^q, \end{align} for some constant $C\in(0,\infty)$ independent of $k$, $x'$, and $t$. From these, the estimates claimed in \eqref{RWWQD-cc}, \eqref{RWWQD-cc.222} readily follow. Going further, the first claim in item {\it (e)} is a consequence of the fact that, as seen from \eqref{eq:Kest}, for each $\alpha\in{\mathbb{N}}_0^n$ there exists $C_\alpha\in(0,\infty)$ such that $\big\|(\partial^\alpha K^L)(\cdot,t)\big\|_{[L^\infty({\mathbb{R}}^{n-1})]^{M\times M}}\leq C_\alpha t^{1-n-|\alpha|}$ for every $t>0$, together with an elementary observation to the effect that any bounded Lipschitz function in ${\mathbb{R}}^{n-1}$ belongs to the H\"older space ${\mathcal{C}}^\theta({\mathbb{R}}^{n-1})$ for each exponent $\theta\in(0,1)$. In concert with the identification of the duals of Hardy spaces (cf., e.g., \cite{GCRF85}) this shows that the pairings in \eqref{exist:u-123} are meaningful. To prove \eqref{exist:Nu-Mf-Hp}, fix some $q\in(1,\infty)$ and assume first that $f$ is a single ${\mathbb{C}}^M$-valued $(p,q)$-atom, supported in a cube $Q\subset{\mathbb{R}}^{n-1}$. That is, we need to consider \begin{equation}\label{eq:Fb} u(x',t):=(P^L_t\ast a)(x'),\qquad\forall\,(x',t)\in{\mathbb{R}}^n_{+}, \end{equation} where $a:\mathbb{R}^{n-1}\rightarrow\mathbb{C}^M$ is such an atom (cf. \eqref{defi-atom-CM}). Then, on account of \eqref{exist:Nu-Mf}, H\"older's inequality, the $L^{q}$-boundedness of the Hardy-Littlewood maximal operator, and the normalization of the atom we may write \begin{align}\label{eq:NBV1uj} \int_{\sqrt{n}Q}\big({\mathcal{N}}_\kappa u\big)^p\,d{\mathcal{L}}^{n-1} &\leq C\int_{\sqrt{n}Q}\big({\mathcal{M}}a\big)^p\,d{\mathcal{L}}^{n-1} \nonumber\\[4pt] &\leq C{\mathcal{L}}^{n-1}(Q)^{1-p/q}\Big(\int_{\sqrt{n}Q}\big({\mathcal{M}}a\big)^{q}\,d{\mathcal{L}}^{n-1}\Big)^{p/q} \nonumber\\[4pt] &\leq C{\mathcal{L}}^{n-1}(Q)^{1-p/q}\Big(\int_{\mathbb{R}^{n-1}}\big({\mathcal{M}}a\big)^q\,d{\mathcal{L}}^{n-1}\Big)^{p/q} \nonumber\\[4pt] &\leq C{\mathcal{L}}^{n-1}(Q)^{1-p/q}\|a\|_{[L^q(\mathbb{R}^{n-1})]^M}^p\leq C, \end{align} for some constant $C\in(0,\infty)$ depending only on $n,L,\kappa,p,q$.
To proceed, fix an arbitrary point $x'\in{\mathbb{R}}^{n-1}\setminus\sqrt{n}Q$. If $\ell(Q)$ and $x'_Q$ are, respectively, the side-length and center of the cube $Q$, this choice entails \begin{equation}\label{eq:rEEb} |z'-x_Q'|\leq\max\{\kappa,2\}\big(t+|z'-\xi'|\big), \quad\forall\,(z',t)\in\Gamma_\kappa(x'),\,\,\forall\,\xi'\in Q. \end{equation} Indeed, if $(z',t)\in\Gamma_\kappa(x')$ and $\xi'\in Q$ then, first, $|z'-x'_Q|\leq |z'-\xi'|+|\xi'-x'_Q|$ and, second, $|\xi'-x'_Q|\leq\frac{\sqrt{n}}{2}\ell(Q)\leq\frac{1}{2}|x'-x'_Q|\leq\frac{1}{2}(|x'-z'|+|z'-x'_Q|) \leq\frac{1}{2}(\kappa t+|z'-x'_Q|)$, from which \eqref{eq:rEEb} follows. Next, using \eqref{eq:Gvav7g5}, the vanishing moment condition for the atom, the Mean Value Theorem together with \eqref{eq:Kest} and \eqref{eq:rEEb}, H\"older's inequality and, finally, the support and normalization of the atom, for each $(z',t)\in\Gamma_\kappa(x')$ we may estimate \begin{align}\label{rrff4rf} |(P^L_t\ast a)(z')| & =\Big|\int_{{\mathbb{R}}^{n-1}}\big[K^L(z'-y',t)-K^L(z'-x'_Q,t)\big]a(y')\,dy'\Big| \nonumber\\[4pt] &\leq\int_{Q}\big|K^L(z'-y',t)-K^L(z'-x'_Q,t)\big||a(y')|\,dy' \nonumber\\[4pt] &\leq C\frac{\ell(Q)}{\big(t+|z'-x'_Q|\big)^n}\int_{Q}|a(y')|\,dy' \nonumber\\[4pt] &\leq C\frac{\ell(Q)}{\big(t+|z'-x'_Q|\big)^n}\, {\mathcal{L}}^{n-1}(Q)^{1-1/q}\|a\|_{[L^q(\mathbb{R}^{n-1})]^M} \nonumber\\[4pt] &\leq\frac{C{\mathcal{L}}^{n-1}(Q)^{1-1/p}\ell(Q)}{\big(t+|z'-x'_Q|\big)^n}. \end{align} In turn, \eqref{rrff4rf} implies that for each $x'\in{\mathbb{R}}^{n-1}\setminus\sqrt{n}Q$ we have \begin{align}\label{rrff4rf.2} \big({\mathcal{N}}_\kappa u\big)(x') &=\sup_{(z',t)\in\Gamma_\kappa(x')}|(P^L_t\ast a)(z')| \\[6pt] &\leq\sup_{(z',t)\in\Gamma_\kappa(x')}\frac{C{\mathcal{L}}^{n-1}(Q)^{1-1/p}\ell(Q)}{\big(t+|z'-x'_Q|\big)^n} =\frac{C{\mathcal{L}}^{n-1}(Q)^{1-1/p}\ell(Q)}{|x'-x'_Q|^n}, \nonumber \end{align} hence \begin{equation}\label{eq:NBVGaa} \int_{{\mathbb{R}}^{n-1}\setminus\sqrt{n}Q}\big({\mathcal{N}}_\kappa u\big)^p\,d{\mathcal{L}}^{n-1} \leq C\int_{{\mathbb{R}}^{n-1}\setminus\sqrt{n}Q}\frac{{\mathcal{L}}^{n-1}(Q)^{p-1}\ell(Q)^p}{|x'-x'_Q|^{np}}\,dx'=C, \end{equation} for some constant $C\in(0,\infty)$ depending only on $n,L,p,q$. From \eqref{eq:NBV1uj} and \eqref{eq:NBVGaa} we deduce that whenever $u$ is as in \eqref{eq:Fb} then, for some constant $C\in(0,\infty)$ independent of the atom, \begin{equation}\label{eq:NBhf} \int_{{\mathbb{R}}^{n-1}}\big({\mathcal{N}}_\kappa u\big)^p\,d{\mathcal{L}}^{n-1}\leq C. \end{equation} Next, consider the general case when the function $u$ is defined as in \eqref{exist:u-123} for some arbitrary $f\in\big[H^p({\mathbb{R}}^{n-1})\big]^M$. Writing $f$ as in \eqref{exist:u-123.Wer.3} then permits us to express (in view of the specific manner in which the duality pairing in \eqref{exist:u-123} manifests itself), for each fixed $(x',t)\in{\mathbb{R}}^n_{+}$, \begin{align}\label{exist:u-123.aDS} u(x',t)=(P^L_t\ast f)(x')=\sum_{j=1}^\infty\lambda_j(P^L_t\ast a_j)(x')=\sum_{j=1}^\infty\lambda_ju_j(x',t), \end{align} where $u_j(x',t):=(P^L_t\ast a_j)(x')$ for each $j\in{\mathbb{N}}$. 
Consequently, based on the sublinearity of the nontangential maximal operator, the fact that $p<1$, the estimate established in \eqref{eq:NBhf} (presently used with $a:=a_j$), and the quasi-optimality of the atomic decomposition for $f$, we may write \begin{align}\label{eq:NBhf.2iii} \int_{{\mathbb{R}}^{n-1}}\big({\mathcal{N}}_\kappa u\big)^p\,d{\mathcal{L}}^{n-1} &\leq\int_{{\mathbb{R}}^{n-1}}\Big(\sum_{j=1}^\infty{\mathcal{N}}_\kappa(\lambda_ju_j)\Big)^p\,d{\mathcal{L}}^{n-1} \nonumber\\[6pt] &\leq\int_{{\mathbb{R}}^{n-1}}\sum_{j=1}^\infty\big({\mathcal{N}}_\kappa(\lambda_ju_j)\big)^p\,d{\mathcal{L}}^{n-1} \nonumber\\[6pt] &=\sum_{j=1}^\infty|\lambda_j|^p\int_{{\mathbb{R}}^{n-1}}\big({\mathcal{N}}_\kappa u_j\big)^p\,d{\mathcal{L}}^{n-1} \nonumber\\[6pt] &\leq C\sum_{j=1}^\infty|\lambda_j|^p\leq C\|f\|^p_{[H^p({\mathbb{R}}^{n-1})]^M}. \end{align} This proves \eqref{exist:Nu-Mf-Hp}. Finally, the claims in the first line of \eqref{exist:u2-Hp} are seen by differentiating inside the duality bracket, while the existence of the nontangential boundary trace in the second line of \eqref{exist:u2-Hp} is a consequence of the corresponding result in \eqref{exist:u2}, the estimate in \eqref{exist:Nu-Mf-Hp}, the density of $H^p({\mathbb{R}}^{n-1})\cap L^2({\mathbb{R}}^{n-1})$ in $H^p({\mathbb{R}}^{n-1})$, and a well-known abstract principle in harmonic analysis (see, e.g., \cite[Theorem~2.2, p.\,27]{Duoan}, \cite[Theorem~3.12, p.\,60]{SW} for results of similar flavor). \end{proof} Let $L$ be an $M\times M$ system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X} and fix an integrability exponent $p\in(1,\infty)$. From items {\it (b)}-{\it (c)} in Theorem~\ref{thm:Poisson.II} we then see that the family $T=\{T(t)\}_{t\geq 0}$ where $T(0):=I$, the identity operator on $\big[L^p({\mathbb{R}}^{n-1})\big]^M$ and, for each $t>0$, \begin{equation}\label{eq:Taghb8} \begin{array}{c} T(t):\big[L^p({\mathbb{R}}^{n-1})\big]^M\longrightarrow\big[L^p({\mathbb{R}}^{n-1})\big]^M, \\[6pt] \big(T(t)f\big)(x'):=(P^L_t\ast f)(x')\,\text{ for all }\, f\in\big[L^p({\mathbb{R}}^{n-1})\big]^M,\,\,x'\in{\mathbb{R}}^{n-1}, \end{array} \end{equation} is a $C_0$-semigroup on $\big[L^p({\mathbb{R}}^{n-1})\big]^M$, which satisfies \begin{equation}\label{eq:Taghb8.77} \sup_{t\geq 0}\big\|T(t)\big\|_{[L^p({\mathbb{R}}^{n-1})]^M\to[L^p({\mathbb{R}}^{n-1})]^M}<\infty. \end{equation} We now proceed to present several Fatou-type theorems and Poisson integral representation formulas for null-solutions of homogeneous constant complex coefficient elliptic second-order systems defined in ${\mathbb{R}}^n_{+}$ and subject to a variety of size conditions. \begin{theorem}\label{thm:FP.111} Let $L$ be an $M\times M$ system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X}, and fix some aperture parameter $\kappa>0$. Suppose $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfies $Lu=0$ in $\mathbb{R}_{+}^n$, as well as \begin{equation}\label{UGav-5hH9i} \int_{{\mathbb{R}}^{n-1}}\big({\mathcal{N}}^{(\varepsilon)}_\kappa u\big)(x')\frac{dx'}{1+|x'|^{n-1}}<\infty \text{ for each fixed }\,\,\varepsilon>0, \end{equation} and also assume that there exists $\varepsilon_0>0$ such that the following finiteness integral condition holds: \begin{equation}\label{u-integ-TR} \int_{\mathbb{R}^{n-1}}\frac{\sup_{0<t<\varepsilon_0}|u(x',t)|}{1+|x'|^n}\,dx'<\infty. 
\end{equation} Then \begin{equation}\label{Tafva.2222} \left\{ \begin{array}{l} \big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)(x') \,\,\text{ exists at ${\mathcal{L}}^{n-1}$-a.e. point }\,\,x'\in{\mathbb{R}}^{n-1}, \\[10pt] \displaystyle u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ belongs to the space }\,\, \Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\Big)\Big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\,\text{ for each }\,\,(x',t)\in{\mathbb{R}}^n_{+}, \end{array} \right. \end{equation} where $P^L$ is the Agmon-Douglis-Nirenberg Poisson kernel in ${\mathbb{R}}^n_{+}$ associated with the system $L$ as in Theorem~\ref{thm:Poisson}. In particular, from \eqref{nkc-EE-4}, \eqref{Tafva.2222}, and \eqref{exist:Nu-Mf} it follows that there exists a constant $C=C(L,\kappa)\in(0,\infty)$ with the property that \begin{equation}\label{Tafva.2222.iii.1233} \big|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big| \leq{\mathcal{N}}_\kappa u\leq C{\mathcal{M}}\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big) \,\,\text{ in }\,\,{\mathbb{R}}^{n-1}. \end{equation} \end{theorem} It is natural to think of \eqref{Tafva.2222.iii.1233}, which implies that for almost every point $x'\in{\mathbb{R}}^{n-1}$ the supremum of $|u|$ over the cone $\Gamma_{\kappa}(x')$ lies in between the absolute value of the boundary trace $u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}$ evaluated at $x'$ and a fixed multiple of the Hardy-Littlewood maximal operator acting on the boundary trace $u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}$ at $x'$, as some type of ``Pointwise Maximum Principle.'' Theorem~\ref{thm:FP.111} is optimal from multiple perspectives. First, observe from item $(a)$ of Theorem~\ref{thm:Poisson.II} and item $(c)$ of Theorem~\ref{thm:Poisson} that for each $a\in{\mathbb{C}}^M\setminus\{0\}$ the function \begin{equation}\label{FCT-TR} u_a(x',t):=t^{1-n}P^L(x'/t)a=K^L(x',t)a,\,\,\text{ for each }\,\,(x',t)\in\mathbb{R}^n_{+}, \end{equation} satisfies $u_a\in\big[\mathcal{C}^{\infty}(\overline{\mathbb{R}^n_{+}}\setminus\{0\})\big]^M$, $Lu_a=0$ in $\mathbb{R}_{+}^n$, and $\Big(u_a\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}\Big)(x')=0$ for every aperture parameter $\kappa>0$ and every point $x'\in{\mathbb{R}}^{n-1}\setminus\{0'\}$. Moreover, $u_a$ is not identically zero since $\int_{\mathbb{R}^{n-1}}u_a(x',t)\,dx'=a$ for each $t>0$ by \eqref{eq:IG6gy.2}. As such, the Poisson integral representation formula in the last line of \eqref{Tafva.2222} fails for each $a\in{\mathbb{C}}^M\setminus\{0\}$. Let us also observe that having \begin{equation}\label{ijfdghba} |u_a(y',t)|\leq C|a|t(t^2+|y'|^2)^{-n/2}\,\,\text{ for each }\,\, (y',t)\in{\mathbb{R}}^n_{+} \end{equation} entails that for each $\varepsilon>0$ there exists $C_{a,\varepsilon}\in(0,\infty)$ such that \begin{equation}\label{UGav-5hH9i-agg} \big({\mathcal{N}}^{(\varepsilon)}_\kappa u_a\big)(x')\leq\frac{C_{a,\varepsilon}}{1+|x'|^{n-1}} \,\,\text{ for each }\,\,x'\in{\mathbb{R}}^{n-1}. \end{equation} Hence, condition \eqref{UGav-5hH9i} is presently satisfied by each $u_a$. In light of Theorem~\ref{thm:FP.111}, the finiteness integral condition stipulated in \eqref{u-integ-TR} must therefore fail for each $u_a$.
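Before carrying out the rigorous verification of this failure, it is instructive to witness it numerically in the simplest case $L=\Delta$ and $n=3$, where $K^\Delta(x',t)=\tfrac{1}{2\pi}\,t\,(t^2+|x'|^2)^{-3/2}$ and one may take $M=1$, $a=1$. The Python sketch below is a purely illustrative aside (the choice $\varepsilon_0=1$ and the truncation radii are ours, and play no role in the formal argument): it computes the integral in \eqref{u-integ-TR} over the annuli $\{\delta<|x'|<1\}$ and exhibits the logarithmic blow-up as $\delta\to 0^{+}$ predicted by \eqref{y65trta}.

\begin{verbatim}
import numpy as np

# Poisson kernel of the Laplacian in R^3_+ (so n = 3, M = 1, a = 1):
#   K(x',t) = (1/(2*pi)) * t / (t**2 + |x'|**2)**(3/2),  x' in R^2, t > 0.
# For fixed r = |x'|, the map t -> t*(t**2+r**2)**(-3/2) peaks at t = r/sqrt(2),
# so its supremum over 0 < t < eps0 is attained at min(eps0, r/sqrt(2)) and
# behaves like a constant multiple of r**(1-n) = r**(-2) as r -> 0.

EPS0 = 1.0  # our choice of epsilon_0; any fixed positive value acts the same

def sup_in_t(r):
    t_star = np.minimum(EPS0, r / np.sqrt(2.0))
    return t_star / (2.0 * np.pi * (t_star**2 + r**2) ** 1.5)

def truncated_integral(delta, num=4000):
    # integral of sup_t K over {delta < |x'| < 1} against dx'/(1 + |x'|^3),
    # written in polar coordinates on R^2 (area element 2*pi*r*dr)
    r = np.logspace(np.log10(delta), 0.0, num)
    return np.trapz(sup_in_t(r) * 2.0 * np.pi * r / (1.0 + r**3), r)

for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"delta = {delta:.0e}:  integral = {truncated_integral(delta):.4f}")
# Each tenfold decrease of delta raises the value by the same fixed increment,
# the signature of a logarithmic divergence as delta -> 0+.
\end{verbatim}

The rigorous verification, valid for an arbitrary system $L$, is carried out next.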
To check directly that this is the case, recall from \eqref{eq:IG6gy.2-Lambda} that for each $a\in{\mathbb{C}}^M\setminus\{0\}$ there exists some $\lambda>0$ such that $\int_{S^{n-2}}\big|P^L(\lambda\omega)a\big|\,d\omega\in(0,\infty)$. In turn, this permits us to estimate \begin{align}\label{y65trta} \int_{\mathbb{R}^{n-1}}&\frac{\sup_{0<t<\varepsilon_0}|u_a(x',t)|}{1+|x'|^n}\,dx' \geq\int_{B_{n-1}(0',\lambda\varepsilon_0)}\frac{|u_a(x',|x'|/\lambda)|}{1+|x'|^n}\,dx' \nonumber\\[6pt] &=\int_{B_{n-1}(0',\lambda\varepsilon_0)}\frac{(|x'|/\lambda)^{1-n}\big|P^L(\lambda x'/|x'|)a\big|}{1+|x'|^n}\,dx' \\[6pt] &=\Big(\int_{S^{n-2}}\big|P^L(\lambda\omega)a\big|\,d\omega\Big) \Big(\int_0^{\lambda\varepsilon_0}\frac{\lambda^{n-1}}{\rho(1+\rho^n)}\,d\rho\Big)=\infty, \nonumber \end{align} using \eqref{FCT-TR} and passing to polar coordinates. Thus, \eqref{u-integ-TR} fails for each $u_a$. Second, the absolute integrability condition \eqref{u-integ-TR} may not, in general, be replaced by membership in the corresponding weak Lebesgue space. For example, in the case $L:=\Delta$, the Laplacian in ${\mathbb{R}}^n$, if \eqref{u-integ-TR} is weakened to the demand that \begin{equation}\label{u-integ-TR.adfg.jk} \begin{array}{c} \mathbb{R}^{n-1}\ni x'\mapsto\frac{\sup_{0<t<\varepsilon_0}|u(x',t)|}{1+|x'|^n}\in[0,\infty] \\[6pt] \text{is a function belonging to }\,\,L^{1,\infty}({\mathbb{R}}^{n-1}) \end{array} \end{equation} then Theorem~\ref{thm:FP.111} may fail. Indeed, this may be seen by considering the nonzero harmonic function $u(x',t)=t(t^2+|x'|^2)^{-n/2}$ for each $(x',t)\in{\mathbb{R}}^n_{+}$, which satisfies \eqref{UGav-5hH9i} and \eqref{u-integ-TR.adfg.jk}. However, the Poisson integral representation formula in the last line of \eqref{Tafva.2222} fails since $\Big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}\Big)(x')=0$ for every $x'\in{\mathbb{R}}^{n-1}\setminus\{0'\}$. Third, one cannot relax the formulation of the finiteness integral condition \eqref{u-integ-TR} by placing the supremum outside the integral sign. To see this, fix an arbitrary $a\in{\mathbb{C}}^M\setminus\{0\}$ and take $u_a$ as in \eqref{FCT-TR}. Then, thanks to \eqref{eq:IG6gy}, we have \begin{align}\label{hgggf} \sup_{0<t<\varepsilon_0}\int_{\mathbb{R}^{n-1}}\frac{|u_a(x',t)|}{1+|x'|^n}\,dx' &\leq\sup_{0<t<\varepsilon_0}\int_{\mathbb{R}^{n-1}}|u_a(x',t)|\,dx' \nonumber\\[6pt] &=\sup_{0<t<\varepsilon_0}\int_{\mathbb{R}^{n-1}}\big|P_t^{L}(x')a\big|\,dx' \nonumber\\[6pt] &\leq|a|\int_{\mathbb{R}^{n-1}}\big|P^{L}(x')\big|\,dx'<\infty. \end{align} Yet, again, the Poisson representation formula in the last line of \eqref{Tafva.2222} fails. \vskip 0.06in One notable consequence of Theorem~\ref{thm:FP.111} is the Fatou-type theorem and its associated Poisson integral formula presented below. \begin{theorem}\label{thm:FP} Let $L$ be an $M\times M$ system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X}, and fix some aperture parameter $\kappa>0$. Then having \begin{equation}\label{jk-lm-jhR-LLL-HM-RN.w} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^{\infty}({\mathbb{R}}^n_{+})\big]^M,\quad Lu=0\,\,\text{ in }\,\,{\mathbb{R}}^n_{+}, \\[8pt] \displaystyle \int_{\mathbb{R}^{n-1}}\big({\mathcal{N}}_{\kappa}u\big)(x')\,\frac{dx'}{1+|x'|^{n-1}}<\infty, \end{array} \right. \end{equation} implies that \begin{equation}\label{Tafva.2222.iii} \left\{ \begin{array}{l} \big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)(x') \,\,\text{ exists at ${\mathcal{L}}^{n-1}$-a.e.
point }\,\,x'\in{\mathbb{R}}^{n-1}, \\[10pt] \displaystyle u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ belongs to the space }\,\, \Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n-1}}\Big)\Big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\,\text{ for each }\,\,(x',t)\in{\mathbb{R}}^n_{+}, \end{array} \right. \end{equation} where $P^L$ is the Agmon-Douglis-Nirenberg Poisson kernel in ${\mathbb{R}}^n_{+}$ associated with the system $L$ as in Theorem~\ref{thm:Poisson}. In particular, there exists a constant $C=C(L,\kappa)\in(0,\infty)$ such that the following Pointwise Maximum Principle holds: \begin{equation}\label{Taf-UHN} \big|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big| \leq{\mathcal{N}}_\kappa u\leq C{\mathcal{M}}\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big) \,\,\text{ in }\,\,{\mathbb{R}}^{n-1}. \end{equation} \end{theorem} \vskip 0.08in \begin{proof}[Proof of the fact that Theorem~\ref{thm:FP.111} implies Theorem~\ref{thm:FP}] Given $u$ as in \eqref{jk-lm-jhR-LLL-HM-RN.w}, we have \begin{equation}\label{u-integ-TR.atr} \int_{\mathbb{R}^{n-1}}\frac{\sup_{0<t<\varepsilon_0}|u(x',t)|}{1+|x'|^n}\,dx' \leq C_n\int_{\mathbb{R}^{n-1}}\big({\mathcal{N}}_{\kappa}u\big)(x')\,\frac{dx'}{1+|x'|^{n-1}}<\infty. \end{equation} In view of this and \eqref{NT-Fct-EP.anB}, we conclude that the conditions stipulated in \eqref{UGav-5hH9i}-\eqref{u-integ-TR} are valid. As such, Theorem~\ref{thm:FP.111} guarantees that the properties listed in the first and third lines of \eqref{Tafva.2222.iii} hold. In addition, thanks to \eqref{nkc-EE-4} we now have the membership claimed in the middle line of \eqref{Tafva.2222.iii}. \end{proof} A direct, self-contained proof of Theorem~\ref{thm:FP} (without having to rely on Theorem~\ref{thm:FP.111}) has been given in \cite{Madrid}. Here we shall indicate how Theorem~\ref{thm:FP} self-improves to Theorem~\ref{thm:FP.111}. In the process, we shall need the following weak-* convergence result from \cite{SCGC}. \begin{lemma}\label{WLLL} Suppose $\{f_j\}_{j\in{\mathbb{N}}}\subseteq L^1\big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\big)$ is a sequence of functions satisfying \begin{equation}\label{523563fve-NNN} \int_{\mathbb{R}^{n-1}}\frac{\sup_{j\in{\mathbb{N}}}|f_j(x')|}{1+|x'|^n}\,dx'<\infty. \end{equation} Then there exist $f\in L^1\big(\mathbb{R}^{n-1},\frac{dx'}{1+|x'|^n}\big)$ and a sub-sequence $\big\{f_{j_k}\big\}_{k\in{\mathbb{N}}}$ of $\{f_j\}_{j\in{\mathbb{N}}}$ with the property that \begin{equation}\label{2135racvs-NNN} \lim_{k\to\infty}\int_{\mathbb{R}^{n-1}}\phi(y')\,f_{j_k}(y')\,\frac{dy'}{1+|y'|^n} =\int_{\mathbb{R}^{n-1}}\phi(y')\,f(y')\,\frac{dy'}{1+|y'|^n} \end{equation} for every function $\phi$ belonging to $\mathcal{C}^0_b(\mathbb{R}^{n-1})$, the space of ${\mathbb{C}}$-valued continuous and bounded functions in $\mathbb{R}^{n-1}$. \end{lemma} We are ready to provide a proof of Theorem~\ref{thm:FP.111} which relies on Theorem~\ref{thm:FP}. \vskip 0.08in \begin{proof}[Proof of Theorem~\ref{thm:FP.111}] For each $\varepsilon>0$ define $u_\varepsilon(x',t):=u(x',t+\varepsilon)$ for each $(x',t)\in\overline{{\mathbb{R}}^n_{+}}$. Also, set $f_\varepsilon(x'):=u(x',\varepsilon)$ for each $x'\in{\mathbb{R}}^{n-1}$. 
Since we have ${\mathcal{N}}_{\kappa}u_\varepsilon\leq{\mathcal{N}}^{(\varepsilon)}_{\kappa}u$ on $\mathbb{R}^{n-1}$, we conclude that \begin{equation}\label{jk-lm-jhR-LLL-HM-RN.w.123} \begin{array}{c} u_\varepsilon\in\big[{\mathcal{C}}^{\infty}(\overline{{\mathbb{R}}^n_{+}})\big]^M,\quad u_\varepsilon\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}=f_\varepsilon\,\,\text{ on }\,\,\mathbb{R}^{n-1}, \quad Lu_\varepsilon=0\,\,\text{ in }\,\,{\mathbb{R}}^n_{+}, \\[8pt] \displaystyle \int_{\mathbb{R}^{n-1}}\big({\mathcal{N}}_{\kappa}u_\varepsilon\big)(x')\,\frac{dx'}{1+|x'|^{n-1}} \leq\int_{\mathbb{R}^{n-1}}\big({\mathcal{N}}^{(\varepsilon)}_{\kappa}u\big)(x')\,\frac{dx'}{1+|x'|^{n-1}}<\infty. \end{array} \end{equation} As such, Theorem~\ref{thm:FP} applies to each $u_\varepsilon$, and the Poisson integral representation formula in the last line of \eqref{Tafva.2222.iii} presently guarantees that for each $\varepsilon>0$ we have \begin{align}\label{TaajYGa} u(x',t+\varepsilon) &=u_\varepsilon(x',t)=\big(P^L_t\ast f_\varepsilon\big)(x') \nonumber\\[6pt] &=\int_{{\mathbb{R}}^{n-1}}P^L_t(x'-y')f_\varepsilon(y')\,dy' \,\,\text{ for each }\,\,(x',t)\in{\mathbb{R}}^n_{+}. \end{align} On the other hand, property \eqref{u-integ-TR} entails \begin{equation}\label{eq:16t44.iii} \int_{\mathbb{R}^{n-1}}\frac{\sup_{0<\varepsilon<\varepsilon_0}|f_\varepsilon(x')|}{1+|x'|^n}\,dx' =\int_{\mathbb{R}^{n-1}}\frac{\sup_{0<\varepsilon<\varepsilon_0}|u(x',\varepsilon)|}{1+|x'|^n}\,dx'<\infty. \end{equation} Granted this finiteness property, the weak-$\ast$ convergence result recalled in Lemma~\ref{WLLL} may be used for the sequence $\big\{f_{\varepsilon_0/(2j)}\big\}_{j\in{\mathbb{N}}}\subset \big[L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^n}\big)\big]^M$ to conclude that there exist some function $f\in\big[L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^n}\big)\big]^M$ and some sequence $\{\varepsilon_k\}_{k\in{\mathbb{N}}}\subset(0,\varepsilon_0)$ which converges to zero, such that \begin{equation}\label{eq:16t44.MAD} \lim_{k\to\infty}\int_{{\mathbb{R}}^{n-1}}\phi(y')f_{\varepsilon_k}(y')\frac{dy'}{1+|y'|^{n}} =\int_{{\mathbb{R}}^{n-1}}\phi(y')f(y')\frac{dy'}{1+|y'|^{n}} \end{equation} for every continuous bounded function $\phi:{\mathbb{R}}^{n-1}\to{\mathbb{C}}^{M\times M}$. Estimate \eqref{eq:IG6gy} ensures that, for each fixed point $(x',t)\in{\mathbb{R}}^n_{+}$, the assignment \begin{equation}\label{Fvabbb-7tF.Tda} \begin{array}{c} \displaystyle {\mathbb{R}}^{n-1}\ni y'\mapsto\phi(y'):=(1+|y'|^{n})P^L_t(x'-y')\in{\mathbb{C}}^{M\times M} \\[6pt] \text{is a bounded, continuous, ${\mathbb{C}}^{M\times M}$-valued function}. \end{array} \end{equation} At this stage, from \eqref{TaajYGa} and \eqref{eq:16t44.MAD} used for the function $\phi$ defined in \eqref{Fvabbb-7tF.Tda} we obtain (bearing in mind that $u$ is continuous in ${\mathbb{R}}^n_{+}$) that \begin{align}\label{AA-lm-jLL-HM-RN.ppp.wsa.2} u(x',t)=\int_{{\mathbb{R}}^{n-1}}P^L_t(x'-y')f(y')\,dy'\,\,\,\text{ for each }\,\,x=(x',t)\in{\mathbb{R}}^n_{+}. \end{align} With this in hand, and recalling that $f\in\Big[L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n}}\big)\Big]^M$, we may invoke Theorem~\ref{thm:Poisson.II} (cf. \eqref{exist:u2}) to conclude that \begin{equation}\label{exist:u2.b} \text{$u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial\mathbb{R}^{n}_{+}}$ exists and equals $f$ at ${\mathcal{L}}^{n-1}$-a.e. point in $\mathbb{R}^{n-1}$}.
\end{equation} With this in hand, all conclusions in \eqref{Tafva.2222} are implied by \eqref{AA-lm-jLL-HM-RN.ppp.wsa.2}-\eqref{exist:u2.b}. \end{proof} Moving on, we consider two families of semi-norms on the class of continuous functions in ${\mathbb{R}}^n_{+}$, namely \begin{equation}\label{semi-norms} \|u\|_{*,\rho}:=\rho^{-1}\cdot\sup_{B(0,\rho)\cap\mathbb{R}^n_{+}}|u|\in[0,+\infty] \,\,\text{ for each }\,\,\rho\in(0,\infty), \end{equation} and \begin{equation}\label{semi-norms-eps} \|u\|_{*,\varepsilon,\rho}:=\rho^{-1}\cdot\sup_{\substack{\varepsilon<t<\rho\\ |x'|<\rho}}|u(x',t)| \,\,\,\text{ whenever }\,\,0<\varepsilon<\rho<\infty. \end{equation} Whenever $\liminf\limits_{\rho\to\infty}\|u\|_{*,\rho}=0$ we shall say that $u$ has subcritical growth. In this vein, it is worth observing that \begin{equation}\label{subcritical-EQUI} \parbox{9.50cm}{a continuous function $u:\mathbb{R}^n_{+}\to\mathbb{C}$ has $\lim\limits_{\rho\to\infty}\|u\|_{*,\rho}=0$ if and only if $u$ is bounded on any bounded subset of ${\mathbb{R}}^n_{+}$ and $u(x)=o(|x|)$ as $|x|\to\infty$ within ${\mathbb{R}}^n_{+}$.} \end{equation} In relation to the family of seminorms introduced in \eqref{semi-norms}-\eqref{semi-norms-eps}, let us also observe that for each continuous function $u:{\mathbb{R}}^n_{+}\to{\mathbb{C}}$ we have \begin{equation}\label{fvqafr} \|u\|_{*,\rho}=\frac1{\log 2}\int_\rho^{2\rho}\|u\|_{*,\rho}\,\frac{dt}{t} \leq\frac2{\log 2}\int_\rho^{2\rho}\|u\|_{*,t}\,\frac{dt}{t} \end{equation} for each $\rho\in(0,\infty)$, and that \begin{equation}\label{q34t3g3a} \|u\|_{*,\varepsilon,\rho}\leq\sqrt{2}\|u\|_{*,\sqrt{2}\rho} \leq\rho^{-1}\|u\|_{L^\infty(\mathbb{R}^n_{+})} \,\,\,\text{ whenever }\,\,0<\varepsilon<\rho<\infty. \end{equation} Our next major theorem is a novel Fatou-type result (together with a naturally accompanying Poisson integral representation formula) recently established in \cite{SCGC}. \begin{theorem}\label{thm:Fatou} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Let $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ be such that $Lu=0$ in $\mathbb{R}_{+}^n$. In addition, assume \begin{equation}\label{subcritical:mild} \liminf_{\rho\to\infty}\|u\|_{*,\varepsilon,\rho}=0\,\,\text{ for each fixed }\,\,\varepsilon>0, \end{equation} and suppose that there exists $\varepsilon_0>0$ such that the following finiteness integral condition holds: \begin{equation}\label{u-integ} \int_{\mathbb{R}^{n-1}}\frac{\sup_{0<t<\varepsilon_0}|u(x',t)|}{1+|x'|^n}\,dx'<\infty. \end{equation} Then, for each aperture parameter $\kappa>0$, \begin{align}\label{Tafva.2222-ijk} \begin{array}{l} \big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)(x') \,\,\text{ exists at ${\mathcal{L}}^{n-1}$-a.e. point }\,\,x'\in{\mathbb{R}}^{n-1}, \\[10pt] \displaystyle u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ belongs to the space }\,\, \Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\Big)\Big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\,\text{ for each }\,\,(x',t)\in{\mathbb{R}}^n_{+}.
\end{array} \end{align} As a consequence, there exists a constant $C=C(L,\kappa)\in(0,\infty)$ with the property that the following Pointwise Maximum Principle holds: \begin{equation}\label{Taf-UHN.ER.2} \big|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big| \leq{\mathcal{N}}_\kappa u\leq C{\mathcal{M}}\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big) \,\,\text{ in }\,\,{\mathbb{R}}^{n-1}. \end{equation} \end{theorem} The Fatou-type result established in Theorem~\ref{thm:Fatou} is optimal from a multitude of perspectives. First, the subcritical growth condition, even in the mildly weaker version stated in \eqref{subcritical:mild}, cannot be relaxed. Indeed, fix $a\in{\mathbb{C}}^M\setminus\{0\}$ and consider the function $u(x',t):=ta$ for each $(x',t)\in{\mathbb{R}}^n_{+}$. Then $\|u\|_{*,\varepsilon,\rho}=|a|>0$, hence \eqref{subcritical:mild} fails while \eqref{u-integ} and the first two properties listed in \eqref{Tafva.2222-ijk} hold. Nonetheless, the Poisson integral representation formula claimed in the last line of \eqref{Tafva.2222-ijk} fails (since $u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}=0$ everywhere on ${\mathbb{R}}^{n-1}$ whereas $u$ is nonzero). Second, the finiteness integral condition \eqref{u-integ} may not be dropped. To justify this claim, bring in the Poisson kernel $P^L:\mathbb{R}^{n-1}\to\mathbb{C}^{M\times M}$ associated with the system $L$ as in Theorem~\ref{thm:Poisson} and, having fixed some $a\in{\mathbb{C}}^M\setminus\{0\}$, consider the function $u_a$ defined as in \eqref{FCT-TR}. In addition to the properties of this function mentioned earlier, for each $\varepsilon>0$ fixed we have $\|u_a\|_{*,\varepsilon,\rho}\leq C|a|\rho^{-1}\varepsilon^{1-n}\to 0$ as $\rho\to\infty$. However, the Poisson integral representation formula claimed in the last line of \eqref{Tafva.2222-ijk} obviously fails. The source of the failure is that \eqref{u-integ} does not hold in this case (as already noted in \eqref{y65trta}). Third, as seen from \eqref{hgggf}, one cannot relax the formulation of the finiteness integral condition \eqref{u-integ} by placing the supremum outside the integral sign. \medskip In particular, Theorem~\ref{thm:Fatou} implies a uniqueness result, to the effect that whenever $L$ is an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$ one has \begin{equation}\label{Fatou-Uniqueness} \left. \begin{array}{r} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\,\,Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+} \\[4pt] \text{$u$ satisfies both \eqref{subcritical:mild} and \eqref{u-integ}} \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=0 \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,\mathbb{R}^{n-1} \end{array} \right\} \Longrightarrow u\equiv 0\,\,\text{ in }\,\,{\mathbb{R}}^n_{+}. \end{equation} This should be compared with the following uniqueness result within the class of null-solutions of the system $L$ exhibiting subcritical growth (also established in \cite{SCGC}). \begin{theorem}\label{thm:uniq-subcritical} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$ and fix an aperture parameter $\kappa>0$.
Assume $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ is such that $Lu=0$ in $\mathbb{R}_{+}^n$, $u$ satisfies the subcritical growth condition \begin{equation}\label{subcritical} \liminf_{\rho\to\infty}\|u\|_{*,\rho}=0, \end{equation} and that $u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=0$ at ${\mathcal{L}}^{n-1}$-a.e. point in $\mathbb{R}^{n-1}$. Then $u\equiv 0$ in $\mathbb{R}^n_{+}$. \end{theorem} We wish to note that the subcritical growth condition \eqref{subcritical} is sharp. Concretely, $\liminf\limits_{\rho\to\infty}\|u\|_{*,\rho}$ always exists as a non-negative quantity and, when it fails to vanish, the conclusion $u\equiv 0$ may no longer be drawn. Indeed, for any $a\in{\mathbb{C}}^M\setminus\{0\}$ the function $u(x',t):=ta$ satisfies $u\in\big[\mathcal{C}^\infty(\overline{\mathbb{R}^n_{+}})\big]^M$, $Lu=0$ in $\mathbb{R}_{+}^n$, and $u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=0$ everywhere in $\mathbb{R}^{n-1}$ (for any aperture parameter $\kappa>0$). This being said, $\sup_{B(0,\rho)\cap\mathbb{R}^n_{+}}|u|=|a|\rho$ for each $\rho>0$, hence \begin{equation}\label{subcritical-counter} \liminf_{\rho\to\infty}\Big(\rho^{-1}\sup_{B(0,\rho)\cap\mathbb{R}^n_{+}}|u|\Big)=|a|>0. \end{equation} A comment on the genesis of the subcritical growth condition \eqref{subcritical} is also in order. Suppose $L:=\Delta$ (the Laplacian in ${\mathbb{R}}^n$) and one is interested in establishing a uniqueness result in the class of functions \begin{equation}\label{UUU.uuu} u\in{\mathcal{C}}^\infty({\mathbb{R}}^n_{+})\cap{\mathcal{C}}^0(\overline{{\mathbb{R}}^n_{+}}) \,\,\text{ with }\,\,\Delta u=0\,\,\text{ in }\,\,{\mathbb{R}}^n_{+}, \end{equation} to the effect that the boundary trace $u\big|_{\partial{\mathbb{R}}^n_{+}}$ determines $u$. Since $u(x',t)=t$ for each $(x',t)\in{\mathbb{R}}^n_{+}$ is a counterexample, a further demand must be imposed, in addition to \eqref{UUU.uuu}, to rule out this pathological example. To identify this demand, consider a function $u$ as in \eqref{UUU.uuu} which satisfies $u\big|_{\partial{\mathbb{R}}^n_{+}}=0$. Then Schwarz's reflection principle ensures that \begin{equation}\label{UUU.uuu.2} \widetilde{u}(x',t):=\left\{ \begin{array}{ll} u(x',t) &\text{ if }\,\,t\geq 0, \\[4pt] -u(x',-t) &\text{ if }\,\,t<0, \end{array} \right. \qquad\forall\,(x',t)\in{\mathbb{R}}^n, \end{equation} is a harmonic function in ${\mathbb{R}}^n$. Interior estimates then imply the existence of a dimensional constant $C_n\in(0,\infty)$ with the property that \begin{equation}\label{UUU.uuu.3} \big|(\nabla\widetilde{u})(x)\big|\leq C_n\rho^{-1}\cdot\sup_{B(x,\rho)}\big|\widetilde{u}\big| \,\,\text{ for each }\,\,x\in{\mathbb{R}}^n\,\,\text{ and }\,\,\rho>0. \end{equation} In this context, it is clear that the subcritical growth condition \eqref{subcritical} is a quantitatively optimal property guaranteeing the convergence to zero, at least along a sequence of radii $\rho\to\infty$, of the right-hand side of the inequality in \eqref{UUU.uuu.3}, for each $x\in{\mathbb{R}}^n$ fixed. And this is precisely what is needed here since this further implies $\nabla\widetilde{u}\equiv 0$ in ${\mathbb{R}}^n$, hence $\widetilde{u}$ is constant which, given that $\widetilde{u}$ vanishes on $\partial{\mathbb{R}}^n_{+}$, ultimately forces $u\equiv 0$ in ${\mathbb{R}}^n_{+}$. The new challenges in Theorem~\ref{thm:uniq-subcritical} stem from the absence of a Schwarz reflection principle in the more general class of systems we are currently considering, and the lack of continuity of the function at boundary points.
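For the reader's convenience, the reflection argument just described may also be checked symbolically in the simplest two-dimensional instance. The Python sketch below (relying on the {\tt sympy} package; a purely illustrative aside, with $u(x,t):={\rm Im}\,(x+it)^3$ being our own ad hoc choice of test function) confirms that this function is harmonic in $\overline{{\mathbb{R}}^2_{+}}$ with vanishing boundary trace, and that its odd reflection \eqref{UUU.uuu.2} is given by one and the same harmonic polynomial on all of ${\mathbb{R}}^2$.

\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t', real=True)

# A function harmonic on the closed upper half-plane, vanishing for t = 0:
#   u(x,t) = Im (x + i t)^3 = 3 x^2 t - t^3   (our ad hoc test function)
u = sp.im(sp.expand((x + sp.I * t) ** 3))

print(u)                                    # 3*t*x**2 - t**3
print(u.subs(t, 0))                         # 0   (zero boundary trace)
print(sp.diff(u, x, 2) + sp.diff(u, t, 2))  # 0   (harmonicity: Delta u = 0)

# The odd reflection u~(x,t) := -u(x,-t) for t < 0 is given by the *same*
# polynomial, so u~ is one harmonic polynomial on all of R^2:
print(sp.expand(-u.subs(t, -t) - u))        # 0   (the two branches agree)
\end{verbatim}

For a general elliptic system $L$, however, no analogue of this reflection across $\partial{\mathbb{R}}^n_{+}$ is available, which is the first of the two obstacles just mentioned.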
Our proof of Theorem~\ref{thm:uniq-subcritical} circumvents these obstacles by making use of Agmon-Douglis-Nirenberg estimates near the boundary. In turn, Theorem~\ref{thm:Fatou} is established using Theorem~\ref{thm:uniq-subcritical}. \medskip Pressing ahead, it is also worth contrasting the subcritical growth condition \eqref{subcritical} with the finiteness integral condition \eqref{u-integ}. Concretely, whenever $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfies \eqref{subcritical} and $f:=u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}$ exists at ${\mathcal{L}}^{n-1}$-a.e. point in ${\mathbb{R}}^{n-1}$ then necessarily $f$ is a locally bounded function; in fact, \begin{equation}\label{zxcvhsdr5-XXX} \liminf_{\rho\to\infty}\rho^{-1}\|f\|_{[L^\infty(B_{n-1}(0',\rho))]^M} \leq\liminf_{\rho\to\infty}\|u\|_{*,\rho}=0. \end{equation} As such, in the context of boundary value problems for the system $L$ in the upper half-space, the subcritical growth condition \eqref{subcritical} is most relevant whenever the formulation of the problem in question involves boundary data functions which are locally bounded (more precisely, satisfying the condition formulated in \eqref{zxcvhsdr5-XXX}). On the other hand, having a function $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfying the finiteness integral condition \eqref{u-integ} and such that $f:=u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}$ exists at ${\mathcal{L}}^{n-1}$-a.e. point in ${\mathbb{R}}^{n-1}$, guarantees that $f$ belongs to the weighted Lebesgue space $\Big[L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n}}\big)\Big]^M$. This membership (which is the most general condition allowing one to define null-solutions to the system $L$ by taking the convolution with the Poisson kernel $P^L$ described in Theorem~\ref{thm:Poisson}) does not force $f$ to be locally bounded. Comparing the uniqueness statements from Theorem~\ref{thm:uniq-subcritical} and \eqref{Fatou-Uniqueness}, it is worth noting that the subcritical growth condition \eqref{subcritical} appearing in Theorem~\ref{thm:uniq-subcritical} decouples into \eqref{subcritical:mild} and \begin{equation}\label{semi-norms-eps.YYY} \liminf_{\rho\to\infty}\Bigg[\rho^{-1}\sup_{\substack{0<t<\varepsilon\\ |x'|<\rho}}|u(x',t)|\Bigg] =0\,\,\,\text{ for each fixed }\,\,\varepsilon>0. \end{equation} By way of contrast, in \eqref{Fatou-Uniqueness} in place of \eqref{semi-norms-eps.YYY} we are employing the finiteness integral condition \eqref{u-integ}. In relation to the Fatou-type results discussed so far we wish to raise the following issue. \vskip 0.08in {\bf Open Question~1.} {\it Can the format of the Fatou-type result from Theorem~\ref{thm:FP.111} be reconciled with that of Theorem~\ref{thm:Fatou}? In other words, are these two seemingly distinct results particular manifestations of a more general, inclusive phenomenon?} \vskip 0.08in Moving on, we say that a Lebesgue measurable function $f:\mathbb{R}^{n-1}\rightarrow\mathbb{C}$ belongs to the class of functions with subcritical growth, denoted ${\rm SCG}(\mathbb{R}^{n-1})$, provided \begin{equation}\label{SCG-f} \int_{\mathbb{R}^{n-1}}\frac{|f(x')|}{1+|x'|^n}\,dx'<\infty\,\,\text{ and }\,\, \lim_{\rho\to\infty}\Big[\rho^{-1}\|f\|_{L^\infty(B_{n-1}(0',\rho))}\Big]=0. 
\end{equation} As indicated in the corollary below (which appears in \cite{SCGC}), there is a Fatou-type result in the context of Theorem~\ref{thm:uniq-subcritical} provided we slightly strengthen the condition demanded in \eqref{subcritical}. \begin{corollary}\label{thm:Fatou-SCG} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Assume $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfies $Lu=0$ in $\mathbb{R}_{+}^n$, as well as the following Dini type condition at infinity: \begin{equation}\label{41fgfgv} \int_1^\infty\|u\|_{*,t}\,\frac{dt}{t} =\int_1^\infty\Big(\sup_{B(0,t)\cap\mathbb{R}^n_{+}}|u|\Big)\,\frac{dt}{t^2}<\infty. \end{equation} Then, for each aperture parameter $\kappa>0$, \begin{align}\label{Tafva.222234t5w5} \begin{array}{l} \big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)(x') \,\,\text{ exists at ${\mathcal{L}}^{n-1}$-a.e. point }\,\,x'\in{\mathbb{R}}^{n-1}, \\[10pt] \displaystyle u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ belongs to }\,\, \big[{\rm SCG}({\mathbb{R}}^{n-1})\big]^M\subset\Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\Big)\Big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\,\text{ for each point }\,\,(x',t)\in{\mathbb{R}}^n_{+}. \end{array} \end{align} In particular, there exists a constant $C=C(L,\kappa)\in(0,\infty)$ for which the following Pointwise Maximum Principle holds: \begin{equation}\label{Taf-UHN.ER.3} \big|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big| \leq{\mathcal{N}}_\kappa u\leq C{\mathcal{M}}\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big) \,\,\text{ in }\,\,{\mathbb{R}}^{n-1}. \end{equation} \end{corollary} Indeed, from \eqref{fvqafr}, \eqref{q34t3g3a}, \eqref{41fgfgv}, and Lebesgue's Dominated Convergence Theorem it follows that \eqref{subcritical:mild} holds. Also, based on a dyadic decomposition argument and \eqref{fvqafr} one can show that \begin{equation}\label{trrr} \int_{\mathbb{R}^{n-1}}\frac{\sup_{0<t<1}|u(x',t)|}{1+|x'|^n}\,dx' \leq C_n\int_1^\infty\|u\|_{*,t}\,\frac{dt}{t}. \end{equation} In view of \eqref{41fgfgv}, this means that \eqref{u-integ} holds. As a result, Theorem~\ref{thm:Fatou} applies and gives \eqref{Tafva.2222-ijk}. Together with the fact that the subcritical growth property of $u$ is inherited by its nontangential boundary trace (cf. \eqref{zxcvhsdr5-XXX}), we then conclude that all claims in \eqref{Tafva.222234t5w5} are true. \section{Well-Posedness of Boundary Value Problems} \label{S-3} In this section we shall use the Poisson kernels and Fatou-type theorems from \S\ref{S-2} as tools for establishing the well-posedness of a variety of boundary value problems in the upper half-space ${\mathbb{R}}^n_{+}$ for second-order, homogeneous, constant complex coefficient, elliptic systems in ${\mathbb{R}}^n$. \subsection{The Dirichlet problem with boundary data from weighted $L^1$} The template of the Fatou-type result from Theorem~\ref{thm:FP} prefigures the format of the well-posedness result discussed in the theorem below. \begin{theorem}\label{Them-Gen} Let $L$ be an $M\times M$ system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X}, and fix an aperture parameter $\kappa>0$.
Then for each function \begin{align}\label{76tFfaf-7GF} \begin{array}{c} f:{\mathbb{R}}^{n-1}\to{\mathbb{C}}^{M}\,\,\text{ Lebesgue measurable} \\[6pt] \text{and }\,\,{\mathcal{M}}f\in L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big) \end{array} \end{align} {\rm (}recall that ${\mathcal{M}}$ is the Hardy-Littlewood maximal operator in ${\mathbb{R}}^{n-1}${\rm )} the boundary value problem \begin{equation}\label{jk-lm-jhR-LLL-HM-RN.w.BVP} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^{\infty}({\mathbb{R}}^n_{+})\big]^M, \quad Lu=0\,\,\text{ in }\,\,{\mathbb{R}}^n_{+}, \\[8pt] \displaystyle \int_{\mathbb{R}^{n-1}}\big({\mathcal{N}}_{\kappa}u\big)(x')\,\frac{dx'}{1+|x'|^{n-1}}<\infty, \\[12pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} is uniquely solvable. Moreover, the solution $u$ of \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP} is given by \eqref{exist:u} and satisfies \begin{align}\label{jk-lm-jhR-LLL-HM-RN.w.BVP.2} \int_{\mathbb{R}^{n-1}}\frac{|f(x')|}{1+|x'|^{n-1}}\,dx' &\leq\int_{\mathbb{R}^{n-1}}\big({\mathcal{N}}_{\kappa}u\big)(x')\,\frac{dx'}{1+|x'|^{n-1}} \nonumber\\[6pt] &\leq C\int_{\mathbb{R}^{n-1}}\big({\mathcal{M}}f\big)(x')\,\frac{dx'}{1+|x'|^{n-1}} \end{align} for some constant $C=C(n,L,\kappa)\in(0,\infty)$ independent of $f$. \end{theorem} For each $f$ as in \eqref{76tFfaf-7GF}, the membership of ${\mathcal{M}}f$ to $L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big)$ implies that ${\mathcal{M}}f<\infty$ at ${\mathcal{L}}^{n-1}$-a.e. point in ${\mathbb{R}}^{n-1}$, which further entails $f\in\big[L^1_{\rm loc}({\mathbb{R}}^{n-1})\big]^M$. Granted this, Lebesgue's Differentiation Theorem applies and gives $|f|\leq{\mathcal{M}}f$ at ${\mathcal{L}}^{n-1}$-a.e. point in ${\mathbb{R}}^{n-1}$. From this, the fact that $f$ is Lebesgue measurable, and the last property in \eqref{76tFfaf-7GF}, we ultimately conclude that \begin{align}\label{76tFfaf-7GF.ewq} f\in\Big[L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big)\Big]^M \subset\Big[L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n}}\big)\Big]^M. \end{align} In particular, it is meaningful to define $u$ as in \eqref{exist:u}, and this ensures that the properties claimed in the first and last lines of \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP} hold. Also, \eqref{nkc-EE-4} and \eqref{exist:Nu-Mf} imply \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP.2} which, in turn, validates the finiteness condition in the second line of \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP}. This proves existence for the boundary value problem \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP}, and uniqueness follows from Theorem~\ref{thm:FP}. \subsection{The Dirichlet problem with data from $L^p$ and other related spaces} The well-posedness of the $L^p$-Dirichlet boundary value problem was established in \cite{K-MMMM}. As noted in \cite{SCGC}, our earlier results yield an alternative approach. \begin{theorem}\label{thm:Lp} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$ and fix an aperture parameter $\kappa>0$. 
For any $p\in(1,\infty)$ the $L^p$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, i.e., \begin{equation}\label{Dir-BVP-p} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \mathcal{N}_\kappa u\,\,\text{ belongs to the space }\,\,L^p(\mathbb{R}^{n-1}), \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[L^p(\mathbb{R}^{n-1})\big]^M$. Moreover, the solution $u$ of \eqref{Dir-BVP-p} is given by \eqref{exist:u} and satisfies \begin{equation}\label{eJHBbawvr} \|f\|_{[L^p(\mathbb{R}^{n-1})]^M}\leq \|\mathcal{N}_\kappa u\|_{L^p(\mathbb{R}^{n-1})}\leq C\|f\|_{[L^p(\mathbb{R}^{n-1})]^M} \end{equation} for some constant $C\in[1,\infty)$ that depends only on $L$, $n$, $p$, and $\kappa$. \end{theorem} Indeed, since \begin{align}\label{76tFfaf-7GF.abx} L^p(\mathbb{R}^{n-1})\hookrightarrow L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big) \,\,\text{ for each }\,\,p\in[1,\infty), \end{align} and since the Hardy-Littlewood maximal operator \begin{align}\label{76tFfaf-7GF.aby} {\mathcal{M}}:L^p({\mathbb{R}}^{n-1})\to L^p({\mathbb{R}}^{n-1})\,\,\text{ is bounded for each }\,\,p\in(1,\infty], \end{align} we may regard \eqref{Dir-BVP-p} as a ``sub-problem'' of \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP}. As such, Theorem~\ref{Them-Gen} ensures existence (in the specified format) and uniqueness. The estimates claimed in \eqref{eJHBbawvr} are implied by \eqref{nkc-EE-4}, \eqref{exist:Nu-Mf}, and \eqref{76tFfaf-7GF.aby}. \begin{remark}\label{ttFCCa} A multitude of other important ``sub-problems'' of \eqref{jk-lm-jhR-LLL-HM-RN.w.BVP} present themselves. For example, if for each $p\in(1,\infty)$ and each Muckenhoupt weight $w\in A_p({\mathbb{R}}^{n-1})$ {\rm (}cf., e.g., \cite{GCRF85}{\rm )} we let $L^p_w(\mathbb{R}^{n-1})$ denote the space of Lebesgue measurable $p$-th power integrable functions in ${\mathbb{R}}^{n-1}$ with respect to the measure $w{\mathcal{L}}^{n-1}$, then the fact that (cf. \cite{K-MMMM}) \begin{align}\label{76tFfaf-7GF.abx.W} \begin{array}{c} L^p_w(\mathbb{R}^{n-1})\hookrightarrow L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big) \,\,\text{and} \\[6pt] {\mathcal{M}}:L^p_w({\mathbb{R}}^{n-1})\to L^p_w({\mathbb{R}}^{n-1})\,\,\text{ boundedly}, \end{array} \end{align} ultimately implies that for each integrability exponent $p\in(1,\infty)$, each weight $w\in A_p({\mathbb{R}}^{n-1})$, and each aperture parameter $\kappa>0$, the $L^p_w$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, i.e., \begin{equation}\label{Dir-BVP-p-WWW} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \mathcal{N}_\kappa u\,\,\text{ belongs to the space }\,\,L^p_w(\mathbb{R}^{n-1}), \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[L^p_w(\mathbb{R}^{n-1})\big]^M$, and the solution $u$ of \eqref{Dir-BVP-p-WWW} {\rm (}which continues to be given by \eqref{exist:u}{\rm )} satisfies \begin{equation}\label{eJHBbawvr-EW} \|f\|_{[L^p_w(\mathbb{R}^{n-1})]^M}\leq \|\mathcal{N}_\kappa u\|_{L^p_w(\mathbb{R}^{n-1})}\leq C\|f\|_{[L^p_w(\mathbb{R}^{n-1})]^M}.
\end{equation} Similarly, since for the Lorentz spaces $L^{p,q}({\mathbb{R}}^{n-1})$ with $p\in(1,\infty)$, $q\in(0,\infty]$, we also have (again, see \cite{K-MMMM}) \begin{align}\label{76tFfaf-7GF.abx.L} \begin{array}{c} L^{p,q}(\mathbb{R}^{n-1})\hookrightarrow L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big) \,\,\text{and} \\[6pt] {\mathcal{M}}:L^{p,q}({\mathbb{R}}^{n-1})\to L^{p,q}({\mathbb{R}}^{n-1})\,\,\text{ boundedly}, \end{array} \end{align} we conclude that the version of the Dirichlet problem \eqref{Dir-BVP-p} naturally formulated in such a setting continues to be well-posed. To offer yet another example, recall the scale of Morrey spaces $\mathfrak{L}^{p,\lambda}({\mathbb{R}}^{n-1})$ in ${\mathbb{R}}^{n-1}$, defined for each $p\in(1,\infty)$ and $\lambda\in(0,n-1)$ according to \begin{equation}\label{MOR.1} \mathfrak{L}^{p,\lambda}({\mathbb{R}}^{n-1}):=\Big\{f\in L^p_{\rm loc}({\mathbb{R}}^{n-1}):\, \|f\|_{\mathfrak{L}^{p,\lambda}({\mathbb{R}}^{n-1})}<\infty\Big\} \end{equation} where \begin{equation}\label{MOR.2} \|f\|_{\mathfrak{L}^{p,\lambda}({\mathbb{R}}^{n-1})}:= \sup_{x'\in{\mathbb{R}}^{n-1},\,r>0}\Big(r^{-\lambda}\int_{B_{n-1}(x',r)}|f|^p\,d{\mathcal{L}}^{n-1}\Big)^{1/p}. \end{equation} Given that \begin{align}\label{MOR.9} \begin{array}{c} \mathfrak{L}^{p,\lambda}({\mathbb{R}}^{n-1})\subset L^1\big({\mathbb{R}}^{n-1}\,,\,\tfrac{dx'}{1+|x'|^{n-1}}\big) \\[6pt] \text{provided }\,\,1<p<\infty\,\,\text{ and }\,\,0<\lambda<n-1, \end{array} \end{align} and since (cf., e.g., \cite{CF}) \begin{align}\label{MOR.11} \parbox{8.0cm}{the Hardy-Littlewood maximal operator ${\mathcal{M}}$ is bounded on $\mathfrak{L}^{p,\lambda}({\mathbb{R}}^{n-1})$ if $1<p<\infty$ and $0<\lambda<n-1$,} \end{align} we once again conclude that the version of the Dirichlet problem \eqref{Dir-BVP-p} naturally formulated in terms of Morrey spaces is well-posed. For more examples of this nature and further details the reader is referred to \cite{K-MMMM}. \end{remark} Later on, in Theorem~\ref{thm:Linfty}, we shall see that in fact the end-point $p=\infty$ is permissible in the context of Theorem~\ref{thm:Lp}; that is, the $L^\infty$-Dirichlet problem is well-posed. At the other end of the spectrum, i.e., for $p=1$, the very nature of \eqref{Dir-BVP-p} changes. Indeed, at least when $L=\Delta$, the Laplacian in ${\mathbb{R}}^n$, from \cite[Proposition~1, p.\,119]{Stein93} we know that for any harmonic function $u$ in $\mathbb{R}^{n}_{+}$ with $\mathcal{N}_\kappa u\in L^1(\mathbb{R}^{n-1})$ there exists $f\in H^1({\mathbb{R}}^{n-1})$ such that $u(x',t)=(P^\Delta_t\ast f)(x')$ for each $(x',t)\in{\mathbb{R}}^n_{+}$. In concert with Theorem~\ref{thm:FP} and the observation that $H^1({\mathbb{R}}^{n-1})$ is a subspace of $L^1({\mathbb{R}}^{n-1})$, this implies that any harmonic function $u$ in $\mathbb{R}^{n}_{+}$ with $\mathcal{N}_\kappa u\in L^1(\mathbb{R}^{n-1})$ (for some $\kappa>0$) has a nontangential boundary trace $u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}$ at ${\mathcal{L}}^{n-1}$-a.e. point in ${\mathbb{R}}^{n-1}$ which actually belongs to the Hardy space $H^1(\mathbb{R}^{n-1})$. Thus, the boundary data are necessarily in a Hardy space in this case. This feature accounts for the manner in which we now formulate the following well-posedness result. \begin{theorem}\label{thm:Lp-H111} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$ and fix an aperture parameter $\kappa>0$.
Then the $(H^1,L^1)$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, i.e., \begin{equation}\label{Dir-BVP-p.H111} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \mathcal{N}_\kappa u\,\,\text{ belongs to the space }\,\,L^1(\mathbb{R}^{n-1}), \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f$ belonging to the Hardy space $\big[H^1(\mathbb{R}^{n-1})\big]^M$. In addition, the solution $u$ of \eqref{Dir-BVP-p.H111} is given by \eqref{exist:u} and satisfies \begin{equation}\label{eJHBbawvr.H111} \|\mathcal{N}_\kappa u\|_{L^1(\mathbb{R}^{n-1})}\leq C\|f\|_{[H^1(\mathbb{R}^{n-1})]^M} \end{equation} for some constant $C\in(0,\infty)$ which depends only on $L$, $n$, and $\kappa$. \end{theorem} Theorem~\ref{thm:Lp-H111} was originally established in \cite{K-MMMM}, and the present work yields an alternative proof. Indeed, existence follows from item {\it (e)} of Theorem~\ref{thm:Poisson.II}, while uniqueness is implied by Theorem~\ref{thm:FP}. In relation to the work discussed so far in this section we wish to formulate several open questions. We start by formulating a question asking whether more general operators may be allowed in the statement of \cite[Proposition~1, p.\,119]{Stein93}. \vskip 0.08in {\bf Open Question~2.} {\it Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Suppose $0<p\leq\infty$ and fix some $\kappa>0$. Also, consider a function $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfying $Lu=0$ in $\mathbb{R}_{+}^n$. Show that $\mathcal{N}_\kappa u\in L^p(\mathbb{R}^{n-1})$ if and only if there exists $f\in\big[H^p(\mathbb{R}^{n-1})\big]^M$ such that $u(x',t)=(P^L_t\ast f)(x')$ for each $(x',t)\in{\mathbb{R}}^n_{+}$. Moreover, show that $\big\|\mathcal{N}_\kappa u\big\|_{L^p(\mathbb{R}^{n-1})}\approx\|f\|_{[H^p(\mathbb{R}^{n-1})]^M}$.} \vskip 0.08in Theorem~\ref{thm:Lp} provides an answer to this question in the range $p\in(1,\infty)$, while Theorem~\ref{thm:Linfty} (discussed later on) addresses the case $p=\infty$. Also, item {\it (e)} of Theorem~\ref{thm:Poisson.II} is directly relevant to the issue at hand in the range $p\in\big(\tfrac{n-1}{n}\,,\,1\big]$. Our next question asks whether more general operators may be allowed in the formulation of \cite[Theorem~4.23, p.\,190]{GCRF85}. \vskip 0.08in {\bf Open Question~3.} {\it Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Suppose $0<p\leq\infty$ and fix some $\kappa>0$. Also, consider a function $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfying $Lu=0$ in $\mathbb{R}_{+}^n$ and $\mathcal{N}_\kappa u\in L^p(\mathbb{R}^{n-1})$. Show that $f:=\lim\limits_{t\to 0^{+}}u(\cdot,t)$ exists in the sense of tempered distributions in ${\mathbb{R}}^{n-1}$, i.e., in $\big[{\mathcal{S}}'({\mathbb{R}}^{n-1})\big]^M$.} \vskip 0.08in The following question pertains to the well-posedness of a brand of Dirichlet problem in which the boundary trace is taken in a weak, distributional sense. \vskip 0.08in {\bf Open Question~4.} {\it Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Suppose $0<p\leq\infty$ and fix some $\kappa>0$.
Show that for each $f\in\big[H^p({\mathbb{R}}^{n-1})\big]^M$ the following boundary value problem is uniquely solvable and a naturally accompanying estimate holds:} \begin{equation}\label{Dir-BVP-p.hphp} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \mathcal{N}_\kappa u\,\,\text{ belongs to the space }\,\,L^p(\mathbb{R}^{n-1}), \\[6pt] \lim\limits_{t\to 0^{+}}u(\cdot,t)=f\,\,\text{ in }\,\,\big[{\mathcal{S}}'({\mathbb{R}}^{n-1})\big]^M. \end{array} \right. \end{equation} Our earlier work shows that \eqref{Dir-BVP-p.hphp} is indeed well-posed if $p\in[1,\infty]$. The question below has to do with the solvability of the so-called Regularity problem. This is a brand of Dirichlet problem in which the boundary data is selected from Sobolev spaces ($L^p$-based, of order one) and, as a result, stronger regularity is demanded of the solution. \vskip 0.08in {\bf Open Question~5.} {\it Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Fix an integrability exponent $p\in(1,\infty)$ along with an aperture parameter $\kappa>0$. Also, pick an arbitrary $f$ in the Sobolev space $\big[W^{1,p}({\mathbb{R}}^{n-1})\big]^M$. Find additional conditions, either on the system $L$ or the boundary datum $f$, guaranteeing that the Regularity problem formulated as} \begin{equation}\label{Dir-REG-p.} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \mathcal{N}_\kappa u,\,\mathcal{N}_\kappa(\nabla u)\,\,\text{ belong to }\,\,L^p(\mathbb{R}^{n-1}), \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} {\it is solvable and a naturally accompanying estimate holds.} \vskip 0.08in Work relevant to this question may be found in \cite{H-MMMM} where a large class of systems $L$, including scalar operators (such as the Laplacian) as well as the Lam\'e system \eqref{TYd-YG-76g}, has been identified with the property that the Regularity problem \eqref{Dir-REG-p.} is uniquely solvable for each $f\in\big[W^{1,p}({\mathbb{R}}^{n-1})\big]^M$ with $1<p<\infty$. Also, in \cite{S-MMMM} the following link between the solvability of the Regularity problem \eqref{Dir-REG-p.}, and the domain of the infinitesimal generator of the $C_0$-semigroup $T=\{T(t)\}_{t\geq 0}$ associated with $L$ as in \eqref{eq:Taghb8}, has been established. \begin{theorem}\label{V-Naa.11} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Fix some $p\in(1,\infty)$ and consider the $C_0$-semigroup $T=\{T(t)\}_{t\geq 0}$ on $\big[L^p({\mathbb{R}}^{n-1})\big]^M$ associated with $L$ as in \eqref{eq:Taghb8}. Denote by ${\mathbf{A}}$ the infinitesimal generator of $T$, with domain $D({\mathbf{A}})$. Then $D({\mathbf{A}})$ is a dense linear subspace of $\big[W^{1,p}({\mathbb{R}}^{n-1})\big]^M$ and, in fact, \begin{align}\label{eq:tfc.1-new} D({\mathbf{A}})=\big\{f\in\big[W^{1,p}({\mathbb{R}}^{n-1})\big]^M:\,& \text{the problem \eqref{Dir-REG-p.} with} \nonumber\\[-2pt] &\text{boundary datum $f$ is solvable}\big\}. \end{align} In particular, $D({\mathbf{A}})=\big[W^{1,p}({\mathbb{R}}^{n-1})\big]^M$ if and only if the Regularity problem \eqref{Dir-REG-p.} is solvable for arbitrary data $f\in\big[W^{1,p}({\mathbb{R}}^{n-1})\big]^M$.
\end{theorem} Moving on, we present a Fatou-type theorem from \cite{SCGC} which refines work in \cite[Theorem~6.1]{K-MMMM} and \cite[Corollary~6.3]{K-MMMM}. \begin{theorem}\label{TFac-gR} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Assume that $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfies $Lu=0$ in $\mathbb{R}_{+}^n$. If $\mathcal{N}_\kappa u\in L^p(\mathbb{R}^{n-1})$ for some $p\in[1,\infty]$ and $\kappa>0$, then \begin{align}\label{Tafva.222fawcr} \begin{array}{l} \big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)(x') \,\,\text{ exists at ${\mathcal{L}}^{n-1}$-a.e. point }\,\,x'\in{\mathbb{R}}^{n-1}, \\[10pt] \displaystyle u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ belongs to }\,\, \big[L^p({\mathbb{R}}^{n-1})\big]^M\subset\Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\Big)\Big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\,\text{ for each point }\,\,(x',t)\in{\mathbb{R}}^n_{+}. \end{array} \end{align} \end{theorem} In view of \eqref{76tFfaf-7GF.abx}, when $p\in[1,\infty)$ all claims are direct consequences of Theorem~\ref{thm:FP} (also bearing \eqref{nkc-EE-4} in mind). The result corresponding to the end-point $p=\infty$ is no longer implied by Theorem~\ref{thm:FP} as the finiteness condition in the second line of \eqref{jk-lm-jhR-LLL-HM-RN.w} fails in general for bounded functions (the best one can say in such a scenario is that $\mathcal{N}_\kappa u\in L^\infty(\mathbb{R}^{n-1})$). Nonetheless, Corollary~\ref{thm:Fatou-SCG} applies and all desired conclusions now follow from this. As a corollary, we note that, given any aperture parameter $\kappa>0$ along with an integrability exponent $p\in(1,\infty]$, Theorem~\ref{TFac-gR} implies (together with \eqref{nkc-EE-4}, \eqref{exist:Nu-Mf}, and \eqref{76tFfaf-7GF.aby}) the following $L^p$-styled Maximum Principle: \begin{align}\label{Ta-jy6GGa-yT} \begin{array}{c} \big\|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big\|_{[L^p({\mathbb{R}}^{n-1})]^M} \approx\big\|{\mathcal{N}}_\kappa u\big\|_{L^p({\mathbb{R}}^{n-1})}\,\,\text{ uniformly in} \\[6pt] \text{the class of functions $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfying} \\[6pt] \text{$Lu=0$ in $\mathbb{R}_{+}^n$ as well as $\mathcal{N}_\kappa u\in L^p(\mathbb{R}^{n-1})$.} \end{array} \end{align} Theorem~\ref{TFac-gR} is sharp, in the sense that the corresponding result fails for $p\in\big(\tfrac{n-1}{n}\,,\,1\big)$. To see that this is the case, fix some vector $a\in{\mathbb{C}}^M\setminus\{0\}$ along with some point $z'\in{\mathbb{R}}^{n-1}\setminus\{0'\}$ and consider the function \begin{equation}\label{FCT-TR.nnn} u_\star(x',t):=K^L(x',t)a-K^L(x'-z',t)a\,\,\text{ for each }\,\,(x',t)\in\mathbb{R}^n_{+}. \end{equation} Then $u_\star$ belongs to the space $\big[\mathcal{C}^{\infty}(\overline{\mathbb{R}^n_{+}}\setminus\{(0',0),(z',0)\})\big]^M$, satisfies $Lu_\star=0$ in $\mathbb{R}_{+}^n$, and $\Big(u_\star\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}\Big)(x')=0$ for every aperture parameter $\kappa>0$ and every point $x'\in{\mathbb{R}}^{n-1}\setminus\{0',z'\}$. In addition, we may choose $a,z'$ such that $u_\star$ is not identically zero (otherwise this would force $K^L(x',t)$ to be independent of $x'$, a happenstance precluded by, e.g., \eqref{eq:Kest} and \eqref{eq:IG6gy.2}-\eqref{eq:Gvav7g5}). 
Hence, on the one hand, the Poisson integral representation formula in the last line of \eqref{Tafva.222fawcr} presently fails. On the other hand, from the well-known fact that \begin{equation}\label{Ka-jGG.1} \delta_{0'}-\delta_{z'}\in H^p({\mathbb{R}}^{n-1})\,\,\text{ for each }\,\,p\in\big(\tfrac{n-1}{n}\,,\,1\big), \end{equation} it follows that \begin{equation}\label{Ka-jGG.2} f:=(\delta_{0'}-\delta_{z'})a\in\big[H^p({\mathbb{R}}^{n-1})\big]^M \,\,\text{ for each }\,\,p\in\big(\tfrac{n-1}{n}\,,\,1\big). \end{equation} Moreover, $f$ is related to the function $u_\star$ from \eqref{FCT-TR.nnn} via $u_\star(x',t)=(P^L_t\ast f)(x')$ at each point $(x',t)\in{\mathbb{R}}^n_{+}$, with the convolution understood as in \eqref{exist:u-123}. As such, \eqref{exist:Nu-Mf-Hp} implies that for each aperture parameter $\kappa>0$ we have \begin{equation}\label{exist:Nu-Mf-Hp.iii} \mathcal{N}_\kappa u_\star\in L^p(\mathbb{R}^{n-1}) \,\,\text{ for each }\,\,p\in\big(\tfrac{n-1}{n}\,,\,1\big). \end{equation} Parenthetically, we wish to point out that the membership in \eqref{exist:Nu-Mf-Hp.iii} may also be justified directly based on \eqref{FCT-TR.nnn} and the estimates for the kernel function $K^L$ from item {\it (a)} in Theorem~\ref{thm:Poisson.II} which, collectively, show that \begin{equation}\label{eq:IG6gy.2-LaIP} \parbox{10.60cm}{for $x'\in{\mathbb{R}}^{n-1}$, the nontangential maximal function $\big(\mathcal{N}_\kappa u_\star\big)(x')$ behaves like $|x'|^{1-n}$ if $x'$ is near $0'$, like $|x'-z'|^{1-n}$ if $x'$ is near $z'$, like $|x'|^{-n}$ if $x'$ is near infinity, and is otherwise bounded.} \end{equation} Granted this, it follows that $\mathcal{N}_\kappa u_\star\in L^p(\mathbb{R}^{n-1})$ if and only if $p(n-1)<n-1$ and $pn>n-1$, a set of conditions equivalent to $p\in\big(\tfrac{n-1}{n}\,,\,1\big)$. To summarize, the function $u_\star$ defined in \eqref{FCT-TR.nnn} satisfies, for each aperture parameter $\kappa>0$, \begin{equation}\label{rt-vba-jg} \left\{ \begin{array}{l} u_\star\in\big[{\mathcal{C}}^{\infty}({\mathbb{R}}^n_{+})\big]^M,\quad Lu_\star=0\,\,\text{ in }\,\,{\mathbb{R}}^n_{+}, \\[6pt] {\mathcal{N}}_{\kappa}u_\star\in L^p(\mathbb{R}^{n-1})\,\,\text{ for each }\,\,p\in\big(\tfrac{n-1}{n}\,,\,1\big), \\[8pt] u_\star\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=0 \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \\[8pt] \text{the function $u_\star$ is not identically zero in ${\mathbb{R}}^n_{+}$.} \end{array} \right. \end{equation} This is in sharp contrast to Theorem~\ref{TFac-gR}, and points to the fact that when $p<1$ the pointwise nontangential boundary trace of a null-solution of the system $L$ no longer characterizes the original function. \subsection{The subcritical growth Dirichlet problem}\label{section:SCG-BVP} Recall that ${\rm SCG}(\mathbb{R}^{n-1})$ stands for the class of functions exhibiting subcritical growth in $\mathbb{R}^{n-1}$, defined as in \eqref{SCG-f}. In relation to this class, we have the following well-posedness result from \cite{SCGC}. \begin{theorem}\label{thm:SCG} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Also, fix an aperture parameter $\kappa>0$.
Then the subcritical growth Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, formulated as \begin{equation}\label{Dir-BVP-SCG} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \lim\limits_{\rho\to\infty}\|u\|_{*,\rho}=0, \\[8pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[{\rm SCG}(\mathbb{R}^{n-1})\big]^M$. Moreover, the solution $u$ of \eqref{Dir-BVP-SCG} is given by \eqref{exist:u} and satisfies the following Weak Local Maximum Principle: \begin{equation}\label{q43t3g} \sup_{B(0,\rho)\cap\mathbb{R}^n_{+}}|u|\leq C\,\Bigg(\|f\|_{[L^\infty(B_{n-1}(0',2\rho))]^M} +\int_{\mathbb{R}^{n-1}\setminus B_{n-1}(0',2\rho)}\frac{\rho|f(y')|}{\rho^n+|y'|^n}\,dy'\Bigg) \end{equation} for each $\rho\in(0,\infty)$, where $C\in[1,\infty)$ depends only on $L$ and $n$. \end{theorem} Note that having $f\in\big[{\rm SCG}(\mathbb{R}^{n-1})\big]^M$ ensures that $f$ satisfies \eqref{exist:f} which, in turn, allows us to define the solution $u$ via the convolution with the Poisson kernel (cf. item {\it (c)} of Theorem~\ref{thm:Poisson.II}). Uniqueness follows at once from Theorem~\ref{thm:uniq-subcritical}. To close, we remark that the second condition imposed on $f$ in \eqref{SCG-f}, which amounts to saying that $f$ has subcritical growth, is natural in the context of \eqref{Dir-BVP-SCG}. Indeed, whenever $u$ satisfies $\lim\limits_{\rho\to\infty}\|u\|_{*,\rho}=0$ and $f:=u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}$ exists ${\mathcal{L}}^{n-1}$-a.e. in ${\mathbb{R}}^{n-1}$ it is not difficult to see that \begin{equation}\label{zxcvhsdr5} \rho^{-1}\|f\|_{[L^\infty(B_{n-1}(0',\rho))]^M}\leq\|u\|_{*,\rho}\to 0\,\,\text{ as }\,\,\rho\to\infty, \end{equation} which ultimately implies the second condition in \eqref{SCG-f}. \subsection{The $L^\infty$-Dirichlet boundary value problem} Here we revisit Theorem~\ref{thm:Lp} and consider the (initially forbidden) end-point $p=\infty$. Our result below is well-known in the particular case when $L=\Delta$, the Laplacian in ${\mathbb{R}}^n$, but all known proofs (e.g., that of \cite[Theorem~4.8, p.\,174]{GCRF85}, or that of \cite[Proposition~1, p.\,199]{St70}) make use of specialized properties of harmonic functions. Following \cite{SCGC}, here we are able to treat the $L^\infty$-Dirichlet boundary value problem in $\mathbb{R}^{n}_{+}$ for any homogeneous constant complex coefficient elliptic second-order system in a conceptually simple manner, relying on our more general result from Theorem~\ref{thm:SCG}. \begin{theorem}\label{thm:Linfty} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$ and fix an aperture parameter $\kappa>0$. Then the $L^\infty$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, \begin{equation}\label{Dir-BVP-Linfty} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap L^\infty(\mathbb{R}^n_{+})\big]^M, \\[4pt] Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[L^\infty(\mathbb{R}^{n-1})\big]^M$. 
Moreover, the solution $u$ of \eqref{Dir-BVP-Linfty} is given by \eqref{exist:u} and satisfies the Weak Maximum Principle \begin{equation}\label{eJHBb} \|f\|_{[L^\infty(\mathbb{R}^{n-1})]^M}\leq \|u\|_{[L^\infty(\mathbb{R}^n_{+})]^M}\leq C\|f\|_{[L^\infty(\mathbb{R}^{n-1})]^M}, \end{equation} for some constant $C\in[1,\infty)$ that depends only on $L$ and $n$. \end{theorem} Since \begin{align}\label{76tFfaf-7GF.ab345} L^\infty(\mathbb{R}^{n-1})\subset{\rm SCG}(\mathbb{R}^{n-1}), \end{align} and since \eqref{q43t3g} readily implies \eqref{eJHBb}, we may regard \eqref{Dir-BVP-Linfty} as a ``sub-problem'' of \eqref{Dir-BVP-SCG}. This ensures existence (in the specified format), uniqueness, as well as the estimate claimed in \eqref{eJHBb}. \subsection{The classical Dirichlet boundary value problem} Given $E\subseteq{\mathbb{R}}^m$, for some $m\in{\mathbb{N}}$, define ${\mathcal{C}}^0_b(E)$ to be the space of ${\mathbb{C}}$-valued functions defined on $E$ which are continuous and bounded. The theorem below appears in \cite{SCGC}. The particular case when $L=\Delta$, the Laplacian in ${\mathbb{R}}^n$, is a well-known classical result (see, e.g., \cite[Theorem~7.5, p.\,148]{ABR}, or \cite[Theorem~4.4, p.\,170]{GCRF85}), so the novelty here is the consideration of much more general operators. \begin{theorem}\label{thm:CLASSICAL} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$. Then the classical Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, \begin{equation}\label{Dir-BVP-CLASSICAL} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap{\mathcal{C}}^0_b(\overline{\mathbb{R}^{n}_{+}})\big]^M, \\[4pt] Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] u\big|_{\partial{\mathbb{R}}^{n}_{+}}=f\,\,\text{ in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[{\mathcal{C}}^0_b(\mathbb{R}^{n-1})\big]^M$. Moreover, the solution $u$ of \eqref{Dir-BVP-CLASSICAL} is given by \eqref{exist:u} and satisfies the Weak Maximum Principle \begin{equation}\label{eJHBb-CLASSICAL} \sup_{\mathbb{R}^{n-1}}|f|\leq\sup_{\overline{\mathbb{R}^n_{+}}}|u|\leq C\sup_{\mathbb{R}^{n-1}}|f| \end{equation} for some constant $C\in[1,\infty)$ that depends only on $L$ and $n$. \end{theorem} Existence is a consequence of item {\it (c)} of Theorem~\ref{thm:Poisson.II}, uniqueness is implied by Theorem~\ref{thm:Linfty}, and \eqref{eJHBb-CLASSICAL} follows from \eqref{eJHBb}. The nature of the constant $C$ appearing in the Weak Maximum Principle \eqref{eJHBb-CLASSICAL} (as well as other related inequalities) has been studied by G.~Kresin and V.~Maz'ya in \cite{KrMa}. \vskip 0.08in {\bf Open Question~6.} {\it In the context of Theorem~\ref{thm:CLASSICAL}, if the boundedness requirement is dropped both for the boundary datum and for the solution, does the resulting boundary value problem continue to be solvable?} \vskip 0.08in This is known to be the case when $L=\Delta$, the Laplacian in ${\mathbb{R}}^n$; see, e.g., \cite[Theorem~7.11, p.\,150]{ABR}.
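\medskip At this stage it may be instructive to test the Poisson-integral solution numerically in the one case where the kernel is completely explicit, namely $L=\Delta$ in the upper half-plane ($n=2$, $M=1$), where $P^\Delta_t(x)=\frac{1}{\pi}\frac{t}{x^2+t^2}$. The Python sketch below is an illustration of our own (with ad hoc names, and not taken from \cite{SCGC}): it approximates $u=P^\Delta_t\ast f$ by truncated quadrature for the bounded continuous datum $f(x)=1/(1+x^2)$ and compares the result with the closed-form bounded harmonic extension $u(x,t)=(1+t)/((1+t)^2+x^2)$. Since $P^\Delta_t\geq 0$ has unit integral, the Weak Maximum Principle \eqref{eJHBb-CLASSICAL} holds here with $C=1$.

\begin{verbatim}
import numpy as np

# Classical case L = Delta in the upper half-plane (n = 2), where the
# Poisson kernel is explicit:  P_t(x) = (1/pi) * t/(x^2 + t^2).

def poisson_kernel(x, t):
    return t / (np.pi * (x**2 + t**2))

def poisson_extension(f, x, t, R=4000.0, N=800001):
    # truncated quadrature for u(x,t) = (P_t * f)(x)
    y = np.linspace(-R, R, N)
    return np.trapz(poisson_kernel(x - y, t) * f(y), y)

f = lambda y: 1.0 / (1.0 + y**2)          # bounded continuous datum

for (x, t) in [(0.0, 0.5), (1.0, 0.1), (-2.0, 1.0)]:
    u_num = poisson_extension(f, x, t)
    u_exact = (1 + t) / ((1 + t)**2 + x**2)   # closed-form extension
    print(f"u({x},{t}) = {u_num:.6f}  vs  {u_exact:.6f}")

# Since P_t >= 0 and integrates to 1, sup |u| <= sup |f|, i.e. the
# Weak Maximum Principle holds with C = 1 in this classical case.
\end{verbatim}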
\subsection{The sublinear growth Dirichlet problem}\label{section:SLG-BVP} Given a Lebesgue measurable set $E\subseteq\mathbb{R}^n$ and $\theta\in[0,1)$, we define the space of sublinear growth functions of order $\theta$, denoted by ${\rm SLG}_\theta(E)$, as the collection of Lebesgue measurable functions $w:E\to{\mathbb{C}}$ satisfying \begin{equation} \|w\|_{{\rm SLG}_\theta(E)}:={\rm{ess\,sup}}_{x\in E}\frac{|w(x)|}{1+|x|^\theta}<\infty. \end{equation} Hence, ${\rm SLG}_0(E)=L^\infty(E)$. Also, it is clear from the definitions that for each continuous function $u\in{\rm SLG}_\theta(\mathbb{R}^{n}_{+})$ we have \begin{equation}\label{aerge} \|u\|_{*,\rho}\leq\frac{1+\rho^\theta}{\rho}\|u\|_{{\rm SLG}_\theta(\mathbb{R}^{n}_{+})} \,\,\text{ for each }\,\,\rho\in(0,\infty). \end{equation} As a consequence, any continuous function in ${\rm SLG}_\theta({\mathbb{R}}^n_{+})$ with $\theta\in[0,1)$ has subcritical growth, i.e., \begin{equation}\label{ahTva} \lim_{\rho\to\infty}\|u\|_{*,\rho}=0\,\,\text{ for each }\,\, u\in{\mathcal{C}}^0({\mathbb{R}}^n_{+})\cap{\rm SLG}_\theta({\mathbb{R}}^n_{+})\,\,\text{ with }\,\,\theta\in[0,1). \end{equation} In fact, for each continuous function $u:{\mathbb{R}}^n_{+}\to{\mathbb{C}}$ we have \begin{equation}\label{aerge.GDFS} \|u\|_{{\rm SLG}_\theta(\mathbb{R}^{n}_{+})}=\sup_{\rho>0}\frac{\rho}{1+\rho^\theta}\|u\|_{*,\rho}. \end{equation} Indeed, the right-pointing inequality is clear from \eqref{aerge}, while the left-pointing inequality in \eqref{aerge.GDFS} may be justified by writing \begin{align}\label{aerge.GDFS.2} \frac{|u(x)|}{1+|x|^\theta} &\leq\frac{|x|}{1+|x|^\theta}\Big(|x|^{-1}\cdot\sup_{B(0,|x|)}|u|\Big) =\frac{|x|}{1+|x|^\theta}\|u\|_{*,|x|} \nonumber\\[6pt] &\leq\sup_{\rho>0}\frac{\rho}{1+\rho^\theta}\|u\|_{*,\rho}\,\,\text{ for each }\,\,x\in\mathbb{R}^{n}_{+}, \end{align} and then taking the supremum over all $x\in\mathbb{R}^{n}_{+}$. Finally, we wish to note that \begin{equation}\label{ahTva.222} {\rm SLG}_\theta({\mathbb{R}}^{n-1})\subset{\rm SCG}(\mathbb{R}^{n-1}) \,\,\text{ whenever }\,\,\theta\in[0,1). \end{equation} The following result from \cite{SCGC} extends Theorem~\ref{thm:Linfty} (which corresponds to the case when $\theta=0$). \begin{theorem}\label{thm:SLG} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$, and fix an aperture parameter $\kappa>0$ along with some exponent $\theta\in[0,1)$. Then the sublinear growth Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, formulated as \begin{equation}\label{Dir-BVP-SLG} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap{\rm SLG}_\theta(\mathbb{R}^{n}_{+})\big]^M, \\[4pt] Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[8pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[{\rm SLG}_\theta(\mathbb{R}^{n-1})\big]^M$. Moreover, the solution $u$ of \eqref{Dir-BVP-SLG} is given by \eqref{exist:u} and satisfies \begin{equation}\label{q43t3gqdfr} \|f\|_{[{\rm SLG}_\theta(\mathbb{R}^{n-1})]^M}\leq \|u\|_{[{\rm SLG}_\theta(\mathbb{R}^{n}_{+})]^M} \leq C\|f\|_{[{\rm SLG}_\theta(\mathbb{R}^{n-1})]^M} \end{equation} for some constant $C\in[1,\infty)$ depending only on $L$, $n$, and $\theta$.
\end{theorem} Thanks to \eqref{ahTva.222} plus the fact that \eqref{q43t3g} and \eqref{aerge.GDFS} readily imply \eqref{q43t3gqdfr}, we may regard \eqref{Dir-BVP-SLG} as a ``sub-problem'' of \eqref{Dir-BVP-SCG}. Such a point of view then guarantees existence (in the class of solutions specified in \eqref{Dir-BVP-SLG}), uniqueness, and also the estimate claimed in \eqref{q43t3gqdfr}. The linear function $u(x',t)=ta$ for each $(x',t)\in{\mathbb{R}}^n_{+}$ (where $a\in{\mathbb{C}}^M\setminus\{0\}$ is a fixed vector) serves as a counterexample to the version of Theorem~\ref{thm:SLG} corresponding to $\theta=1$. Thus, restricting the exponent $\theta$ to $[0,1)$ is optimal. As first noted in \cite{SCGC}, we also have a Fatou-type result in the context of functions with sublinear growth (extending the case $p=\infty$ of Theorem~\ref{TFac-gR}, which corresponds to $\theta=0$). This reads as follows: \begin{theorem}\label{y7tgv} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$ and fix an aperture parameter $\kappa>0$. Assume $u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M$ satisfies $Lu=0$ in $\mathbb{R}_{+}^n$. If $u\in\big[{\rm SLG}_\theta(\mathbb{R}^{n}_{+})\big]^M$ for some $\theta\in[0,1)$ then \begin{align}\label{Tafva.2222loolo} \begin{array}{l} \big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)(x') \,\,\text{ exists at ${\mathcal{L}}^{n-1}$-a.e. point }\,\,x'\in{\mathbb{R}}^{n-1}, \\[10pt] \displaystyle u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\,\text{ belongs to }\,\, \big[{\rm SLG}_\theta({\mathbb{R}}^{n-1})\big]^M\subset\Big[L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^{n}}\Big)\Big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\,\text{ for each point }\,\,(x',t)\in{\mathbb{R}}^n_{+}. \end{array} \end{align} As a consequence of this and Theorem~\ref{thm:SLG}, in the present setting the following version of the Maximum Principle holds: \begin{equation}\label{Taf-UHN.ER.4} \big\|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big\|_{[{\rm SLG}_\theta(\mathbb{R}^{n-1})]^M} \approx\|u\|_{[{\rm SLG}_\theta(\mathbb{R}^{n}_{+})]^M}. \end{equation} \end{theorem} Since, thanks to \eqref{aerge}, we have \begin{equation}\label{yt6h} \int_1^\infty\|u\|_{*,t}\,\frac{dt}{t}\leq\|u\|_{[{\rm SLG}_\theta(\mathbb{R}^{n}_{+})]^M} \int_1^\infty\frac{1+t^\theta}{t^2}\,dt<\infty, \end{equation} we may invoke Corollary~\ref{thm:Fatou-SCG} to conclude that the first and third properties listed in \eqref{Tafva.2222loolo} hold. It remains to check that the second item in \eqref{Tafva.2222loolo} holds, and this may be seen directly from definitions. Once again, the linear function $u(x',t)=ta$ for each $(x',t)\in{\mathbb{R}}^n_{+}$ (where $a\in{\mathbb{C}}^M\setminus\{0\}$ is a fixed vector) becomes a counterexample to the version of Theorem~\ref{y7tgv} corresponding to the end-point case $\theta=1$. As such, restricting the exponent $\theta$ to $[0,1)$ is sharp. \subsection{The Dirichlet problem with boundary data in H\"older spaces} Given $E\subset\mathbb{R}^m$ (for some $m\in{\mathbb{N}}$) and $\theta>0$, we define the homogeneous H\"older space of order $\theta$ on $E$, denoted by $\dot{\mathcal{C}}^\theta(E)$, as the collection of functions $w:E\to{\mathbb{C}}$ satisfying \begin{equation}\label{yTFVC.1} \|w\|_{\dot{\mathcal{C}}^\theta(E)}:=\sup_{\substack{x,y\in E\\ x\not=y}}\frac{|w(x)-w(y)|}{|x-y|^\theta}<\infty.
\end{equation} Also, define the inhomogeneous H\"older space of order $\theta$ on $E$ as \begin{equation}\label{yTFVC.2} {\mathcal{C}}^\theta(E):=\big\{w\in\dot{\mathcal{C}}^\theta(E):\,\sup_{E}|w|<\infty\big\}, \end{equation} and set $\|w\|_{{\mathcal{C}}^\theta(E)}:=\|w\|_{\dot{\mathcal{C}}^\theta(E)}+\sup_{E}|w|$ for each $w\in{\mathcal{C}}^\theta(E)$. Clearly, \begin{equation}\label{6er3dd} {\mathcal{C}}^\theta(E)\subseteq\dot{\mathcal{C}}^\theta(E)\subseteq{\rm SLG}_\theta(E)\,\,\text{ for each }\,\,\theta>0. \end{equation} In particular, together with \eqref{ahTva} this implies that any function in $\dot{\mathcal{C}}^\theta({\mathbb{R}}^n_{+})$ with $\theta\in(0,1)$ has subcritical growth. The well-posedness of the $\dot{\mathcal{C}}^\theta$-Dirichlet problem was studied in \cite{BMO-MMMM} (see also \cite{Holder-MMM}). Here we follow the approach in \cite{SCGC} which uses item {\it (d)} in Theorem~\ref{thm:Poisson.II} and Theorem~\ref{y7tgv} to give an alternative, conceptually simpler proof. \begin{theorem}\label{theor:Holder} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$, and fix $\theta\in(0,1)$. Then the $\dot{\mathcal{C}}^\theta$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, formulated as \begin{equation}\label{Dir-BVP-Holder} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap\dot{\mathcal{C}}^\theta(\overline{\mathbb{R}^{n}_{+}})\big]^M, \\[4pt] Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[6pt] u\big|_{\partial{\mathbb{R}}^{n}_{+}}=f\,\,\text{ on }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})\big]^M$. The solution $u$ of \eqref{Dir-BVP-Holder} is given by \eqref{exist:u} and there exists a constant $C=C(n,L,\theta)\in[1,\infty)$ with the property that \begin{equation}\label{Dir-BVP-BMO-Car-frac22} \|f\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^M}\leq \|u\|_{[\dot{\mathcal{C}}^\theta(\overline{{\mathbb{R}}^n_{+}})]^M} \leq C\,\|f\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^M}. \end{equation} \end{theorem} To prove existence, consider $p:=\big(1+\tfrac{\theta}{n-1}\big)^{-1}$ and note that this further implies $p\in\big(\tfrac{n-1}{n}\,,\,1\big)$ and $\theta=(n-1)\big(\tfrac{1}{p}-1\big)$. In particular, with $\sim$ denoting the equivalence relation identifying any two functions which differ by a constant (cf., e.g., \cite[Theorem~5.30, p.307]{GCRF85}), we have $\big(H^p(\mathbb{R}^{n-1})\big)^\ast=\dot{\mathcal{C}}^{\theta}({\mathbb{R}}^{n-1})\big/\sim$. Next, given an arbitrary function $f=(f_\beta)_{1\leq\beta\leq M}\in\big[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})\big]^{M}$, it is meaningful to define $u(x',t):=(P^L_t\ast f)(x')$ for all $(x',t)\in{\mathbb{R}}^n_{+}$. Then $u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M$ satisfies $Lu=0$ in $\mathbb{R}^{n}_{+}$, and for each $j\in\{1,\dots,n\}$ and each $(x',t)\in{\mathbb{R}}^n_{+}$ we have \begin{align}\label{exist:u-1Rfa.iy} t^{1-\theta}(\partial_j u)(x',t)=\Bigg\{\Big\langle t^{1-(n-1)(\frac{1}{p}-1)}(\partial_j K^L_{\alpha\beta})(x'-\cdot,t)\,,\, [f_\beta]\Big\rangle\Bigg\}_{1\leq\alpha\leq M} \end{align} where $\langle\cdot,\cdot\rangle$ is the pairing between distributions belonging to the Hardy space $H^p(\mathbb{R}^{n-1})$ and equivalence classes {\rm (}modulo constants{\rm )} of functions belonging to $\dot{\mathcal{C}}^{\theta}({\mathbb{R}}^{n-1})$. 
In turn, based on \eqref{exist:u-1Rfa.iy}, for each $(x',t)\in{\mathbb{R}}^n_{+}$ and $j\in\{1,\dots,n\}$ we may estimate \begin{align}\label{exist:u-1Rfa.iy.2} &\big|t^{1-\theta}(\partial_j u)(x',t)\big| \\[6pt] &\hskip 0.50in\leq \big\|t^{1-(n-1)(\frac{1}{p}-1)}(\partial_j K^L)(x'-\cdot,t)\big\|_{[H^p(\mathbb{R}^{n-1})]^{M\times M}} \|f\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^{M}}. \nonumber \end{align} In view of \eqref{grefr}, this further entails the existence of a constant $C\in(0,\infty)$ with the property that \begin{align}\label{exist:u-1Rfa.iy.3} \sup_{(x',t)\in{\mathbb{R}}^n_{+}}\Big\{t^{1-\theta}\big|(\nabla u)(x',t)\big|\Big\} \leq C\|f\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^{M}}. \end{align} On the other hand, a well-known elementary argument (of a purely real-variable nature, based solely on the Mean-Value Theorem; see, e.g., \cite[\S6, Step~4]{BMO-MMMM}) implies that, for some constant $C=C(n,\theta)\in(0,\infty)$, \begin{align}\label{exist:u-1Rfa.iy.4} \|u\|_{[\dot{\mathcal{C}}^\theta({\mathbb{R}}^n_{+})]^M} \leq C\sup_{(x',t)\in{\mathbb{R}}^n_{+}}\Big\{t^{1-\theta}\big|(\nabla u)(x',t)\big|\Big\}. \end{align} At this stage, \eqref{Dir-BVP-BMO-Car-frac22} follows by combining \eqref{exist:u-1Rfa.iy.3} with \eqref{exist:u-1Rfa.iy.4}, keeping in mind the natural identification $\dot{\mathcal{C}}^\theta({\mathbb{R}}^n_{+})\equiv\dot{\mathcal{C}}^\theta(\overline{{\mathbb{R}}^n_{+}})$. This finishes the proof of the existence for the problem \eqref{Dir-BVP-Holder}, and the justification of \eqref{Dir-BVP-BMO-Car-frac22}. In view of \eqref{6er3dd}, uniqueness for the problem \eqref{Dir-BVP-Holder} follows from Theorem~\ref{y7tgv}. \medskip As a byproduct of the above argument, we see that for each $\theta\in(0,1)$ we have \begin{align}\label{exist:u-1Rfa.iy.4tR} \begin{array}{c} \|u\|_{[\dot{\mathcal{C}}^\theta(\overline{{\mathbb{R}}^n_{+}})]^M} \approx\sup\limits_{(x',t)\in{\mathbb{R}}^n_{+}}\Big\{t^{1-\theta}\big|(\nabla u)(x',t)\big|\Big\} \approx\big\|u\big|_{\partial{\mathbb{R}}^{n}_{+}}\big\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^M} \\[12pt] \text{uniformly for $u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap\dot{\mathcal{C}}^\theta(\overline{\mathbb{R}^{n}_{+}})\big]^M$ with $Lu=0$ in $\mathbb{R}^{n}_{+}$}. \end{array} \end{align} In this regard, let us also remark that for each exponent $\theta\in(0,1)$ and each aperture parameter $\kappa>0$ we also have \begin{align}\label{exist:u-1Rfa.iy.4tR.2} \begin{array}{c} \|u\|_{[\dot{\mathcal{C}}^\theta(\overline{{\mathbb{R}}^n_{+}})]^M} \approx\sup\limits_{x'\in{\mathbb{R}}^{n-1}}\|u\|_{[\dot{\mathcal{C}}^\theta(\Gamma_\kappa(x'))]^M} \,\,\text{ uniformly for} \\[12pt] u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap\dot{\mathcal{C}}^\theta(\overline{\mathbb{R}^{n}_{+}})\big]^M \,\,\text{ with }\,\,Lu=0\,\,\text{ in }\,\,\mathbb{R}^{n}_{+}. \end{array} \end{align} To justify this, let the function $u$ be as in the last line above and set $f:=u\big|_{\partial{\mathbb{R}}^{n}_{+}}$. 
Then, given any $x',y'\in{\mathbb{R}}^{n-1}$, if $z$ is the point in $\overline{\Gamma_\kappa(x')}\cap\overline{\Gamma_\kappa(y')}$ closest to $\partial\mathbb{R}^{n}_{+}$ we may estimate \begin{align}\label{exist:u-1Rfa.iy.4tR.3} |f(x')-f(y')| &\leq|u(x')-u(z)|+|u(y')-u(z)| \nonumber\\[6pt] &\leq\|u\|_{[\dot{\mathcal{C}}^\theta(\Gamma_\kappa(x'))]^M}|x'-z|^\theta +\|u\|_{[\dot{\mathcal{C}}^\theta(\Gamma_\kappa(y'))]^M}|y'-z|^\theta \nonumber\\[6pt] &\leq C\Big(\sup\limits_{\xi\in{\mathbb{R}}^{n-1}}\|u\|_{[\dot{\mathcal{C}}^\theta(\Gamma_\kappa(\xi))]^M}\Big)|x'-y'|^\theta, \end{align} for some $C=C(\kappa,\theta)\in(0,\infty)$. Hence, \begin{align}\label{exist:u-1Rfa.iy.3amn} \big\|u\big|_{\partial{\mathbb{R}}^{n}_{+}}\big\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^M} =\|f\|_{[\dot{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^{M}} \leq C\sup\limits_{x'\in{\mathbb{R}}^{n-1}}\|u\|_{[\dot{\mathcal{C}}^\theta(\Gamma_\kappa(x'))]^M} \end{align} which, together with \eqref{exist:u-1Rfa.iy.4tR}, establishes the left-pointing inequality in the first line of \eqref{exist:u-1Rfa.iy.4tR.2}. Since the right-pointing inequality is trivial, this concludes the proof of \eqref{exist:u-1Rfa.iy.4tR.2}. \medskip As an immediate consequence of Theorem~\ref{theor:Holder} and Theorem~\ref{thm:Linfty} we obtain the following well-posedness result for the Dirichlet problem with boundary data from {\it inhomogeneous} H\"older spaces. \begin{corollary}\label{theor:Holder-IN} Let $L$ be an $M\times M$ homogeneous constant complex coefficient elliptic second-order system in ${\mathbb{R}}^n$, and fix $\theta\in(0,1)$. Then the ${\mathcal{C}}^\theta$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, formulated as \begin{equation}\label{Dir-BVP-Holder-IN} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\cap{\mathcal{C}}^\theta(\overline{\mathbb{R}^{n}_{+}})\big]^M, \\[4pt] Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[6pt] u\big|_{\partial{\mathbb{R}}^{n}_{+}}=f\,\,\text{ on }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[{\mathcal{C}}^\theta(\mathbb{R}^{n-1})\big]^M$. The solution $u$ of \eqref{Dir-BVP-Holder-IN} is given by \eqref{exist:u} and there exists a constant $C=C(n,L,\theta)\in[1,\infty)$ with the property that \begin{equation}\label{Dir-BVP-BMO-Car-frac22-IN} \|f\|_{[{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^M}\leq \|u\|_{[{\mathcal{C}}^\theta(\overline{{\mathbb{R}}^n_{+}})]^M} \leq C\,\|f\|_{[{\mathcal{C}}^\theta(\mathbb{R}^{n-1})]^M}. \end{equation} \end{corollary} As mentioned earlier in the narrative (cf. \eqref{TYd-YG-76g}), the Lam\'e system of elasticity fits into the general framework considered in this paper and, as such, all results so far apply to this special system. 
In this vein, it is of interest to raise the following issue: \vskip 0.08in {\bf Open Question~7.} {\it Formulate and prove Fatou-type theorems and well-posedness results for various versions of the Dirichlet problem in the upper half-space, of the sort discussed in this paper, for the Stokes system of hydrodynamics.} \vskip 0.08in \subsection{The Dirichlet problem with data in {\rm BMO} and {\rm VMO}} In his groundbreaking 1971 article \cite{Fe}, C.~Fefferman writes ``{\it The main idea in proving {\rm [}that the dual of the Hardy space $H^1$ is the John-Nirenberg space ${\rm BMO}${\rm ]} is to study the ${\rm [}harmonic${\rm ]} Poisson integral of a function in ${\rm BMO}$.}'' For example, the key PDE result announced by C.~Fefferman in \cite{Fe} states that \begin{equation}\label{L-dJHG} \displaystyle \parbox{11.10cm}{a measurable function $f$ with $\displaystyle\int_{{\mathbb{R}}^{n-1}}|f(x')|(1+|x'|)^{-n}\,dx'<+\infty$ belongs to the space ${\rm BMO}({\mathbb{R}}^{n-1})$ if and only if its Poisson integral $u:{\mathbb{R}}^n_{+}\to{\mathbb{R}}$, with respect to the Laplace operator in ${\mathbb{R}}^n$, satisfies $\displaystyle\sup\limits_{x'\in{\mathbb{R}}^{n-1}}\sup\limits_{r>0} \Big\{r^{1-n}\int\limits_{|x'-y'|<r}\int_0^r|(\nabla u)(y',t)|^2\,t\,dt\,dy'\Big\}<+\infty$.} \end{equation} One of the primary aims in \cite{BMO-MMMM} was to advance this line of research by developing machinery capable of dealing with the scenario in which the Laplacian in \eqref{L-dJHG} is replaced by much more general second-order elliptic systems with complex coefficients. To review the relevant results in this regard, some notation is needed. A Borel measure $\mu$ in $\mathbb{R}^{n}_{+}$ is said to be a Carleson measure in $\mathbb{R}^{n}_{+}$ provided \begin{equation}\label{defi-Carleson} \|\mu\|_{\mathcal{C}(\mathbb{R}_{+}^{n})}:=\sup_{Q\subset\mathbb{R}^{n-1}} \frac{1}{{\mathcal{L}}^{n-1}(Q)}\int_{0}^{\ell(Q)}\int_Q d\mu(x',t)<\infty, \end{equation} where the supremum runs over all cubes $Q$ in $\mathbb{R}^{n-1}$ (with sides parallel to the coordinate axes), and $\ell(Q)$ is the side-length of $Q$. Call a Borel measure $\mu$ in $\mathbb{R}^{n}_{+}$ a vanishing Carleson measure whenever $\mu$ is a Carleson measure to begin with and, in addition, \begin{equation}\label{defi-CarlesonVan} \lim_{r\to 0^{+}}\left(\sup_{Q\subset\mathbb{R}^{n-1},\,\ell(Q)\leq r} \frac{1}{{\mathcal{L}}^{n-1}(Q)} \int_{0}^{\ell(Q)}\int_Q d\mu(x',t)\right)=0. \end{equation} Next, the Littlewood-Paley measure associated with a continuously differentiable function $u$ in ${\mathbb{R}}^n_{+}$ is $|\nabla u(x',t)|^2\,t\,dx'dt$, and we set \begin{equation}\label{ustarstar} \|u\|_{**}:=\sup_{Q\subset\mathbb{R}^{n-1}}\left(\frac{1}{{\mathcal{L}}^{n-1}(Q)} \int_{0}^{\ell(Q)} \int_Q|\nabla u(x',t)|^2\,t\,dx'dt\right)^\frac12. \end{equation} In particular, for a continuously differentiable function $u$ in ${\mathbb{R}}^n_{+}$ we have \begin{equation}\label{ncud} \|u\|_{**}<\infty\,\,\Longleftrightarrow\,\, |\nabla u(x',t)|^2\,t\,dx'dt\,\,\text{ is a Carleson measure in }\,\,{\mathbb{R}}^n_{+}.
\end{equation} The John-Nirenberg space $\mathrm{BMO}(\mathbb{R}^{n-1})$, of functions of bounded mean oscillations in ${\mathbb{R}}^{n-1}$, is defined as the collection of complex-valued functions $f\in L^1_{\rm loc}(\mathbb{R}^{n-1})$ satisfying \begin{equation}\label{defi-BMO} \|f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}:= \sup_{Q\subset\mathbb{R}^{n-1}}\frac{1}{{\mathcal{L}}^{n-1}(Q)} \int_Q\big|f(x')-f_Q\big|\,dx'<\infty, \end{equation} where $f_Q:=\tfrac{1}{{\mathcal{L}}^{n-1}(Q)}\int_Qf\,d{\mathcal{L}}^{n-1}$ for each cube $Q$ in $\mathbb{R}^{n-1}$, and with the supremum taken over all such cubes $Q$. It turns out (cf., e.g., \cite{FS}) that \begin{equation}\label{eq:aaAabgr-22.aaa} {\rm BMO}\big({\mathbb{R}}^{n-1}\big)\subset L^1\Big({\mathbb{R}}^{n-1}\,,\,\frac{dx'}{1+|x'|^n}\Big) \end{equation} which opens the door for considering the convolution of the Poisson kernel from Theorem~\ref{thm:Poisson} with {\rm BMO} functions in $\mathbb{R}^{n-1}$ (cf. item {\it (c)} in Theorem~\ref{thm:Poisson.II}). Clearly, for every $f\in L^1_{\rm loc}(\mathbb{R}^{n-1})$ we have \begin{equation}\label{defi-BMO-CCC} \begin{array}{ll} \|f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}=\|f+C\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}, & \forall\,C\in{\mathbb{C}}, \\[6pt] \|f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}=\|\tau_{z'}f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}, & \forall\,z'\in{\mathbb{R}}^{n-1}, \\[6pt] \|f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}=\|\delta_{\lambda}f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}, & \forall\,\lambda\in(0,\infty), \end{array} \end{equation} where $\tau_{z'}$ is the operator of translation by $z'$, i.e., $(\tau_{z'}f)(x'):=f(x'+z')$ for every $x'\in{\mathbb{R}}^{n-1}$, and $\delta_\lambda$ is the operator of dilation by $\lambda$, i.e., $(\delta_\lambda f)(x'):=f(\lambda x')$ for every $x'\in{\mathbb{R}}^{n-1}$. As visible from the first line of \eqref{defi-BMO-CCC}, it happens that $\|\cdot\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}$ is only a seminorm. Indeed, for every $f\in L^1_{\rm loc}(\mathbb{R}^{n-1})$ we have $\|f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})}=0$ if and only if $f$ coincides with a constant (in ${\mathbb{C}}$) at ${\mathcal{L}}^{n-1}$-a.e. point in ${\mathbb{R}}^{n-1}$. Occasionally, we find it useful to mod out its null-space, in order to render the resulting quotient space Banach. Specifically, for two $\mathbb{C}$-valued Lebesgue measurable functions $f,g$ defined in $\mathbb{R}^{n-1}$ we say that $f\sim g$ provided $f-g$ is constant ${\mathcal{L}}^{n-1}$-a.e. in ${\mathbb{R}}^{n-1}$. This is an equivalence relation and we let \begin{align}\label{jgsyjw-AASSS} [f]:=\big\{g:\mathbb{R}^{n-1}\to\mathbb{C}:\,\text{$g$ measurable and $f\sim g$}\big\} \end{align} denote the equivalence class of any given $\mathbb{C}$-valued Lebesgue measurable function $f$ defined in $\mathbb{R}^{n-1}$. In particular, the quotient space \begin{equation}\label{defi-BMO-tilde} \widetilde{\mathrm{BMO}}(\mathbb{R}^{n-1}):=\big\{[f]:\,f\in{\mathrm{BMO}}(\mathbb{R}^{n-1})\big\} \end{equation} becomes complete (hence Banach) when equipped with the norm \begin{align}\label{defi-BMO-nbgxcr} \big\|\,[f]\,\big\|_{\widetilde{\mathrm{BMO}}(\mathbb{R}^{n-1})}:=\|f\|_{\mathrm{BMO}(\mathbb{R}^{n-1})} \,\,\text{ for each }\,\,f\in{\mathrm{BMO}}(\mathbb{R}^{n-1}).
\end{align} Moving on, the Sarason space of $\mathbb{C}$-valued functions of vanishing mean oscillations in ${\mathbb{R}}^{n-1}$ is defined by \begin{align}\label{defi-VMO} {\mathrm{VMO}}(\mathbb{R}^{n-1})&:=\Bigg\{f\in{\mathrm{BMO}}(\mathbb{R}^{n-1}): \\[-6pt] &\hskip 0.30in \lim_{r\to 0^{+}}\left(\sup_{Q\subset\mathbb{R}^{n-1},\,\ell(Q)\leq r}\,\, \frac{1}{{\mathcal{L}}^{n-1}(Q)}\int_Q\big|f(x')-f_Q\big|\,dx'\right)=0\Bigg\}. \nonumber \end{align} The space ${\mathrm{VMO}}(\mathbb{R}^{n-1})$ turns out to be a closed subspace of ${\mathrm{BMO}}(\mathbb{R}^{n-1})$. In fact, if ${\mathrm{UC}}({\mathbb{R}}^{n-1})$ stands for the space of $\mathbb{C}$-valued uniformly continuous functions in ${\mathbb{R}}^{n-1}$, then a well-known result of Sarason \cite[Theorem~1, p.\,392]{Sa75} implies that, in fact, \begin{equation}\label{ku6ffcfc} \parbox{10.70cm}{$f\in{\mathrm{BMO}}({\mathbb{R}}^{n-1})$ belongs to the space ${\mathrm{VMO}}({\mathbb{R}}^{n-1})$ if and only if there exists a sequence $\{f_j\}_{j\in{\mathbb{N}}}\subset{\mathrm{UC}}({\mathbb{R}}^{n-1})\cap{\mathrm{BMO}}({\mathbb{R}}^{n-1})$ such that $\|f-f_j\|_{{\mathrm{BMO}}({\mathbb{R}}^{n-1})}\longrightarrow 0$ as $j\to\infty$.} \end{equation} Another characterization of ${\mathrm{VMO}}(\mathbb{R}^{n-1})$ due to Sarason (cf. \cite[Theorem~1, p.\,392]{Sa75}) is as follows: \begin{equation}\label{defi-VMO-SSS} \parbox{10.70cm}{a given function $f\in{\mathrm{BMO}}(\mathbb{R}^{n-1})$ actually belongs to the space ${\mathrm{VMO}}(\mathbb{R}^{n-1})$ if and only if $\lim\limits_{{\mathbb{R}}^{n-1}\ni z'\to 0'} \|\tau_{z'}f-f\|_{{\mathrm{BMO}}(\mathbb{R}^{n-1})}=0$.} \end{equation} We are now ready to recall the first main result from \cite{BMO-MMMM}. This concerns the well-posedness of the $\mathrm{BMO}$-Dirichlet problem in the upper half-space for systems $L$ as in \eqref{L-def}-\eqref{L-ell.X}. The existence of a unique solution is established in the class of functions $u$ satisfying a Carleson measure condition (expressed in terms of the finiteness of \eqref{ustarstar}). The formulation of the theorem emphasizes the fact that this contains as a ``sub-problem'' the $\mathrm{VMO}$-Dirichlet problem for $L$ in ${\mathbb{R}}^n_{+}$ (in which scenario $u$ satisfies a vanishing Carleson measure condition). \begin{theorem}\label{them:BMO-Dir} Let $L$ be an $M\times M$ elliptic constant complex coefficient system as in \eqref{L-def}-\eqref{L-ell.X}, and fix an aperture parameter $\kappa>0$. Then the $\mathrm{BMO}$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, namely \begin{equation}\label{Dir-BVP-BMO} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \big|\nabla u(x',t)\big|^2\,t\,dx'dt\,\,\mbox{is a Carleson measure in }\mathbb{R}^{n}_{+}, \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[\mathrm{BMO}(\mathbb{R}^{n-1})\big]^M$. 
Moreover, this unique solution satisfies the following additional properties: \begin{list}{$(\theenumi)$}{\usecounter{enumi}\leftmargin=.8cm \labelwidth=.8cm\itemsep=0.2cm\topsep=.1cm \renewcommand{\theenumi}{\alph{enumi}}} \item[(i)] With $P^L$ denoting the Poisson kernel for $L$ in $\mathbb{R}^{n}_{+}$ from Theorem~\ref{thm:Poisson}, one has the Poisson integral representation formula \begin{equation}\label{eqn-Dir-BMO:u} u(x',t)=(P_t^L*f)(x'),\qquad\forall\,(x',t)\in{\mathbb{R}}^n_{+}. \end{equation} \item[(ii)] The size of the solution is comparable to the size of the boundary datum, i.e., there exists $C=C(n,L)\in(1,\infty)$ with the property that \begin{equation}\label{Dir-BVP-BMO-Car} C^{-1}\|f\|_{[\mathrm{BMO}(\mathbb{R}^{n-1})]^M}\leq \|u\|_{**}\leq C\,\|f\|_{[\mathrm{BMO}(\mathbb{R}^{n-1})]^M}. \end{equation} \item[(iii)] There exists a constant $C=C(n,L)\in(0,\infty)$ independent of $u$ with the property that the following uniform {\rm BMO} estimate holds: \begin{equation}\label{feps-BTTGB} \sup_{\varepsilon>0}\|u(\cdot,\varepsilon)\|_{[\mathrm{BMO}(\mathbb{R}^{n-1})]^M} \leq C\,\|u\|_{**}. \end{equation} Moreover, $u$ satisfies a vanishing Carleson measure condition in $\mathbb{R}^{n}_{+}$ if and only if $u$ converges to its boundary datum vertically in $\big[\mathrm{BMO}(\mathbb{R}^{n-1})\big]^M$, i.e., \begin{equation}\label{eqn:conv-Bfed} {}\hskip 0.30in \lim_{\varepsilon\to 0^+}\|u(\cdot,\varepsilon)-f\|_{[\mathrm{BMO}(\mathbb{R}^{n-1})]^M}=0 \Longleftrightarrow \left\{ \begin{array}{l} \big|\nabla u(x',t)\big|^2\,t\,dx'dt\,\,\,\text{is} \\[4pt] \text{a vanishing Carleson} \\[4pt] \text{measure in }\,\,\mathbb{R}^{n}_{+}. \end{array} \right. \end{equation} \item[(iv)] The following regularity results hold: \begin{align}\label{Dir-BVP-Reg} f\in\big[\mathrm{VMO}(\mathbb{R}^{n-1})\big]^M & \Longleftrightarrow \left\{ \begin{array}{l} \big|\nabla u(x',t)\big|^2\,t\,dx'dt\,\,\mbox{is a vanishing} \\[4pt] \text{Carleson measure in }\,\,\mathbb{R}^{n}_{+} \end{array} \right. \\[6pt] & \Longleftrightarrow \lim_{{\mathbb{R}}^n_{+}\ni z\to 0}\|\tau_z u-u\|_{**}=0, \label{Dir-BVP-Reg.TTT} \end{align} where $(\tau_z u)(x):=u(x+z)$ for each $x,z\in{\mathbb{R}}^n_{+}$. \end{list} As a consequence, the $\mathrm{VMO}$-Dirichlet boundary value problem for $L$ in $\mathbb{R}^{n}_{+}$, i.e., \begin{equation}\label{Dir-BVP-VMO} \left\{ \begin{array}{l} u\in\big[{\mathcal{C}}^\infty(\mathbb{R}^{n}_{+})\big]^M,\quad Lu=0\,\,\mbox{ in }\,\,\mathbb{R}^{n}_{+}, \\[4pt] \big|\nabla u(x',t)\big|^2\,t\,dx'dt\,\,\mbox{is a vanishing Carleson measure in }\mathbb{R}^{n}_{+}, \\[6pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}=f \,\,\text{ at ${\mathcal{L}}^{n-1}$-a.e. point in }\,\,{\mathbb{R}}^{n-1}, \end{array} \right. \end{equation} has a unique solution for each $f\in\big[\mathrm{VMO}(\mathbb{R}^{n-1})\big]^M$. Moreover, its solution is given by \eqref{eqn-Dir-BMO:u}, satisfies \eqref{Dir-BVP-BMO-Car}-\eqref{feps-BTTGB}, and \begin{equation}\label{eqn:conv-BfEE} \lim_{\varepsilon\to 0^+}\|u(\cdot,\varepsilon)-f\|_{[\mathrm{BMO}(\mathbb{R}^{n-1})]^M}=0. \end{equation} \end{theorem} It is reassuring to remark that replacing the original boundary datum $f$ by $f+C$ where $C\in{\mathbb{C}}^M$ in \eqref{Dir-BVP-BMO} changes the solution $u$ into $u+C$ (given that convolution with the Poisson kernel reproduces constants from ${\mathbb{C}}^M$; cf. \eqref{eq:IG6gy.2}). 
As such, the $\widetilde{\rm BMO}$-Dirichlet problem for $L$ in ${\mathbb{R}}^n_{+}$ is also well-posed, if uniqueness of the solution is now understood modulo constants from ${\mathbb{C}}^M$. \medskip The proof of Theorem~\ref{them:BMO-Dir} given in \cite{BMO-MMMM} employs a quantitative Fatou-type theorem, which includes a Poisson integral representation formula along with a characterization of {\rm BMO} in terms of boundary traces of null-solutions of elliptic systems in ${\mathbb{R}}^n_{+}$. A concrete statement is given below in Theorem~\ref{thm:fatou-ADEEDE}. Among other things, the said theorem shows that the demands formulated in the first two lines of \eqref{Dir-BVP-BMO} imply that the pointwise nontangential limit considered in the third line of \eqref{Dir-BVP-BMO} is always meaningful, and that the boundary datum should necessarily be selected from the space ${\rm BMO}$. This theorem also highlights the fact that it is natural to seek a solution of the $\mathrm{BMO}$ Dirichlet problem by taking the convolution of the boundary datum with the Poisson kernel $P^L$ associated with the system $L$. Finally, Theorem~\ref{thm:fatou-ADEEDE} readily implies the uniqueness of solution for the $\mathrm{BMO}$-Dirichlet problem \eqref{Dir-BVP-BMO}. \begin{theorem}\label{thm:fatou-ADEEDE} Let $L$ be an $M\times M$ elliptic system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X} and consider $P^L$, the Poisson kernel in $\mathbb{R}^{n}_{+}$ associated with $L$ as in Theorem~\ref{thm:Poisson}. Also, fix an aperture parameter $\kappa>0$. Then there exists a constant $C=C(L,n,\kappa)\in(1,\infty)$ with the property that \begin{eqnarray}\label{Tafva.BMO} && \left\{ \begin{array}{r} u\in\big[{\mathcal{C}}^\infty({\mathbb{R}}^n_{+})\big]^M \\[4pt] Lu=0\,\mbox{ in }\,{\mathbb{R}}^n_{+} \\[6pt] \text{and }\,\,\|u\|_{**}<\infty \end{array} \right. \\[4pt] &&\hskip 0.30in \Longrightarrow \left\{ \begin{array}{l} u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\mbox{ exists a.e.~in }\, {\mathbb{R}}^{n-1},\,\mbox{ lies in }\,\big[\mathrm{BMO}(\mathbb{R}^{n-1})\big]^M, \\[12pt] u(x',t)=\Big(P^L_t\ast\big(u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big)\Big)(x') \,\text{ for all }\,(x',t)\in{\mathbb{R}}^n_{+}, \\[12pt] \mbox{and }\,C^{-1}\|u\|_{**}\leq \big\|u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\big\|_{[\mathrm{BMO}(\mathbb{R}^{n-1})]^M}\leq C\|u\|_{**}. \end{array} \right. \nonumber \end{eqnarray} In fact, the following characterization of $\big[\mathrm{BMO}(\mathbb{R}^{n-1})\big]^M$, adapted to the system $L$, holds: \begin{equation}\label{eq:tr-sols} \big[\mathrm{BMO}(\mathbb{R}^{n-1})\big]^M=\Big\{u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}: u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M,\,\,Lu=0\,\mbox{ in }\,\mathbb{R}^{n}_{+},\,\,\|u\|_{**}<\infty\Big\}. 
\end{equation} Moreover, \begin{equation}\label{eq:tr-OP-SP} {\mathrm{LMO}}({\mathbb{R}}^n_{+}):= \Big\{u\in\big[\mathcal{C}^\infty(\mathbb{R}^n_{+})\big]^M:\, Lu=0\mbox{ in }\mathbb{R}^{n}_{+}\,\,\text{ and }\,\,\|u\|_{**}<\infty\Big\} \end{equation} is a linear space on which $\|\cdot\|_{**}$ is a seminorm with null-space ${\mathbb{C}}^M$, the quotient space ${\mathrm{LMO}}({\mathbb{R}}^n_{+})\big/{\mathbb{C}}^M$ becomes complete {\rm (}hence Banach{\rm )} when equipped with $\|\cdot\|_{**}$, and the nontangential pointwise trace operator acting on equivalence classes in the context \begin{equation}\label{eq:tr-OP} {\mathrm{LMO}}({\mathbb{R}}^n_{+})\big/{\mathbb{C}}^M\ni[u]\longmapsto \big[u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}\big] \in\big[\widetilde{\mathrm{BMO}}(\mathbb{R}^{n-1})\big]^M \end{equation} is a well-defined linear isomorphism between Banach spaces, where $[u]$ in \eqref{eq:tr-OP} denotes the equivalence class of $u$ in ${\mathrm{LMO}}({\mathbb{R}}^n_{+})\big/{\mathbb{C}}^M$ and $\big[u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}}\big]$ is interpreted as in \eqref{jgsyjw-AASSS}. \end{theorem} There is also a counterpart of the Fatou-type result stated as Theorem~\ref{thm:fatou-ADEEDE} emphasizing the space {\rm VMO} in place of {\rm BMO}. Specifically, the following theorem was proved in \cite{BMO-MMMM}. \begin{theorem}\label{thm:fatou-VMO} Let $L$ be an $M\times M$ elliptic system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X} and consider $P^L$, the associated Poisson kernel for $L$ in $\mathbb{R}^{n}_{+}$ from Theorem~\ref{thm:Poisson}. Also, fix an aperture parameter $\kappa>0$. Then for any function \begin{equation}\label{Dir-BVP-VMOq1} \text{$u\in\big[{\mathcal{C}}^\infty({\mathbb{R}}^n_{+})\big]^M$ satisfying $Lu=0$ in ${\mathbb{R}}^n_{+}$ and $\|u\|_{**}<\infty$} \end{equation} one has \begin{equation}\label{Dir-BVP-VMOq2} \left. \begin{array}{r} \big|\nabla u(x',t)\big|^2\,t\,dx'dt\,\,\mbox{is} \\[4pt] \text{a vanishing Carleson} \\[4pt] \text{measure in }\,\,\mathbb{R}^{n}_{+} \end{array} \right\} \Longrightarrow \left\{ \begin{array}{l} u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}}\,\mbox{ exists a.e.~in }\, {\mathbb{R}}^{n-1},\,\text{ and} \\[12pt] u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^n_{+}} \,\,\text{ is in }\,\,\big[{\mathrm{VMO}(\mathbb{R}^{n-1})\big]^M}. \end{array} \right. \end{equation} Furthermore, the following characterization of the space $\big[\mathrm{VMO}(\mathbb{R}^{n-1})\big]^M$, adapted to the system $L$, holds: \begin{align}\label{eq:tr-sols-VMO} \big[\mathrm{VMO}(\mathbb{R}^{n-1})\big]^M &=\Big\{u\big|^{{}^{\kappa-{\rm n.t.}}}_{\partial{\mathbb{R}}^{n}_{+}} :\,\, u\in\mathrm{LMO}(\mathbb{R}^n_{+})\,\,\text{ and }\,\, \big|\nabla u(x',t)\big|^2\,t\,dx'dt \nonumber\\[0pt] &\qquad \text{is a vanishing Carleson measure in }\,\,\mathbb{R}^{n}_{+}\Big\}. \end{align} \end{theorem} There is yet another version of the space of functions of vanishing mean oscillations which we would like to recall. To set the stage, let ${\mathcal{C}}^0_0({\mathbb{R}}^{n-1})$ be the space of all continuous functions in ${\mathbb{R}}^{n-1}$ which vanish at infinity, equipped with the supremum norm. Also, let $\{R_j\}_{1\leq j\leq n-1}$ be the family of Riesz transforms in ${\mathbb{R}}^{n-1}$. 
Define ${\rm CMO}({\mathbb{R}}^{n-1})$ as the collection of all functions $f\in L^1_{\rm loc}({\mathbb{R}}^{n-1})$ which may be expressed as \begin{equation}\label{utggG-TRFF.rt.A} \begin{array}{c} f=f_0+\sum_{j=1}^{n-1}R_jf_j\,\,\text{ in }\,\,{\mathbb{R}}^{n-1} \\[6pt] \text{for some }\,\,f_0,f_1,\dots,f_{n-1}\in{\mathcal{C}}^0_0({\mathbb{R}}^{n-1}), \end{array} \end{equation} and set \begin{equation}\label{utggG-TRFF.rt.B} \|f\|_{{\rm CMO}({\mathbb{R}}^{n-1})}:=\inf\Big\{\|f_0\|_{L^{\infty}({\mathbb{R}}^{n-1})} +\sum_{j=1}^{n-1}\|f_j\|_{L^{\infty}({\mathbb{R}}^{n-1})}\Big\} \end{equation} where the infimum is taken over all possible representations of $f$ as in \eqref{utggG-TRFF.rt.A}. Then ${\rm CMO}({\mathbb{R}}^{n-1})$ becomes a Banach space, which may be alternatively characterized as the pre-dual of the Hardy space $H^1({\mathbb{R}}^{n-1})$ (cf. \cite[(2.0'), p.\,185]{Neri}; see also \cite{Bourdaud} and \cite{CoWe77} for more on this topic). One may also show that ${\rm CMO}({\mathbb{R}}^{n-1})$ is a closed subspace of ${\rm BMO}({\mathbb{R}}^{n-1})$, and ${\mathcal{C}}^0_0({\mathbb{R}}^{n-1})$ is dense in ${\rm CMO}({\mathbb{R}}^{n-1})$. Hence, \begin{equation}\label{utggG-TRFF.rt.C} \text{${\rm CMO}({\mathbb{R}}^{n-1})$ is the closure of ${\mathcal{C}}^0_0({\mathbb{R}}^{n-1})$ in ${\rm BMO}({\mathbb{R}}^{n-1})$.} \end{equation} However, the Sarason space ${\rm VMO}({\mathbb{R}}^{n-1})$ (from \eqref{defi-VMO}) is strictly larger than ${\rm CMO}({\mathbb{R}}^{n-1})$. In relation to the latter version of the space of functions of vanishing mean oscillations we wish to pose the following question. \vskip 0.08in {\bf Open Question~8.} {\it Formulate and prove a well-posedness result for the Dirichlet problem in the upper half-space, for an $M\times M$ elliptic second-order homogeneous constant complex coefficient system $L$, with boundary data from $\big[{\rm CMO}({\mathbb{R}}^{n-1})\big]^M$. Also, prove a Fatou-type theorem for null-solutions of $L$ in ${\mathbb{R}}^n_{+}$, which naturally accompanies the said well-posedness result.} \vskip 0.08in To address these issues, a new brand of Carleson measure must be identified. We close by recording the following result proved in \cite{BMO-MMMM}. The first item can be thought of as an analogue of Fefferman's theorem, characterizing {\rm BMO} as in \eqref{L-dJHG}, in the case of elliptic systems with complex coefficients. The second item may be viewed as a characterization of {\rm VMO} in the spirit of Fefferman's original result. \begin{theorem}\label{thm:FEFF} Let $L$ be an $M\times M$ elliptic system with constant complex coefficients as in \eqref{L-def}-\eqref{L-ell.X} and consider the Poisson kernel $P^L$ in $\mathbb{R}^{n}_{+}$ associated with the system $L$ as in Theorem~\ref{thm:Poisson}. Also, assume $f:\mathbb{R}^{n-1}\to\mathbb{C}^{M}$ is a Lebesgue measurable function satisfying \begin{equation}\label{Di-AK} \int_{{\mathbb{R}}^{n-1}}\frac{|f(x')|}{1+|x'|^{n}}\,dx'<\infty. \end{equation} Finally, let $u$ be the Poisson integral of $f$ in ${\mathbb{R}}^n_{+}$ with respect to the system $L$, i.e., $u:{\mathbb{R}}^n_{+}\to{\mathbb{C}}^M$ is given by $u(x',t):=(P^L_t\ast f)(x')$ for each $(x',t)\in{\mathbb{R}}^n_{+}$. Then the following statements are true.
\begin{list}{(\theenumi)}{\usecounter{enumi}\leftmargin=.8cm \labelwidth=.8cm\itemsep=0.2cm\topsep=.1cm \renewcommand{\theenumi}{\alph{enumi}}} \item The function $f$ belongs to the space $\big[{\rm BMO}({\mathbb{R}}^{n-1})\big]^M$ if and only if $|\nabla u(x',t)|^2\,t\,dx'dt$ is a Carleson measure in ${\mathbb{R}}^n_{+}$ {\rm (}or, equivalently, $\|u\|_{\ast\ast}<\infty${\rm ;} cf. \eqref{ncud}{\rm )}. \vskip 0.08in \item The function $f$ belongs to the space $\big[{\rm VMO}({\mathbb{R}}^{n-1})\big]^M$ if and only if $|\nabla u(x',t)|^2\,t\,dx'dt$ is a vanishing Carleson measure in ${\mathbb{R}}^n_{+}$. \end{list} \end{theorem}
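\medskip As a concrete sanity check of item {\it (a)} in the classical setting $L=\Delta$, $n=2$, recall that $f(x)=\log|x|$ belongs to ${\rm BMO}({\mathbb{R}})$ and satisfies \eqref{Di-AK}, and that its Poisson integral is $u(x,t)=\tfrac12\log(x^2+t^2)$, so that $|\nabla u(x,t)|^2=(x^2+t^2)^{-1}$. The Python sketch below (an illustration of our own, with ad hoc names) evaluates the normalized Carleson energies over boxes centered at the worst-case point $x_0=0$ and exhibits their boundedness, uniformly across scales, in accordance with item {\it (a)}.

\begin{verbatim}
import numpy as np

# For L = Delta, n = 2, the datum f(x) = log|x| lies in BMO(R) and its
# Poisson extension is u(x,t) = (1/2) log(x^2 + t^2), so that
# |grad u|^2 = 1/(x^2 + t^2). We evaluate the normalized energies
#   E(Q) = (1/|I|) * int_0^{l} int_I |grad u|^2 t dx dt
# over boxes anchored at x0 = 0 and observe that they stay bounded
# (indeed essentially constant) across scales, as predicted by (a).

def carleson_energy(x0, ell, m=800):
    x = np.linspace(x0 - ell/2, x0 + ell/2, m)
    t = np.linspace(ell/m, ell, m)            # avoid t = 0
    X, T = np.meshgrid(x, t)
    integrand = T / (X**2 + T**2)             # |grad u|^2 * t
    inner = np.trapz(integrand, x, axis=1)    # integrate in x
    return np.trapz(inner, t) / ell           # integrate in t, normalize

for ell in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(ell, carleson_energy(0.0, ell))     # roughly the same value
\end{verbatim}

The scale invariance visible in the output simply reflects the dilation invariance of the integrand $t/(x^2+t^2)\,dx\,dt/\ell$ under $(x,t)\mapsto(\ell x,\ell t)$.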
\section{Introduction} \subsection{The problem area.} The correct multiscale representation of manifold\discretionary{-}{}{-}\allowhyphenation valued data is a basic question whenever one wishes to eliminate the arbitrariness in choosing coordinates for such data, and to avoid artifacts caused by applying linear methods to the ensuing coordinate representations of data. This question appears to have been proposed first by D.\ Donoho \cite{donoho-lie}. The detailed paper \cite{urrahman-2005} describes different constructions, including most of ours, and states results inferred from numerical experiments, but without giving proofs. A series of papers, starting with \cite{wallner-2005-cca}, has since dealt with the systematic analysis of upscaling operations on discrete data -- also known under the name {\em subdivision rules} -- in the case that data live in Lie groups, Riemannian manifolds, and other nonlinear geometries. Regarding smoothness of limits, a satisfactory solution has been achieved by means of the method of {\em proximity inequalities}, which also play a role in the present paper. Multiscale decompositions in particular have been investigated by \cite{grohs-2009-wav} (characterizing smoothness by decay of detail coefficients) and \cite{grohs-2009-st} (stability). The present paper studies multiscale decompositions which are analogous to linear biorthogonal wavelets and reviews the known examples based on interpolatory and midpoint\discretionary{-}{}{-}\allowhyphenation interpolating subdivision rules, including the simple Haar wavelets. It turns out, however, that it is unlikely that a rather general way of defining manifold analogues of linear constructions can enjoy perfect reconstruction; this is the first main result of this paper, even if it is admittedly a rather vague one. For those multiscale decompositions which exist, we show a stability theorem which represents the second main result of the paper. We further discuss averaging procedures which work in manifolds equipped with an exponential mapping and which generalize the well-known Riemannian center of mass. This discussion does not contain substantial new results, but it is included because we need this construction for the definition of nonlinear up- and downscaling rules, as well as for converting continuous data to discrete data in the first place. \subsection{Biorthogonal wavelets revisited} We begin by briefly reviewing the notion of biorthogonal Riesz wavelets, but we are content with the properties relevant for the following sections. We start with real-valued sequences $\alpha=(\alpha\idx i)_{i\in{\mathbb Z}}$ with finite support, which are called {\em filters}, and define the {\em upscaling rule}, or {\em subdivision rule}, associated with the filter $\alpha$ by $$(S_\alpha c)\idx k := \sum\nolimits_{l\in{\mathbb Z}}\alpha \idx{k-2l} c\idx l.$$ Here $c:{\mathbb Z}\to V$ is any sequence with values in a vector space. The transpose of the upscaling rule (we skip the definition of {\em transpose}) shall be the {\em downscaling rule} $D$ associated with the filter $\beta$, via $$ (D_\beta c)\idx k :=\sum\nolimits_{l\in{\mathbb Z}}\beta \idx{l-2k} c\idx l. $$ Upscaling and downscaling commute with the left shift operator $(Lc)\idx k =c\idx {k+1}$ in the following way: $$ S_\alpha L=L^2 S_\alpha, \quad D_\beta L^2=L D_\beta. $$ The most basic rules are defined by the delta sequence: $S_\delta$ inserts zeros between the elements of the original sequence, and $D_\delta $ deletes every other element.
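Since all filters are finitely supported, the operators $S_\alpha$, $D_\beta$ and the shift $L$ act on finitely supported sequences by finite sums and are immediate to implement. The following Python sketch (an illustration of our own, with ad hoc names) realizes them on sequences stored as dictionaries indexed by ${\mathbb Z}$ and verifies the commutation relations $S_\alpha L=L^2 S_\alpha$ and $D_\beta L^2=L D_\beta$ on random data.

\begin{verbatim}
import random

# Finitely supported sequences c : Z -> R stored as {index: value} dicts;
# filters alpha, beta are finitely supported as well.

def upscale(alpha, c):
    # (S_alpha c)_k = sum_l alpha_{k-2l} c_l
    out = {}
    for l, cl in c.items():
        for i, a in alpha.items():          # k - 2l = i
            out[i + 2*l] = out.get(i + 2*l, 0.0) + a * cl
    return out

def downscale(beta, c):
    # (D_beta c)_k = sum_l beta_{l-2k} c_l
    out = {}
    for l, cl in c.items():
        for i, b in beta.items():           # l - 2k = i
            if (l - i) % 2 == 0:
                k = (l - i) // 2
                out[k] = out.get(k, 0.0) + b * cl
    return out

def shift(c, s=1):
    # (L^s c)_k = c_{k+s}
    return {k - s: v for k, v in c.items()}

def agree(u, v, tol=1e-12):
    keys = set(u) | set(v)
    return all(abs(u.get(k, 0.0) - v.get(k, 0.0)) < tol for k in keys)

alpha = {-1: 0.5, 0: 1.0, 1: 0.5}           # some finitely supported filter
c = {k: random.random() for k in range(8)}

# S_alpha L = L^2 S_alpha   and   D_beta L^2 = L D_beta:
assert agree(upscale(alpha, shift(c, 1)), shift(upscale(alpha, c), 2))
assert agree(downscale(alpha, shift(c, 2)), shift(downscale(alpha, c), 1))
\end{verbatim}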
All rules can be expressed in terms of $S_\delta$, $D_\delta$, and convolution: \begin{align*} & S_\delta c=(\dots,c\idx 0,0,c\idx 1,0,c\idx 2,\dots), \quad D_\delta c=(\dots,c\idx 0,c\idx 2,c\idx 4,\dots) \\ \implies & S_\alpha c = (S_\delta c) \mathbin* \alpha, \quad D_\beta c = D_\delta (c\mathbin* \beta). \end{align*} We suppress the indices $\alpha,\beta$ from now on. We assume a further upscaling rule $R$ and a downscaling rule $Q$, which shall be high-pass filters in contrast to the low-pass filters $S$ and $D$.\footnote{Usually formulated in terms of Fourier transforms.} Any sequence $c\level j$, which is interpreted as {\em data at level $j$}, may be recursively decomposed into a low-frequency part $c \level{j-1}$ (data at level $j-1$) and a high-frequency part $d\level j$ (details at level $j$) by letting \begin{equation}\label{eq:decomp} c\level {j-1}=Dc\level j, \quad d\level j=Q c\level j. \end{equation} This process can be iterated in order to obtain a pyramid consisting of {\em coarse data} $c\level 0$ and {\em wavelet coefficients} $d\level 1,\dots,d\level j$. Data at level $j$ shall be reconstructed by \begin{equation} c\level j = Sc\level {j-1} + R d\level j, \end{equation} which works precisely if the so-called {\it quadrature mirror filter equation}, \begin{equation}\label{eq:qmf} SD+RQ=\mathord{\rm id}, \end{equation} holds. It makes sense to require certain further (`biorthogonality') properties like $QR=\mathord{\rm id}$. In particular, high-pass downscaling should annihilate everything generated by low-pass upscaling: \begin{equation} \label{eq:biorth} QS=0. \end{equation} An important consequence of the previous properties is that we can rewrite \eqref{eq:decomp} in the form \begin{equation}\label{eq:wavalt} c\level {j-1} = Dc\level j, \quad d\level j = Q (c\level j - S c\level {j-1}). \end{equation} There are many examples of biorthogonal wavelet decompositions; we give some below. \subsection{Examples: interpolating and midpoint-interpolating schemes} \begin{example} \label{ex:interp-lin} An upscaling scheme is called {\em interpolating} if it keeps the original data, which is expressed by $$ (Sc)\idx {2k}=c\idx k \iff D_\delta S=\mathord{\rm id}. $$ For interpolating schemes, downscaling is simply $D = D_\delta$. Then detail coefficients are the difference between the data $c$ and the prediction gained via upscaling of $Dc$. With the left shift operator, we can write $$ Q c = DL(c-SDc). $$ If we define detail coefficients via \eqref{eq:wavalt}, then we can also employ the modified downscaling operator $$ Q^{\text{modif}} = DL. $$ Reconstruction works via a basic upscaling rule: $$ R = L^{-1} S_\delta. $$ It is easy to check that we have indeed perfect reconstruction. An example is furnished by the four-point scheme \cite{dyn:1987:4p} defined by $\alpha\idx{\{-3,\dots,3\}}$ $=$ $(-{1\over 16}$, $0$, ${9\over 16}$, $1$, ${9\over 16}$, $0$, $-{1\over 16})$. The action $c\level{j-1}=D c\level j$ of the decimation operator is consistent with the interpretation of discrete data $c\level j\idx k$ as {\em samples} of a continuous function $f(t)$ at the parameter value $t={1\over 2^j}k$.
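A minimal numerical check of perfect reconstruction for the four-point scheme, building on the upscale/downscale sketch above (again Python/NumPy with periodic data of even length; the helper names are ours):
\begin{verbatim}
alpha4 = {-3: -1/16, -1: 9/16, 0: 1, 1: 9/16, 3: -1/16}

def decompose(c):
    coarse = c[::2]                  # D = D_delta keeps even entries
    pred = upscale(alpha4, coarse)   # prediction S D c
    detail = (c - pred)[1::2]        # Q c = D L (c - S D c): odd residuals
    return coarse, detail

def reconstruct(coarse, detail):
    c = upscale(alpha4, coarse)      # S c^(j-1); even entries are exact
    c[1::2] += detail                # R d places details at odd positions
    return c

rng = np.random.default_rng(0)
c = rng.standard_normal(16)
coarse, detail = decompose(c)
assert np.allclose(reconstruct(coarse, detail), c)  # perfect reconstruction
\end{verbatim}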
\end{example} \begin{example} \label{ex:haar-lin} The {\em Haar scheme} is defined by the rules \begin{align*} S = (L+\mathord{\rm id}) S_\delta , \quad D = {1\over 2} D_\delta (L+\mathord{\rm id}) ,\quad R = (\mathord{\rm id} -L) S_\delta, \quad Q = {1\over 2}D_\delta (\mathord{\rm id}-L) , \end{align*} which operate as follows: \begin{align*} Sc &= (\dots,c\idx 0,c\idx 0,c\idx 1,c\idx 1,\dots), \\ Rd & = (\dots, d\idx 0, -d\idx 0, d\idx 1, -d\idx 1,\dots), \\ Dc &= (\dots,{c\idx 0+c\idx 1\over 2}, {c\idx 2+c\idx 3\over 2},\dots), \\ Qc &= (\dots, {c\idx 0-c\idx 1\over 2}, {c\idx 2-c\idx 3\over 2},\dots). \end{align*} \end{example} \begin{example} \label{ex:midpt-lin} A subdivision scheme $S$ is called {\em midpoint-interpolating} if it is a right inverse of the decimation operator $D$ which computes midpoints and which is also used for the Haar wavelets of Example \ref{ex:haar-lin}: $$ DS = \mathord{\rm id}, \quad \mbox{where}\quad Dc = (\dots,{c\idx 0+c\idx 1\over 2}, {c\idx 2+c\idx 3\over 2},\dots). $$ The detail coefficients are the difference between the actual data $c$ and the imputation $SDc$ found by upscaling the decimated data. Since $c-SDc$ is by construction in the kernel of $D$ (i.e., is an alternating sequence), it contains redundant information. We thus complete our definitions by letting \begin{align*} Qc &= D_\delta (c-SDc) = (\dots,(c-SDc)\idx 0, (c-SD c)\idx 2,\dots), \\ Rd & = (\mathord{\rm id}-L) S_\delta d = (\dots, d_0,-d_0,d_1,-d_1,\dots). \end{align*} If we define detail coefficients via \eqref{eq:wavalt}, then a much simpler downscaling operator for details can be employed: $$ Q^{\text{modif}} = D_\delta. $$ The action $c\level{j-1}=D c\level j$ of the decimation rule is consistent with the interpretation of discrete data $c\level j\idx k$ as an {\em average} of continuous data over the interval ${1\over 2^j} \cdot [k,k+1]$. The defining relation implies that any such $S$ can be turned into an interpolating subdivision rule $\widetilde S$ by adding one round of midpoint computation: $$ \widetilde S = {1\over 2}(L+\mathord{\rm id})S. $$ $\widetilde S$ is interpolatory, since $D_\delta \widetilde S = {1\over 2}(D_\delta L + D_\delta )S = DS=\mathord{\rm id}$. The relation $S=2(L+\mathord{\rm id})^{-1}\widetilde S$ leads to a way of finding midpoint-interpolating schemes from interpolatory ones, since it can be turned into an effective computation by the use of symbols \cite{dyn-2002-ss}. For more information on this kind of scheme, see e.g.\ \cite{donoho-block}. \end{example} \section{Biorthogonal decompositions for manifold-valued data} \subsection{Manifold analogues of linear elementary constructions.} The main idea in applying the previous constructions to manifold-valued data is to find replacements for the elementary operations they are composed of. These are the operations $-$ (``vector is difference of points''), $+$ (``point plus vector is a point''), and computing the weighted average of points, which again yields a point. As to which kind of data are points and which are vectors, data $c\level j$ at level $j$ shall be manifold-valued sequences of points, while detail coefficients $d\level j$ shall be sequences with values in vector spaces associated with the manifold.
For data with values in a Lie group $G$, with associated Lie algebra $\Lie g$, we let $$ p\mathbin{\oplus} v := p\exp(v), \quad q\mathbin{\ominus} p := \log(p^{-1}q)\in\Lie g, $$ where $\exp$ is the group exponential function and $\log$ is its inverse. For matrix groups, we have $\exp(x)=\sum_{k\ge 0} x^k/k!$ as usual (see e.g.\ \cite{bump-2004-lg} for Lie theory). In a surface or Riemannian manifold $M$, we use the exponential mapping $\exp_p$ which maps a vector $v$ in the tangent space $T_p M$ to the endpoint of a geodesic of length $\|v\|$ which emanates from $p$ with initial tangent vector $v$: $$ p\mathbin{\oplus} v := \exp_p(v), \quad q\mathbin{\ominus} p := \exp_p^{-1}(q) \in T_p M. $$ We have thus found analogues $\mathbin{\oplus}$ and $\mathbin{\ominus}$ of the $+$ and $-$ operations, respectively. In Euclidean space, an average with weights of total sum $1$ can equivalently be defined by \begin{equation} \label{mean:elem:equiv} m=\sum\alpha_j x_j \iff \sum\alpha_j(x_j-m)=0 \iff \sum\alpha_j \mathop{\rm dist}\nolimits(x_j,m)^2=\min. \end{equation} The middle definition carries over to both Lie groups and Riemannian manifolds (provided $m$ is unique, which it locally is): \begin{equation} \label{eq:def:average0} \sum\alpha_j(x_j\mathbin{\ominus} m)=0. \end{equation} In Riemannian manifolds, this average is the same as the one defined by the right hand condition. These constructions have been employed to define operations on manifold-valued data before, in particular subdivision processes. For more details the reader is referred to \cite{grohs-2009-st}. Another way of redefining averages is by means of an auxiliary base point: In a vector space, we have $$ \sum\alpha_j = 1\implies \sum \alpha_j x_j = x + \sum\alpha_j(x_j-x), $$ for any choice of $x$. This leads to the definition \begin{equation} \label{eq:basepoint} x\mathbin{\oplus} \Big(\sum\alpha_j(x_j\mathbin{\ominus} x)\Big) \end{equation} of a manifold average which involves the choice of an additional base point. \begin{example} \label{ex:midpoint} It is not difficult to see that the weights $\alpha_0=\alpha_1={1\over 2}$ lead to a symmetric average $m=\mu(x_0,x_1)=x_0\mathbin{\oplus} {1\over 2}(x_1\mathbin{\ominus} x_0) = x_1\mathbin{\oplus} {1\over 2}(x_0\mathbin{\ominus} x_1)$, which can be taken as the manifold-midpoint of $x_0$ and $x_1$. It fulfills the balance condition $(x_1\mathbin{\ominus} m) + (x_0\mathbin{\ominus} m)=0$. \end{example} An obvious generalization, where the averaging process possibly works with a continuum of values, is defined as follows: Consider a set $X$ which is equipped with some probability measure. For instance we could take the unit interval $X=[0,1]$ with Lebesgue measure. The weighted average $m$ of data $(f(x))_{x\in X}$ with values in a vector space is defined by the following equivalent conditions \begin{equation} m = \int_X f(x) \iff \int_X (f(x)-m) = 0 \iff \int_X \mathop{\rm dist}\nolimits(f(x),m)^2=\min. \end{equation} If $X$ is the integers and the measure gives each $i\in{\mathbb Z}$ the weight $\alpha_i$, this definition reduces to \eqref{mean:elem:equiv}. Also the integral version of the average can be made to work for manifold-valued data, by defining $m$ via \begin{equation} \label{eq:def:average1} \int_X (f(x)\mathbin{\ominus} m) = 0.
\end{equation} In the Riemannian case, which has been thoroughly discussed by Karcher \cite{karcher-1977-cm}, this is equivalent to $\int_X \mathop{\rm dist}\nolimits(f(x),m)^2=\min$. It is then called the Riemannian center of mass (see Section IX.2 of \cite{kobayashi-69}). \subsection{Manifold versions of filters.} We now define nonlinear analogues of the up- and downscaling rules $S,D,Q,R$. In order to distinguish them from the corresponding linear rules, we write the latter as $\Slin$, $D_{\text{\sl lin}}$, $Q_{\text{\sl lin}}$, $\Rlin$. The symbols ${\schoen S},{\schoen D},{\schoen Q},{\schoen R}$ denote nonlinear up- and downscaling operators which, like the linear ones, commute with the left shift operator in the following way: $$ {\schoen S} L=L^2{\schoen S}, \quad {\schoen D} L^2=L{\schoen D}, \quad {\schoen R} L=L^2{\schoen R}, \quad {\schoen Q} L^2=L{\schoen Q}. $$ We now decompose manifold-valued data `at level $j$', which are denoted by the symbol $c\level j$, in a manner similar to \eqref{eq:wavalt}: \begin{equation}\label{eq:ndecomp} c\level {j-1} = {\schoen D} c\level j,\quad d\level j = {\schoen Q} \big(c\level j \mathbin{\ominus} {\schoen S} {\schoen D} c\level j\big). \end{equation} By iteration we arrive at data $c\level 0$ at the coarsest scale together with a pyramid of detail coefficients $d\level 1,\dots , d\level j$. In order to obtain perfect reconstruction via \begin{equation} c\level j = {\schoen S} c\level{j-1} \mathbin{\oplus} {\schoen R} d\level j \end{equation} we impose the following condition on the nonlinear operators, which could be interpreted as a nonlinear quadrature mirror filter equation: \begin{equation}\label{eq:nqmf} {\schoen S} {\schoen D} c \mathbin{\oplus} ({\schoen R}{\schoen Q}(c\mathbin{\ominus} {\schoen S}{\schoen D} c)) = c\quad \mbox{for all}\ c. \end{equation} \subsection{Examples: interpolating and midpoint-interpolating schemes} \begin{example} \label{ex:haar-geom} (manifold version of Example \ref{ex:haar-lin}) We show how the Haar scheme can be made to work in groups and in Riemannian manifolds. With the midpoint $\mu(p,q)$ of Example \ref{ex:midpoint} we let \begin{align*} {\schoen S} c &= \Slin c = (\dots,c\idx 0,c\idx 0,c\idx 1,c\idx 1,\dots), \\ {\schoen D} c &= (\dots,\mu(c\idx 0,c\idx 1),\mu(c\idx 2,c\idx 3),\dots) \end{align*} while ${\schoen Q}=Q_{\text{\sl lin}}$ and ${\schoen R}=\Rlin$. Indeed, $c\mathbin{\ominus}{\schoen S}{\schoen D} c$ is an alternating sequence of vectors, and the detail coefficients associated with data $c$ are given by \begin{align*} d & =Q_{\text{\sl lin}}(c\mathbin{\ominus}{\schoen S}{\schoen D} c) \\& = Q_{\text{\sl lin}}(\dots, c\idx 0\mathbin{\ominus} \mu(c\idx 0,c\idx 1), c\idx 1 \mathbin{\ominus} \mu(c\idx 0,c\idx 1), c\idx 2\mathbin{\ominus} \mu(c\idx 2,c\idx 3), \dots), \\&= (\dots, c\idx 0\mathbin{\ominus} \mu(c\idx 0,c\idx 1), c\idx 2\mathbin{\ominus} \mu(c\idx 2,c\idx 3), \dots) . \end{align*} It is obvious that with this definition, ${\schoen S}{\schoen D} c\mathbin{\oplus} {\schoen R} d =c$, so we have perfect reconstruction. \end{example} \begin{example} (manifold version of Example \ref{ex:interp-lin}) To find a nonlinear analogue ${\schoen S}$ of a linear upscaling rule defined by affine averages, we can employ geometric averages instead.
In this way the interpolating scheme $\Slin=S_\alpha$ can be transferred to the geometric setting, by letting $$ ({\schoen S} c)\idx {2k} = c\idx k, \quad \sum\nolimits_{r\in{\mathbb Z}}\alpha \idx{2r+1} \big(c\idx {k-r} \mathbin{\ominus} ({\schoen S} c)\idx {2k+1}\big) = 0. $$ The remaining rules can be taken from the linear case (using the fact that the simplest rules can be applied to {\em any} sequence, as their elements do not undergo computations): $$ {\schoen D}=D_{\text{\sl lin}} = D_\delta, \quad {\schoen Q} = Q_{\text{\sl lin}} = Q^{\text{modif}} = D_\delta L, \quad {\schoen R} = \Rlin = L^{-1} S_\delta. $$ From the interpolating property of ${\schoen S}$ we see that we have perfect reconstruction. \end{example} \begin{example} \label{ex:midpt-manif} (manifold version of Example \ref{ex:midpt-lin}) In order to make a midpoint-interpolating rule $\Slin$ work on manifolds, we define an upscaling operator ${\schoen S}$ which retains the crucial property that $c\idx k$ is the midpoint of $({\schoen S} c)\idx {2k}$ and $({\schoen S} c)\idx {2k+1}$. For this purpose we use \eqref{eq:basepoint}. We introduce the following notation for sequences $c, v$ and a point $x\in M$: $$ (c\mathbin{\ominus} x)\idx k := c\idx k\mathbin{\ominus} x, \quad (x\mathbin{\oplus} v)\idx k := x\mathbin{\oplus} v\idx k, $$ and define $$ ({\schoen S} c)\idx {2k} = c\idx k \mathbin{\oplus} (\Slin (c\mathbin{\ominus} c\idx k))\idx {2k}, \quad ({\schoen S} c)\idx {2k+1} = c\idx k \mathbin{\oplus} (\Slin (c\mathbin{\ominus} c\idx k))\idx {2k+1} . $$ It is clear from $(c\mathbin{\ominus} c\idx k)\idx k = 0$ and the midpoint-interpolating property of $\Slin$ that ${\schoen S}$ is also midpoint-interpolating: $$ \mu\big(({\schoen S} c)\idx {2k},({\schoen S} c)\idx{2k+1}\big) = c\idx k. $$ We use the same downscaling operators ${\schoen Q}$, ${\schoen D}$ as in the Haar case of Example \ref{ex:haar-geom}, which yields $$ d\level j\idx k = (c\level j\mathbin{\ominus} {\schoen S} c\level {j-1})\idx {2k}. $$ By midpoint interpolation, $c\level {j-1}$ and $d\level j$ together determine the original data $c\level j$: With the geodesic reflection $\sigma_x(y)$ of $y$ in the point $x$ defined by $$ \sigma_x(y) = x\mathbin{\oplus} \big(-(y\mathbin{\ominus} x)\big) \quad \mbox{or, locally equivalently,} \quad \mu(y,\sigma_x(y))=x, $$ we have $$ c\level j\idx{2k} = ({\schoen S} c\level {j-1})\idx {2k} \mathbin{\oplus} d\level j\idx k, \quad c\level j\idx{2k+1} = \sigma_{c\level{j-1}\idx k} \big(c\level j\idx{2k}\big). $$ This construction is already contained in \cite{urrahman-2005}. A nonlinear upscaling operator ${\schoen R}$ which effects exactly this construction via $c\level j = {\schoen S} c \level{j-1} \mathbin{\oplus} {\schoen R} d\level j$ necessarily depends on the data and may be defined by $$ ({\schoen R} d)\idx{2k} = d\idx k ,\quad ({\schoen R} d)\idx {2k+1} = \sigma_{c\level{j-1}\idx k} \Big( ({\schoen S} c\level {j-1})\idx {2k} \mathbin{\oplus} d\level j\idx k \Big) \mathbin{\ominus} ({\schoen S} c\level {j-1})\idx {2k+1} . $$ In Riemannian geometry we cannot further simplify this expression.
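Before turning to matrix groups, we illustrate the elementary operations and the geometric Haar decomposition of Example \ref{ex:haar-geom} on the unit sphere $S^2\subset{\mathbb R}^3$. The following sketch (Python/NumPy) is purely illustrative; the function names are ours, and antipodal point pairs are excluded, in keeping with the local nature of all constructions:
\begin{verbatim}
import numpy as np

def oplus(p, v):
    # p (+) v: exponential map of the unit sphere at p (v tangent at p)
    t = np.linalg.norm(v)
    return p if t < 1e-15 else np.cos(t) * p + np.sin(t) * (v / t)

def ominus(q, p):
    # q (-) p: inverse exponential map, a tangent vector at p
    w = q - np.dot(p, q) * p                 # project q onto T_p S^2
    n = np.linalg.norm(w)
    if n < 1e-15:
        return np.zeros_like(p)
    return np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)) * (w / n)

def midpoint(p, q):                          # mu(p, q): geodesic midpoint
    return oplus(p, 0.5 * ominus(q, p))

def haar_decompose(c):
    # coarse data: geodesic midpoints; details: tangent vectors there
    coarse = [midpoint(c[2*i], c[2*i+1]) for i in range(len(c) // 2)]
    detail = [ominus(c[2*i], m) for i, m in enumerate(coarse)]
    return coarse, detail

def haar_reconstruct(coarse, detail):
    # c_{2k} = m_k (+) d_k and c_{2k+1} = m_k (+) (-d_k), reflecting the
    # balance condition of the geodesic midpoint
    c = []
    for m, d in zip(coarse, detail):
        c += [oplus(m, d), oplus(m, -d)]
    return c
\end{verbatim}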
In the case of matrix groups, we employ the fact that $\sigma_x(y) = x y^{-1} x$ and that successive points with indices $2k,2k+1$ of ${\schoen S} c$ are converted into each other by geodesic reflection in the point $c\idx k$: \begin{align*} ({\schoen R} d)\idx {2k+1} &= \log\Big[ \Big({\schoen S} c\level {j-1}\idx {2k+1}\Big)^{-1} \Big({c\level{j-1}\idx k} \Big) \Big( {\schoen S} c\level {j-1}\idx {2k} \exp d\level j\idx k \Big)^{-1} \Big({c\level{j-1}\idx k} \Big) \Big] \\ &= \log\Big[ \Big(c\level {j-1}\idx k\Big)^{-1} \Big({\schoen S} c\level {j-1}\idx {2k}\Big) \exp \Big(-d\level j\idx k\Big) \Big({\schoen S} c\level {j-1}\idx {2k} \Big)^{-1} \Big({c\level{j-1}\idx k} \Big) \Big] \\ &= -\mathop{\rm Ad}\nolimits_ {\mbox{\small $(c\level {j-1}\idx k)^{-1} ({\schoen S} c\level {j-1}\idx {2k})$}} \big (d\level j\idx k \big) = -\mathop{\rm Ad}\nolimits_ {\mbox{\small $\exp\big( ({\schoen S} c\level {j-1}\idx {2k}) \mathbin{\ominus} (c\level {j-1}\idx k) \big)$}} \big (d\level j\idx k \big) \\&= -\mathop{\rm Ad}\nolimits_ {\mbox{\small $\exp\big( \Slin(c\level{j-1}\mathbin{\ominus} c\level{j-1}\idx k)\idx{2k} \big)$}} \big (d\level j\idx k \big). \end{align*} Here we have used the notation $\mathop{\rm Ad}\nolimits_g(v)=gvg^{-1}$. Note that in abelian groups, and especially in Euclidean space where $g\mathbin{\oplus} v = g+v$, this formula reduces to ${\schoen R} d\idx{2k+1}=-{\schoen R} d\idx {2k}$. \end{example} \subsection{On the general feasibility of the construction} The examples of geometric and nonlinear multiscale decompositions given above are special cases, based either on interpolatory subdivision rules or on midpoint-interpolating rules. It is not clear how perfect reconstruction can be achieved in general. We shall presently see that there are some basic obstructions which disappear in the linear case. For simplicity we consider only periodic sequences, because then the upscaling and downscaling rules have a finite-dimensional domain of definition. \begin{prop} \label{prop:necessary} Smooth rules ${\schoen S},{\schoen D},{\schoen Q},{\schoen R}$ can lead to detail coefficients with perfect reconstruction for periodic data $c\in M^{2n}$ only if the rank of the mapping $ c\mapsto c\mathbin{\ominus} {\schoen S}{\schoen D} c $ equals $n\cdot\dim M$, which is half the generic rank of such a mapping. \end{prop} \begin{proof} Equation \eqref{eq:nqmf}, which expresses perfect reconstruction, is equivalent to $$ {\schoen R}{\schoen Q} x = x, \quad \mbox{where}\ x=c\mathbin{\ominus} {\schoen S}{\schoen D} c. $$ It follows that the mapping $c\mapsto c\mathbin{\ominus}{\schoen S}{\schoen D} c={\schoen R}{\schoen Q}(c\mathbin{\ominus}{\schoen S}{\schoen D} c)$ has rank $\le n\cdot\dim M$, because ${\schoen Q}$, mapping $2n$ data items to $n$ detail coefficients, has this property. As to the mapping $c\mapsto {\schoen S}{\schoen D} c$, its rank does not exceed $n\cdot \dim M$ either, because ${\schoen D}$ has this property. In case the rank of $c\mapsto c\mathbin{\ominus}{\schoen S}{\schoen D} c$ were less than $n\cdot\dim M$, the mapping $\mathord{\rm id}_{M^{2n}}: c\mapsto {\schoen S}{\schoen D} c \mathbin{\oplus} (c\mathbin{\ominus} {\schoen S}{\schoen D} c)$ would have rank $<2n\cdot\dim M$, a contradiction.
\end{proof} The condition of rank $n\cdot\dim M$, which is necessary for perfect reconstruction by Prop.\ \ref{prop:necessary}, is unlikely to be satisfied if both upscaling by ${\schoen S}$ and downscaling by ${\schoen D}$ are defined via geometric averaging rules derived from linear rules $S_\alpha$ and $D_\beta$. The following discussion of derivatives should make this clear: We have \begin{equation} \label{eq:SDdef} \sum\nolimits_l \alpha\idx {k-2l}(c\idx l\mathbin{\ominus} {\schoen S} c\idx k)=0, \quad \sum\nolimits_l \beta \idx{l-2k}( c\idx l\mathbin{\ominus} {\schoen D} c\idx k)=0, \end{equation} and we are interested in the change in $({\schoen S}{\schoen D} c)\idx k$ if each $c\idx l$ undergoes a 1-parameter variation. We use the abbreviations $\phi$ and $\psi$ for the derivatives of $\mathbin{\ominus}$ with respect to the first and second argument, respectively. In the Lie group case, where all tangent vectors are represented by elements of the Lie algebra $\Lie g$, both $\phi$ and $\psi$ are linear endomorphisms of $\Lie g$. In the case of Riemannian manifolds, where $\mathbin{\ominus}:M\times M\to TM$, both $\phi,\psi$ map to $T_{p\mathbin{\ominus} q}(TM)$. As the next formula shows, it is not necessary to look closer at this abstract tangent space, because we always combine $\psi^{-1}$ with $\phi$ and the image of $\phi$ occurs only implicitly. Differentiation of \eqref{eq:SDdef} implies that $$ {d\over dt}({\schoen D} c)\idx k = -\Big(\sum\nolimits_l\beta\idx {l-2k} \psi_{c\idx l,{\schoen D} c\idx k} \Big)^{-1} \Big(\sum\nolimits_l\beta\idx {l-2k} \phi_{c\idx l,{\schoen D} c\idx k} {d\over dt} c\idx l \Big), $$ and further \begin{align*} {d\over dt}({\schoen S}{\schoen D} c)\idx k &= \Big(\sum\nolimits_l\alpha\idx {k-2l} \psi_{{\schoen D} c\idx l,{\schoen S}{\schoen D} c\idx k} \Big)^{-1} \\& \hphantom{=}\cdot \Big(\sum\nolimits_l\alpha\idx {k-2l} \phi_{{\schoen D} c\idx l,{\schoen S}{\schoen D} c\idx k} \big(\sum\nolimits_r\beta\idx {r-2l} \psi_{c\idx r,{\schoen D} c\idx l} \big)^{-1} \big(\sum\nolimits_r\beta\idx {r-2l} \phi_{c\idx r,{\schoen D} c\idx l} {d\over dt} c\idx r \big) \Big) . \end{align*} The precise form of this equation is not relevant, but by observing that the differentials of $\mathbin{\ominus}$ have to be evaluated at {\em many more} independent locations than the desired rank $n\cdot\dim M$ would suggest, it is clear that only very special filters can lead to rank $n\cdot\dim M$. The situation in the linear case is different: The differentials of $\mathbin{\ominus}$ are constant, and the condition that the previous formula defines a mapping of rank $n$ is an algebraic condition involving the coefficients of the filters $\alpha,\beta$. Similar considerations show that the so-called log-exponential construction, where a nonlinear rule is constructed via \eqref{eq:basepoint} (see Ex.\ \ref{ex:midpt-manif}), does not in general yield the rank condition expressed by Prop.\ \ref{prop:necessary}. \section{Stability analysis} The point of going through the trouble of decomposing a signal is that one expects many detail coefficients $d\level k\idx l$ to be small and therefore negligible. This is the basis of thresholding for data compression, which makes sense only if one can control the change in the reconstructed data when some detail coefficients are reset to zero.
Similarly, quantizing data will result in deviation from the original. Again, it is important to control that change. It is the purpose of this section to establish a {\em stability} result for nonlinear rules which applies to such situations. \subsection{Coordinate representations of nonlinear rules} For the stability analysis we transfer all manifold operations to a local coordinate chart. This is justified only if the constructions we are going to analyze are local. The linear upscaling and downscaling rules defined previously have this property, and so have the nonlinear ones mentioned in the examples above. The operators $\mathbin{\oplus}$, $\mathbin{\ominus}$ are replaced by their respective coordinate representations, which are denoted by the same symbols and which are defined in open subsets of suitable coordinate vector spaces: We assume that $\mathbin{\oplus}$ maps from $V\times W$ into $V$, and $\mathbin{\ominus}$ maps from $V\times V$ into $W$. Besides smoothness they are assumed to fulfill the compatibility condition \begin{equation} \label{eq:compatibility} p\mathbin{\oplus} (q\mathbin{\ominus} p)=q. \end{equation} We further assume that $\mathbin{\oplus}$, $\mathbin{\ominus}$ are Lipschitz functions, i.e., there exist constants $A,B$ with \begin{equation} A \|p-q\| \le \|p\mathbin{\ominus} q\| \le B\|p-q\|. \end{equation} Locally this is always the case. Our analysis of stability requires that the operators ${\schoen S}, {\schoen D}, {\schoen Q}, {\schoen R}$ (we do not introduce new symbols for their coordinate representations) fulfill some reasonable assumptions which are listed below. Notation makes use of the symbol ``$\lesssim$'', which means that there is a uniform constant such that the left hand side is less than or equal to that constant times the right hand side. For a sequence $w=(w\idx i)_{i\in{\mathbb Z}}$ we use the notation $\|w\|:=\sup_{i\in{\mathbb Z}}\|w\idx i\|$. \begin{itemize} \item {\it Boundedness of ${\schoen Q}, {\schoen R}$:} The mappings ${\schoen Q}$, ${\schoen R}$ operate on $W$-valued sequences $w$, which are generated as the difference of point sequences. They are supposed to satisfy $\|{\schoen Q} w\|$, $\|{\schoen R} w\|\lesssim \|w\|$, with respect to some norm $W$ is equipped with. \item {\it Reproduction of constants:} For constant data we require that ${\schoen S} c =c$ and ${\schoen D} c =c$. \item Each of ${\schoen S},{\schoen R},{\schoen D},{\schoen Q}$ shall be as smooth as is needed (in general a little more than $C^1$ will suffice). \item {\it First-order linearity of ${\schoen S},{\schoen D}$ on constant data:} For constant sequences we require that \\[\smallskipamount] \begin{minipage}{\linewidth} \begin{equation} \label{eq:1storder} d{\schoen S}\big|_c = \Slin, \quad d{\schoen D}\big |_c =D_{\text{\sl lin}} \end{equation} \end{minipage} \\[\belowdisplayskip] for some low-pass upscaling and downscaling operators $\Slin$, $D_{\text{\sl lin}}$ operating on $V$-valued sequences, and where $\Slin$ is a convergent subdivision rule. The only exception shall be the Haar case, where $\Slin=S_\delta $ shall be the splitting rule (see Ex.\ \ref{ex:interp-lin}).
This condition is natural when one considers ${\schoen S},{\schoen D}$ as geometric analogues of linear constructions which are defined by replacing affine averages by geometric averages, or by replacing the $+$ and $-$ operations by $\mathbin{\oplus}$ and $\mathbin{\ominus}$. \end{itemize} \subsection{Stability Results} The aim of this section is to prove the following stability theorem: \begin{theorem}\label{thm:stability} Suppose that ${\schoen S}, {\schoen D}, {\schoen Q},{\schoen R}$ are upscaling and downscaling operators which fulfill the nonlinear version \eqref{eq:nqmf} of the quadrature mirror filter equation, and which also fulfill the technical conditions listed above. Consider a data pyramid $(c\level j)_{j\ge 0}$ with $c\level{j-1}={\schoen D} c\level{j}$ which enjoys the weak contractivity property \begin{equation} \label{eq:cdec} \|\Delta c\level j\|\lesssim \mu^j \quad (\mu<1). \end{equation} Then the reconstruction procedure of data $c\level j$ at level $j$ from coarse data $c\level 0$ and details $d\level 1,\dots , d\level j$ is stable in the sense that there are constants $D$, $E_1$, $E_2$ such that for all $j$ and any further data pyramid $\widetilde c\level i$ with details $\widetilde d\level i$ we have \begin{align} \label{eq:close} & \|c\level 0-\widetilde c\level 0\|\le E_1 ,\quad \|d\level k - \widetilde d\level k\|\le E_2\mu^{k}\mbox{ for all } k \\ \implies & \label{eq:stabestimate} \|c\level j - \widetilde c\level j\| \le D\big(\|c\level 0-\widetilde c\level 0\| +\sum\nolimits_{k=1}^j \|d\level k - \widetilde d\level k\|\big). \end{align} \end{theorem} The assumption of decay given by \eqref{eq:cdec} is fulfilled for any finite data pyramid (simply adjust the constant which is implied by using the symbol ``$\lesssim$''). \subsection{Proofs} The remaining part of this section is devoted to the proof of this statement. Our arguments closely follow the ones in \cite{grohs-2009-st}, which will enable us to occasionally skip over some purely technical details and focus on the main ideas. The crux is to show that the differentials of the reconstruction mappings are uniformly bounded. We shall go about this task by using perturbation arguments. The justification of this approach lies in the fact that by our assumptions the nonlinear reconstruction procedure agrees with a linear one up to first order on constant data. Indeed, our assumptions already imply that ${\schoen S}$ satisfies a \emph{proximity condition} with $\Slin $ in the sense of \cite{wallner-2005-cca}: \begin{lemma}\label{lem:prox} With the above assumptions we have the inequalities \begin{align} \|{\schoen S} c - \Slin c\| \lesssim \|\Delta c\|^2 , \quad \|{\schoen D} c - D_{\text{\sl lin}} c\| \lesssim \|\Delta c\|^2. \end{align} \end{lemma} \begin{proof} We use a first order Taylor expansion of ${\schoen S}$. For any constant sequence $e$ we have $\Slin e={\schoen S} e=e$, so \begin{align*} {\schoen S} c & = {\schoen S} e + d{\schoen S}|_{e}(c-e) + O(\|c-e\|^2) \\& = e + \Slin (c-e) +O(\|c - e\|^2) = \Slin c + O(\|c-e\|^2). \end{align*} Since ${\schoen S}$ and $\Slin $ are local operators, we may choose $e$ such that $$\|c-e\|\lesssim \|\Delta c\|. $$ This proves the first inequality. The proof of the second one is the same. \end{proof} We now show that for all initial data $c\level j$ with exponential decay of $\|\Delta c\level j\|$, the associated detail coefficients experience the same type of decay. \begin{lemma}\label{lem:direct} Assume that \eqref{eq:cdec} holds for $(c\level j)_{j\geq 0}$.
Then \begin{equation} \label{eq:ddec} \|d\level j\|\lesssim \mu^j. \end{equation} \end{lemma} \begin{proof} We use the boundedness of ${\schoen Q}$ and Lemma \ref{lem:prox} to estimate the norm of the detail coefficients: \begin{align*} \|d\level j\| &= \|{\schoen Q} (c\level j \mathbin{\ominus} {\schoen S} c\level{j-1})\| \lesssim \|c\level j\mathbin{\ominus} {\schoen S} c\level{j-1}\| \lesssim \|c\level j - {\schoen S} c\level{j-1}\| \\& \le \|c\level j - \Slin {\schoen D} c\level{j}\|+\|\Slin c\level{j-1} -{\schoen S} c\level{j-1}\| \\ & \lesssim \|c\level j - \Slin D_{\text{\sl lin}} c\level j\| + \|\Slin ({\schoen D} c\level j - D_{\text{\sl lin}} c\level j)\| +\|\Slin c\level{j-1} -{\schoen S} c\level{j-1}\| \\& \lesssim \|c\level j - \Slin D_{\text{\sl lin}} c\level j\| + \mu^{2j}. \end{align*} It remains to estimate $\|c\level j - \Slin D_{\text{\sl lin}} c\level j\|$. Reproduction of constants implies that for any constant sequence $e$, $$ \|c\level j - \Slin D_{\text{\sl lin}} c\level j\| = \|(c\level j - e) - \Slin D_{\text{\sl lin}} ( c\level j- e)\| \lesssim \|c\level j - e\|. $$ By the locality of $\Slin $ and $D_{\text{\sl lin}} $ we can pick $e$ such that $\|c\level j - e\|\lesssim \|\Delta c\level j\|$. This concludes the proof. \end{proof} For later use we record the following two facts. The first one is a perturbation theorem which has been shown in \cite{wallner-2005-cca}. \begin{theorem}\label{thm:proxcon} Assume that $\Slin $ is a convergent linear subdivision scheme and that ${\schoen S}$ satisfies $d{\schoen S}|_c = \Slin $ for all constant data $c$. Then there exists $\mu < 1$ such that \begin{equation}\label{eq:ncontr} \|\Delta {\schoen S}^j c\| \lesssim \mu^j \end{equation} for all initial data $c$ with $\|\Delta c\|$ small enough. \end{theorem} We do not want to go into details concerning the precise meaning of `small enough'. The reader who is interested in the considerable technical subtleties arising from this restriction, and also from the fact that ${\schoen S}$ is usually not globally defined, is referred to our previous work \cite{grohs-2008-sbg,grohs-2009-st,grohs-2009-wav} where these issues are rigorously taken into account and the appropriate bounds for $\|\Delta c\|$ are derived. The second result is also a perturbation result which has been shown in \cite{grohs-2009-st}. \begin{lemma}\label{lem:perturb} Let $A_i$, $U_i$ be operators on a normed vector space. Assume exponential decay $\|U_i\|\lesssim \mu^i$, for some $\mu<1$. Then uniform boundedness of $\|A_1\cdots A_k\|$ implies uniform boundedness of $\|(A_1+U_1)\cdots(A_k+U_k)\|$. \end{lemma} We continue with the proof of Theorem \ref{thm:stability} by showing that the decay property \eqref{eq:cdec} we assumed for the data pyramid $c\level j$ also holds for the perturbed data pyramid $\widetilde c\level j$. \begin{lemma}\label{lem:lip} Under the assumptions of Theorem \ref{thm:stability}, further assume that $\Slin $ is a convergent subdivision scheme.
Then there exist constants $s_1$, $s_2$ such that for all $j$ and any choice of data $\widetilde c\level j$ we have \begin{equation} \|\Delta \widetilde c\level 0\|\le s_1 ,\quad \|\widetilde d\level k \|\le s_2\mu^{k}\ \mbox{for all}\ k \quad \implies \quad \|\Delta \widetilde c\level j\|\lesssim (\mu+\varepsilon)^j. \end{equation} Here for each $\varepsilon >0$ the implied constant is uniform. \end{lemma} \begin{proof} (Sketch) We make the simplifying assumption that for all initial data $c$ which occur in the course of the proof we have \begin{equation}\label{eq:simpdec} \|\Delta {\schoen S} c\| \le \mu\|\Delta c\|. \end{equation} This is no big restriction, as it can be shown that such an equation always holds for some iterate ${\schoen S}^N$ of ${\schoen S}$ and initial data with $\|\Delta c\|$ small enough, provided $\Slin $ is convergent \cite{wallner-2005-cca}. In case that only $$ \|\Delta {\schoen S} c\| \le \bar\mu\|\Delta c\| $$ for some $\bar\mu\in(\mu,1)$, we make the initial $\mu$ larger. This does not change the substance of Theorem \ref{thm:stability}. With the Lipschitz constants $r,r'$ defined by $ \|{\schoen R} c\| \le r \|c \|$, $ \|a\mathbin{\oplus} b-a\| \le r' \|b\|$, we now estimate: \begin{align*} \|\Delta \widetilde c\level 1\| & \le \|\Delta {\schoen S} \widetilde c\level 0\| +2\|({\schoen S} \widetilde c\level 0 \mathbin{\oplus} {\schoen R} \widetilde d\level 1 ) -{\schoen S} \widetilde c\level 0 \| \\ &\le \mu\| \Delta \widetilde c\level 0\| + 2 r' \|{\schoen R} \widetilde d\level 1\| \le \mu s_1 + 2 rr' s_2 \mu. \end{align*} Iteration of this argument gives the inequality $$ \|\Delta \widetilde c\level n\| \le s_1\mu^n + 2n rr' s_2 \mu^n \lesssim (\mu+\varepsilon)^n $$ for all $\varepsilon >0$, which is what we wanted to show. In case \eqref{eq:simpdec} does not hold for ${\schoen S}$, but only for an iterate ${\schoen S}^N$, a similar argument is required which we would like to skip. The reason for requiring $s_1,s_2$ to be `small enough' is that \eqref{eq:simpdec} usually only holds for data $c$ in some set $$P_{M,\delta}:=\{c\mid c\idx k \in M\ \forall k,\mbox{ and } \|\Delta c\|<\delta \}. $$ In general we need to ensure that all $c\level i$'s lie in the set $P_{M,\delta}$ if the only information on the data is the size of the detail coefficients. This rather technical step is where the restrictions on the constants $s_1,s_2$ come in. We choose to skip the technical details regarding this issue, since we do not find them particularly enlightening and they have already been treated in full detail in previous work \cite{grohs-2009-st,grohs-2009-wav,grohs-2008-sbg}. \end{proof} We are finally in a position to prove Theorem \ref{thm:stability}. \begin{proof}[Proof (of Theorem \ref{thm:stability})] The mapping which computes data $c\level k$ at level $k$ by way of reconstruction is denoted by $P_k$. We use the following notation and definition: \begin{align} X_j & := (c\level 0, d\level 1,\dots , d\level j) \in \ell(V)\times\ell(W)^j \\ \label{eq:iter} P_k(X_k) &:= {\schoen S} P_{k-1}(X_{k-1}) \mathbin{\oplus} {\schoen R} d\level k, \quad P_0=\mathord{\rm id}. \end{align} We first treat the case that $\Slin $ is a convergent subdivision scheme and later deal with the Haar case. Observe that we can without loss of generality assume that both $\|\Delta c\level 0\|$ and the implied constant in \eqref{eq:ddec} are arbitrarily small.
This is because we can simply do a re-indexing $(c')\level i=c\level{i+j_0}$, and we assumed exponential decay of $\Delta c\level j$. In particular, $$ \|\Delta c\level 0\| \le f_1<s_1, \quad \|d\level k\|\le f_2\mu^k, \quad f_2<s_2, $$ with the constants $s_1,s_2$ from Lemma \ref{lem:lip}. By Lemma \ref{lem:direct}, $\|d\level j\|$ is likewise of exponential decay. By the same argument we can make the implied constant arbitrarily small. Pick the constants $E_1, E_2$ such that $f_1 + E_1 \le s_1$ and $f_2 + E_2 \le s_2$, and consider coarse data $\widetilde c\level 0$ and detail coefficients $\widetilde d\level 1,\dots,\widetilde d\level j$ which obey the assumption \eqref{eq:close} made in the statement of the theorem. Lemma \ref{lem:lip} implies that we have exponential decay of $\|\Delta \widetilde c\level j\|$. The estimates gathered so far enable us to show that there exists a constant $C$ such that for all $j$, $k$ and all perturbed arguments $$ \wtX_j = (\widetilde c\level 0, \widetilde d\level 1 , \dots , \widetilde d\level j), $$ we have the bound \begin{equation} \label{eq:diffbound} \Big\|\frac{\partial}{\partial d\level k} \Big|_{\wtX_j} P_j\Big\|, \quad \Big\|\frac{\partial}{\partial c\level 0} \Big|_{\wtX_j} P_j\Big\| \le C. \end{equation} Indeed, using the chain rule on the recursive definition \eqref{eq:iter}, we see that \begin{equation}\label{eq:norm} \frac{\partial}{\partial c\level 0} \Big|_{\wtX_j} P_j = \Big({d_1\oplus} \big|_{({\schoen S} P_{j-1},{\schoen R} d\level j)}\Big) \Big(d{\schoen S} \big|_{\widetilde c\level{j-1}}\Big) \Big(\frac{\partial}{\partial c\level 0} \Big|_{\wtX_{j-1}} P_{j-1}\Big). \end{equation} Our assumptions on smoothness (here: $\mathbin{\oplus}$ is $C^2$) and the compatibility relation \eqref{eq:compatibility} together imply that \begin{align*} d_1\mathord\oplus\big|_{({\schoen S} \widetilde c\level {j-1},{\schoen R} d\level j)} &= d_1\mathord\oplus \big|_{({\schoen S} \widetilde c\level{j-1},0)} + \Big( d_1\mathord\oplus \big|_{({\schoen S} \widetilde c\level{j-1},{\schoen R} d\level j)} - d_1\mathord\oplus \big|_{({\schoen S} \widetilde c\level{j-1},0)} \Big) = I + V_j \end{align*} with $\|V_j\|\lesssim \|{\schoen R} d\level j\|\lesssim \mu^j$. In order to estimate the term $d{\schoen S}\big|_{\widetilde c\level{j-1}}$, we note that \eqref{eq:1storder} implies $\|d{\schoen S}\big|_c-\Slin \|\lesssim \|\Delta c\|$ for all initial data $c$, see \cite{grohs-2009-st}. Hence we can write $$d{\schoen S}\big|_{\widetilde c\level{j-1}}=\Slin + W_j, \quad\mbox{where}\ \|W_j\|\lesssim \|\Delta\widetilde c\level{j-1}\| \lesssim (\mu+\varepsilon)^{j}$$ for any $\varepsilon >0$. It is a well known fact that for a convergent subdivision scheme $\Slin $, there is a constant $M$ with $\sup_j \|\Slin ^j\|\le M$. The previous discussion and iterative application of \eqref{eq:norm} imply $$ \frac{\partial}{\partial c\level 0} \Big|_{\wtX_j} P_j =(\Slin + U_1)\cdots(\Slin+U_j), \quad \mbox{where}\ \|U_k\|\lesssim (\mu+\varepsilon)^k. $$ Now we invoke Lemma \ref{lem:perturb} and see that indeed the partial derivatives of $P_k$ with respect to $\widetilde c\level 0$ at $\wtX_j$ are uniformly bounded, independently of $j$. The derivatives with respect to $\widetilde d\level k$ can be handled in an analogous manner. This shows \eqref{eq:diffbound}, from which it is easy to see \eqref{eq:stabestimate}. Having concluded the proof in the case that $\Slin$ is a convergent subdivision scheme, we turn to the Haar case.
It is analogous, but because we have ${\schoen S}=\Slin$ we do not need the perturbation inequalities at all to estimate differentials (in particular we do not need Lemma \ref{lem:lip}). \end{proof} \begin{remark} The only place where the constants $E_1,E_2$ come into play is the assumption \eqref{eq:simpdec}, which is usually only satisfied for data in some set $P_{M,\delta}$ -- see the discussion in the proof of Lemma \ref{lem:lip}. It is easy to see that if ${\schoen S}$ is defined and contractive for {\em all} initial data, then the constants $E_1$, $E_2$ can be arbitrarily large. \end{remark} \section{Obtaining discrete data} \subsection{Convolution and smoothing of manifold-valued data} Here we are going to investigate further properties of the geometric average which was defined by Equations \eqref{eq:def:average0} and \eqref{eq:def:average1}. They will become important in Section \ref{sec:last}. This material is already contained in Karcher's paper \cite{karcher-1977-cm} as far as surfaces and Riemannian geometry are concerned. Here we also show the extension to Lie groups, which is not difficult once the Riemannian case is known. Convolution with a function $\psi$ with $\int\psi=1$ can be interpreted as an average. This applies to multivariate functions as well as to univariate ones, which are our main concern. In order to fit the previous definitions, we give an equivalent construction of the convolution $g\mathbin *\psi$ for vector-valued functions $g$, and at the same time a definition of $(f\gstar \psi)(u)$ for manifold-valued functions $f:{\mathbb R}^d\to M$: \begin{align} &\textstyle m = (g\mathbin*\psi)(u) \iff m=\int_{{\mathbb R}^d} g(x)\psi(u-x)\, dx \iff \int_{{\mathbb R}^d} (g(x)-m)\psi(u-x)\, dx = 0, \\&\textstyle m = (f\gstar \psi)(u) \iff \int_{{\mathbb R}^d} (f(x) \mathbin{\ominus} m) \psi(u-x)\, dx =0. \end{align} The even more general case where the domain of the functions is a manifold has been discussed in \cite{karcher-1977-cm}. It turns out that basically any nonnegative kernel function $\psi$ supported in the cube $[-1,1]^d$ can be used for smoothing in the following way: For each $\rho>0$, we let \begin{equation} f^{\rho} = f \gstar \psi^{\rho}, \quad \mbox{where} \ \psi^{\rho}(x) = {1\over\rho^d} \psi\Big({x\over \rho}\Big). \end{equation} We want to show that $f$ and its differential $df$ are approximated by $f^{\rho}$ and $df^{\rho}$ as $\rho$ approaches zero. The proofs consist of revisiting the proofs given in \cite{karcher-1977-cm}, which apply to the Riemannian case. \begin{theorem} \label{thm:conv} Consider the smoothed functions $f^\rho$ defined by a function $f:{\mathbb R}^d\to M$ and a kernel $\psi$ as above. Then $$ \lim\nolimits_{\rho\to 0} f^\rho = f, \quad \lim\nolimits_{\rho\to 0} df^{\rho} = df. $$ If $f$ is Lipschitz differentiable, this convergence is linear. \end{theorem} \begin{proof} We skip convergence of $f^\rho$ and show only convergence of $df^\rho$. The proof is in the spirit of Lemma 4.2 and Theorem 4.4 of \cite{karcher-1977-cm}, the difference being that the domain of $f$ is a vector space. We define $V:{\mathbb R}^d\times M\to {\mathbb R}^{\dim M}$ by letting $$ \textstyle V(u,p) := \int (f(x)\mathbin{\ominus} p) \psi^{\rho}(u-x) dx. $$ By definition, $V(u,f^\rho(u))=0$.
This implies the following equation of derivatives: \begin{equation} \label{sumderiv} d_1 V_{u,f^{\rho}(u)} + D_2 V_{u,f^{\rho}(u)} \circ df^{\rho}_u = 0. \end{equation} The capital $D$ indicates the fact that in the Riemannian case we employ a covariant derivative. The partial derivatives of $V$ have the form \begin{align*} d_1\big|_{u,p} V(\dot u) &= {\displaystyle {d\over dt}\Big|_{t=0}} \textstyle \int \big(f(y) \mathbin{\ominus} p\big) \psi^{\rho}(u(t)-y) dy \\ &= {\displaystyle {d\over dt}\Big|_{t=0}} \textstyle \int \big(f(x-u+u(t)) \mathbin{\ominus} p\big) \psi^{\rho}(u-x) dx , \\ D_2\big|_{u,p} V(\dot p) & = {\displaystyle {D\over dt}\Big|_{t=0}}\textstyle \int \big(f(x) \mathbin{\ominus} p(t)\big) \psi^{\rho}(u-x) dx. \end{align*} Using the functions $E_{p,q}(\dot q) = -{D\over dt} (p\mathbin{\ominus} q(t))$ and $F_{p,q}(\dot p) = {d\over dt} (p(t)\mathbin{\ominus} q)$, we get \begin{equation} \label{eq:Vexplizit} d_1\big|_{u,p} V(\dot u) + D_2\big|_{u,p} V(\dot p) = \int \big( F_{f(x),p}(df_x(\dot u)) -E_{f(x),p}(\dot p) \big) \psi^{\rho}(u-x) dx. \end{equation} It is shown in \cite{karcher-1977-cm} that in the Riemannian case the functions $E_{p,q}$ and $F_{p,q}$ can be bounded in terms of the sectional curvature $K$ and the parallel transport operator $\mathop{\rm Pt}\nolimits_{\text{\sl from}}^{\text{\sl to}}$: $$ E_{p,q}(v) = v + R, \quad F_{p,q}(v) = \mathop{\rm Pt}\nolimits_p^q(v) + R', $$ where $\|R\| \le \|v\| \mathord{\rm const}(\min K,\max K) \cdot \mathop{\rm dist}\nolimits(p,q)^2$ and $\|R'\| \le \|F_{p,q}(v)\|\mathord{\rm const}(\max|K|)\cdot\mathop{\rm dist}\nolimits(p,q)^2$. Letting $p=f^{\rho}(u)$ and $\dot p = df^{\rho}(\dot u)$, we convert \eqref{sumderiv} and \eqref{eq:Vexplizit} into the integral \begin{align*} 0 & = \int \big( \mathop{\rm Pt}\nolimits_{f(x)}^{f^{\rho}(u)} df_x(\dot u) + R'(x) -df^{\rho}(\dot u) - R(x) \big) \psi^{\rho}(u-x) dx, \end{align*} without indicating the dependence of the remainder terms $R$, $R'$ on $x$. The assumption that $f$ is $C^1$ implies that for all $x$ which contribute to the integral (i.e., $\psi^{\rho}(u-x)\ne 0$), we have $ x\to u$, $df_x(\dot u)\to df_u(\dot u)$, $f^{\rho}(x) \to f(u)$, $\mathop{\rm Pt}\nolimits \to \mathord{\rm id}$, $R\to 0$, $R'\to 0$. Observe that all these limits have at least linear convergence rate, provided $df$ is Lipschitz. With $\int\psi=1$, we obtain $$ \lim\nolimits_{\rho\to 0} \big(df_u-df^{\rho}_u\big)(\dot u) = 0, $$ where the limit is linear if $df$ is Lipschitz. This concludes the proof in the Riemannian case. In the Lie group case, it is not difficult to compute the derivatives $E_{p,q}(\dot q)$ and $F_{p,q}(\dot p)$ by means of the Baker--Campbell--Hausdorff formula, which says $ \log(e^x e^y) = x + y + {1\over 2}[x,y] + \cdots, $ where the dots indicate terms of third and higher order expressible by Lie brackets.
When $p=qe^z$ and $q$ undergo 1-parameter variations of the form $p(t)=p e^{tw}$ and $q(t)=qe^{tw}$ with $w\in\Lie g$, then \begin{align*} p(t)\mathbin{\ominus} q & = \log(e^z e^{tw}) = z + tw + {1\over 2}[z,tw] + \dots \\ p\mathbin{\ominus} q(t) & = \log(e^{-tw} e^z) = -tw+z+{1\over 2}[-tw,z]+\dots \end{align*} This implies \begin{align*} F_{p,q}(w) &=w+{1\over 2}[z,w] + \cdots ,\\ E_{p,q}(w) &= w + {1\over 2}[w,z] + \cdots \end{align*} Similarly to the Riemannian case above, we convert \eqref{sumderiv} and \eqref{eq:Vexplizit} into the integral \begin{align*} & \int \Big( df_x(\dot u) + {1\over 2}\Big[f^\rho(u)\mathbin{\ominus} f(x), df_x(\dot u) +df^\rho(\dot u)\Big] - df^\rho(\dot u) + \cdots \Big) \psi^{\rho}(u-x) \, dx =0 \end{align*} in the Lie algebra. The same arguments imply $x\to u$, $f^\rho(x)\to f(u)$, $df_x(\dot u)\to df_u(\dot u)$, and as a consequence $df^\rho\to df$ as $\rho\to 0$. This concludes the proof of Theorem \ref{thm:conv} in the Lie group case. \end{proof} \subsection{The passage from continuous to discrete data} \label{sec:last} In the analysis of multiscale decompositions one frequently assumes an infinite detail pyramid. In practice a vector-valued or manifold-valued function $f(t)$ which depends on a parameter $t\in{\mathbb R}$ is given by finitely many measurements. Such measurements might be samples at parameters $t_i = ih$, for some small $h$; or they might be modeled as averages of the form $f\gstar \phi(\cdot\, -ih)_{i\in{\mathbb Z}} $ where $\phi$ is some kernel with $\int\phi=1$ and $\mathop{\rm supp}\nolimits(\phi)$ small (in fact physics excludes the kind of measurement we called {\em samples} and permits only $\phi$ to approach the Dirac delta). In the linear case, multiscale decompositions based on midpoint-interpolation, and especially the Haar scheme, are well adapted to deal with averages: The decimation operator $D$ in this case is consistent with the definition of discrete data as follows: \begin{align*} \psi\level j & = 2^j 1_{[0,1]} (2^j\, \cdot\,) = 2^j 1_{[0,2^{-j}]} ,\quad f\level j = f\mathbin *\psi\level j, \quad c\level j = f\level j\big|_{2^{-j}{\mathbb Z}} \\ \implies c\level {j-1} & = D c\level j. \end{align*} We have no analogous relation for manifold-valued multiscale decompositions. Nevertheless we may let $$ f\level j = f\gstar \psi\level j, \quad c\level j = f\level j\big|_{2^{-j}{\mathbb Z}}. $$ In view of Theorem \ref{thm:conv}, this yields discrete data whose discrete derivatives $\Delta c\level j$ approximate the derivatives of $f$. Assuming $f$ to be $C^2$, we have \begin{align*} \Delta c\level j\idx k &:= 2^j\big (c\level j\idx {k+1} - c\level j\idx k \big) \implies \Delta c\level j\idx k = {d\over dt} f\level j \big|_{k2^{-j}} +O(2^{-j}) = {d\over dt} f\big|_{k2^{-j}} + O(2^{-j}). \end{align*} The previous equation is to be interpreted in any smooth coordinate chart of the manifold under consideration. \bigskip\centerline{\sc Acknowledgments}\bigskip The authors gratefully acknowledge the support of the Austrian Science Fund. The work of Philipp Grohs has been supported by grant No.\ P19780.
1,116,691,498,791
arxiv
\section{Introduction} Model-/parameter selection in statistical modelling is frequently justified from the maximum likelihood (ML) principle in combination with some measure of model quality (such as Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), Mallows' $C_p$, the $PRESS$ statistic etc.) that estimates the expected predictive performance for some candidate model(s) \cite{hastie09}. According to Hjorth \cite{hjort93}, the application of cross-validation measures as a methodology for model-/parameter selection in statistical applications was introduced by Stone \cite{stone74}. Stone's ideas motivated the invention of the generalised cross-validation ($GCV$) method by Golub et al. \cite{golub79}, which is a computationally efficient approximation to the leave-one-out cross-validation (LooCV) method. It is invariant under orthogonal transformations and is considered an efficient method for choosing appropriate regularisation parameter values in ridge regression (RR) modelling. The RR method was introduced to the statistics community by Hoerl and Kennard \cite{hoerl70}, and is perhaps the most important special case in the Tikhonov \cite{tikhonov63} regularisation (TR) framework of linear regression methods. The TR ideas were originally introduced to the community of numerical mathematics for solving linear discrete ill-posed problems in the context of inverse modelling. A good elementary introduction to the field is given in Hansen \cite{hansen10}. Fast and exact calculation of the LooCV-based {\it Predicted Residual Sum of Squares} ($PRESS$) statistic for ordinary least squares (OLS) regression was demonstrated by Allen \cite{allen71,allen74}. The purpose of the present paper is to demonstrate that such calculations are also available for the regularisation parameter selection problem of TR/RR at essentially no additional computational cost. In the present paper we demonstrate this as follows: \begin{enumerate} \item[i)] From the Sherman--Morrison--Woodbury updating formula for matrix inversion, see Householder \cite{householder65}, we prove a new theorem that gives the general formula for calculating the segmented cross-validation (SegCV) residuals of linear least squares regression modelling. The formula for calculating the LooCV residuals in Allen's $PRESS$ statistic \cite{allen71,allen74} follows as a corollary of this result. \item[ii)] We demonstrate how to obtain simple and fast LooCV calculations utilising the compact singular value decomposition (SVD) of a data matrix to quickly obtain $PRESS$ values associated with any choice of the regularisation parameter for a TR-problem. In particular, this enables fast graphing of the $PRESS$-values as a function of the regularisation parameter at any desired level of detail. \item[iii)] For situations where some segmented cross-validation approach is required for obtaining the relevant $PRESS$-statistic values in the regularisation parameter selection, one may experience that even the segmented cross-validation formula from our theorem becomes computationally slow. To handle such situations, we propose an approximation of the segmented ($K$-fold) cross-validation strategy by invoking the computationally inexpensive LooCV strategy after conducting an appropriate orthogonal transformation of the data matrix.
The particular orthogonal transformation is constructed from the left singular vectors of the $K$ local SVDs associated with each of the $K$ distinct cross-validation segments.\\ We demonstrate that the latter alternative provides practically useful approximations of the $PRESS$-statistic at substantial computational savings -- in particular for large datasets with many cross-validation segments (large $K$) containing either identical or highly related measurement values. \end{enumerate} \section{Mathematical preliminaries} If not otherwise stated, we assume that $\mathbf{X}$ is a centred $(n\times p)$ data matrix ($\mathbf{X}'$ denotes the transpose of $\mathbf{X}$) and that the corresponding $(n\times 1)$ vector $\mathbf{y}$ of responses is also centred. We define the scalar $\bar{y}$ and the row vector $\bar{\bf x}$ as the (column) averages of ${\bf y}$ and ${\bf X}$ obtained before centring, respectively. \subsection{Model estimation in ordinary least squares and ridge regression} In ordinary least squares (OLS) regression \cite{hastie09} one minimises the {\it residual sum of squares} \begin{equation}\label{OLS}RSS(\mathbf{b})=\|\mathbf{X}\mathbf{b}-\mathbf{y}\|^2,\end{equation} to identify the least squares solution(s) of (\ref{OLS}) with respect to the regression coefficients ${\mathbf{b}}$. A least squares solution $\mathbf{b}_{OLS}$ of (\ref{OLS}) corresponds to an exact solution of the associated {\it normal equations} \begin{equation}\label{NormEqs}\mathbf{X}'\mathbf{X}\mathbf{b}=\mathbf{X}'\mathbf{y},\end{equation} where $\mathbf{b}_{OLS}$ is unique when $\mathbf{X}'\mathbf{X}$ is non-singular. For later predictions of uncentred data, the associated vector of fitted values is given by \begin{equation}\label{FittedVals}\hat{\mathbf{y}}=\mathbf{X}\mathbf{b}_{OLS}+b_0,\end{equation} where the constant term (intercept) is $b_0 = \bar{y}-\bar{\mathbf{x}}\mathbf{b}_{OLS}$. For centred vectors/matrices, $\mathbf{y}$ and $\mathbf{X}$, this equation becomes $\hat{\mathbf{y}}=\mathbf{X}\mathbf{b}_{OLS}=\mathbf{H}\mathbf{y}$. Here, the projection matrix $\mathbf{H}$ (a.k.a. the hat matrix) is defined as \begin{equation}\label{ProjMat}\mathbf{H}\defeq\mathbf{X}(\mathbf{X}^\prime\mathbf{X})^{-1}\mathbf{X}^\prime=\mathbf{T}\mathbf{T}^\prime, \end{equation} where $\mathbf{T}$ can be chosen as any $(n\times r)$-matrix with orthonormal columns spanning the column space of the centred $\mathbf{X}$-data. For various reasons a minimiser $\mathbf{b}_{OLS}$ of $RSS(\mathbf{b})$ in equation (\ref{OLS}) is not always the most attractive choice from a predictive point of view \cite{hastie09,hansen10,kalivas12}. For instance, ${\bf X}'{\bf X}$ may be singular or poorly conditioned, or the solution of (\ref{NormEqs}) may be non-unique or otherwise inappropriate. An alternative and quite useful solution was independently recognised by Tikhonov \cite{tikhonov63}, Phillips \cite{phillips62}, and Hoerl and Kennard \cite{hoerl70}. Instead of directly minimising $RSS(\mathbf{b})$, their alternative proposal was to minimise the weighted bi-objective least squares problem \begin{equation}\label{RR} RSS_\lambda(\mathbf{b})=\|\mathbf{X}\mathbf{b}-\mathbf{y}\|^2+\lambda\|\mathbf{I}\mathbf{b}-{\bf 0}\|^2 =\|\mathbf{X}\mathbf{b}-\mathbf{y}\|^2+\lambda\|\mathbf{b}\|^2, \end{equation} where the scalar $\lambda>0$ is a fixed {\it regularisation parameter} (of appropriate magnitude), the matrix $\mathbf{I}$ is the $(p\times p)$ identity matrix and ${\bf 0}$ is a $(p\times 1)$ vector of zeros.
This formulation explicitly represents a penalisation with respect to the Euclidean $(L_2)$ norm $\|\mathbf{b}\|$ of the regression coefficients. {\color{black}The identity matrix $\mathbf{I}$ can also be replaced by an alternative regularisation matrix $\mathbf{L}$ as described in Appendix \ref{secL2}.} For a fixed $\lambda$, the unique minimiser of (\ref{RR}) is given by $\mathbf{b}_\lambda$ of equation (\ref{bRR1}) below. The rightmost part of equation (\ref{RR}) is sometimes referred to as a TR-problem in {\it standard form} \cite{hansen10}. The minimisation of equation $(\ref{RR})$ with respect to $\mathbf{b}$ is equivalent to solving the OLS problem associated with the augmented data matrix and response vector: \begin{equation}\label{Xlambda}\mathbf{X}_{\lambda}=\left[% \begin{array}{c} \mathbf{X} \\ \sqrt{\lambda}\mathbf{I} \\ \end{array}% \right], \ \ \mathbf{y}_0=\left[% \begin{array}{c} \mathbf{y} \\ {\bf 0} \\ \end{array}% \right].\end{equation} Note that linear independence of the $\mathbf{X}_\lambda$-columns trivially follows from linear independence of the $\mathbf{I}$-columns. The matrix product $\mathbf{X}_{\lambda}^\prime\mathbf{X}_{\lambda}$ in the associated normal equations \begin{equation}\label{NormalRR1}\mathbf{X}_{\lambda}^\prime\mathbf{X}_{\lambda}\mathbf{b}=\mathbf{X}_{\lambda}^\prime\mathbf{y}_0\end{equation} is therefore non-singular, and the corresponding least squares solution \begin{equation}\label{bRR1}\mathbf{b}_{\lambda}=(\mathbf{X}_{\lambda}^\prime\mathbf{X}_{\lambda})^{-1}\mathbf{X}_{\lambda}^\prime\mathbf{y}_0\end{equation} of the augmented problem \eqref{Xlambda} becomes unique. Straightforward algebraic simplifications of (\ref{NormalRR1}) result in the familiar normal equations associated with the RR-problem \begin{equation}\label{normalRR2}({\bf X}^\prime{\bf X}+\lambda{\bf I}){\bf b}={\bf X}^\prime{\bf y},\end{equation} and the solution in (\ref{bRR1}) simplifies to \begin{equation}\label{bRR2}{\bf b}_{\lambda}=({\bf X}^\prime{\bf X}+\lambda{\bf I})^{-1}{\bf X}^\prime{\bf y}.\end{equation} For subsequent applications of the $\lambda$-regularised model to uncentred $\mathbf{X}$-data, the appropriate constant term in the resulting regression model is \begin{equation} \label{constantTerm} b_{0,\lambda}=\bar{ y}-\bar{\bf x}{\bf b}_{\lambda}, \end{equation} and the associated vector of fitted values $\hat{\bf y}_{\lambda}$ is given by \begin{equation}\label{FittedValsRR}\hat{\bf y}_{\lambda}={\bf Xb}_{\lambda}+b_{0,\lambda}.\end{equation}
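The equivalence between the augmented problem \eqref{Xlambda} and the explicit solution \eqref{bRR2} is easy to verify numerically. The following minimal MATLAB sketch (assuming centred \texttt{X} and \texttt{y}; variable names are illustrative and the sketch is not one of the prototype routines from the appendices) demonstrates the agreement:
\begin{verbatim}
lambda = 0.5;                              % an arbitrary fixed regularisation value
p      = size(X, 2);
Xaug   = [X; sqrt(lambda)*eye(p)];         % augmented data matrix
y0     = [y; zeros(p, 1)];                 % augmented response vector
b_aug  = Xaug \ y0;                        % least squares on the augmented system
b_rr   = (X'*X + lambda*eye(p)) \ (X'*y);  % solution of the RR normal equations
disp(norm(b_aug - b_rr))                   % agrees up to rounding errors
\end{verbatim}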
\subsection{Calculating the $\mathbf{b}_\lambda$-solutions from the SVD} \label{Simplifications} The full SVD of $\mathbf{X} = \mathbf{U}\mathbf{S}\mathbf{V}^\prime$ yields $\mathbf{V}\bV^\prime =\mathbf{I}_p$ and $\mathbf{X}^\prime\mathbf{X} = \mathbf{V}\mathbf{S}^\prime\mathbf{S}\mathbf{V}^\prime$. The right singular vectors $\mathbf{V}$ of $\mathbf{X}$ are obviously eigenvectors for both $\mathbf{X}^\prime\mathbf{X}$ and \begin{equation}\label{XtXlambdaI}\mathbf{X}_{\lambda}^\prime\mathbf{X}_{\lambda}=(\mathbf{X}^\prime\mathbf{X}+\lambda \mathbf{I}_p)=\mathbf{V}(\mathbf{S}^\prime\mathbf{S}+\lambda\mathbf{I}_p)\mathbf{V}^\prime,\end{equation} and their corresponding eigenvalues are given by the diagonals of ${\bf S}^\prime{\bf S}$ and ${\bf S}^\prime{\bf S}+\lambda\mathbf{I}_p$, respectively. The inverse matrix $(\mathbf{X}^\prime\mathbf{X}+\lambda\mathbf{I}_p)^{-1}=\mathbf{V}(\mathbf{S}^\prime\mathbf{S}+\lambda\mathbf{I}_p)^{-1}\mathbf{V}^\prime$, and the expression (\ref{bRR2}) for the TR-regression coefficients of a problem on standard form therefore simplifies \cite{hastie09} to \begin{equation}\label{bRR3}\mathbf{b}_{\lambda}=\mathbf{V}(\mathbf{S}^\prime\mathbf{S}+\lambda\mathbf{I}_p)^{-1}\mathbf{V}^\prime\mathbf{V}\mathbf{S}^\prime\mathbf{U}^\prime\mathbf{y}=\mathbf{V}(\mathbf{S}^\prime\mathbf{S}+\lambda\mathbf{I}_p)^{-1}\mathbf{S}^\prime\mathbf{U}^\prime\mathbf{y}. \end{equation} In the following we assume that $\mathbf{X}$ has full rank, i.e., $r = rank(\mathbf{X}) = \min(n,p)$. Then there are exactly $r$ non-zero diagonal entries in $\mathbf{S}$, and the zero rows/columns of the $\mathbf{S}^\prime$-factor cancel both the associated columns in $\mathbf{V}(\mathbf{S}^\prime\mathbf{S}+\lambda\mathbf{I}_p)^{-1}$ and rows in $\mathbf{U}^\prime$. By considering the compact SVD of $\mathbf{X}=\mathbf{U}_r\mathbf{S}_r\mathbf{V}_r^\prime$ (the vanishing dimensions associated with the singular value $0$ are omitted from the factorisation), the expression (\ref{bRR3}) for the regression coefficients $\mathbf{b}_{\lambda}$ simplifies to \begin{equation}\label{bRRSVD}\mathbf{b}_{\lambda}=\mathbf{V}_r(\mathbf{S}_r^2+\lambda\mathbf{I}_r)^{-1}\mathbf{S}_r\mathbf{U}_r^\prime\mathbf{y} =\mathbf{V}_r(\mathbf{S}_r+\lambda\mathbf{S}_r^{-1})^{-1}\mathbf{U}_r^\prime\mathbf{y}=\mathbf{V}_r\mathbf{c}_\lambda, \end{equation} where the coordinate vector $\mathbf{c}_\lambda=(\mathbf{S}_r+\lambda\mathbf{S}_r^{-1})^{-1}\mathbf{U}_r^\prime\mathbf{y}=[c_{\lambda,1}\ ...\ c_{\lambda,r}]^\prime\in \R^r$ has the scalar entries \begin{equation}\label{ci} c_{\lambda,j} = \frac{\mathbf{u}_j^\prime\mathbf{y}}{s_j+\lambda/s_j}, \text{ for } 1\leq j\leq r. \end{equation} Compared to the relatively large computational costs associated with calculating the (compact) SVD of $\mathbf{X}$, calculation of the regression coefficient candidates (even for a large number of different $\lambda$-values) only requires computing the vectors $\mathbf{c}_\lambda$ according to (\ref{ci}) and the matrix-vector multiplications $\mathbf{b}_\lambda=\mathbf{V}_r\mathbf{c}_\lambda$ as derived in equation (\ref{bRRSVD}). For the regularised multivariate regression with several $(q)$ responses ${\bf Y}\in\mathbb{R}^{n\times q}$, the associated matrix of regression coefficients is \begin{equation} \label{BRRSVD} [\mathbf{b}_{1,\lambda}\ ...\ \mathbf{b}_{q,\lambda}] = \mathbf{V}_r(\mathbf{S}_r+\lambda\mathbf{S}_r^{-1})^{-1}\mathbf{U}_r^\prime\mathbf{Y}=\mathbf{V}_r\mathbf{C}_\lambda, \end{equation} where $\mathbf{C}_\lambda=(\mathbf{S}_r+\lambda\mathbf{S}_r^{-1})^{-1}\mathbf{U}_r^\prime\mathbf{Y}$ is the obvious multivariate generalisation of the vector $\mathbf{c}_\lambda$ introduced above.
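As a minimal illustration of Equations (\ref{bRRSVD})--(\ref{ci}), the following MATLAB sketch computes the regression coefficients for an entire grid of $\lambda$-values from a single compact SVD (centred \texttt{X} and \texttt{y} of full rank are assumed; variable names are illustrative):
\begin{verbatim}
[U, S, V] = svd(X, 'econ');                % compact SVD of the centred data
s  = diag(S);                              % the non-zero singular values
Uy = U'*y;                                 % fixed across all lambda-values
lambdas = logspace(-4, 5, 1000);           % grid of candidate lambda-values
B = zeros(size(X, 2), numel(lambdas));     % one coefficient vector per lambda
for t = 1:numel(lambdas)
    c = Uy ./ (s + lambdas(t)./s);         % the c_lambda coordinates
    B(:, t) = V*c;                         % b_lambda = V_r * c_lambda
end
\end{verbatim}
Only the cheap vector operations inside the loop depend on $\lambda$; the SVD is computed once.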
\subsection{Obtaining cross-validation segments by projection matrix correction}\label{presstheory} When the columns of the data matrix $\mathbf{X}$ are linearly independent, the associated OLS-solution $\mathbf{b}_{OLS}$ of the normal equations (\ref{NormEqs}) is unique, and cross-validation residuals can be derived from the Sherman--Morrison--Woodbury formula for updating matrix inverses \cite{householder65}. From Theorem \ref{TmSegCV} in the Appendix, we obtain the general segmented CV (SegCV) residuals \begin{equation}\label{eq:seg_residuals} \mathbf{r}_{(\{k\})} = [\mathbf{I}_{n_k} - \mathbf{H}_{\{k\}}]^{-1} \mathbf{r}_{\{k\}}, \end{equation} where $\{k\}$ refers to the samples of the $k$-th CV segment, $\mathbf{r}_{(\{k\})}$ refers to the vector of predicted residuals when the segment samples are not included in the modelling, $n_k$ is the number of samples in the segment and $\mathbf{H}_{\{k\}}$ is the sub-matrix of the projection matrix $\mathbf{H}$ (defined in Equation (\ref{ProjMat}) above) associated with the samples of the $k$-th CV segment. This means that updating the residuals for a given segment entails inversion of a matrix involving the entries of $\mathbf{H}$ corresponding to all pairs of sample indices of the $k$-th CV segment. The computational cost of the inversions obviously depends on the number of segments and the number of samples belonging to each segment. {\color{black}Allen \cite{allen71,allen74} suggested the $PRESS$ ({\it Predicted Residual Sum of Squares}) statistic \begin{equation}\label{PRESS_Allen} PRESS=\sum_{i=1}^n(y_i-\hat{y}_{i,(i)})^2 = \sum_{i=1}^n r_{(i)}^2, \end{equation} where $\hat{y}_{i,(i)}$ denotes the OLS prediction of the $i$-th sample when the sample has been deleted from the regression estimation, and $r_{(i)}$ is the corresponding predicted residual. With $\hat{y}_{i,(\{k\})}$ denoting the prediction of the $i$-th sample after deleting the corresponding $k$-th CV segment samples from the regression problem in \eqref{OLS}, the SegCV equivalent of the $PRESS$-statistic becomes}: \begin{equation}\label{PRESS_segmented} PRESS=\sum_{i=1}^n(y_i-\hat{y}_{i,(\{k\})})^2 = \sum_{k=1}^K \mathbf{r}_{(\{k\})}'\mathbf{r}_{(\{k\})} =\sum_{k=1}^K \sum_{i=1}^{n_k} r_{i,(\{k\})}^2. \end{equation} \noindent Here $\{k\}$ in the first sum denotes the segment containing sample $i$, and $r_{i,(\{k\})}$ are the elements of the residual vectors defined in Equation (\ref{eq:seg_residuals}).
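A minimal MATLAB sketch of the resulting SegCV computations for a plain least squares problem (without centring and intercept, so that Equation (\ref{eq:seg_residuals}) applies directly; the additional centring correction is described below) may look as follows, assuming a full column rank \texttt{X}, a response \texttt{y}, and a cell array \texttt{segments} of row indices (all names illustrative):
\begin{verbatim}
[Q, ~] = qr(X, 0);                         % orthonormal basis for the column space
H = Q*Q';                                  % hat matrix H = X(X'X)^(-1)X'
r = y - H*y;                               % ordinary model residuals
PRESS = 0;
for k = 1:numel(segments)
    idx = segments{k};                     % samples of the k-th CV segment
    rk  = (eye(numel(idx)) - H(idx, idx)) \ r(idx);  % predicted residuals
    PRESS = PRESS + rk'*rk;                % accumulate the segment contributions
end
\end{verbatim}
Note that only one small $(n_k\times n_k)$ linear system is solved per segment; no model is ever refitted.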
\subsubsection{The leave-one-out cross-validation} Corollary \ref{CoLooCV} of Theorem \ref{TmSegCV} covers the special case of LooCV, where Equation (\ref{eq:seg_residuals}) simplifies to a computationally efficient scalar formula for updating the individual residuals \begin{equation} r_{(i)} = r_i/(1-h_i). \end{equation} {\color{black}$h_i$ is often referred to as the {\it leverage value} associated with the $i$-th sample (row) in $\mathbf{X}$.} For $\hat{y}_{i,(i)}$ denoting the prediction of the $i$-th sample after deleting it from the regression modelling problem in \eqref{OLS}, the LooCV $PRESS$-statistic is given by \begin{equation}\label{PRESS}PRESS=\sum_{i=1}^n(y_i-\hat{y}_{i,(i)})^2 =\sum_{i=1}^n\left(\frac{y_i-\hat{y}_{i}}{1-h_i-1/n}\right)^2.\end{equation} \noindent In (\ref{PRESS}) $\hat{y}_{i}$ is the $i$-th entry in the vector of fitted values $\hat{\bf y}=\mathbf{X}\mathbf{b}_{OLS}+b_0$, and $h_i$ denotes the $i$-th diagonal element of the projection matrix $\mathbf{H}$ defined in (\ref{ProjMat}) above. The denominator $(1-h_i-1/n)$ scales the $i$-th model residual $(y_i-\hat{y}_i)$ to obtain the exact LooCV prediction residual $(y_i-\hat{y}_{i,(i)})$. The term $1/n$ in this denominator accounts for the centring of the $\mathbf{X}$-columns and the associated inclusion of a constant term $(b_0)$ in the regression model (\ref{FittedVals}). From the last identity in Equation (\ref{ProjMat}) it is clear that the entries of the $n$-vector $\mathbf{h} = [h_1\ h_2\ ... \ h_n]^\prime$, corresponding to the diagonal elements of $\mathbf{H}$, are identical to the squared norms of the $\mathbf{T}$-rows, i.e. \begin{equation}\label{h_leverages} \mathbf{h}=(\mathbf{T}\odot\mathbf{T}){\bf 1}. \end{equation} Here, $\mathbf{T}\odot\mathbf{T}$ denotes the Hadamard (element-wise) product of $\mathbf{T}$ with itself and ${\bf 1}\in \mathbb{R}^r$ is the constant vector with $1$'s in all entries. Appropriate choices of the matrix $\mathbf{T}$ can be obtained in various ways, including both the QR-factorisation and the SVD of $\mathbf{X}$. It should be noted that calculating the matrix inverse $(\mathbf{X}^\prime\mathbf{X})^{-1}$ in the process of finding the diagonal $\mathbf{h}$ of $\mathbf{H}$ in (\ref{ProjMat}) is neither required nor recommended in practice. In general, the explicit calculation of matrix inverses (for non-diagonal matrices) should be avoided whenever possible due to various unfavourable computational aspects, see Bj{\"o}rck \cite[Section 1.2.6]{bjorck16}.
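The complete fast LooCV computation for OLS, including the $1/n$ centring correction of Equation (\ref{PRESS}), then amounts to a few lines of MATLAB. The sketch below (illustrative names; uncentred \texttt{X} and \texttt{y} assumed) takes $\mathbf{T}$ as the left singular vectors of the centred data:
\begin{verbatim}
n  = size(X, 1);
Xc = X - mean(X, 1);  yc = y - mean(y);    % mean centring
[T, ~, ~] = svd(Xc, 'econ');               % T spans the column space of Xc
h     = sum(T.^2, 2);                      % leverages h = (T o T)*1
res   = yc - T*(T'*yc);                    % model residuals y - Hy
PRESS = sum((res ./ (1 - h - 1/n)).^2);    % exact LooCV PRESS
\end{verbatim}
No matrix inverse (and no explicit $\mathbf{H}$) is formed, in line with the recommendation above.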
\subsubsection{The generalised cross-validation} The $GCV(\lambda)$ was proposed by Golub et al. \cite{golub79} as a fast method for choosing good regularisation parameter ($\lambda$) values in RR. Here, we consider the definition \begin{equation}\label{GCVdef} GCV(\lambda)\overset{\mathrm{def}}{=\joinrel=} \sum_{i=1}^n\left(\frac{y_i-\hat{y}_{\lambda, i}}{1-\bar{h}_\lambda-1/n}\right)^2 =(1-df(\lambda)/n)^{-2}\|\mathbf{y}-\mathbf{X}\mathbf{b}_\lambda\|^2, \end{equation} where $(y_i-\hat{y}_{\lambda, i})$ is the $i$-th entry of the residual vector $\mathbf{r}_\lambda=\mathbf{y}-\hat{\mathbf{y}}_\lambda$, $\bar{h}_\lambda\defeq\frac{1}{n}\sum_{j=1}^{r}\frac{s_j}{s_j+\lambda/s_j}$ and the effective degrees of freedom $df(\lambda)\overset{\mathrm{def}}{=\joinrel=} n\bar{h}_\lambda+1$. This definition of $GCV(\lambda)$ is proportional (by the sample size $n$) to the definition given in \cite[page 216]{golub79}. The $GCV(\lambda)$ can be viewed as a rotation invariant alternative to the LooCV that provides an approximation of the $PRESS(\lambda)$-statistic defined below. From the elementary matrix-vector multiplication formula (\ref{bRRSVD}) for computing the regression coefficients $\mathbf{b}_\lambda$, it is clear that $GCV(\lambda)$ can be calculated very efficiently for a large number of different $\lambda$-values once the non-zero singular values of $\mathbf{X}$ are available. In their justification of $GCV(\lambda)$ as the preferable choice over the exact LooCV-based $PRESS(\lambda)$, Golub and co-workers stressed the unsatisfactory properties of the $PRESS$-function when the rows of $\mathbf{X}$ are exactly or approximately orthogonal. In this case the estimated regression coefficient vector $\mathbf{b}_\lambda^{(i)}$ (obtained by excluding the $i$-th row $\mathbf{x}_i$ of $\mathbf{X}$) must be correspondingly orthogonal (or nearly orthogonal) to the excluded sample $\mathbf{x}_i$. Consequently, the associated leave-one-out prediction $\hat{y}_{i,(i)}(=\mathbf{x}_i\mathbf{b}_\lambda^{(i)})$ becomes a poor estimate of the corresponding $i$-th response value $y_i$. Note that in situations such as the one just described, it makes little sense to think of the $\mathbf{X}$-data as a collection of independent random samples, and the statistical motivation for considering the LooCV idea becomes correspondingly weaker. {\color{black}In \cite{golub79} it is claimed that any parameter selection procedure should be invariant under orthogonal transformations of the $(\mathbf{X},\mathbf{y})$-data. We are sceptical of this requirement, and consider it an inexpedient restriction in the context of approximating the $PRESS$-statistic for situations where a segmented/folded cross-validation approach is appropriate.} \section{Calculation of the cross-validation based $PRESS(\lambda)$-functions} From Equations (\ref{PRESS_segmented}, \ref{PRESS}) and the matrix- and vector augmentations in Equation (\ref{Xlambda}), it is clear that the computationally fast versions of the SegCV and LooCV with the associated $PRESS$-statistic are also valid for TR-problems when the regularisation parameter $\lambda$ is treated as a fixed quantity. Below we will first handle the general case of segmented cross-validation. Thereafter we derive an equation assuring fast calculations of the regularised leverages in the vectors $\mathbf{h}_\lambda$ necessary for the LooCV situation. The required calculations are remarkably similar to a computationally efficient calculation of the fitted values $\hat{\mathbf{y}}_\lambda$ and closely related to the corresponding regularised regression coefficients $\mathbf{b}_\lambda$ in (\ref{bRRSVD}). Both $\mathbf{h}_{\lambda}$ and $\hat{\mathbf{y}}_\lambda$ (and $\mathbf{b}_\lambda$) can be obtained from the SVD of the original centred data matrix $\mathbf{X}$. This makes the computations of the exact LooCV-based $PRESS(\lambda)$-function defined in (\ref{PRESSlambda}) below about as efficient as the approximation obtained by the $GCV(\lambda)$ in (\ref{GCVdef}). \subsection{Exact $PRESS(\lambda)$-functions from the SVD of the augmented matrix $\mathbf{X}_\lambda$}\label{SVD_alt} Again, we assume that the centred $\mathbf{X}$ has full rank $r$ and that $\mathbf{X}=\mathbf{U}_r\mathbf{S}_r\mathbf{V}_r^\prime$ is the associated compact SVD. By defining $\mathbf{S}_{\lambda,r}$ to be the diagonal $r\times r$ matrix with non-zero diagonal entries $\sqrt{s_j^2+\lambda}, \ j=1,...,r$, the $r$ most dominant singular values of the augmented matrix $\mathbf{X}_{\lambda}$ in (\ref{Xlambda}) are given by the diagonal elements of $\mathbf{S}_{\lambda,r}$.
From equation (\ref{XtXlambdaI}) in Section \ref{Simplifications}, the right singular vectors $\mathbf{V}_r$ of $\mathbf{X}$ are also the right singular vectors of $\mathbf{X}_\lambda$, and the associated $r$ left singular vectors are given by \begin{equation}\label{Ulambda}\mathbf{T}_{\lambda,r}= \mathbf{X}_{\lambda}\mathbf{V}_r\mathbf{S}_{\lambda,r}^{-1}=\left[% \begin{array}{c} \mathbf{X}\mathbf{V}_r\mathbf{S}_{\lambda,r}^{-1} \\ \sqrt{\lambda}\mathbf{I}\mathbf{V}_r\mathbf{S}_{\lambda,r}^{-1} \\ \end{array}% \right]=\left[% \begin{array}{c} \mathbf{U}_r\mathbf{S}_r\mathbf{S}_{\lambda,r}^{-1} \\ \sqrt{\lambda}\mathbf{V}_r\mathbf{S}_{\lambda,r}^{-1} \\ \end{array}% \right]=\left[% \begin{array}{c} \mathbf{U}_{\lambda,r} \\ \sqrt{\lambda}\mathbf{V}_r\mathbf{S}_{\lambda,r}^{-1} \\ \end{array}% \right],\end{equation} where the matrix $\mathbf{U}_{\lambda,r}\defeq \mathbf{U}_r\mathbf{S}_r\mathbf{S}_{\lambda,r}^{-1}$ denoting the upper $n$ rows of $\mathbf{T}_{\lambda,r}$ is the part of actual interest (the additional left singular vectors not included in (\ref{Ulambda}) are all zeros in the upper $n$ entries). Because $\mathbf{S}_r\mathbf{S}_{\lambda,r}^{-1}$ is $(r\times r)$ diagonal, $\mathbf{U}_{\lambda,r}$ is obtained by scaling the $j$-th column $(1\leq j\leq r)$ of $\mathbf{U}_r$ with $\sqrt{s_j/(s_j+\lambda/s_j)}$. From the above definition of $\mathbf{U}_{\lambda,r}$, calculation of the $PRESS$-residuals associated with the $n$ original $(\mathbf{X},\mathbf{y})$ data points in the augmented least squares problem $\mathbf{X}_\lambda\mathbf{b}=\mathbf{y}_0$ is straightforward. According to Equations (\ref{ProjMat}, \ref{Ulambda}), the regularised hat matrix $\mathbf{H}_\lambda$ is given by \begin{equation}\label{hlambdaSeg}\mathbf{H}_\lambda= \mathbf{U}_{\lambda,r} \mathbf{U}^\prime_{\lambda,r}. \end{equation} For each choice of the regularisation parameter $\lambda>0$ and the corresponding expression for the regression coefficients $\mathbf{b}_\lambda$ in Equation (\ref{bRRSVD}), the fitted values are \begin{equation}\label{yhat}\hat{\mathbf{y}}_\lambda=\mathbf{X}\mathbf{b}_\lambda+b_{0,\lambda}=(\mathbf{U}_r\mathbf{S}_r)\mathbf{c}_\lambda+b_{0,\lambda}= \mathbf{H}_\lambda\mathbf{y}+b_{0,\lambda}.\end{equation} Hence, \begin{equation}\label{PRESSlambdaSeg}PRESS(\lambda)\overset{\mathrm{def}}{=\joinrel=} \sum_{k=1}^K \| [\mathbf{I}_{n_k} - \mathbf{H}_{\lambda,\{k\}}-\tfrac{1}{n}{\bf 1}_{n_k}{\bf 1}_{n_k}^\prime]^{-1} (\mathbf{y}_{\{k\}}-\hat{\mathbf{y}}_{\lambda,\{k\}})\|^2,\end{equation} where ${\bf 1}_{n_k}{\bf 1}_{n_k}^\prime$ is the $(n_k\times n_k)$ matrix of ones (so that $1/n$ is subtracted from every entry of the segment sub-matrix to account for the centring), $\mathbf{y}_{\{k\}}-\hat{\mathbf{y}}_{\lambda,\{k\}}$ is the sub-vector of the residual vector ${\mathbf{r}}_\lambda=\mathbf{y}-\hat{\mathbf{y}}_\lambda$ corresponding to the $k$-th CV segment and $\mathbf{H}_{\lambda,\{k\}}$ is the associated sub-matrix of $\mathbf{H}_\lambda$. While Equation (\ref{PRESSlambdaSeg}) defines the general, segmented cross-validation case, the special case of LooCV simplifies considerably. Only the diagonal entries of $\mathbf{H}_\lambda$ (the sample leverages) are required, i.e., Equation (\ref{PRESSlambdaSeg}) simplifies to \begin{equation}\label{PRESSlambda}PRESS(\lambda)\overset{\mathrm{def}}{=\joinrel=} \sum_{i=1}^n\left(\frac{y_i-\hat{y}_{\lambda,i}}{1-h_{\lambda,i}-1/n}\right)^2.\end{equation} Note that $\bar{h}_\lambda$ in the denominator of Equation (\ref{GCVdef}) defining $GCV(\lambda)$ is identical to the mean of the $\mathbf{h}_\lambda$-entries, i.e. $\bar{h}_\lambda=(1/n)\sum_{i=1}^nh_{\lambda,i}$, due to the fact that $\mathbf{U}_r$ is an orthogonal matrix.
Also note that the diagonal entries of $\mathbf{H}_\lambda$ can be calculated directly by \begin{equation}\label{hlambda}\mathbf{h}_\lambda= (\mathbf{U}_{\lambda,r}\odot \mathbf{U}_{\lambda,r}){\bf 1}=(\mathbf{U}_r\odot \mathbf{U}_r)\mathbf{d}_{\lambda}, \end{equation} where the coefficient vector $\mathbf{d}_{\lambda}=[d_{1,\lambda}\ ...\ d_{r,\lambda}]^\prime=(\mathbf{S}_r\mathbf{S}_{\lambda,r}^{-1})^2{\bf 1}\in \R^r$ has the entries \begin{equation}\label{di} d_{j,\lambda} = \frac{s_j^2}{s_j^2+\lambda}=\frac{s_j}{s_j+\lambda/s_j},\text{ for } 1\leq j\leq r. \end{equation} Consequently, the evaluation of the $PRESS(\lambda)$-function defined in (\ref{PRESSlambda}) is essentially available at the additional computational cost of two matrix-vector multiplications (Equations (\ref{yhat}, \ref{hlambda})), where the matrices ($\mathbf{U}_r\mathbf{S}_r$ and $\mathbf{U}_r\odot\mathbf{U}_r$) are fixed and the associated coefficient vectors $\mathbf{c}_\lambda$ and $\mathbf{d}_\lambda$ are obtained by elementary arithmetic operations for each choice of $\lambda>0$. {\color{black} A note on the number of floating point operations (flops) required for the fast calculation of the LooCV-based $PRESS(\lambda)$-function is included in Appendix \ref{floploocv}}.
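To summarise, a minimal MATLAB sketch of the fast $PRESS(\lambda)$- and $GCV(\lambda)$-computations for a TR-problem in standard form may look as follows (uncentred \texttt{X} and \texttt{y} of full rank assumed; variable names are illustrative and the prototype routines in the appendices are organised differently):
\begin{verbatim}
n  = size(X, 1);
Xc = X - mean(X, 1);  yc = y - mean(y);      % mean centring
[U, S, ~] = svd(Xc, 'econ');  s = diag(S);   % compact SVD (computed once)
Uy = U'*yc;  U2 = U.^2;                      % fixed across all lambda-values
lambdas = logspace(-4, 5, 1000);
PRESS = zeros(size(lambdas));  GCV = PRESS;
for t = 1:numel(lambdas)
    lam = lambdas(t);
    c   = Uy ./ (s + lam./s);                % the c_lambda coordinates
    d   = s.^2 ./ (s.^2 + lam);              % the d_lambda coefficients
    res = yc - U*(s.*c);                     % residuals y - X*b_lambda
    h   = U2*d;                              % regularised leverages
    PRESS(t) = sum((res ./ (1 - h - 1/n)).^2);
    GCV(t)   = sum((res ./ (1 - mean(h) - 1/n)).^2);
end
\end{verbatim}
Only the two matrix-vector products \texttt{U*(s.*c)} and \texttt{U2*d} depend on $\lambda$, in accordance with the flop count in Appendix \ref{floploocv}.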
\subsection{Alternative strategies for estimating the SegCV-based $PRESS(\lambda)$-function} The LooCV calculations in the previous section can be implemented at low computational costs dominated by the SVD of $\mathbf{X}$. The SegCV version, however, also involves inversion of several matrices associated with each combination of the regularisation parameter value $\lambda$ and cross-validation segment. In situations with many CV segments, e.g., defined by relatively small groups of replicates, the additional computational costs may be acceptable as the matrices to be inverted are small. However, for large datasets with few segments, e.g., 5--10, the required amount of computations may be rather large (comparable to explicitly holding out samples and recalculating a full TR model from scratch for each CV segment). We therefore describe two alternative strategies for speeding up the calculations. The first one is based on approximating the $PRESS$-values, while the second strategy involves clever usage of a small subset of exact $PRESS(\lambda)$-values to estimate the minimum of the $PRESS(\lambda)$-function and/or the complete $PRESS(\lambda)$ curve within some range of the regularisation parameter value. \subsubsection{$PRESS(\lambda)$ approximated by segmented virtual cross-validation -- VirCV} We will consider a faster alternative for approximating the SegCV approach for the type of situations just described. In the following we assume (without loss of generality) that the uncentred data matrix \begin{equation}\label{UnceteredForT} \mathbf{X} = \left[% \begin{array}{c} \mathbf{X}_1 \\ \mathbf{X}_2 \\ : \\ \mathbf{X}_K \end{array}% \right]\text{together with the uncentred response vector } \mathbf{y} = \left[% \begin{array}{c} \mathbf{y}_1 \\ \mathbf{y}_2 \\ : \\ \mathbf{y}_K \end{array}% \right]\ (K\geq 2) \end{equation} is composed of $K$ distinct sample segments. For $1\leq k\leq K$, we assume that $\mathbf{U}_k\mathbf{S}_k\mathbf{V}_k^\prime = \mathbf{X}_k$ denotes the compact SVD of segment number $k$, and that $n_k$ is the number of rows in $\mathbf{X}_k$ so that the total number of samples is $n = \sum_{k=1}^{K}n_k$. From the SVD of the $k$-th segment we obtain the identity $\mathbf{U}_k^\prime\mathbf{X}_k = \mathbf{S}_k\mathbf{V}_k^\prime$. Consequently, the orthogonal transformation performed by left multiplication with the $(n_k\times n_k)$ matrix $\mathbf{U}_k^\prime$ transforms the sample segment $\mathbf{X}_k$ into a matrix of strictly orthogonal rows. Now we define the two block diagonal matrices \begin{equation} \label{tdef} \mathbf{T} = \left[% \begin{array}{cccc} \mathbf{U}_1 & & & \\ & \mathbf{U}_2 & & \\ & & \ddots & \\ & & & \mathbf{U}_K \end{array}% \right] \text{ and } \tilde{\mathbf{T}}=\left[\begin{array}{cccc} \mathbf{T} & \bm{0} \\ \bm{0} & \mathbf{I}\end{array}\right], \end{equation} with the properties $\mathbf{T}^\prime\mathbf{T}=\mathbf{T}\bT^\prime=\mathbf{I}$ and $\tilde{\mathbf{T}}^\prime\tilde{\mathbf{T}}=\tilde{\mathbf{T}}\tilde{\mathbf{T}}^\prime=\mathbf{I}$, i.e., both $\mathbf{T}$ and $\tilde{\mathbf{T}}$ are orthogonal. The formulation of TR-modelling for uncentred $\mathbf{X}$ and explicit inclusion of the constant term corresponds to finding the least squares solution of the linear system \begin{equation}\label{boeq1} \left[\begin{array}{cc} \bm{1} & \mathbf{X} \\ \bm{0} & \sqrt{\lambda}\mathbf{I}\end{array}\right] \cdot \left[\begin{array}{c} b_0 \\ \bm{b}\end{array}\right] = \left[\begin{array}{c} \mathbf{y} \\ \bm{0} \end{array}\right], \end{equation} and left multiplication of \eqref{boeq1} by the orthogonal matrix $\tilde{\mathbf{T}}^\prime$ yields the system \begin{equation}\label{boeq2} \left[\begin{array}{cc} \mathbf{T}^\prime\bm{1} & \mathbf{T}^\prime\mathbf{X} \\ \bm{0} & \sqrt{\lambda}\mathbf{I}\end{array}\right] \cdot \left[\begin{array}{c} b_0 \\ \bm{b}\end{array}\right] = \left[\begin{array}{c} \mathbf{T}^\prime\mathbf{y} \\ \bm{0} \end{array}\right]. \end{equation} Note that the associated normal equations of the systems in \eqref{boeq1} and \eqref{boeq2} are identical. Hence, their least squares solutions are also identical. {\bf Definition of the segmented virtual cross-validation} \\ We define the \textit{segmented virtual cross-validation (VirCV)} strategy as the process of applying the LooCV strategy to the transformed system in equation \eqref{boeq2}. As noted above, multiplication by $\mathbf{T}^\prime$ has the effect of orthogonalising the rows within each of the $K$ segments of the $\mathbf{X}$ matrix. The heuristic argument for justifying the VirCV approach as an approximation of a SegCV approach is that the rows within each transformed data segment are unsupportive of each other under the LooCV strategy (due to the internal "decoupling" of each segment into a set of mutually orthogonal row vectors). However, from practical cases it can be observed that the accuracy of this approximation depends on the level of similarity between the original samples within each segment of data points. Note that contrary to the LooCV, the $GCV$ is not useful in combination with the VirCV strategy. The reason for this is that the singular values of $\mathbf{X}$ are invariant under orthogonal transformations.
From equation (\ref{GCVdef}) and the definition of $\bar{h}_\lambda$ it follows that $GCV(\lambda)$ is also invariant under orthogonal transformations, i.e., the systems in \eqref{boeq1} and \eqref{boeq2} lead to the same $GCV(\lambda)$-function. With the VirCV we are clearly cross-validating on the orthogonal phenomena caused by the samples within each segment. As all the samples in a segment contribute to identifying these directions, the VirCV cannot be expected to provide exactly the same results as the SegCV. One may, however, expect that when the different segments are carefully arranged to contain highly similar samples only (which is a reasonable assumption to make for most organised studies with such data segments), then the VirCV should provide a useful approximation to the SegCV. This will be demonstrated in the application section below. For special situations deviating from highly similar samples in the segments, see Appendix \ref{app:VirCVsituations}. \vspace{3mm} \noindent {\bf Computational aspects in the leverage corrections for the VirCV}\\ As noted in association with (\ref{UnceteredForT}), the VirCV procedure requires an initial calculation of the transformation $\mathbf{T}$ from the segments of the uncentred $\mathbf{X}$-data. For a successful (correct) implementation of the computational shortcuts similar to those of the LooCV, it is necessary to mean centre the data matrix $\mathbf{X}$ prior to executing the $\mathbf{T}$-transformation and the least squares modelling. In practice, one must therefore mean centre the data prior to the multiplication with $\mathbf{T}^\prime$ (or, equivalently, one can multiply by $\mathbf{T}^\prime$ and subtract the projection of the transformed data onto the transformed vector $\mathbf{T}^\prime\bm{1}$ of ones). As $\mathbf{T}$ is an orthogonal transformation, the angles and in particular the orthogonality between vectors will be preserved. For the transformed data, modelling by including a constant term is therefore associated with the transformed vector $\mathbf{T}^\prime\bm{1}$ of ones. With $\mathbf{X}_c$ and $\mathbf{y}_c$ denoting the centred data matrix and the associated centred response vector, respectively, the vector $\mathbf{T}^\prime\bm{1}$ is orthogonal to the columns of the transformed centred data $\mathbf{T}^\prime\mathbf{X}_c$ and $\|\mathbf{T}^\prime\bm{1}\|=\|\bm{1}\|=\sqrt{n}$. The justification for the leverage correction described earlier therefore still holds, but the particular correction terms ($1/n$) change. {\color{black} With the transformed centred predictors $\tilde{\mathbf{X}}=\mathbf{T}^\prime\mathbf{X}_c$ and responses $\tilde{\mathbf{y}}=\mathbf{T}^\prime\mathbf{y}_c$ in \eqref{boeq2}, and the associated fitted values $\hat{\tilde{\mathbf{y}}}_\lambda=\tilde{\mathbf{X}}\mathbf{b}_\lambda$}, the $PRESS$-function for the VirCV is given by \begin{equation}\label{PRESSVirCV}PRESS_{VirCV}(\lambda)=\sum_{i=1}^n(\tilde{{y}}_i-\hat{\tilde{{y}}}_{\lambda,i,(i)})^2 =\sum_{i=1}^n\left(\frac{\tilde{{y}}_i-\hat{\tilde{{y}}}_{\lambda,i}}{1-h_{\lambda,i}-m_i/n}\right)^2.\end{equation} Here the leverages $h_{\lambda,i}$ are calculated as in (\ref{hlambda}) based on the transformed version $\tilde{\mathbf{X}}$ of the centred data, and the numerators of the correction terms are the entries of the vector $\mathbf{m}=\mathbf{T}^\prime\bm{1}\odot\mathbf{T}^\prime\bm{1}\in\mathbb{R}^n$. This means that the correction term $1/n$ in the denominator of (\ref{PRESSlambda}) must be replaced by $m_i/n$ in (\ref{PRESSVirCV}), where $m_i$ denotes the $i$-th entry of the vector $\mathbf{m}$ (consistent with the orthogonal transformation of the regularised least squares problem).
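A minimal MATLAB sketch of the VirCV for a single regularisation parameter value \texttt{lam} is given below, assuming uncentred \texttt{X} and \texttt{y} and a cell array \texttt{segments} of row indices (illustrative names; full local SVDs are used so that each $\mathbf{U}_k$ is a square orthogonal matrix):
\begin{verbatim}
n  = size(X, 1);
Xc = X - mean(X, 1);  yc = y - mean(y);      % mean centring
TX = zeros(size(X)); Ty = zeros(n, 1); m = zeros(n, 1);
for k = 1:numel(segments)
    idx = segments{k};
    [Uk, ~, ~] = svd(X(idx, :));             % full (n_k x n_k) local SVD
    TX(idx, :) = Uk' * Xc(idx, :);           % transformed centred segment
    Ty(idx)    = Uk' * yc(idx);
    m(idx)     = (Uk' * ones(numel(idx), 1)).^2;  % entries of T'1, squared
end
[U, S, ~] = svd(TX, 'econ');  s = diag(S);   % SVD of the transformed data
c   = (U'*Ty) ./ (s + lam./s);
res = Ty - U*(s.*c);                         % transformed model residuals
h   = (U.^2) * (s.^2 ./ (s.^2 + lam));       % leverages of the transformed data
PRESS_VirCV = sum((res ./ (1 - h - m/n)).^2);
\end{verbatim}
After the loop, the computation is identical to the fast LooCV except for the $m_i/n$ correction terms.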
{\color{black}A comparison of the number of flops required for the VirCV and the SegCV is included in Appendix \ref{flopVirCV}}. {\color{black}\subsubsection{Approximated $PRESS$-function using subsets of $\lambda$} {\bf Minimum $PRESS$-value estimation} \\ If TR is used in an automated system (without subjective assessment), or if only the optimal $PRESS(\lambda)$ is needed, we can avoid redundant calculations by searching for the $\lambda$-value that minimises (\ref{PRESSlambdaSeg}) instead of calculating a large range of solutions. A possible approach for such a search can be based on the golden section search with parabolic interpolation \cite{brent1973algorithms}. This method performs a search for the minimal function value over a bounded interval of a single parameter. To leverage the previously described efficient computations of the fitted values $\hat{\mathbf{y}}_\lambda$, the coefficient vectors $\mathbf{d}_\lambda$, etc., the search for the minimum of $PRESS(\lambda)$ is then performed over a fixed set of $\lambda$-values. The grid of $\lambda$-values can have high resolution while still achieving a considerable advantage in computational speed compared to the exhaustive $PRESS$-function calculations. It is well known that this type of function minimisation cannot guarantee that the optimal value is found; however, the $PRESS$-functions of interest often have relatively smooth and simple graphs, where a global minimum over the $\lambda$-interval of interest can be found with high accuracy. {\bf $PRESS(\lambda)$-function estimation by spline interpolation} \\ In cases where estimating the detailed $PRESS(\lambda)$-function is beneficial, e.g., for plotting and inspection, it may be possible to reduce the number of accurate $PRESS(\lambda)$-evaluations quite substantially without sacrificing much precision in the estimation. We propose a cubic spline strategy, where the $PRESS(\lambda)$-function is estimated from a small set of distinct $\lambda$-values, and new values are added to the set iteratively until the difference between the estimated and true $PRESS$-values falls below a chosen threshold for all $\lambda$-values in the extended set. The latter is determined by cross-validation of the cubic spline interpolation, i.e., a low-cost operation. As with the $PRESS$-minimisation procedure, we consider a fixed set of $\lambda$-values from which we choose starting points and select subsequent values. The $\lambda$-values extending the set in each iteration are the ones halfway to the neighbours of the chosen $\lambda$-values on both sides, effectively doubling the local density of $\lambda$-values where needed (i.e., where the spline approximation is least accurate). Starting values for the initial set of $\lambda$-values can be chosen equidistantly (on a log$_{10}$ scale) or taken as the sequence obtained using the above "Minimum $PRESS$-value estimation" strategy. Experience with real datasets indicates that the latter is an efficient strategy that may provide close to exact estimation of the minimum $PRESS$-value.}
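As a sketch of the minimum-$PRESS$ search, note that MATLAB's \texttt{fminbnd} implements the golden section search with parabolic interpolation of \cite{brent1973algorithms}. In the illustration below, \texttt{press\_tr} is an assumed helper function evaluating the fast $PRESS(\lambda)$ (e.g., the loop body of the earlier sketch, closing over the precomputed SVD quantities); the continuous search on a $\log_{10}$ scale is a variant of the fixed-grid search described above, with the result snapped to the grid:
\begin{verbatim}
lgrid  = logspace(-4, 5, 1000);                        % fixed grid of lambda-values
f      = @(loglam) press_tr(10^loglam);                % assumed PRESS evaluator
loglam = fminbnd(f, log10(lgrid(1)), log10(lgrid(end)));
[~, i] = min(abs(log10(lgrid) - loglam));              % snap to the nearest grid value
lam_min = lgrid(i);                                    % estimated PRESS-minimal lambda
\end{verbatim}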
\subsection{A short note on model selection heuristics} With the key formulas derived above we obtain efficient model selection procedures from minimising the $PRESS(\lambda)$- or the $GCV(\lambda)$-functions with respect to the regularisation parameter $\lambda$. However, the minima of these functions will not necessarily assure the selection of the best model in terms of future predictions. This is particularly the case when the $PRESS$- and $GCV$-functions are relatively flat over a large interval of $\lambda$-values containing the minimum. In such situations it is often useful to invoke heuristic principles such as {\it Occam's razor} for identifying a simpler model (in terms of the norm of the regression coefficients) at a small additional cost in terms of the $PRESS$ (or the $GCV$): The '{\bf 1 standard error rule}' described in \cite{hastie09} obtains a simpler (more regularised) alternative by selecting a model where the $PRESS$-statistic is within one standard error of the $PRESS$-minimal model. More precisely, we first identify the minimum $PRESS$ value and calculate the standard error of the squared cross-validation errors associated with this model. Then the largest regularisation parameter value where the associated model has a $PRESS$-statistic within one standard error of the $PRESS$-minimum is selected. The '{\bf $\chi^2$ model selection rule}' to determine the regularisation parameter was originally introduced for model selection with Partial Least Squares regression modelling \cite{indahl05}. By assuming that the residuals associated with the minimum value $PRESS_{min}$ of $PRESS(\lambda)$ are randomly drawn from a normal distribution with (unknown) variance $\sigma^2$, the statistic $PRESS_{min}/\sigma^2$ follows a $\chi_n^2$ distribution (where $n$ is the degrees of freedom). By fixing a particular significance level $\alpha$, the selection rule says: {\it Choose the largest possible value of $\lambda$ so that $n\cdot PRESS_{min}/PRESS(\lambda)\geq \chi_{n,\alpha}^2$}. Here, $\chi_{n,\alpha}^2$ is the lower $\alpha$-quantile of the $\chi_{n}^2$ distribution, and $PRESS(\lambda)/n$ is substituted for $\sigma^2$. Based on the efficient formulas for calculating the $PRESS(\lambda)$-function, both these model selection alternatives can be implemented without affecting the total computational costs significantly.
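A hedged MATLAB sketch of the two heuristics is given below. It assumes a vector \texttt{PRESSvals} evaluated over an ascending grid \texttt{lambdas}, the index \texttt{imin} of its minimum, and the $(n\times 1)$ squared cross-validation residuals \texttt{e2min} of the $PRESS$-minimal model (all names illustrative; \texttt{chi2inv} requires the Statistics and Machine Learning Toolbox):
\begin{verbatim}
n      = numel(e2min);
% 1 S.E. rule: largest lambda whose mean CV error is within one standard
% error of the minimum (the squared errors are treated as independent)
cvmean = PRESSvals / n;
se     = std(e2min) / sqrt(n);
i1se   = find(cvmean <= cvmean(imin) + se, 1, 'last');
% chi^2-rule: largest lambda with n*PRESS_min/PRESS(lambda) >= chi2_{n,alpha}
alpha  = 0.2;
ichi   = find(n*PRESSvals(imin) ./ PRESSvals >= chi2inv(alpha, n), 1, 'last');
lam_1se = lambdas(i1se);  lam_chi2 = lambdas(ichi);
\end{verbatim}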
\section{Applications}\label{Sec: applications} In the following we demonstrate some applications of our fast cross-validation approaches for model selection within the TR framework for several real world datasets. We consider situations where both leave-one-out and segmented cross-validation are appropriate. The required algorithms were implemented and executed in MATLAB, and prototype code is given in Appendices \ref{trcode}-\ref{VirCVcode}. A corresponding implementation in R-code will be made available upon publication at https://CRAN.R-project.org/package=TR. We used a computer running Mac OS Ventura 13.0.1 and MATLAB R2022a, with 16\,GB RAM, and an M1 Pro 10 core processor. For the derivative regularisation we use the full rank approximations described in Section \ref{secL2} with the scaling coefficient set to $\epsilon=10^{-10}$ in the appended rows in the discrete regularisation matrices. This is done to mitigate the numerical impact from these rows in the resulting regression coefficients. \subsection{The fast leave-one-out cross-validation} \subsubsection{Datasets} The following datasets will be considered in the examples presented below: \begin{enumerate} \item \textit{Octane data} \cite{kalivas97}. This dataset consists of near infrared (NIR) spectra of gasoline. There are $60$ samples and $401$ features (wavelengths in the range $900\,nm-1700\,nm$). The response value is the octane number measured for each sample. \item \textit{Pork fat data} \cite{lyndgaard12}. This dataset consists of Raman spectra measured on pork fat tissue. There are $105$ samples, $5567$ features (wavenumbers in the range $1889.9\,cm^{-1}-200.1\,cm^{-1}$), and $19$ different responses. For modelling and prediction we only consider the response consisting of saturated fatty acids as a percentage of total fatty acids, hereafter referred to as SFA. \item \textit{Prostate gene data} \cite{singh02}. This is a microarray gene expression dataset. There are $102$ samples, and the gene expression of $12600$ different genes was measured. The response is binary (cancer/not cancer), and we consider the dummy-regression approach to the underlying classification problem. For this dataset we standardise the data prior to modelling. The standardisation will introduce a small bias in the model selection that will be discussed later. \end{enumerate} \noindent For all datasets we have used approximately $2/3$ of the available samples for model building and selection. The remaining $1/3$ of the samples were used for testing the selected models. We considered the following model selection alternatives for identifying good regularisation parameter candidates: (i) $PRESS_{min}$ -- the minimum $PRESS(\lambda)$-value, (ii) $GCV_{min}$ -- the minimum $GCV(\lambda)$-value, (iii) the $1$ standard error rule for $PRESS(\lambda)$, (iv) the $\chi^2$-rule for $PRESS(\lambda)$ using the significance level $\alpha=0.2$. \subsubsection{Model selection and prediction} For each dataset, the modelling was based on $1000$ regularisation parameter candidate values spaced uniformly on a log-scale. For the octane data the displayed values were in the range $10^{-4}$ to $10^5$, for the Pork fat data in the range $10^{2}$ to $10^{25}$, and for the Prostate data in the range $10^{-1}$ to $10^{8}$. Different ranges were chosen for each dataset to avoid irrelevant levels of regularisation, and to obtain a good visualisation of the $PRESS$- and $GCV$ curves including the located minima. In Figures \ref{fig:octanermsecv}--\ref{fig:prostatermsecv} the $PRESS/n$ and $GCV/n$ are plotted as functions of the regularisation parameter for the different datasets and the different choices of the regularisation matrix. Such plots are useful for model selection as they allow for a direct comparison of the model quality for different values of the regularisation parameter. Division of the $PRESS$- and $GCV$ values by the sample size $n$ makes the model selection statistics directly comparable to the prediction results obtained for the test sets. The test set results are shown in Tables \ref{tab:rmsepoctane}--\ref{tab:rmsepprostate}.
{\color{black}For the prostate data, the percentage correctly classified on the training set using cross-validation (classifying each sample to the largest of the fitted target values when using $0/1$ dummy-coding for the group memberships) is $91.2\%$ for all the parameter selection methods (it should be noted that this number happens to be identical to the test set result for most of the parameter selection methods).} It should be noted that most of the displayed $PRESS$- (and $GCV$-) curves are relatively flat without a very distinct minimum point. Therefore it may be advantageous to employ either the $1$ S.E. rule or the $\chi^2$-rule to assure the selection of a simpler model. For the Prostate data, in particular, we note that the smallest available candidate regularisation parameter value provides the minimum $PRESS$-value. The effect in terms of prediction when using the $1$ S.E. rule or the $\chi^2$-rule to obtain a simpler model varies between the datasets. For the Pork fat data the $\chi^2$-rule gives better prediction than the other parameter selection methods for the SFA response, while the $\chi^2$-rule selects a poorer model than the other parameter selection methods on the Prostate data. For the most precise identification of the $PRESS$- and $GCV$-minima a numerical optimiser should be used. However, in most practical situations the suggested strategy of considering just a subset of candidate regularisation parameter values is usually good enough for approximating the minima before doing the subsequent identification of parsimonious models (based on the principle of Occam's razor) that predict well. \begin{figure}[!htb] \includegraphics[width=0.975\textwidth]{graphics/octanermsecvgcv.png} \caption{\textit{Octane data}. $PRESS/n$ and $GCV/n$ for a range of regularisation parameter values and different regularisation matrices. \textit{Top}: $L_2$ regularisation. \textit{Middle}: 1st derivative regularisation. \textit{Bottom}: 2nd derivative regularisation. The minimum $PRESS$ and $GCV$ values have been marked, as well as the regularisation parameter values selected by the $1$ S.E. rule and the $\chi^2$-rule.} \label{fig:octanermsecv} \end{figure} \begin{figure}[!htb] \includegraphics[width=0.975\textwidth]{graphics/porkrmsecv.png} \caption{\textit{Pork fat data and SFA response.} $PRESS/n$ and $GCV/n$ for a range of regularisation parameter values and different regularisation matrices. \textit{Top}: $L_2$ regularisation. \textit{Middle}: 1st derivative regularisation. \textit{Bottom}: 2nd derivative regularisation. The minimum $PRESS$ and $GCV$ values have been marked, as well as the regularisation parameter values selected by the $1$ S.E. rule and the $\chi^2$-rule.} \label{fig:porkrmsecv} \end{figure} \begin{figure}[!htb] \includegraphics[height=0.5\textwidth]{graphics/prostatermsecvgcvl2.png} \caption{\textit{Prostate data.} $PRESS/n$ and $GCV/n$ for a range of regularisation parameter values using $L_2$ regularisation. The minimum $PRESS$ and $GCV$ values have been marked, as well as the regularisation parameter values selected by the $1$ S.E.
rule and the $\chi^2$-rule.} \label{fig:prostatermsecv} \end{figure} \begin{table} \begin{tabular}{|c|c|c|c|} \hline \backslashbox{Parameter selection method}{Regularisation type} & $L_2$ & First derivative & Second derivative \\ \hline Minimum PRESS value & 0.057 & 0.047 & 0.038 \\ \hline Minimum GCV value & 0.057 & 0.047 & 0.039 \\ \hline PRESS and 1 standard error rule & 0.059 & 0.045 & 0.036\\ \hline PRESS and $\chi^2$-rule & 0.073 & 0.047 & 0.039 \\ \hline \end{tabular} \caption{\textit{Octane data.} $MSE$ (from test data) using various regularisation types and parameter selection methods.} \label{tab:rmsepoctane} \end{table} \begin{table} \begin{tabular}{|c|c|c|c|} \hline \backslashbox{Parameter selection method}{Regularisation type} & $L_2$ & First derivative & Second derivative \\ \hline Minimum PRESS value & 4.46 & 5.39 & 5.56 \\ \hline Minimum GCV value & 4.36 & 5.45 & 5.58 \\ \hline PRESS and 1 standard error rule & 4.58 & 5.56 & 5.72 \\ \hline PRESS and $\chi^2$-rule & 4.11 & 4.32 & 4.20\\ \hline \end{tabular} \caption{\textit{Pork fat data.} $MSE$ (from test data) for the SFA response using various regularisation types and parameter selection methods.} \label{tab:rmsepporkfatSFA} \end{table} \begin{table} \begin{tabular}{|c|c|} \hline Parameter selection method & PCC test set\\ \hline Minimum PRESS value & 91.2 \\ \hline Minimum GCV value & 91.2 \\ \hline PRESS and 1 standard error rule & 91.2 \\ \hline PRESS and $\chi^2$-rule & 88.2 \\ \hline \end{tabular} \caption{\textit{Prostate data.} Percentage of correctly classified (PCC) samples using the test set predictions of the selected $0 - 1$ dummy regression model based on $L_2$ regularisation.} \label{tab:rmsepprostate} \end{table} \subsubsection{Regression coefficients} Figure \ref{fig:octanespec} shows the octane data together with the $PRESS$-minimal regression coefficients using the $L_2$-, the first derivative-, and the second derivative regularisations. Note that the choice of regularisation matrix heavily influences the appearance of the regression coefficients without the minimum $PRESS$- or $GCV$ values changing much. Table \ref{tab:rmsepoctane} confirms that the predictive powers are relatively similar for all these models. Doing consistent model interpretations solely based on the regression coefficients in Figure \ref{fig:octanespec} is obviously a challenging (if not impossible) task, see also \cite{brown09}. \begin{figure}[!htb] \includegraphics[height=0.5\textwidth]{graphics/octanespectrabcoefs.png} \caption{\textit{Octane data. Top}: Plot of the NIR spectra of octane. \textit{Bottom}: $PRESS$-minimal regression coefficients based on different regularisation matrices.} \label{fig:octanespec} \end{figure} \subsubsection{Computational speed} Table \ref{tab:time} shows the computation times for model selection with the different datasets and different types of regularisation when varying the number of regularisation parameter candidate values. The times in Table \ref{tab:time} also include the computation of the regression coefficients corresponding to the minimal $GCV$ and $PRESS$ values for all responses. The main difference in computational time between finding the SVD in the case of $L_2$ regularisation and in the cases of first- and second derivative regularisation is due to the initial calculations of $\tilde{\mathbf{X}}$, see Section \ref{secL2}.
Similarly, the required transformation of the regression coefficients (see \eqref{backtransf}) explains the increase in computational time from calculating the SVD only to finding $PRESS$, $GCV$ and regression coefficients for a single regularisation parameter value for first and second derivative regularisation. \begin{table}[!htb] \begin{tabular}{|c|c|c|c|c|c|c|} \hline \backslashbox{Data (reg. type)}{Number of $\lambda$-values} & $0$ (SVD only) & $1$ & $10$ & $100$ & $1000$ & $10000$ \\ \hline Octane ($L_2$) & 0.0014 & 0.0014 & 0.0014 & 0.0016 & 0.0024 & 0.013 \\ \hline Octane (1st derivative) & 0.0034 & 0.0046 & 0.0051 & 0.0052 & 0.0055 & 0.017 \\ \hline Octane (2nd derivative) & 0.0048 & 0.0074 & 0.0082 & 0.0082 & 0.0087 & 0.020 \\ \hline Pork fat ($L_2$) & 0.018 & 0.023 & 0.023 & 0.026 & 0.040 & 0.26 \\ \hline Pork fat (1st derivative) & 0.096 & 0.22 & 0.22 & 0.22 & 0.24 & 0.46 \\ \hline Pork fat (2nd derivative) & 0.23 & 0.59 & 0.60 & 0.62 & 0.64 & 0.85 \\ \hline Prostate ($L_2$) & 0.038 & 0.072 & 0.077 & 0.078 & 0.078 & 0.11 \\ \hline \end{tabular} \caption{\textit{Computing time} (in seconds) for model selection including finding the $PRESS$- and $GCV$-minimal regression coefficients when varying the number of candidate regularisation parameter values. The times are the averages of $50$ repeated runs rounded to the two most significant digits.} \label{tab:time} \end{table} \subsection{Segmented cross-validation} \subsubsection{Datasets} In the following we will demonstrate the use of segmented cross-validation with $L_2$ regularisation for three datasets: \begin{enumerate} \item Raman spectra of fish oil \cite{afseth06}. The dataset consists of $42$ sample segments including $3$ replicate spectra of each unique sample, giving a total of $126$ rows and $2801$ wavenumbers in the range $3200\,cm^{-1}$ to $400\,cm^{-1}$. The response variable was the iodine value (the response values were identical across each segment), which is frequently used as an indicator of the degree of unsaturation of fat \cite{afseth06}. The spectra of this dataset are plotted in Figure \ref{fig:fishspec} after applying Extended Multiplicative Signal Correction (EMSC) \cite{afseth12} with 6th order polynomial baseline correction. \item {\color{black}Fourier transform infrared (FTIR) spectra of hydrolysates from various mixtures of rest raw materials and enzymes \cite{kristoffersen2019ftir}. The dataset consists of $332$ samples including 1 to 12 replicates of each unique sample, giving a total of $885$ rows and $571$ wavenumbers in the range $1800\,cm^{-1}$ to $700\,cm^{-1}$. The response variable was average molecular weight (AMW) (identical across each replicate set), which can be used as a proxy for degree of hydrolysation. The spectra of this dataset are plotted in Figure \ref{fig:hydrolysisspec}.} \item Raman milk spectra \cite{afseth10,randby12,liland16}. The dataset consists of $232$ unique sample segments including between $6$ and $12$ replicate measurements of each unique sample, giving a total of $2682$ rows and $2981$ wavenumbers in the range $3100\,cm^{-1}$ to $120\,cm^{-1}$. The response variables were the iodine value and the concentration of conjugated linoleic acid (CLA). Also for this dataset the response values were identical across each segment. The spectra of this dataset are plotted in Figure \ref{fig:milkspec} after applying EMSC with 6th order polynomial baseline correction.
\end{enumerate} \noindent For all datasets we have excluded the endpoint regions of the original spectra due to noise and the poor quality of the measurements. The wave numbers reported above are those included after this truncation. Approximately 2/3 of the replicate segments were used for model building and selection, and the remaining 1/3 of the segments were used as a test set. \begin{figure}[!htb] \includegraphics[width=0.95\textwidth]{graphics/fishoilspectra.png} \caption{\textit{Plot of the fish oil spectra} after pre-processing with EMSC with 6th order polynomial baseline (\textit{top}) and additional replicate correction (\textit{bottom}).} \label{fig:fishspec} \end{figure} \begin{figure}[!htb] \includegraphics[width=0.95\textwidth]{graphics/hydrolysisspectra.png} \caption{\textit{Plot of the hydrolysis spectra} after pre-processing with EMSC with 2nd order polynomial baseline.} \label{fig:hydrolysisspec} \end{figure} \begin{figure}[!htb] \includegraphics[width=0.95\textwidth]{graphics/milkspectra.png} \caption{\textit{Plot of the milk spectra} after pre-processing with EMSC with 6th order polynomial baseline. Noise in some replicates is clearly visible as spikes around the main variation. } \label{fig:milkspec} \end{figure} The following four model selection strategies were considered: (i) $PRESS_{min}$ -- the minimum $PRESS(\lambda)$-value from LooCV (ignoring the presence of sample segments), (ii) $GCV_{min}$ -- the minimum $GCV(\lambda)$-value, (iii) the $PRESS_{min}$ from the SegCV (successively holding out the entire sample segments), and (iv) the $PRESS_{min}$ from the VirCV. We have chosen to focus only on the parameter selections associated with the minima of the various error curves in this part of our study (neither the $\chi^2$-rule nor the $1$ S.E. rule turned out to affect the model selections much). {\color{black}Neither of the two strategies for quicker estimation of $PRESS$-values is shown in the plots, as the minimum $PRESS$-value (from searching) coincides with the minimum $PRESS$-value from the 1000 sampled $\lambda$-values, and the cubic spline interpolation is visually indistinguishable from the full $PRESS$ curve obtained from explicit segment removal.} \subsubsection{Fish data -- effect of pre-processing} \label{sec:fish} Spectroscopic measurements may be corrupted by both additive and multiplicative types of noise. Pre-processing of such data prior to modelling is therefore usually required. It is thus of particular interest to investigate how the model selection strategies considered above compare for pre-processed data. In particular we will consider the Extended Multiplicative Signal Correction (EMSC) \cite{afseth12} with replicate corrections \cite{kohler09}. In general, the goal of the EMSC pre-processing is to adjust all the measured spectra to a common scale and to eliminate the possible effects of additive noise. This includes the estimation of an individual scaling constant for each spectrum and an orthogonalisation step that de-trends the spectra with respect to some set of lower order polynomial trends (the reader is referred to the provided references for the technical details). In the present examples with Raman spectra, the samples were orthogonalised with respect to the subspace including all polynomial trends up to the $6$-th degree.
The Raman spectra of fish samples were subjected to EMSC pre-processing to compensate for different scaling and competing phenomena such as fluorescence and optical/scattering effects in the equipment and samples. For the milk data the spectrum having the least fluorescence background was chosen as reference, though the effect of the choice of reference spectrum is minimal. For datasets including segments of replicated measurements, a replicate correction step is often considered to alleviate the presence of inter-replicate variance. Such correction can be done by an initial EMSC-based pre-processing of the spectra in each sample segment. Thereafter, the corrected sample segments can be individually mean-centred, and organised into a full data matrix. As we expect the dominant right singular vectors of the full matrix to account for the most dominant inter-replicate variance, orthogonalisation of the data with respect to one or more of the associated dimensions contributes to making the replicates more similar, see \cite{kohler09} for details. Because every sample in the training dataset is included in the pre-processing, some bias affecting the subsequent $PRESS$-calculations and model selection must be expected. Figure \ref{fig:fishrep} shows the model selection for pre-processed fish oil data based on the pure EMSC and for the EMSC where $30\%$ of the inter-replicate variance is removed. It is evident that the SegCV and the VirCV become considerably more similar in the latter case. As one should expect, the $GCV$- and $PRESS$ curves based on the LooCV seem to provide unrealistically low error values and the selection of less regularised models. This phenomenon does not occur with the SegCV, where an entire segment of replicates is held out in each cross-validation step. The VirCV seems quite robust against the inter-replicate variance. \begin{figure}[!htb] \includegraphics[height=0.5\textwidth]{graphics/fishoilvar.png} \caption{\textit{Fish oil data.} Model selection for data pre-processed with the EMSC both with and without replicate correction. Top: Standard EMSC pre-processing. Bottom: EMSC with 30\% of the inter-replicate variance removed.} \label{fig:fishrep} \end{figure} \begin{table}[!htb] \begin{tabular}{|c|c|c|c|c|} \hline \backslashbox{Pre proc.}{Selection curves} & LooCV & GCV & VirCV & SegCV\\ \hline Raw data & 20.3 & 21.5 & 12.3 & 9.7\\ \hline EMSC & 14.4 & 15.1 & 6.9 & 4.5 \\ \hline EMSC + 30\% inter-replicate variance removed & 14.4 & 15.9 & 6.7 & 6.7\\ \hline \end{tabular} \caption{\textit{Fish oil data. $MSE$ (from test data) for different model selection strategies and different pre-processing alternatives.}} \label{tab:fishoilrmsep} \end{table} \noindent The prediction results for the test set of the fish oil data with the various pre-processing alternatives are presented in Table \ref{tab:fishoilrmsep}, and show that the best results are obtained with the ordinary EMSC pre-processing and model selection based on the SegCV. By simultaneously considering Figure \ref{fig:fishrep}, it is clear that the more heavily regularised among the selected models (those based on the largest regularisation parameter values) perform better on the test set. With standard EMSC pre-processing the minimum of the VirCV is located at a smaller regularisation parameter value than for the SegCV, suggesting an explanation of the difference in predictive performance.
For the milk data, the prediction error estimates obtained after pre-processing the data are similar for all the parameter selection methods (table omitted), as was also the case with the raw data. \subsubsection{Hydrolysis data -- heterogeneous segments} \label{sec:hydrolysis} {\color{black}The hydrolysis data is used as an example of model comparison, which is often performed using 5-fold or 10-fold segmented cross-validation. For the FTIR data we have chosen a 5-fold strategy where replicates are kept together inside each fold to prevent information bleeding caused by replicates of the same sample appearing in both training and test data. The resulting cross-validation segments vary in size from 103 to 117 samples due to the present replicate sets. We have chosen to combine this with a 2nd derivative regularisation.} In Figure \ref{fig:hydrolysisrmsecv}, we have plotted the $PRESS$-curves for SegCV, VirCV, LooCV and GCV. For these highly heterogeneous cross-validation segments, the virtual cross-validation strategy coincides with $GCV$, both underestimating the prediction errors. Also, LooCV underestimates the errors, but less so. Since the general forms of the $PRESS$-curves are quite similar, the minimum $PRESS$-values are located quite close together, suggesting that for the FTIR dataset any of the strategies will give a reasonable estimate of the optimal $\lambda$-value. As Table \ref{tab:amwrmsep} suggests, the performance when applying the regressions corresponding to the minimal $PRESS$-values on the test data is also similar, with a slight advantage for the more regularised LooCV solution. \begin{figure}[!htb] \includegraphics[height=0.25\textwidth]{graphics/hydrolysisrmsecv.png} \caption{\textit{Hydrolysis data.} Different model selection strategies for a range of regularisation parameter values using 2nd derivative regularisation.} \label{fig:hydrolysisrmsecv} \end{figure} \begin{table}[!htb] \begin{tabular}{|c|c|c|c|c|} \hline \backslashbox{Pre proc.}{Selection curves} & LooCV & GCV & VirCV & SegCV\\ \hline EMSC & 1.85 & 1.92 & 1.92 & 1.89 \\ \hline \end{tabular} \caption{\textit{Hydrolysis data. $MSE$ (from test data) using EMSC for pre-processing.}} \label{tab:amwrmsep} \end{table} \subsubsection{Milk data -- efficiency with many segments} \label{sec:milk} The milk data is an example of relatively many samples (2682) and replicate groups (232), which can be challenging with regard to computational resources when cross-validating over a large range of $\lambda$-values. As can be observed from Figure \ref{fig:milkrmsecv}, the differences between SegCV, VirCV, LooCV and GCV are small both with regard to the shape of the curves and the location of the respective minimum values. This is due to the low variation between samples within each replicate group, in sharp contrast to the FTIR dataset with its highly heterogeneous cross-validation segments. Of more interest is the time usage for the various strategies, which is summarised in Section \ref{sec:speed} below. \begin{figure}[!htb] \includegraphics[height=0.5\textwidth]{graphics/milkrmsecv2.png} \caption{\textit{Milk data.} Different model selection strategies for a range of regularisation parameter values using $L_2$ regularisation. \textit{Top}: CLA. \textit{Bottom}: Iodine value.} \label{fig:milkrmsecv} \end{figure} \subsubsection{Approximations of $PRESS$-values -- computational speed}\label{sec:speed} Table \ref{tab:timeblockorth} shows the computational times for the different model selection strategies.
Both the $PRESS$- and the $GCV$-values are included, as computing only one of them takes approximately the same time as computing both. Because the replicate segments are relatively small for the Raman datasets ($3$ replicate measurements for the fish oil data and $6$ to $12$ replicate measurements for the milk data), the SVDs required for the internal orthogonalisations of the segments contribute insignificantly to the total computational load. The amount of computation required for model selection based on the VirCV is therefore quite comparable to that required for the LooCV version of $PRESS$ (and for the $GCV$). {\color{black}The strategy of searching for the minimum $PRESS$-value by golden section search and parabolic interpolation (MinSearch) is remarkably similar to VirCV in time usage. However, there is a trade-off between obtaining an estimate of the exact minimum value (MinSearch) and a full $PRESS$-curve (VirCV). Approximation of the SegCV using cspline interpolation is slower than VirCV and MinSearch, but still sufficiently fast for practical use in all tested cases, and it has the advantage of giving a $PRESS$-curve highly similar to the one obtained by the SegCV. The implicit segmented cross-validation (ImpCV) using Theorem \ref{TmSegCV} is faster than SegCV for small segments and a bit slower for large segments, while providing exact results for all $\lambda$-values. In general, the initial calculation of the SVD seems to be the main limiting factor in computational speed when the datasets grow in size. This is especially prominent for the milk data, where SegCV performs this initial SVD 232 times. Here, a strategy avoiding the SVD, or using a randomised SVD algorithm \cite{Halko2011}, might be favourable; the other presented strategies are, however, still usable.} \begin{table}[!htb] \begin{tabular}{|c|c|c|c|c|c|c|} \hline Dataset & SegCV & ImpCV & VirCV & MinSearch & Spline & PRESS\&GCV \\ \hline Fish oil & $0.624 ~(1)$ & $0.100 ~(1/6)$ & $0.016 ~(1/38)$ & $0.015 ~(1/42)$ & $0.062 ~(1/10)$ & $0.010 ~(1/65)$\\ \hline Hydrolysis & $0.686 ~(1)$ & $0.927 ~(1/0.7)$ & $0.106 ~(1/6)$ & $0.118 ~(1/6)$ & $0.252 ~(1/3)$ & $0.096 ~(1/7)$\\ \hline Milk & $867.8 ~(1)$ & $8.9 ~(1/98)$ & $3.3 ~(1/266)$ & $3.2 ~(1/271)$ & $4.1 ~(1/214)$ & $2.95 ~(1/294)$ \\ \hline \end{tabular} \caption{\textit{Computational times for the different model selection strategies on the Fish oil, Hydrolysis and Milk data when considering $500$ candidate regularisation parameter values. The times are given in seconds and are averages over $50$ repeated runs. The speedup relative to SegCV is shown in parentheses.}} \label{tab:timeblockorth} \end{table} \section{Discussion and conclusions} The essence of the TR-framework described in the present work is that just a single SVD-calculation (of either the original data matrix $\mathbf{X}$ or a transformed version $\tilde{\mathbf{X}}$) is required to explore some particular regularised regression problem of interest. We have pointed out that the $PRESS$- and $GCV$-values required for model selection based on the LooCV or the $GCV$ can be obtained at the computational cost of two matrix-vector multiplications for each choice of the regularisation parameter value $\lambda$. In the applications section it is demonstrated that our framework scales well when increasing the number of candidate regularisation parameter values in the case of `small $n$ with large $p$' problems.
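To make this cost structure concrete, the following minimal NumPy sketch computes the LooCV-based $PRESS(\lambda)$- and $GCV(\lambda)$-curves for plain $L_2$ (ridge) regularisation from a single SVD. It is an illustration under simplifying assumptions -- uncentred data and no weighting, so the $1/n$ centring terms discussed below are deliberately absent -- and all function and variable names are ours.

\begin{verbatim}
import numpy as np

def press_gcv_curves(X, y, lambdas):
    # One SVD of X is shared by all candidate lambda values.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y        # fixed cost, reused for every lambda
    U2 = U**2            # row-wise squared entries, for the leverages
    n = len(y)
    press, gcv = [], []
    for lam in lambdas:
        d = s**2 / (s**2 + lam)     # shrinkage factors on the SVD axes
        resid = y - U @ (d * Uty)   # matrix-vector product no. 1
        h = U2 @ d                  # matrix-vector product no. 2 (leverages)
        press.append(np.sum((resid / (1.0 - h))**2))
        gcv.append(n * np.sum(resid**2) / (n - np.sum(d))**2)
    return np.array(press), np.array(gcv)
\end{verbatim}

Per candidate $\lambda$-value, only the two matrix-vector products -- one with $\mathbf{U}$ for the fitted values and one with the entrywise squared $\mathbf{U}$ for the leverage values -- are required, in line with the operation count stated above.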
{\color{black}This scaling will also work well for problems involving multiple responses, as most of the computations will be shared among the responses.} For smaller and medium-sized data, as well as for other situations where the required SVD can be calculated (or approximated) reasonably fast, the acquired computational efficiency allows for the exploration of a large number of candidate models in a very short amount of time. {\color{black} For situations where leave-one-out cross-validation underestimates the validation error because of sample replicates or other groupings of samples, a segmented cross-validation is the appropriate choice. We have proved a theorem saying that explicit remodelling for the computation of cross-validated $PRESS$-values can be avoided, while still giving exact results, at the computational cost of inverting one matrix per sample segment per $\lambda$-value. For cases where this cost outweighs the benefits, we have proposed alternative strategies for reducing the number of inversions through careful selection of $\lambda$-values, as well as an approximate virtual cross-validation (VirCV) strategy. T}he VirCV is a computationally efficient approximation of the traditional SegCV. In the applications (Section \ref{Sec: applications}) we observed that the VirCV approximation of the SegCV appears to be quite accurate for model selection in the case of highly similar samples within each segment, while using the LooCV or the $GCV$ in such situations is more likely to propose insufficient regularisation and models that predict more poorly. It is important to note that when the dataset is pre-processed and/or transformed by a data-dependent method, some bias in both the LooCV- and the VirCV-based $PRESS$-values must be expected. The data variable standardisation commonly used in RR is a typical example. The EMSC pre-processing that was used with or without replicate correction is another. However, the main purpose of the LooCV- and VirCV-based $PRESS$-values in the proposed framework is model selection rather than error estimation. The bias introduced by such pre-processing methods is therefore not likely to be very harmful as long as the (training) data does not contain serious outliers. Although leverage correction of the model residuals for fast calculation of the LooCV in linear least squares regression problems is well known, there are some misleading assertions in the literature regarding both the properties and the accuracy of $PRESS$-values that require clarification: i) Hansen\cite[page 96]{hansen10} claims that the leverage values are not invariant under row permutations of the $\mathbf{X}$-data, making the $PRESS$-values dependent on the ordering of the data. However, when the rows of the data matrix are permuted, it can be verified that the leverage values simply undergo precisely the same permutation. Consequently, the correct leverage values will match up perfectly with the corresponding model residuals in the $PRESS(\lambda)$ calculations, assuring invariance under any row permutation of the $(\mathbf{X},\mathbf{y})$-data. ii) Myers\cite[page 399]{myers90} claims that the expression for fast calculation of $PRESS(\lambda)$ is only an approximation when performing centring and scaling of the data. This is, however, only true when the scaling factors are calculated from the data to be used in the model building.
The data centring, as such, does not corrupt the leverage- and $PRESS(\lambda)$-values as long as the $1/n$ terms are included in the associated leverage corrections of the model residuals. iii) The version of Ridge regression implemented in the MASS package\cite{MASS} for the R programming language includes a fast calculation of the $GCV(\lambda)$-values for a desired vector of corresponding $\lambda$-values. The $1/n$ term is, however, ignored when correcting the model residuals by the required averaged leverage value. Consequently, the resulting $GCV$-values are misleading when centring of the data is included as a part of the Ridge Regression modelling. We believe that future statistical texts and software dealing with Ridge Regression (and Tikhonov Regularisation) will find value in including the necessary pieces of linear algebra (in particular the simple matrix-vector multiplications of Equation (\ref{hlambda})) to establish the fast calculation of the $PRESS(\lambda)$ in Equation (\ref{PRESSlambda}). In our opinion these relatively simple but still powerful results demonstrate yet another remarkable consequence of the SVD at the core of applied multivariate data analysis. {\color{black} Finally, we have established a theorem describing how to compute the cross-validated residuals for (regularised) linear regression models from the fitted-value residuals. The computation can be seen as a multi-sample kind of leverage correction that applies to any type of segmented cross-validation strategy. In many cases it represents a computationally efficient alternative to the slower ``hold out/remodelling'' approach most common within statistics and machine learning. For the special case of LooCV, our theorem simplifies to the well-known scalar leverage correction calculations of the LooCV errors.}
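As a closing illustration, the multi-sample leverage correction can be outlined in code for the ridge case. The sketch below is consistent with the statement above rather than a transcription of Theorem \ref{TmSegCV}: it assumes uncentred data, and all function and variable names are ours. For each segment, one small matrix (the corresponding diagonal block of $\mathbf{I}-\mathbf{H}(\lambda)$) is inverted; for segments of size one, the correction reduces to the familiar scalar division by $1-h_i$.

\begin{verbatim}
import numpy as np

def segmented_press(X, y, groups, lam):
    # Influence (hat) matrix for one ridge parameter via the SVD of X.
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    d = s**2 / (s**2 + lam)
    H = (U * d) @ U.T                # n-by-n influence matrix
    resid = y - H @ y                # ordinary fitted-value residuals
    press = 0.0
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # Multi-sample leverage correction: one small inversion per
        # segment turns fitted-value residuals into the residuals of
        # the model fitted without that segment.
        B = np.eye(len(idx)) - H[np.ix_(idx, idx)]
        e_out = np.linalg.solve(B, resid[idx])
        press += np.sum(e_out**2)
    return press
\end{verbatim}

\newpage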
\section{Introduction} Let $H^2_d$ be the Drury-Arveson space, that is, the reproducing kernel Hilbert space on the open unit ball in $\mathbb C^d$ with reproducing kernel \begin{equation*} K(z,w) = \frac{1}{1-\langle z,w \rangle}, \end{equation*} also known as symmetric Fock space. The coordinate functions $z_i$ are multipliers on $H^2_d$, that is, the multiplication operators \begin{equation*} M_{z_i} : H^2_d \to H^2_d, \quad f \mapsto z_i f \end{equation*} determine a commuting operator tuple $S = M_z = (M_{z_1}, \ldots , M_{z_d})$, which is known as the \emph{$d$-shift}. The tuple $S$ is a row contraction, and according to Arveson \cite{arv-s3} (see also \cite{popescu-isometries} and \cite{ep02}), the unital non-selfadjoint norm-closed algebra $\mathcal A_d$ generated by $M_{z}$ is universal for commuting row contractions, in the sense that whenever $T = (T_1,\ldots,T_d)$ is any commuting row contraction on a Hilbert space $H$, the algebra homomorphism \begin{equation*} \polyring{d} \to \mathcal B(H), \quad p \mapsto p(T_1, \ldots, T_d) \end{equation*} extends to a completely contractive representation of $\mathcal A_d$. Recently, Davidson, Ramsey and Shalit \cite{davramshal} examined universal operator algebras for commuting row contractions which satisfy relations given by a homogeneous ideal $I \subset \polyring{d}$. Algebras of this type, in even greater generality, were already studied by Popescu \cite{popescu}. The universal object in this setting is the quotient algebra $\mathcal A_d / \overline{I}$, which is an abstract operator algebra in the sense of Blecher, Ruan and Sinclair (see for example \cite[Chapter 17]{effros}). Popescu's work \cite{popescu} shows that $\mathcal A_d / \overline{I}$ can also be identified with the concrete algebra of operators $\mathcal A_I$ obtained by compressing $\mathcal A_d$ to the co-invariant subspace \begin{equation*} \mathcal F_I = H^2_d \ominus I. \end{equation*} If $I$ is a radical homogeneous ideal, $\mathcal A_I$ can be regarded as an algebra of continuous functions on the intersection of the vanishing locus $V(I)$ of $I$ with the closed unit ball. In particular, $\mathcal A_I$ is a commutative semi-simple Banach algebra in this case. In \cite{davramshal}, the isomorphism problem for algebras $\mathcal A_I$ of this type, and non-commutative generalizations thereof, was investigated. In the commutative radical case, a close connection between the structure of the algebra $\mathcal A_I$ and the geometry of the vanishing locus $V(I)$ of $I$ was established. More precisely, the authors of \cite{davramshal} proved, building upon results due to Shalit and Solel \cite{shalit_solel}, that for two radical homogeneous ideals $I$ and $J$ in $\polyring{d}$, the algebras $\mathcal A_I$ and $\mathcal A_J$ are completely isometrically isomorphic if and only if they are isometrically isomorphic, which in turn happens if and only if there is a unitary map $U$ on $\mathbb C^d$ mapping $V(I)$ onto $V(J)$. Moreover, Davidson, Ramsey and Shalit studied the existence of algebraic isomorphisms, which are the same as topological isomorphisms since the algebras $\mathcal A_I$ are semi-simple in the radical case.
They showed that if $I \subset \polyring{d}$ and $J \subset \polyring{d'}$ are radical homogeneous ideals such that $\mathcal A_I$ and $\mathcal A_J$ are topologically isomorphic, then there exist two linear maps $A: \mathbb C^{d'} \to \mathbb C^{d}$ and $B: \mathbb C^{d} \to \mathbb C^{d'}$ which restrict to mutually inverse bijections $A: Z(J) \to Z(I)$ and $B: Z(I) \to Z(J)$, where $Z(I) = V(I) \cap \overline{\mathbb B_d}$ and $Z(J) = V(J) \cap \overline{\mathbb B_{d'}}$. The converse of this fact was established in \cite{davramshal} for the case of tractable varieties, and was conjectured to be true in general. In fact, Davidson, Ramsey and Shalit reduced this problem to the case where $I$ and $J$ are vanishing ideals of unions of subspaces. To give an example, single subspaces and unions of two subspaces are always tractable. However, unions of three or more subspaces are not tractable in general. The aim of the present note is to prove the following theorem, which establishes the above conjecture in full generality. \begin{thm} Let $I$ and $J$ be radical homogeneous ideals in $\polyring{d}$ and $\polyring{d'}$, respectively. The algebras $\mathcal A_I$ and $\mathcal A_J$ are isomorphic if and only if there exist linear maps $A: \mathbb C^{d'} \to \mathbb C^{d}$ and $B: \mathbb C^{d} \to \mathbb C^{d'}$ which restrict to mutually inverse bijections $A: Z(J) \to Z(I)$ and $B: Z(I) \to Z(J)$. \end{thm} To this end, we proceed as follows. In Section 2, we show that to establish the conjecture, it is enough to prove that if $V_1, \ldots, V_r \subset \mathbb C^d$ are subspaces, then the algebraic sum of the full Fock spaces $\fock{V_1} + \ldots + \fock{V_r}$ is closed in $\fock{\mathbb C^d}$. This problem can be approached using the notion of the Friedrichs angle between subspaces of a Hilbert space. In Section 3, we recall the basic facts concerning this concept, and we introduce a variant of the Friedrichs angle using the Calkin algebra which is more suitable to our needs than the classical one. The main result of Section 4 is Lemma \ref{lem:angle_several_sum_perp}, which reduces the problem of showing closedness of $\fock{V_1} + \ldots + \fock{V_r}$ to the case where $V_1 \cap \ldots \cap V_r = \{0\}$. Section 5 finally contains a proof of the closedness of $\fock{V_1} + \ldots + \fock{V_r}$. \section{\texorpdfstring{Algebra isomorphisms and sums of Fock spaces}{Algebra isomorphisms and sums of Fock spaces}} As usual, let $\polyring{d}$ denote the algebra of complex polynomials in $d$ variables. When $d$ is understood, we will simply write $\mathbb C[z]$. If $n$ is a natural number, then $\mathbb C[z]_n$ will denote the space of homogeneous polynomials of degree $n$. For a radical homogeneous ideal $I \subset \mathbb C[z]$, let $\mathcal F_I = H^2_d \ominus I$ and let $\mathcal A_I \subset \mathcal B(\mathcal F_I)$ be the norm-closed non-selfadjoint algebra generated by the compressions of $M_{z_i}$ to the co-invariant subspace $\mathcal F_I$. The vanishing locus of $I$ will be denoted by $V(I)$, and we will write $Z^0(I)$ (respectively $Z(I)$) for the intersection of $V(I)$ with the open (respectively the closed) unit ball. Moreover, for a subset $S$ of a vector space, $\spa(S)$ will denote the linear span of $S$. We follow the route of \cite{davramshal} and try to find isomorphisms between the Hilbert spaces $\mathcal F_I$ such that conjugation with these isomorphisms yields algebra isomorphisms between the algebras $\mathcal A_I$. 
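Before exhibiting the generating set, we record an elementary and well-known example as an illustration; it is folklore and is included here only to fix ideas. Let $d = 2$ and let $I = \langle z_1 \rangle \subset \mathbb C[z_1,z_2]$ be the vanishing ideal of the subspace $V(I) = \{0\} \times \mathbb C$. Since $\overline{I}$ is the closed linear span of all monomials divisible by $z_1$, the space $\mathcal F_I = H^2_2 \ominus I$ is the closed linear span of $\{z_2^n : n \in \mathbb N\}$. The standard norm formula \begin{equation*} ||z^\alpha||^2_{H^2_d} = \frac{\alpha!}{|\alpha|!} \end{equation*} shows that $||z_2^n|| = 1$ for all $n \in \mathbb N$, so that $\mathcal F_I$ can be identified with the Hardy space $H^2(\mathbb D)$ in the variable $z_2$. The compression of $M_{z_1}$ to $\mathcal F_I$ is zero, while the compression of $M_{z_2}$ is the unilateral shift, so $\mathcal A_I$ is the disc algebra $A(\mathbb D)$, regarded as an algebra of continuous functions on $Z(I) = \{0\} \times \overline{\mathbb D}$.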
We begin by exhibiting a convenient generating set for the $n$th homogeneous part $\mathcal F_I \cap \mathbb C[z]_n$ of $\mathcal F_I$ (compare the discussion preceding \cite[Lemma 7.11]{davramshal}). \begin{lem} \label{lem:F_I_generating_set} Let $I \subset \mathbb C[z]$ be a radical homogeneous ideal. Then for all natural numbers $n$, \begin{equation*} \mathcal F_I \cap \mathbb C[z]_n = \spa \{ \langle \cdot,\lambda \rangle^n: \lambda \in Z^0(I) \} = \spa \{ \langle \cdot,\lambda \rangle^n: \lambda \in V(I) \}. \end{equation*} \end{lem} \begin{proof} Note that for any $\lambda \in \mathbb B_d$, we have \begin{equation*} K(\cdot,\lambda) = \sum_{n=0}^\infty \langle \cdot,\lambda \rangle^n \in H^2_d, \end{equation*} where $K$ is the reproducing kernel of $H^2_d$. Using that homogeneous polynomials of different degree are orthogonal in $H^2_d$, we obtain for $\lambda \in Z^0(I)$ and $f \in \mathbb C[z]_n$ the identity \begin{equation*} \Big\langle f, \langle \cdot,\lambda \rangle^n \Big\rangle_{H^2_d} = \Big\langle f, K(\cdot,\lambda) \Big\rangle_{H^2_d} = f(\lambda). \end{equation*} In particular, if $f \in I \cap \mathbb C[z]_n$ and $\lambda \in Z^0(I)$, then \begin{equation*} \Big\langle f, \langle \cdot, \lambda \rangle^n \Big\rangle_{H^2_d} = 0, \end{equation*} hence $\langle \cdot,\lambda \rangle^n \in \mathcal F_I$. Conversely, if $g \in \mathcal F_I \cap \mathbb C[z]_n$ is orthogonal to each $\langle \cdot,\lambda \rangle^n$ for $\lambda \in Z^0(I)$, then $g$ vanishes on $Z^0(I)$. By homogeneity of $I$ and $g$, we infer that $g$ vanishes on $V(I)$, hence $g \in I$ by Hilbert's Nullstellensatz. Consequently, $g=0$, from which the first equality follows, while the second is obvious. \end{proof} Suppose now that $I \subset \polyring{d}$ and $J \subset \polyring{d'}$ are radical homogeneous ideals and that $A: \mathbb C^{d'} \to \mathbb C^{d}$ is a linear map which maps $V(J)$ into $V(I)$. It is an easy consequence of the homogeneity of $J$ that $\mathcal D_J = \mathcal F_J \cap \polyring{d'}$ is a dense subspace of $\mathcal F_J$. Since \begin{equation} \label{eqn:com_A_skal_prod} \langle \cdot,\lambda \rangle^n \circ A^* = \langle \cdot, A \lambda \rangle^n \end{equation} for all $\lambda \in \mathbb C^{d'}$ and $n \in \mathbb N$, we conclude with the help of the preceding lemma that $A$ induces a densely defined linear map \begin{equation*} \mathcal F_J \supset \mathcal D_J \to \mathcal F_I, \quad f \mapsto f \circ A^*. \end{equation*} The crucial problem is to determine when this map is bounded. If $J$ is the vanishing ideal of a single subspace $V \subset \mathbb C^{d'}$ and $A$ is isometric on $V$, then the map is in fact isometric. This follows from results in \cite{davramshal}. For the convenience of the reader, a proof is provided below. \begin{lem} \label{lem:comp_single_subspace} Let $V \subset \mathbb C^{d'}$ be a subspace and let $J \subset \polyring{d'}$ be its vanishing ideal. If $A: \mathbb C^{d'} \to \mathbb C^d$ is a linear map which is isometric on $V$, then \begin{equation*} \compmap{A}: \mathcal F_J \supset \mathcal D_J \to H^2_d, \quad f \mapsto f \circ A^* \end{equation*} is an isometry. \end{lem} \begin{proof} Let $\lambda,\mu \in V \cap \mathbb B_{d'}$ and $k,n \in \mathbb N$ be arbitrary. 
Using the homogeneous decomposition of the reproducing kernel $K$ of $H^2_d$, we see that \begin{align*} \big\langle \compmap{A} (\langle \cdot, \lambda \rangle^n) , \compmap{A} (\langle \cdot, \mu \rangle^k) \big\rangle_{H^2_d} &= \big\langle \langle \cdot, A \lambda \rangle^n , \langle \cdot, A \mu \rangle^k \big\rangle_{H^2_d} \\ &= \delta_{kn} \big\langle \langle \cdot, A \lambda \rangle^n , K(\cdot,A \mu) \big\rangle_{H^2_d} \\ &= \delta_{kn} \, \langle A \mu, A \lambda \rangle^n = \delta_{kn} \, \langle \mu, \lambda \rangle^n. \end{align*} Similarly, \begin{equation*} \big\langle \langle \cdot, \lambda \rangle^n , \langle \cdot, \mu \rangle^k \big\rangle_{H^2_{d'}} = \delta_{kn} \, \langle \mu, \lambda \rangle^n. \end{equation*} Since $\mathcal D_J$ is linearly spanned by polynomials of the form $\langle \cdot,\lambda \rangle^n$ with $\lambda \in V \cap \mathbb B_{d'}$ and $n \in \mathbb N$ by the preceding lemma, we conclude that $\compmap{A}$ is isometric. \end{proof} When considering more complicated algebraic sets such as unions of subspaces, one of course wishes to decompose the sets into smaller pieces which are easier to deal with. Algebraically, this corresponds to writing an ideal as an intersection of larger ideals. On the level of the spaces $\mathcal F_I$, we get the following result. \begin{lem} \label{lem:ideal_dec_F} Let $J_1, \ldots,J_r \subset \mathbb C[z]$ be homogeneous ideals and let $J = J_1 \cap \ldots \cap J_r$. Then \begin{equation*} \overline{J} = \overline{J_1} \cap \ldots \cap \overline{J_r}, \end{equation*} and \begin{equation*} \mathcal F_J = \overline{\mathcal F_{J_1} + \ldots + \mathcal F_{J_r}}. \end{equation*} \end{lem} \begin{proof} It suffices to prove the first claim, since the second will then follow by taking orthogonal complements. To this end, note that the inclusion $\overline{J} \subset \overline{J_1} \cap \ldots \cap \overline{J_r}$ is trivial. Conversely, it is an easy consequence of the homogeneity of the $J_k$ that, for any element $f \in \overline{J_k}$ with homogeneous expansion \begin{equation*} f = \sum_{n=0}^\infty f_n, \end{equation*} each $f_n$ is contained in $J_k$, from which the reverse inclusion readily follows. \end{proof} The question under which conditions the sum $\mathcal F_{J_1} + \ldots + \mathcal F_{J_r}$ in the preceding lemma is itself closed will be of central importance. In general, $\mathcal F_{J_1} + \mathcal F_{J_2}$ need not be closed for two radical homogeneous ideals $J_1$ and $J_2$, see Example \ref{exa:sum_not_closed} below. But thanks to the reduction to unions of subspaces in \cite{davramshal}, we only need to consider the case where the $J_k$ are vanishing ideals of subspaces in $\mathbb C^d$. To keep the statements of the following results reasonably short, we make an ad-hoc definition which will only be used in this section. \begin{defn} Let $J \subset \polyring{d}$ be a radical homogeneous ideal, and let $V(J) = W_1 \cup \ldots \cup W_r$ be the decomposition of $V(J)$ into irreducible components. Denote the vanishing ideal of $\spa{W_k}$ by $\widehat J_k$. We call $J$ \emph{admissible} if the algebraic sum $\mathcal F_{\widehat J_{1}} + \ldots + \mathcal F_{\widehat J_r}$ is closed. \end{defn} \begin{prop} \label{prop:good_bounded_maps} Let $I$ and $J$ be radical homogeneous ideals in $\polyring{d}$ and $\polyring{d'}$, respectively. Suppose that there is a linear map $A: \mathbb C^{d'} \to \mathbb C^{d}$ that maps $Z(J)$ bijectively onto $Z(I)$. 
If $J$ is admissible, then \begin{equation*} \mathcal F_J \supset \mathcal D_J \to \mathcal F_I, \quad f \mapsto f \circ A^* \end{equation*} is a bounded map. \end{prop} \begin{proof} Let $V(J) = W_1 \cup \ldots \cup W_r$ be the irreducible decomposition of $V(J)$, and let $\widehat J_k$ be the vanishing ideal of $\spa(W_k)$. Define \begin{equation*} S = \spa (W_1) \cup \ldots \cup \spa(W_r), \end{equation*} and denote the vanishing ideal of $S$ by $\widehat J$, so that $\widehat J = \widehat J_1 \cap \ldots \cap \widehat J_r$. Since $\mathcal D_{J} \subset \mathcal D_{\widehat J}$, it suffices to show that $f \mapsto f \circ A^*$ defines a bounded map on $\mathcal D_{\widehat J}$. By Lemma \ref{lem:ideal_dec_F}, we have \begin{equation*} \mathcal F_{\widehat J} = \overline{\mathcal F_{\widehat J_1} + \ldots + \mathcal F_{\widehat J_r}}. \end{equation*} By Lemma 7.5 and Proposition 7.6 in \cite{davramshal}, the linear map $A$ is isometric on $S$. Consequently, Lemma \ref{lem:comp_single_subspace} shows that $f \mapsto f \circ A^*$ defines an isometry on each $D_{\widehat J_k} \subset \mathcal F_{\widehat J_k}$. We will use the hypothesis that $J$ is admissible\ in order to show that $f \mapsto f \circ A^*$ defines a bounded map on $\mathcal D_{\widehat J}$. To this end, we note that since $\mathcal F_{\widehat J_1} + \ldots + \mathcal F_{\widehat J_r}$ is closed, a standard application of the open mapping theorem yields a constant $C \ge 0$ such that for any $f \in \mathcal F_{\widehat J}$, there are $f_k \in \mathcal F_{\widehat J_k}$ with $f = f_1 + \ldots + f_r$ and \begin{equation*} ||f_1||^2 + \ldots + ||f_r||^2 \le C ||f||^2. \end{equation*} If $f$ is a homogeneous polynomial of degree $n$, we can choose the $f_k$ to be homogeneous polynomials of degree $n$ as well. Consequently, if $f \in \mathcal D_{\widehat J}$, the $f_k$ can be chosen from $\mathcal D_{\widehat J_k}$. With such a choice, we obtain for $f \in \mathcal D_{\widehat J}$ the (crude) estimate \begin{align*} ||f \circ A^*||^2 &= ||f_1 \circ A^* + \ldots + f_r \circ A^*||^2 \\ &\le r^2 \max_{1 \le k \le r} ||f_k \circ A^*||^2 \\ &= r^2 \max_{1 \le k \le r} ||f_k||^2 \le C r^2 ||f||^2, \end{align*} where we have used that $f \mapsto f \circ A^*$ is an isometry on each $\mathcal D_{\widehat J_k}$. \end{proof} In the setting of the preceding proposition, let $\compmap{A}: \mathcal F_J \to \mathcal F_I$ be the continuous extension of $f \mapsto f \circ A^{*}$ onto $\mathcal F_J$. Taking the homogeneous expansion of the kernel functions $K(\cdot,\lambda)$ into account, we infer from \eqref{eqn:com_A_skal_prod} that $\compmap{A}$ satisfies \begin{equation*} \compmap{A}( K(\cdot,\lambda)) = K(\cdot,A \lambda) \quad \text{ for all } \lambda \in Z^{0}(J). \end{equation*} The existence of topological isomorphisms between $\mathcal A_I$ and $\mathcal A_J$ if $I$ and $J$ are admissible\ now follows exactly as in the proof of \cite[Theorem 7.17]{davramshal}. \begin{cor} \label{cor:good_algebra_iso} Let $I$ and $J$ be radical homogeneous ideals in $\polyring{d}$ and $\polyring{d'}$, respectively. Suppose that there are linear maps $A: \mathbb C^{d'} \to \mathbb C^d$ and $B: \mathbb C^d \to \mathbb C^{d'}$ which restrict to mutually inverse bijections $A: Z(J) \to Z(I)$ and $B: Z(I) \to Z(J)$. 
If $I$ and $J$ are admissible, then $\compmap{A}$ and $\compmap{B}$ are inverse to each other, and \begin{equation*} \Phi: \mathcal A_I \to \mathcal A_J, \quad T \mapsto (\compmap{A})^* T (\compmap{B})^*, \end{equation*} is a completely bounded isomorphism. Regarding $\mathcal A_I$ and $\mathcal A_J$ as function algebras on $Z(I)$ and $Z(J)$, respectively, $\Phi$ is given by composition with $A$, that is, \begin{equation*} \Phi (\varphi) = \varphi \circ A \quad \text{ for all } \varphi \in \mathcal A_I. \eqno\qed \end{equation*} \end{cor} To improve the corresponding results from \cite{davramshal}, we will show that every radical homogeneous ideal $I \subset \polyring{d}$ is automatically admissible. To this end, we will work with the description of Drury-Arveson space as symmetric Fock space, rather than as a Hilbert function space. We begin by recalling some standard definitions. For a finite dimensional Hilbert space $E$, let \begin{equation*} \fock{E} = \bigoplus_{n=0}^\infty E^{\otimes n} \end{equation*} be the full Fock space over $E$. Note that if $V \subset E$ is a subspace, we can regard $\fock{V}$ as a subspace of $\fock{E}$, and the orthogonal projection from $\fock{E}$ onto $\fock{V}$ is given by \begin{equation*} P_{\fock{V}} = \bigoplus_{n=0}^\infty (P_V)^{\otimes n}. \end{equation*} Let $E^n \subset E^{\otimes n}$ denote the $n$-fold symmetric tensor power of $E$, and write \begin{equation*} \symfock{E} = \bigoplus_{n=0}^\infty E^{n} \subset \fock{E} \end{equation*} for the symmetric Fock space over $E$. Then $H^2_d$ can be identified with $\symfock{\mathbb C^d}$ via an anti-unitary map $U: H^2_d \to \symfock{\mathbb C^d}$, which is uniquely determined by \begin{equation} \label{eqn:DA_Fock} U (\langle \cdot,\lambda \rangle^{n}) = \lambda^{\otimes n} \end{equation} for all $\lambda \in \mathbb C^d$ and $n \in \mathbb N$ (see \cite[Section 1]{arv-s3}). This identification allows us to translate the condition that the ideals $I$ and $J$ be admissible\ in terms of symmetric Fock space. In fact, working with full Fock space suffices. \begin{lem} \label{lem:full_Fock_good} Let $J \subset \polyring{d}$ be a radical homogeneous ideal, and let \begin{equation*} V(J) = W_1 \cup \ldots \cup W_r \end{equation*} be the irreducible decomposition of $V(J)$. Let $V_k = \spa{W_k}$. If the algebraic sum of the full Fock spaces $\fock{V_1} + \ldots + \fock{V_r}$ is closed, then $J$ is admissible. \end{lem} \begin{proof} Let $\widehat J_k$ be the vanishing ideal of $V_k$. Then by Lemma \ref{lem:F_I_generating_set}, the linear span of the elements $\langle \cdot,\lambda \rangle^n$ with $\lambda \in V_k$ and $n \in \mathbb N$ is dense in $\mathcal F_{\widehat J_k}$, whereas $\symfock{V_k}$ is the closed linear span of the symmetric tensors $\lambda^{\otimes n}$ with $\lambda \in V_k$ and $n \in \mathbb N$. Hence, the identity \eqref{eqn:DA_Fock} shows that $U$ maps $\mathcal F_{\widehat J_k}$ onto $\symfock{V_k}$, so that $J$ is admissible\ if and only if the algebraic sum \begin{equation*} S = \symfock{V_1} + \ldots + \symfock{V_r} \end{equation*} is closed. Now, let $Q$ be the orthogonal projection from $\fock{\mathbb C^d}$ onto $\symfock{\mathbb C^d}$. 
It is well known that in degree $n$, we have \begin{equation*} Q \big|_{(\mathbb C^d)^{\otimes n}} = \frac{1}{n!} \sum_{\sigma \in S_n} U_\sigma, \end{equation*} where $S_n$ denotes the symmetric group on $n$ letters, and for $\sigma \in S_n$, the unitary operator $U_\sigma$ is given by \begin{equation*} U_\sigma (x_1 \otimes \ldots \otimes x_n) = x_{\sigma^{-1} (1)} \otimes \ldots \otimes x_{\sigma^{-1}(n)}. \end{equation*} Note that for a subspace $V \subset \mathbb C^d$, the projections $Q$ and $P_{\fock{V}}$ commute and $Q P_{\fock{V}} = P_{\symfock{V}}$, from which it easily follows that closedness of $\fock{V_1} + \ldots + \fock{V_r}$ implies closedness of $S$. Indeed, if $x$ is in the closure of $S$, then we can write $x = \widetilde x_1 + \ldots + \widetilde x_r$ with $ \widetilde x_k \in \fock{V_k}$. Setting $x_k = Q \widetilde x_k \in \symfock{V_k}$, we have \begin{equation*} x = Q x = x_1 + \ldots + x_r \in S. \qedhere \end{equation*} \end{proof} \section{The Friedrichs angle} In order to show that sums of full Fock spaces are closed, we will make use of a classical notion of angle between two closed subspaces of a Hilbert space due to Friedrichs \cite{friedrichs} (for the history of this and related quantities, see for example \cite{boettcher}). \begin{defn} Let $H$ be a Hilbert space and let $M,N \subset H$ be closed subspaces. If $M \not \subset N$ and $N \not \subset M$, the Friedrichs angle between $M$ and $N$ is defined to be the angle in $[0,\frac{\pi}{2}]$ whose cosine is \begin{equation*} c(M,N) = \sup_{\substack{x \in M \ominus (M \cap N) \\ y \in N \ominus (M \cap N) \\ x \neq 0 \neq y}} \frac{ | \langle x,y \rangle|} {||x|| \, ||y||}. \end{equation*} Otherwise, we set $c(M,N) = 0$. \end{defn} We record some standard properties of the Friedrichs angle in the following lemma. For a closed subspace $M$ of a Hilbert space $H$, we denote the orthogonal projection from $H$ onto $M$ by $P_M$. \begin{lem} \label{lem:angle_standard_prop} Let $H$ be a Hilbert space and let $M$ and $N$ be closed subspaces of $H$. \begin{enumerate}[label=\normalfont{(\alph*)},ref={\thelem~(\alph*)}] \item $c(M,N) = c(M \ominus (M \cap N), N \ominus (M \cap N))$. \label{it:angle_standard_prop_disjoint} \item $c(M,N) = ||P_M P_N - P_{M \cap N}||$ and $c(M,N)^2 = ||P_N P_M P_N - P_{M \cap N}||$. \label{it:angle_standard_prop_square} \item $M+N$ is closed if and only if $c(M,N)< 1$. \label{it:angle_standard_prop_sum_closed} \end{enumerate} \end{lem} \begin{proof} (a) is obvious, and the first half of (b) and (c) are well known, see for example Lemma 10 and Theorem 13 in \cite{deutsch95}. To show the second half of (b), we set $T=P_M P_N - P_{M \cap N}$ and note that \begin{equation*} T^* T = (P_N P_M - P_{M \cap N}) (P_M P_N - P_{M \cap N}) = P_N P_M P_N - P_{M \cap N}. \end{equation*} Hence, by the first half of (b), \begin{equation*} c(M,N)^2 = ||T||^2 = ||T^* T|| = ||P_N P_M P_N - P_{M \cap N}||. \qedhere \end{equation*} \end{proof} Part (c) is the reason why we are considering the Friedrichs angle. Recently, Badea, Grivaux and M\"uller \cite{BGV} have introduced a generalization of the Friedrichs angle to more than two subspaces. Although we want to show closedness of sums of arbitrarily many Fock spaces, an inductive argument using the classical definition for two subspaces seems to be more feasible in our case. As a first application, we exhibit two radical homogeneous ideals $I,J \subset \mathbb C[z]$ such that $\mathcal F_I + \mathcal F_J$ is not closed. 
When the ideals are not necessarily radical, an example of this phenomenon is also given by Shalit's example of a set of polynomials which is not a stable generating set, see \cite[Example 2.6]{shalit}. \begin{exa} \label{exa:sum_not_closed} Let $I = \langle y^2 + x z \rangle$ and $J=\langle x \rangle$ in $\mathbb C[x,y,z]$. We claim that $\mathcal F_I + \mathcal F_J$ is not closed. Since for two closed subspaces $M$ and $N$ of a Hilbert space $H$, closedness of $M+N$ is equivalent to closedness of $M^\bot + N^\bot$ (see for example \cite[Theorem 13]{deutsch95}), it suffices to show that $\overline{I} + \overline{J}$ is not closed. To this end, we set for $n \ge 2$ \begin{equation*} f_n = z^{n-2} (y^2+xz) \quad \text{ and } \quad g_n = z^{n-1} x. \end{equation*} Clearly, $f_n \in I$ and $g_n \in J$ for all $n$. Using that different monomials in $H^2_d$ are orthogonal, one easily checks that all $f_n$ and $g_n$ are orthogonal to $I \cap J = \langle x^2 z + x y^2 \rangle$, so they are orthogonal to $\overline{I} \cap \overline{J} = \overline{I \cap J}$ (see Lemma \ref{lem:ideal_dec_F}) as well. Moreover, a straightforward calculation with the monomial norms $||z^\alpha||^2_{H^2_d} = \alpha! / |\alpha|!$ yields \begin{equation*} ||f_n||^2 = \frac{n+1}{n (n-1)} \quad \text{ and } \quad \langle f_n,g_n \rangle = ||g_n||^2 = \frac{1}{n}. \end{equation*} Consequently, \begin{equation*} \frac{\langle f_n,g_n \rangle}{||f_n|| \, ||g_n||} = \sqrt{\frac{n-1}{n+1}} \xrightarrow{n \to \infty} 1, \end{equation*} from which we conclude that $c(\overline{I}, \overline{J})=1$, so that $\overline{I} + \overline{J}$ is not closed by Lemma \ref{it:angle_standard_prop_sum_closed}. \end{exa} Let $H$ be a Hilbert space which is graded in the sense that $H$ is the orthogonal direct sum \begin{equation*} H = \bigoplus_{n \in \mathbb N} H_n \end{equation*} for some Hilbert spaces $H_n$. Denote the orthogonal projection from $H$ to $H_n$ by $P_n$. We say that a closed subspace $M \subset H$ is graded if $P_n P_M = P_M P_n$ for all $n \in \mathbb N$. Equivalently, \begin{equation*} M = \bigoplus_{n=0}^\infty M \cap H_n. \end{equation*} Note that $M$ is graded if and only if $P_M$ belongs to the commutant of $\{P_n: n \in \mathbb N\}$, which is a von Neumann algebra. In particular, if $M,N \subset H$ are graded, then $\overline{M+N}$ and $M \cap N$ are graded as well. The most important examples of graded Hilbert spaces in our case are full Fock spaces and sums thereof. The angle between two graded subspaces can easily be expressed in terms of the angles between their graded components by the following formula. \begin{lem} \label{lem:angle_graded_hilb_space} Let $H=\bigoplus_{n=0}^\infty H_n$ be a graded Hilbert space and let $M,N \subset H$ be graded subspaces. Write $M_n = M \cap H_n$ and $N_n = N \cap H_n$ for $n \in \mathbb N$. Then \begin{equation*} c(M,N) = \sup_{n \in \mathbb N} c(M_n,N_n). \end{equation*} \end{lem} \begin{proof} The assertion readily follows from Lemma \ref{it:angle_standard_prop_square} and the fact that for any graded subspace $K \subset H$, we have \begin{equation*} P_{K} = \bigoplus_{n=0}^\infty P_{K \cap H_n}^{H_n}, \end{equation*} where $P_{K \cap H_n}^{H_n}$ denotes the orthogonal projection from $H_n$ onto $K \cap H_n$. \end{proof} If each of the spaces $H_n$ in the preceding lemma is finite dimensional, then $c(M_n,N_n) < 1$ for all $n \in \mathbb N$. This can easily be seen from the definition of the Friedrichs angle, or, alternatively, it follows as an application of Lemma \ref{it:angle_standard_prop_sum_closed}.
In particular, $M+N$ is closed if and only if $\limsup_{n \to \infty} c(M_n,N_n) < 1$. That is, closedness of $M+N$ only depends on the asymptotic behaviour of the sequence $(c(M_n,N_n))_n$. Inspired by condition 7 in \cite[Theorem 2.3]{BGV}, we will now introduce a variant of the Friedrichs angle which reflects this fact. For a closed subspace $M$ of a Hilbert space $H$, we denote the equivalence class of $P_M$ in the Calkin algebra by $p_M$. \begin{defn} Let $H$ be a Hilbert space and let $M,N \subset H$ be closed subspaces. The essential Friedrichs angle is defined to be the angle in $[0,\frac{\pi}{2}]$ whose cosine is \begin{equation*} c_e(M,N) = ||p_M p_N - p_{M \cap N}||. \end{equation*} \end{defn} Parts (a) and (b) of Lemma \ref{lem:angle_standard_prop} also hold with $c_e$ in place of $c$. \begin{lem} \label{lem:ess_angle_standard_prop} Let $H$ be a Hilbert space and let $M,N \subset H$ be closed subspaces. \begin{enumerate}[label=\normalfont{(\alph*)},ref={\thelem~(\alph*)}] \item $c_e(M,N) = c_e(M \ominus (M \cap N), N \ominus (M \cap N))$. \label{it:ess_angle_standard_prop_disjoint} \item $c_e(M,N)^2 = ||p_N p_M p_N - p_{M \cap N}||$. \label{it:ess_angle_standard_prop_square} \end{enumerate} \end{lem} \begin{proof} (a) follows from the identity \begin{equation*} (P_M - P_{M \cap N}) ( P_N - P_{M \cap N}) = P_M P_N - P_{M \cap N}, \end{equation*} while (b) is again an application of the $C^*$-identity, see the proof of Lemma \ref{lem:angle_standard_prop}. \end{proof} To determine if $M+N$ is closed, the essential Friedrichs angle is just as good as the usual one, that is, part (c) of Lemma \ref{lem:angle_standard_prop} holds with $c_e$ in place of $c$ as well. This follows from \cite[Theorem 2.3]{BGV}. For the convenience of the reader, a short proof is provided below. First, we record a simple lemma. \begin{lem} \label{lem:proj_intersection_point_spectrum} Let $H$ be a Hilbert space and let $M_1 ,\ldots , M_r \subset H$ be closed subspaces. Define $T = P_{M_1} P_{M_2} \ldots P_{M_r}$ and $M = M_1 \cap \ldots \cap M_r$. \begin{enumerate}[label=\normalfont{(\alph*)},ref={\thelem~(\alph*)}] \item $\ker(1-T^* T) = M$. \label{it:proj_intersection_point_spectrum_kernel} \item If $\dim H < \infty$, then $||T|| = 1$ if and only if $M \neq \{0\}$. \label{it:proj_intersection_point_spectrum_fin_dim} \end{enumerate} \end{lem} \begin{proof} We first claim that a vector $x \in H$ satisfies $||T x|| = ||x||$ if and only if $x \in M$. We prove the non-trivial implication by induction on $r$. The case $r=1$ is clear. So suppose that $r \ge 2$ and that the assertion is true for $r-1$ subspaces. Let $x \in H$ such that $||T x|| = ||x||$. Setting $y = P_{M_2} \ldots P_{M_r} x$, we have \begin{equation*} ||x|| = ||P_{M_1} y|| \le ||y|| \le ||x||, \end{equation*} hence $y \in M_1$ and $||P_{M_2} \ldots P_{M_r} x|| = ||x||$. The inductive hypothesis implies that $x \in M_2 \cap \ldots \cap M_r$, and thus also $x = y \in M_1$, which finishes the proof of the claim. Both assertions easily follow from this observation. Clearly, $M$ is contained in $\ker(1-T^* T)$. Conversely, any $x \in \ker(1-T^* T)$ satisfies $||x||^2 = ||T x||^2$, so that $x \in M$ by the above remark, which proves (a). Part (b) is immediate from the claim as well, since $||T||$ is attained if $H$ is finite dimensional. \end{proof} \begin{lem} \label{lem:sum_closed_ess_angle} Let $H$ be a Hilbert space and let $M,N \subset H$ be closed subspaces. Then $M+N$ is closed if and only if $c_e(M,N) < 1$. 
\end{lem} \begin{proof} In view of Lemma \ref{it:angle_standard_prop_sum_closed}, it is sufficient to show that $c(M,N) < 1$ if $c_e(M,N) < 1$, since $c_e(M,N) \le c(M,N)$ holds trivially. To this end, we can assume without loss of generality that $M \cap N = \{0\}$ by Lemma \ref{it:angle_standard_prop_disjoint} and Lemma \ref{it:ess_angle_standard_prop_disjoint}. Then $||P_N P_M P_N||_e < 1$, so $T=1-P_N P_M P_N$ is a self-adjoint Fredholm operator. Lemma \ref{it:proj_intersection_point_spectrum_kernel} implies that $T$ is injective, from which we conclude that $T$ is invertible. It follows that $1 \not \in \sigma(P_N P_M P_N)$, and hence that $c(M,N)^2 = ||P_N P_M P_N|| < 1$. \end{proof} For graded subspaces, we obtain a more concrete description of the essential Friedrichs angle, which gives another proof for the preceding lemma in the graded case. In particular, we see that the essential Friedrichs angle indeed only depends on the asymptotic behaviour of the Friedrichs angles between the graded components. \begin{lem} \label{lem:ess_angle_limsup} Let $H = \bigoplus_{n=0}^\infty H_n$ be a graded Hilbert space, where all $H_n$ are finite dimensional, and let $M,N \subset H$ be graded subspaces. Write $M_n = M \cap H_n$ and $N_n = N \cap H_n$ for $n \in \mathbb N$. Then \begin{equation*} c_e(M,N) = \limsup_{n \to \infty} c(M_n, N_n). \end{equation*} \end{lem} \begin{proof} Let $\varepsilon > 0 $ be arbitrary. By definition of $c_e$, there is a compact operator $K$ on $H$ such that \begin{equation*} ||P_M P_N - P_{M \cap N} + K|| \le c_e(M,N) + \varepsilon. \end{equation*} It is easy to see that $\lim_{n \to \infty} ||P_n K P_n|| = 0$. Furthermore, \begin{align*} c(M_n, N_n) &= ||P_n (P_M P_N - P_{M \cap N}) P_n|| \\ &\le ||P_n (P_M P_N - P_{M \cap N} + K) P_n|| + ||P_n K P_n|| \\ &\le c_e(M,N) + \varepsilon + ||P_n K P_n||, \end{align*} so $\limsup_{n \to \infty} c(M_n,N_n) \le c_e(M,N)$. Conversely, for any $k \in \mathbb N$, the operator \begin{equation*} K= \bigoplus_{n=0}^k P_n (P_M P_N - P_{M \cap N}) P_n \end{equation*} has finite rank, and \begin{equation*} P_M P_N - P_{M \cap N} -K = \bigoplus_{n=k+1}^\infty P_n(P_M P_N -P_{M \cap N}) P_n. \end{equation*} Hence \begin{equation*} c_e(M,N) \le ||P_M P_N - P_{M \cap N} -K|| = \sup_{n \ge k+1} c(M_n,N_n) \end{equation*} for all natural numbers $k$, which establishes the reverse inequality. \end{proof} \begin{rem*} If $T$ is an operator on a Hilbert space $H$, the infimum \begin{equation*} \inf \{ ||T + K||: K \in \mathcal K(H) \} \end{equation*} is always attained \cite{holmes}. In particular, we can choose an operator $K$ in the first part of the above proof such that $||P_M P_N -P_{M \cap N} + K|| = c_e(M,N)$. \end{rem*} \section{Reduction to subspaces with trivial joint intersection} Let $V_1,\ldots,V_r$ be subspaces of $\mathbb C^d$. In this section, we will reduce the problem of showing closedness of the sum of Fock spaces $\mathcal F(V_1) + \ldots + \mathcal F(V_r) \subset \fock{\mathbb C^d}$ to the case where $V_1 \cap \ldots \cap V_r = \{0\}$. Note that in \cite[Lemma 7.12]{davramshal}, Davidson, Ramsey and Shalit reduced the problem of showing boundedness of the map $f \mapsto f \circ A^*$ in the setting of unions of subspaces to the case where the joint intersection of the subspaces is trivial. However, in our situation, it does not suffice to consider only subspaces with trivial joint intersection.
The issue is that in the inductive proof of closedness of the sum of $r$ Fock spaces, we will use the inductive hypothesis on $r-1$ subspaces which do not necessarily have trivial joint intersection. We begin with two simple consequences of the Gelfand-Naimark theorem. \begin{lem} \label{lem:C_alg_gelfand_consequences} Let $\mathcal A$ be a unital $C^*$-algebra and let $a,b \in \mathcal A$ be self-adjoint elements. \begin{enumerate}[label=\normalfont{(\alph*)},ref={\thelem~(\alph*)}] \item If $a b= 0$, then $||a+b|| = \max(||a||,||b||)$. \label{it:C_alg_gelfand_consequences_prod_zero} \item Suppose that $a$ and $b$ commute and that $a \le b$. If $f$ is a continuous and increasing real-valued function on $\sigma(a) \cup \sigma(b)$, then $f(a) \le f(b)$. \label{it:C_alg_gelfand_consequences_increasing} \end{enumerate} \end{lem} \begin{proof} In both cases, the unital $C^*$-algebra generated by $a$ and $b$ is commutative. By the Gelfand-Naimark theorem, we can therefore regard $a$ and $b$ as real-valued functions on a compact Hausdorff space, where both assertions are elementary. \end{proof} \begin{lem} \label{lem:angle_4_sum_orth} Let $H$ be a Hilbert space and let $M_1,M_2,N_1,N_2 \subset H$ be closed subspaces with $M_1 \bot M_2, M_1 \bot N_2, M_2 \bot N_1, N_1 \bot N_2$. Then \begin{equation*} c(M_1 \oplus M_2, N_1 \oplus N_2) = \max( c(M_1,N_1),c(M_2,N_2)). \end{equation*} The same is true with $c_e$ in place of $c$. \end{lem} \begin{proof} The assertion can be shown using the definition of the Friedrichs angle or working with projections. The latter has the advantage of proving the claim for the essential Friedrichs angle at the same time. First, we note that the assumptions on the subspaces imply that \begin{equation*} (M_1 \oplus M_2) \cap (N_1 \oplus N_2) = (M_1 \cap N_1) \oplus (M_2 \cap N_2). \end{equation*} Indeed, if $m_1 + m_2 = n_1 + n_2$ is an element of the space on the left-hand side, with $m_i \in M_i$ and $n_i \in N_i$ for $i=1,2$, then $m_1 - n_1 = n_2 - m_2$, and the orthogonality relations show that this vector is zero. Hence $m_1 \in M_1 \cap N_1$ and $m_2 \in M_2 \cap N_2$, thus proving the non-trivial inclusion. Using the orthogonality relations once again, we conclude that \begin{align*} &P_{N_1 \oplus N_2} P_{M_1 \oplus M_2} P_{N_1 \oplus N_2} - P_{ (M_1 \oplus M_2) \cap (N_1 \oplus N_2)} \\ = (&P_{N_1}+ P_{N_2}) (P_{M_1} + P_{M_2})(P_{N_1} + P_{N_2}) - (P_{M_1 \cap N_1} + P_{M_2 \cap N_2}) \\ = (&P_{N_1} P_{M_1} P_{N_1} - P_{M_1 \cap N_1}) + ( P_{N_2} P_{M_2} P_{N_2} - P_{M_2 \cap N_2}). \end{align*} Since \begin{equation*} (P_{N_1} P_{M_1} P_{N_1} - P_{M_1 \cap N_1}) ( P_{N_2} P_{M_2} P_{N_2} - P_{M_2 \cap N_2}) = 0, \end{equation*} both assertions follow from Lemma \ref{it:C_alg_gelfand_consequences_prod_zero}. \end{proof} Tensoring with another Hilbert space does not make the angle worse. \begin{lem} \label{lem:angle_tensor_same_space} Let $H$ be a Hilbert space and let $M,N \subset H$ be closed subspaces. If $E$ is another non-trivial Hilbert space, then \begin{equation*} c(M \otimes E, N \otimes E) = c(M,N). \end{equation*} \end{lem} \begin{proof} First, note that $(M \cap N) \otimes E = (M \otimes E) \cap (N \otimes E)$. Since $P_{K \otimes E} = P_K \otimes P_E$ for any closed subspace $K \subset H$, we have \begin{align*} ||P_{M \otimes E} P_{N \otimes E} - P_{(M \otimes E) \cap (N \otimes E)}|| &= ||P_{M \otimes E} P_{N \otimes E} - P_{(M \cap N) \otimes E}|| \\ &= || (P_M P_N - P_{M \cap N} ) \otimes 1_E|| \\ &= ||P_M P_N - P_{ M \cap N}||. 
\qedhere \end{align*} \end{proof} We can now prove the main result of this section. It enables the desired reduction to subspaces with trivial joint intersection. \begin{lem} \label{lem:angle_several_sum_perp} \label{cor:Fock_closed_reduction} Let $V_1,\ldots,V_r \subset \mathbb C^d$ be subspaces and let $V= V_1 \cap \ldots \cap V_r \neq \{0\}$. Suppose that $\mathcal F(V_1) + \ldots + \mathcal F(V_{r-1})$ and $\mathcal F(V_1 \ominus V) + \ldots + \mathcal F(V_{r-1} \ominus V)$ are closed. Then $\mathcal F(V_1) + \ldots + \mathcal F (V_r)$ is closed if and only if $\mathcal F(V_1 \ominus V) + \ldots + \mathcal F(V_r \ominus V)$ is closed. \end{lem} \begin{proof} We claim that it suffices to prove the following assertion: If $W_1, \ldots, W_r \subset \mathbb C^d$ are subspaces, and if $E \subset \mathbb C^d$ is a non-trivial subspace that is orthogonal to each $W_i$, then \begin{equation}\begin{split} \label{eqn:sum_closed_red_formula} &c( (W_1 \oplus E)^{\otimes n} + \ldots + (W_{r-1} \oplus E)^{\otimes n} , (W_r \oplus E)^{\otimes n}) \\ = \quad &\max_{j=1,\ldots,n} c( W_1^{\otimes j} + \ldots + W_{r-1}^{\otimes j}, W_r^{\otimes j} ). \end{split}\end{equation} Indeed, setting $E=V$ and $W_i = V_i \ominus V$ for each $i$, we see from Lemma \ref{lem:angle_graded_hilb_space} and Lemma \ref{it:angle_standard_prop_sum_closed} that this assertion will prove the lemma. In fact, we will show that \begin{equation} \label{eqn:sum_closed_red_proof} \begin{split} &c \Big( \sum_{i=1}^{r-1} W_i^{\otimes k} \otimes (W_i \oplus E)^{\otimes n} , W_r^{\otimes k} \otimes (W_r \oplus E)^{\otimes n} \Big) \\ = \quad &\max_{j=k,\ldots,k+n} c \Big( \sum_{i=1}^{r-1} W_i^{\otimes j}, W_r^{\otimes j} \Big) \end{split} \end{equation} holds for all natural numbers $k$ and $n$. The assertion \eqref{eqn:sum_closed_red_formula} corresponds to the case $k=0$, with the usual convention $W^{\otimes 0} = \mathbb C$ for a subspace $W \subset \mathbb C^d$. We proceed by induction on $n$. If $n=0$, this is trivial. So suppose that $n \ge 1$ and that the assertion has been proved for $n-1$. First, we note that \begin{align*} &W_i^{\otimes k} \otimes (W_i \oplus E)^{\otimes n} \\ = \quad &\big( W_i^{\otimes k+1} \otimes (W_i \oplus E)^{\otimes n-1} \big) \oplus \big( W_i^{\otimes k} \otimes E \otimes (W_i \oplus E)^{\otimes n-1} \big), \end{align*} holds for all $i$. So defining \begin{align*} M_1 &= \sum_{i=1}^{r-1} W_i^{\otimes k+1} \otimes (W_i \oplus E)^{\otimes n-1} \quad \text{ and } \\ M_2 &= \sum_{i=1}^{r-1} W_i^{\otimes k} \otimes E \otimes (W_i \oplus E)^{\otimes n-1}, \end{align*} as well as \begin{align*} N_1 &= W_r^{\otimes k+1} \otimes (W_r \oplus E)^{\otimes n-1} \quad \text{ and } \\ N_2 &= W_r^{\otimes k} \otimes E \otimes (W_r \oplus E)^{\otimes n-1}, \end{align*} we have \begin{align*} \sum_{i=1}^{r-1} W_i^{\otimes k} \otimes (W_i \oplus E)^{\otimes n} &= M_1 + M_2 \quad \text{ and } \\ W_r^{\otimes k} \otimes (W_r \oplus E)^{\otimes n} &= N_1 + N_2. \end{align*} Since $E$ is orthogonal to each $W_i$, we see that $M_1 \bot M_2, M_1 \bot N_2, M_2 \bot N_1$ and $N_1 \bot N_2$. Consequently, Lemma \ref{lem:angle_4_sum_orth} applies to show that the left-hand side of \eqref{eqn:sum_closed_red_proof} equals \begin{equation*} \max( c(M_1,N_1), c(M_2,N_2)). \end{equation*} By induction hypothesis, \begin{equation*} c(M_1,N_1) = \max_{j=k+1, \ldots , k+n} c \Big( \sum_{i=1}^{r-1} W_i^{\otimes j}, W_r^{\otimes j} \Big). 
\end{equation*} Moreover, an application of Lemma \ref{lem:angle_tensor_same_space} combined with the inductive hypothesis shows that \begin{equation*} c(M_2,N_2) = \max_{j=k, \ldots, k+n-1} c \Big( \sum_{i=1}^{r-1} W_i^{\otimes j}, W_r^{\otimes j} \Big), \end{equation*} which finishes the proof. \end{proof} \begin{exa} With the formula derived in the proof of the preceding lemma, we can already determine the Friedrichs angle between two full Fock spaces. To begin with, suppose that $V_1$ and $V_2$ are two subspaces in $\mathbb C^{d}$ such that $V_1 \cap V_2 = \{0\}$. Then Lemma \ref{it:angle_standard_prop_square} yields for all natural numbers $n$ the identity \begin{equation*} c(V_1^{\otimes n}, V_2^{\otimes n}) = ||P_{V_1}^{\otimes n} P_{V_2}^{\otimes n}|| = ||P_{V_1} P_{V_2}||^n = c(V_1,V_2)^n. \end{equation*} Note that $c(V_1,V_2) < 1$ because $\mathbb C^d$ is finite dimensional. If $V_1 \cap V_2 \neq \{0\}$, we set $W_i = V_i \ominus (V_1 \cap V_2)$ for $i=1,2$. By formula \eqref{eqn:sum_closed_red_formula}, we have \begin{equation*} c(V_1^{\otimes n}, V_2^{\otimes n}) = \max_{j=1,\ldots,n} c(W_1^{\otimes j}, W_2^{\otimes j}) \end{equation*} for all $n$. Since $W_1$ and $W_2$ have trivial intersection, \begin{equation*} c(W_1^{\otimes j}, W_2^{\otimes j}) = c(W_1,W_2)^{j} = c(V_1,V_2)^j \end{equation*} by what we have just proved, so \begin{equation*} c(V_1^{\otimes n}, V_2^{\otimes n}) = c(V_1,V_2) \end{equation*} for all $n$. As an application of Lemma \ref{lem:angle_graded_hilb_space}, we see that in any case, \begin{equation*} c(\fock{V_1},\fock{V_2}) = c(V_1,V_2), \end{equation*} while Lemma \ref{lem:ess_angle_limsup} shows that \begin{equation*} c_e(\fock{V_1},\fock{V_2}) = \begin{cases} c(V_1,V_2), & \text{ if } V_1 \cap V_2 \neq \{0\}, \\ 0, & \text{ if } V_1 \cap V_2 = \{0\}. \end{cases} \end{equation*} In particular, we see that sums of \emph{two} Fock spaces are closed. \end{exa} We conclude this section with a lemma about the case of trivial joint intersection. In view of the definition of the essential Friedrichs angle, it indicates why the reduction to this case will be helpful. \begin{lem} \label{lem:proj_product_compact} Let $V_1,\ldots, V_r \subset \mathbb C^d$ be subspaces with $V_1 \cap \ldots \cap V_r = \{0\}$. Set $M_i = \mathcal F(V_i)$ for $i=1,\ldots, r$. Then $P_{M_1} \ldots P_{M_r}$ is a compact operator. \end{lem} \begin{proof} We note that for each $i$, \begin{equation*} P_{M_i} = \bigoplus_{n=0}^\infty P_{V_i}^{\otimes n}, \end{equation*} hence \begin{equation*} P_{M_1} \ldots P_{M_r} = \bigoplus_{n=0}^\infty (P_{V_1} \ldots P_{V_r})^{\otimes n}. \end{equation*} Since $V_1 \cap \ldots \cap V_r = \{0\}$, and since $\mathbb C^d$ is finite dimensional, $||P_{V_1} \ldots P_{V_r}|| < 1$ by Lemma \ref{it:proj_intersection_point_spectrum_fin_dim}. Therefore, \begin{equation*} ||(P_{V_1} \ldots P_{V_r})^{\otimes n}|| = ||(P_{V_1} \ldots P_{V_r})||^n \xrightarrow{n \to \infty} 0. \end{equation*} From this observation, it is easy to see that $P_{M_1} \ldots P_{M_r}$ is compact. \end{proof} \section{A closedness result} In this section, we will deduce a closedness result which will form the inductive step in the proof of our general result on the closedness of algebraic sums of $r$ Fock spaces. 
Because of Lemma \ref{lem:angle_several_sum_perp} and Lemma \ref{lem:proj_product_compact}, we will consider the following situation throughout this section: Let $r \ge 2$, and let $M_1,\ldots,M_r$ be closed subspaces of a Hilbert space $H$ which satisfy the following two conditions: \begin{enumerate}[label=\normalfont{(\alph*)}] \item Any algebraic sum of $r-1$ or fewer subspaces of the $M_i$ is closed, that is, for any subset $\{i_1,\ldots,i_k\} \subset \{1,\ldots,r\}$ with $k \le r-1$, the sum \begin{equation*} M_{i_1} + \ldots + M_{i_k} \end{equation*} is closed. \label{it:cond_a} \item Any product of the $P_{M_i}$ containing each $P_{M_i}$ at least once is compact, that is, for any collection of (not necessarily distinct) indices $i_1,\ldots,i_k$ with $\{i_1,\ldots,i_k \} = \{1,\ldots,r\}$, the operator \begin{equation*} P_{M_{i_1}} P_{M_{i_2}} \ldots P_{M_{i_k}} \end{equation*} is compact. \label{it:cond_b} \end{enumerate} Our goal is to show that under these assumptions, the sum $M_1 + \ldots + M_r$ is closed. Note that for $r=2$, the first condition is empty, while the second is equivalent to demanding that $P_{M_1} P_{M_2}$ be compact. Recall that for a closed subspace $M \subset H$, we denote the equivalence class of $P_M$ in the Calkin algebra by $p_M$. Moreover, we define $\mathcal A$ to be the unital $C^*$-subalgebra of the Calkin algebra generated by $p_{M_1}, \ldots,p_{M_r}$. The following proposition is the key step in proving that the sum $M_1 + \ldots + M_r$ is closed. It crucially depends on condition \ref{it:cond_b}. \begin{prop} \label{prop:special_representation} For any irreducible representation $\pi$ of $\mathcal A$ on a Hilbert space $K$, there is an $i \in \{1,\ldots,r\}$ such that $\pi(p_{M_i}) = 0$. In particular, there are representations $\pi_1, \ldots, \pi_r$ of $\mathcal A$ such that $\pi_i(p_{M_i}) = 0$ for each $i$, and such that $\pi = \bigoplus_{i=1}^r \pi_i$ is a faithful representation of $\mathcal A$. \end{prop} \begin{proof} We write $p_i = p_{M_i}$. Suppose that $\pi(p_2), \ldots , \pi(p_r)$ are all non-zero. We have to prove that $\pi(p_1) = 0$. First, note that by condition \ref{it:cond_b}, \begin{equation} \label{eqn:irred_rep} \pi(p_1 a_1 p_2 a_2 \ldots a_{r-1} p_r) = 0 \end{equation} holds if each of the $a_i$ is a monomial in the $p_j$. By linearity and continuity, \eqref{eqn:irred_rep} therefore holds for all $a_1, \ldots, a_{r-1} \in \mathcal A$. Since $\pi$ is irreducible, and since $\pi(p_r) \neq 0$, we have \begin{equation*} \bigvee_{a_{r-1} \in \mathcal A} \pi(a_{r-1} p_r) K = K. \end{equation*} Consequently, \eqref{eqn:irred_rep} implies that $\pi(p_1 a_1 p_2 a_2 \ldots a_{r-2} p_{r-1})=0$. Iterating this process yields the conclusion $\pi(p_1) = 0$, as desired. To establish the additional assertion, let $\pi_i$ be the direct sum of all irreducible GNS representations $\pi_f$ with $\pi_f(p_i)=0$, which is understood to be zero if there are no such representations. Then $\pi = \bigoplus_{i=1}^r \pi_i$ contains every irreducible GNS representation of $\mathcal A$ as a summand by the first part, and is therefore faithful. \end{proof} We will use the preceding proposition to get a good estimate of the essential Friedrichs angle \begin{equation*} c_e(M_1 + \ldots + M_{r-1}, M_r) = ||p_{M_1 + \ldots + M_{r-1}} p_{M_r} - p_{(M_1 + \ldots + M_{r-1}) \cap M_r}||. \end{equation*} To this end, we have to make sure that all occurring elements belong to $\mathcal A$. Part of this is done by the following lemma. 
\begin{lem} \label{lem:sum_closed_spectrum_and_algebra_membership} Let $H$ be a Hilbert space and let $M,N,N_1,\ldots,N_s \subset H$ be closed subspaces. \begin{enumerate}[label=\normalfont{(\alph*)},ref={\thelem~(\alph*)}] \item The algebraic sum $N_1 + \ldots + N_s$ is closed if and only if $0$ is not a cluster point of the spectrum of the positive operator $P_{N_1} + \ldots + P_{N_s}$. In this case, the image of the operator $P_{N_1} + \ldots + P_{N_s}$ equals $N_1 + \ldots + N_s$. \label{it:sum_closed_spectrum_and_algebra_membership_spectrum} \item If $N_1 + \ldots + N_s$ is closed, then \begin{equation*} P_{N_1 + \ldots + N_s} = \chi_{(0,\infty)} (P_{N_1} + \ldots + P_{N_s}), \end{equation*} where $\chi_{(0,\infty)}$ denotes the indicator function of $(0,\infty)$. In particular, the projection $P_{N_1 + \ldots + N_s}$ belongs to the $C^*$-algebra generated by $P_{N_1}, \ldots, P_{N_s}$. \label{it:sum_closed_spectrum_and_algebra_membership_sum} \item $M+N$ is closed if and only if the sequence $((P_{M} P_{N} P_{M})^n)_n$ converges in norm to $P_{M \cap N}$. In particular, if $M+N$ is closed, then $P_{M \cap N}$ belongs to the $C^*$-algebra generated by $P_{M}$ and $P_{N}$. \label{it:sum_closed_spectrum_and_algebra_membership_intersection} \end{enumerate} \end{lem} \begin{proof} (a) Consider the continuous operator \begin{equation*} T: \bigoplus_{i=1}^s N_i \to H, \quad (x_i)_{i=1}^s \mapsto \sum_{i=1}^s x_i. \end{equation*} Clearly, the image of $T$ equals $N_1 + \ldots + N_s$. Consequently, this sum is closed if and only if the image of $T$ is closed, which, in turn, happens if and only if the image of $T^*$ is closed. It is easy to check that $T^*$ is given by $T^* x = (P_{N_1} x, \ldots, P_{N_s} x)$, so $T T^* = P_{N_1} + \ldots + P_{N_s}$. Hence the assertion follows from the general fact that the range of an operator $S$ is closed if and only if $0$ is not a cluster point of $\sigma(S^* S)$. The additional claim is now obvious. (b) Part (a) shows that the restriction of $\chi_{(0,\infty)}$ to $\sigma(P_{N_1} + \ldots + P_{N_s})$ is continuous, so \begin{equation*} P = \chi_{(0,\infty)} (P_{N_1} + \ldots + P_{N_s}) \end{equation*} belongs to the $C^*$-algebra generated by $P_{N_1}, \ldots, P_{N_s}$. By standard properties of the functional calculus, $P$ is the orthogonal projection onto the range of $P_{N_1} + \ldots + P_{N_s}$, which is $N_1 + \ldots + N_s$. (c) For any $n \in \mathbb N$, we have \begin{equation*} ||(P_M P_N P_M)^n - P_{M \cap N}|| = || (P_M P_N P_M - P_{M \cap N})^n|| = c(M,N)^{2 n}, \end{equation*} which converges to zero if and only if $c(M,N) < 1$. This, in turn, is equivalent to $M+N$ being closed by Lemma \ref{it:angle_standard_prop_sum_closed}. \end{proof} \begin{rem} \label{rem:alternating_projections} Statement (c) in the preceding lemma is just part of a bigger picture: For any closed subspaces $M,N \subset H$, the sequence $((P_M P_N)^n)_n$ (and hence also $((P_M P_N P_M)^n)_n = ((P_M P_N)^n P_M)_n$) converges in the strong operator topology to $P_{M \cap N}$, and the convergence is in norm if and only if $M+N$ is closed, see for example \cite[Section 3]{deutsch95}. \end{rem} Because of condition \ref{it:cond_a}, the preceding lemma shows that $p_{M_1 + \ldots + M_{r-1}} \in \mathcal A$. If $r \ge 3$, we define for $i=1,\ldots,r-1$ \begin{equation*} S_i = M_1 + \ldots + \widehat{M_i} + \ldots + M_{r-1}, \end{equation*} where $\widehat M_i$ stands for omission of $M_i$. If $r=2$, this is understood to be the zero vector space.
Note that $S_i$ is a sum of $r-2$ subspaces for $r \ge 3$. Thus another application of Lemma \ref{lem:sum_closed_spectrum_and_algebra_membership} shows that $p_{S_i}$ and $p_{S_i \cap M_r}$ belong to $\mathcal A$. However, care must be taken when using Proposition \ref{prop:special_representation} to estimate $c_e(M_1 + \ldots + M_{r-1}, M_r)$ since it is not obvious a priori that $p_{(M_1 + \ldots + M_{r-1}) \cap M_r}$ lies in $\mathcal A$. Before we address this question, we record the following simple lemma for future reference. \begin{lem} \label{lem:special_rep_proj_sum} Let $\pi= \bigoplus_{i=1}^r \pi_i$ be the representation from Proposition \ref{prop:special_representation}. Then for $i=1,\ldots,r-1$, \begin{equation*} \pi_i (p_{M_1 + \ldots + M_{r-1}}) = \pi_i (p_{S_i}). \end{equation*} \end{lem} \begin{proof} Let $a = p_{M_1} + \ldots + p_{M_{r-1}}$ and $b = p_{M_1} + \ldots + \widehat{p_{M_i}} + \ldots + p_{M_{r-1}}$ (if $r=2$, we set $b=0$). By condition \ref{it:cond_a} and Lemma \ref{lem:sum_closed_spectrum_and_algebra_membership}, the origin is neither a cluster point of $\sigma(a)$ nor one of $\sigma(b)$, and \begin{equation*} \chi_{(0,\infty)} (a) = p_{M_1 + \ldots + M_{r-1}} \quad \text{ and } \quad \chi_{(0,\infty)} (b) = p_{S_i}. \end{equation*} The assertion therefore follows from the identity $\pi_i(a) = \pi_i(b)$ and the fact that the continuous functional calculus is compatible with $*$-homomorphisms. \end{proof} The question whether $p_{(M_1 + \ldots + M_{r-1}) \cap M_r}$ belongs to $\mathcal A$ is more difficult. We will see below that it can well happen that for subspaces $M$ and $N$ of a Hilbert space $H$, the projection $p_{M \cap N}$ does not belong to the unital $C^*$-algebra generated by $p_M$ and $p_N$. Moreover, although there is a criterion for the closedness of $M+N$ only in terms of $P_M$ and $P_N$, namely $M+N$ is closed if and only if the sequence $((P_M P_N)^n)_n$ is a Cauchy sequence in norm (see Remark \ref{rem:alternating_projections}), there cannot be such a criterion only in terms of $p_M$ and $p_N$. \begin{exa} A concrete example of two closed subspaces $M$ and $N$ of a Hilbert space $H$ such that $M+N$ is not closed can be obtained as follows (compare the discussion preceding Problem 52 in \cite{halmos-hspb}): Take a continuous linear operator $T$ on $H$ with non-closed range, and let $M$ be the graph of $T$, that is, \begin{equation*} M = \{ (x, Tx): x \in H \} \subset H \oplus H. \end{equation*} Set $N = H \oplus \{0\}$. Then $M$ and $N$ are closed, but \begin{equation*} M + N = H \oplus \ran(T) \end{equation*} is not closed. Suppose now that $T$ is additionally self-adjoint and compact. It is easy to check that the projection onto $M$ is given by \begin{equation*} P_M = \begin{pmatrix} (1+T^2)^{-1} & T ( 1+T^2)^{-1} \\ T(1+T^2)^{-1} & T^2 (1+T^2)^{-1} \end{pmatrix}. \end{equation*} Clearly, \begin{equation*} P_{N} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}. \end{equation*} However, the equivalence classes $p_{M}$ and $p_N$ of these projections in the Calkin algebra are the same. In particular, we see that there cannot be a criterion for the closedness of $M+N$ only in terms of $p_M$ and $p_N$. Moreover, $M \cap N = \ker(T) \oplus \{0\}$, so \begin{equation*} P_{M \cap N} = \begin{pmatrix} P_{\ker(T)} & 0 \\ 0 & 0 \end{pmatrix}. \end{equation*} Hence, if both $\ker(T)$ and $H \ominus \ker(T)$ are infinite dimensional, $p_{M \cap N}$ does not belong to the unital $C^*$-algebra generated by $p_M$ and $p_N$. 
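The obstruction can be previewed numerically before passing to a concrete infinite-dimensional instance: in finite-dimensional truncations of this construction every sum is closed, but the Friedrichs angle $c(M,N)$ tends to $1$ as the dimension grows, so that the norm convergence in Remark \ref{rem:alternating_projections} becomes arbitrarily slow. The following sketch (a numerical illustration only, not part of the argument; the truncation and the diagonal entries $a_k = 1/k$ are our choices) uses the block formula for $P_M$ displayed above:
\begin{verbatim}
import numpy as np

def truncated_graph_example(m):
    # Truncate to R^m (+) R^m with T = diag(1, 1/2, ..., 1/m),
    # M the graph of T, and N = R^m (+) {0}.
    a = 1.0 / np.arange(1, m + 1)
    T = np.diag(a)
    S = np.linalg.inv(np.eye(m) + T @ T)   # (1 + T^2)^{-1}
    Z = np.zeros((m, m))
    P_M = np.block([[S, T @ S], [T @ S, T @ T @ S]])
    P_N = np.block([[np.eye(m), Z], [Z, Z]])
    # M and N intersect trivially here, so P_{M cap N} = 0 and the
    # Friedrichs angle is c(M, N) = ||P_M P_N|| (spectral norm).
    c = np.linalg.norm(P_M @ P_N, 2)
    decay = np.linalg.norm(np.linalg.matrix_power(P_M @ P_N @ P_M, 10), 2)
    return c, decay

for m in (4, 16, 64):
    c, decay = truncated_graph_example(m)
    print(m, c, decay, c ** 20)   # decay equals c^(2n) with n = 10; c -> 1
\end{verbatim}
In accordance with Lemma \ref{it:sum_closed_spectrum_and_algebra_membership_intersection}, the printed norms decay exactly like $c(M,N)^{2n}$, and since $c(M,N) \to 1$ with the truncation size, no uniform rate survives in the limit.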
For a concrete example, set $H = \ell^2(\mathbb N)$, choose a null sequence $(a_n)_n$ of real numbers with infinitely many zero and infinitely many non-zero terms, and let $T$ be componentwise multiplication with $(a_n)_n$. \end{exa} In the presence of conditions \ref{it:cond_a} and \ref{it:cond_b}, the situation is better. \begin{lem} \label{lem:proj_in} Under the above hypotheses, $p_{(M_1 + \ldots + M_{r-1}) \cap M_r} \in \mathcal A$. Moreover, if $\pi = \bigoplus_{i=1}^r \pi_i$ is the faithful representation from Proposition \ref{prop:special_representation}, we have \begin{equation*} \pi_i (p_{(M_1 + \ldots + M_{r-1}) \cap M_r}) = \pi_i (p_{S_i \cap M_r}), \end{equation*} for $i=1,\ldots,r-1$, and $\pi_r(p_{(M_1 + \ldots + M_{r-1})\cap M_r})=0$. \end{lem} \begin{proof} If $r=2$, condition \ref{it:cond_b} asserts that $P_{M_1} P_{M_2}$ is a compact operator. Since $P_{M_1 \cap M_2} = P_{M_1} P_{M_2} P_{M_1 \cap M_2}$, we conclude that $p_{M_1 \cap M_2} = 0$, so the statement is trivial for $r = 2$. Now, let us assume that $r \ge 3$ and define \begin{equation*} S = M_1 + \ldots + M_{r-1}. \end{equation*} In a first step, we show that the sequence $((p_{M_r} p_{S} p_{M_r})^n)_n$ converges to an element $q_u \in \mathcal A$ with $q_u \ge p_{S \cap M_r}$. To this end, let $\pi = \bigoplus_{i=1}^r \pi_i$ be the faithful representation from Proposition \ref{prop:special_representation}. By Lemma \ref{lem:special_rep_proj_sum}, we have $\pi_i (p_S) = \pi_i(p_{S_i})$ for each $i$. Since $S_i + M_r$ is closed, Lemma \ref{it:sum_closed_spectrum_and_algebra_membership_intersection} shows that for $i=1,\ldots,r-1$, \begin{equation} \label{eqn:lem_proj_in_1} \pi_i \big( (p_{M_r} p_{S} p_{M_r})^n \big) = \pi_i \big( (p_{M_r} p_{S_i} p_{M_r})^n \big) \xrightarrow{n \to \infty} \pi_i (p_{S_i \cap M_r}). \end{equation} Clearly, $\pi_r (p_{M_r} p_S p_{M_r}) = 0$. Since $\pi = \bigoplus_{i=1}^r \pi_i$ is a faithful representation, we conclude that $ ((p_{M_r} p_S p_{M_r})^n)_n$ is a Cauchy sequence in $\mathcal A$. Denoting its limit by $q_u$, we see from \begin{equation*} (p_{M_r} p_S p_{M_r})^n - p_{S \cap M_r} = (p_{M_r} p_S p_{M_r} - p_{S \cap M_r})^n \ge 0 \end{equation*} for all $n \in \mathbb N$ that $q_u \ge p_{S \cap M_r}$. The next step is to prove that $0$ is not a cluster point of the spectrum of the positive element $a=p_{S_1 \cap M_r} + \ldots + p_{S_{r-1} \cap M_r} \in \mathcal A$, and that \begin{equation*} q_l = \chi_{(0,\infty)} (a) \le p_{S \cap M_r}. \end{equation*} To this end, we fix an $i \in \{1,\ldots,r-1\}$, and for $j=1,\ldots,r-1$ with $j \neq i$, we set \begin{equation*} N_j = M_1 + \ldots + \widehat M_i + \ldots + \widehat M_j + \ldots + M_{r-1} \subset S_i, \end{equation*} which is understood as the zero vector space if $r=3$. Clearly, $N_j$ is closed by condition \ref{it:cond_a}. Then $p_{N_j} \in \mathcal A$, and just as in the proof of Lemma \ref{lem:special_rep_proj_sum}, we see that $\pi_i(p_{S_j}) = \pi_i(p_{N_j})$. Since $N_j + M_r$ and $S_j + M_r$ are closed by condition \ref{it:cond_a}, an application of Lemma \ref{it:sum_closed_spectrum_and_algebra_membership_intersection} yields that $p_{N_j \cap M_r}$ belongs to $\mathcal A$ and that $\pi_i( p_{S_j \cap M_r}) = \pi_i(p_{N_j \cap M_r})$. Therefore, \begin{equation*} \pi_i(a) = \pi_i (p_{N_{1} \cap M_r} + \ldots + p_{N_{i-1} \cap M_r} + p_{S_i \cap M_r} + p_{N_{i+1} \cap M_r} + \ldots + p_{N_{r-1} \cap M_r}). 
\end{equation*} Using the fact that the algebraic sum \begin{equation*} N_1 \cap M_r + \ldots + N_{i-1} \cap M_r + S_i \cap M_r + N_{i+1} \cap M_r + \ldots + N_{r-1} \cap M_r \end{equation*} equals $S_i \cap M_r$ and is therefore evidently closed, we conclude with the help of Lemma \ref{it:sum_closed_spectrum_and_algebra_membership_spectrum} that $0$ is not a cluster point of $\sigma( \pi_i(a))$, and that \begin{equation} \label{eqn:lem_proj_in_2} \chi_{(0,\infty)} (\pi_i(a)) = \pi_i (p_{S_i \cap M_r}). \end{equation} Since $\pi_r(a) = 0$, and since $\pi = \bigoplus_{i=1}^r \pi_i$ is a faithful representation of $\mathcal A$, it follows that $0$ is not a cluster point of $\sigma(a)$. Thus, we can define \begin{equation*} q_l = \chi_{(0,\infty)}(a) \in \mathcal A. \end{equation*} To prove the asserted inequality, we note that $a \le (r-1) \, p_{S \cap M_r}$, and that $a$ and $p_{S \cap M_r}$ commute. Hence Lemma \ref{it:C_alg_gelfand_consequences_increasing} shows that \begin{equation*} q_l \le \chi_{(0,\infty)} ( (r-1) \, p_{S \cap M_r}) = p_{S \cap M_r}. \end{equation*} We have established the following situation so far: \begin{equation*} q_l \le p_{(M_1 + \ldots + M_{r-1}) \cap M_r} \le q_u, \end{equation*} and $q_l$ and $q_u$ belong to $\mathcal A$. We now finish the proof of $p_{(M_1 + \ldots + M_{r-1}) \cap M_r} \in \mathcal A$ by showing that $q_l = q_u$. Using once again the representation from Proposition \ref{prop:special_representation}, it suffices to show that $\pi_i(q_l) = \pi_i(q_u)$ for $i=1,\ldots,r$. This is obvious for $i=r$, because $\pi_r(q_l) = 0 = \pi_r(q_u)$. So let $i \in \{1,\ldots,r-1\}$. According to equation \eqref{eqn:lem_proj_in_1}, we have $\pi_i(q_u) = \pi_i(p_{S_i \cap M_r})$, while equation \eqref{eqn:lem_proj_in_2} shows that $\pi_i(q_l) = \pi_i (p_{S_i \cap M_r})$, as desired. The additional assertion is now obvious. \end{proof} We are now in the position to prove the main theorem of this section. \begin{thm} \label{thm:sum_closed} Let $H$ be a Hilbert space, let $r \ge 2$ and let $M_1, \ldots ,M_r \subset H$ be closed subspaces such that the following two conditions hold: \begin{enumerate}[label=\normalfont{(\alph*)}] \item Any algebraic sum of $r-1$ or fewer subspaces of the $M_i$ is closed, that is, for any subset $\{i_1,\ldots,i_k\} \subset \{1,\ldots,r\}$ with $k \le r-1$, the sum \begin{equation*} M_{i_1} + \ldots + M_{i_k} \end{equation*} is closed. \item Any product of the $P_{M_i}$ containing each $P_{M_i}$ at least once is compact, that is, for any collection of (not necessarily distinct) indices $i_1,\ldots,i_k$ with $\{i_1,\ldots,i_k \} = \{1,\ldots,r\}$, the operator \begin{equation*} P_{M_{i_1}} P_{M_{i_2}} \ldots P_{M_{i_k}} \end{equation*} is compact. \end{enumerate} Then the algebraic sum $M_1 + \ldots + M_r$ is closed. \end{thm} \begin{proof} As above, let $\mathcal A$ be the unital $C^*$-algebra generated by $p_{M_1}, \ldots, p_{M_r}$, and let $\pi = \bigoplus_{i=1}^r \pi_i$ be the faithful representation from Proposition \ref{prop:special_representation}. By the discussion preceding Lemma \ref{lem:special_rep_proj_sum}, the elements $p_{S_i}$ and $p_{S_i \cap M_r}$, as well as $p_{M_1 + \ldots + M_{r-1}}$, all belong to $\mathcal A$ for $i=1,\ldots,r-1$. According to Lemma \ref{lem:proj_in}, this is also true for $p_{(M_1 + \ldots + M_{r-1}) \cap M_r}$, and \begin{equation*} \pi_i (p_{(M_1 + \ldots + M_{r-1}) \cap M_r}) = \pi_i(p_{S_i \cap M_r}) \quad \text{ for } i=1,\ldots,r-1. 
\end{equation*} Moreover, for these $i$, we have $\pi_i (p_{M_1 + \ldots + M_{r-1}}) = \pi_i (p_{S_i})$ by Lemma \ref{lem:special_rep_proj_sum}. Combining these results, we obtain \begin{align*} ||\pi_i(p_{M_1 + \ldots + M_{r-1}} p_{M_r} - p_{(M_1 + \ldots + M_{r-1}) \cap M_r})|| &= ||\pi_i(p_{S_i} p_{M_r} - p_{S_i \cap M_r})|| \\ &\le c_e(S_i,M_r). \end{align*} Since $\pi_r (p_{M_r}) = 0 = \pi_r(p_{(M_1 + \ldots + M_{r-1})\cap M_r})$, we conclude that \begin{align*} c_e(M_1 + \ldots +M_{r-1}, M_r) &= ||p_{M_1 + \ldots + M_{r-1}} p_{M_r} - p_{(M_1 + \ldots + M_{r-1}) \cap M_r}|| \\ &\le \max_{1 \le i \le r-1} c_e(S_i,M_r) < 1 \end{align*} because $S_i + M_r$ is closed for each $i$ by condition \ref{it:cond_a}. Since $M_1 + \ldots + M_{r-1}$ is closed by condition \ref{it:cond_a} and the essential Friedrichs angle $c_e(M_1 + \ldots + M_{r-1}, M_r)$ is strictly less than one, the sum $M_1 + \ldots + M_r = (M_1 + \ldots + M_{r-1}) + M_r$ is closed. Indeed, an estimate $c_e(M,N) < 1$ for closed subspaces $M$ and $N$ forces $c(M,N) < 1$, since $1$ is never an eigenvalue of the positive operator $P_M P_N P_M - P_{M \cap N}$ and can therefore only belong to its spectrum by belonging to its essential spectrum. \end{proof} The desired result about sums of Fock spaces now follows by a straightforward inductive argument. \begin{cor} \label{cor:Fock_sum_closed} Let $V_1, \ldots, V_r \subset \mathbb C^d$ be subspaces. Then the algebraic sum \begin{equation*} \mathcal F(V_1) + \ldots + \mathcal F(V_r) \subset \mathcal F(\mathbb C^d) \end{equation*} is closed. \end{cor} \begin{proof} We prove the result by induction on $r$, noting that the case $r=1$ is trivial. So suppose that $r \ge 2$ and that the assertion has been proved for $k \le r-1$. In order to show that sums of $r$ Fock spaces $\mathcal F(V_1), \ldots, \mathcal F(V_r)$ are closed, it suffices to consider the case where \begin{equation*} V_1 \cap \ldots \cap V_r = \{0\} \end{equation*} by Corollary \ref{cor:Fock_closed_reduction}. Let $M_i = \mathcal F(V_i)$ for each $i$. As an application of Lemma \ref{lem:proj_product_compact}, we see that condition \ref{it:cond_b} of the preceding theorem is satisfied, whereas condition \ref{it:cond_a} holds by the inductive hypothesis. Thus the assertion follows from the preceding theorem. \end{proof} In the terminology of the second section, this result, combined with Lemma \ref{lem:full_Fock_good}, shows that every radical homogeneous ideal is admissible. Hence, Proposition \ref{prop:good_bounded_maps} and Corollary \ref{cor:good_algebra_iso} hold without the additional hypotheses on $I$ and $J$. We thus obtain the following generalization of \cite[Theorem 8.5]{davramshal}. \begin{thm} Let $I$ and $J$ be radical homogeneous ideals in $\polyring{d}$ and $\polyring{d'}$, respectively. The algebras $\mathcal A_I$ and $\mathcal A_J$ are isomorphic if and only if there exist linear maps $A: \mathbb C^{d'} \to \mathbb C^{d}$ and $B: \mathbb C^{d} \to \mathbb C^{d'}$ which restrict to mutually inverse bijections $A: Z(J) \to Z(I)$ and $B: Z(I) \to Z(J)$. \qed \end{thm} \begin{rem*} Using Corollary \ref{cor:good_algebra_iso} in place of \cite[Theorem 7.17]{davramshal}, we also see that the hypothesis of the ideals being tractable can be removed from Corollary 9.7 and Theorem 11.7 (b) in \cite{davramshal}. \end{rem*} \subsection*{Acknowledgements} The author wishes to thank Ken Davidson for valuable discussions, and for the kind hospitality provided during the author's stay at the University of Waterloo. Moreover, he is grateful to his Master's thesis advisor J\"org Eschmeier for his advice and support.
\section{Introduction} \label{sec:introduction} Self-similar solutions to systems of partial differential equations are valuable guides to complex physical problems. For instance, \cite{vonNeumann1941}, \cite{sedov1946} and \cite{1950RSPSA.201..159T} independently derived self-similar solutions for an energy-conserving point explosion in a cold homogeneous medium. These `Sedov-Taylor' blastwave solutions provide good descriptions for the early phase of a terrestrial explosion, and can also be used to describe the evolution of energetic supernovae \citep{1976ApJ...207..872C}. While particularly energetic supernovae can be modeled by the Sedov-Taylor blastwave, there is growing evidence -- both observationally \citep{2011ApJ...738..154H} and theoretically \citep{O_Connor_2011, ertl16} -- that not all core-collapse events result in high-energy or even successful explosions. In these situations, the kinetic energy behind the blast can be comparable to or less than the binding energy of the star, and the Sedov-Taylor solutions cannot accurately reproduce the shock propagation or the evolution of the post-shock fluid. However, in the low-energy limit, there are other self-similar solutions that have been more recently discovered. For example, in supergiant stars, most of the stellar mass is concentrated in the core and the gravitational field in the extended, rarefied hydrogen envelope is approximately that of a point mass. The corresponding density profile of the envelope follows a simple power-law. For this configuration of a point-mass gravitational potential and a non-self-gravitating power-law density profile, \cite{paper1} -- hereon called `Paper I' -- derived self-similar solutions that describe the propagation of a weak shock (i.e., one with a Mach number that is only somewhat in excess of unity) and that account for the binding energy of the envelope; these solutions also result in accretion onto the compact object at the origin. For convenience, we refer to these self-similar solutions as `CQR' and the self-similar solutions for strong shocks (e.g., Sedov-Taylor) as `SS'. In \cite{paper2} -- hereon called `Paper II' -- we derived another set of self-similar solutions that describe infinitely weak shocks. We refer to these as rarefaction wave (RW) solutions. One specific physical scenario to which these self-similar solutions apply is a \emph{failed supernova}, in which the stalled, protoneutron star bounce shock (thought to be responsible for ejecting the envelope in a successful supernova explosion) is not revived. In this case, the formation of the protoneutron star still liberates $\sim \mathrm{few}\times0.1\,\mathrm{M}_\odot$ of mass in the form of neutrinos. Since the neutrinos escape the star almost instantly, the over-pressurized envelope expands, and an acoustic pulse is launched from the inner regions of the star. This acoustic pulse steepens into a shock in the outer layers of the star \citep{Nadyozhin1980, 2013ApJ...769..109L, 2018MNRAS.476.2366F, 2018MNRAS.477.1225C}. If the Mach number of this secondary shock is very close to unity, the ensuing hydrodynamic response is a rarefaction wave which merely informs the stellar envelope of the collapsed core. If the Mach number is only on the order of a few, then the shock can be adequately described by the CQR solution. By contrast, if the supernova is successful, the resulting shock typically has a very large Mach number, and the shock propagation will be well-described by the Sedov-Taylor blastwave.
In general, the shock from a supernova can have a seemingly arbitrary initial Mach number that may not map particularly well to any one of these self-similar regimes. We then ask: how do these shocks evolve, and what guidance, if any, do the self-similar solutions provide in their evolution? Our goal in this paper is to answer these questions with a suite of hydrodynamic simulations spanning a large range of explosion energies. We first define the physical problem and summarize the relevant self-similar solutions in \S\ref{sec:equations}. The numerical setup and parameters are in \S\ref{sec:numerics} and Appendix A. Results with discussion follow in \S\ref{sec:results1} and \S\ref{sec:results2} along with a summary in \S\ref{sec:conclusion}. \section{Physical Problem and Solutions} \label{sec:equations} The derivations for the RW, CQR, and SS self-similar solutions are available in Papers I, II, \cite{DLBook1994}, \cite{ws93}, and various other sources. Here, we only describe the physical setup and present the relevant solutions. The notation here closely follows that of Paper II. \subsection{Physical Setup} We consider a spherically symmetric, non-self-gravitating, adiabatic, and motionless fluid with the following density and pressure structures: \begin{eqnarray} \rho_1(r) &=& \rho_a \left(\frac{r}{r_a} \right)^{-n}, \label{eq:rho_ambient} \\ p_1(r) &=& p_a\left(\frac{r}{r_a} \right)^{-n - 1}, \label{eq:p_ambient} \end{eqnarray} where $\rho_a$ and $p_a$ are the density and pressure at radius $r=r_a$, and $n\ge0$ is the polytropic index. We take the adiabatic indices for the ambient fluid, $\gamma_1$, and post-shock fluid, $\gamma_2$, to be identical and equal to $\gamma=1+n\inv$, implying that the gas is a pure polytrope (i.e., the adiabatic and polytropic indices are equal, which is the situation that is realized in the hydrogen envelopes of most supergiants). We note, though, that the results from Papers I and II do not require this choice. The ambient sound speed is then \begin{equation} c_{1}(r) = \sqrt{\frac{\gamma p_1}{\rho_1} } = \sqrt{\frac{\gamma p_a r_a}{\rho_a r}}. \label{eq:csound1} \end{equation} We also assume a point-mass gravitational acceleration at all radii: \begin{equation} g = \frac{GM}{r^2}, \label{eq:gravity} \end{equation} where $M$ is the point mass and $G$ is Newton's constant. Setting $GM=(n+1)p_a r_a/\rho_a$ places the fluid in hydrostatic equilibrium (i.e., renders it motionless) and relates the pressure of the ambient medium to the ambient density and the mass $M$. \subsection{Self-Similar Solutions} \subsubsection{Rarefaction Wave (RW) Solution} \label{sec:rfwave} In Paper II, we derive the self-similar solutions for a `shock' with zero strength and call this a rarefaction wave (RW). The RW solution is led by a sound wave of zero amplitude, or an acoustic node \citep{courant1999supersonic}, that propagates at the ambient sound speed, $c_1(r)$. After a RW arrives, the gas immediately falls inward and accretes onto the point mass at the origin. The RW solutions exist for any $n$. \subsubsection{CQR Solution} The shock-jump conditions \citep{Rankine01011870,ecole1887journal} demand that the velocity is positive immediately behind a shock expanding into a motionless fluid. If the shock is not strong, the shocked gas stagnates and eventually falls in toward the black hole. A sonic point forms where the infalling gas becomes supersonic with respect to the shock.
The sonic point and shock-jump conditions provide two boundary conditions for the subsonic shocked flow, which has a self-similar solution (Paper I). Analogous to standard spherical accretion \citep{1952MNRAS.112..195B}, the supersonic solution automatically connects to the black hole only if the sonic point conditions are satisfied correctly. The CQR solutions exist only between $2<n<3.5$. The shock expands at a velocity: \begin{equation} V_{\rm CQR}(R) = V_c\sqrt{\frac{GM}{R}},\label{eq:vshock_cqr} \end{equation} where $V_c$ is a dimensionless eigenvalue that is unique for each $n$. As $n\rightarrow2$, $V_c$ diverges and the sonic point approaches the origin. This is consistent with a Sedov-Taylor blast wave in which the explosion is assumed to conserve a total energy (i.e., the shock and origin are in causal contact) that dwarfs the other energies in the problem. In the limit of $n=3.5$, $V_c\rightarrow(3.5)^{-1/2}$ and the Mach number, \begin{equation} M_{\rm CQR}(t) = \sqrt{n}V_c, \label{eq:mach_cqr} \end{equation} approaches unity. In this limit, the CQR and RW solutions converge. Integrating Eq.\,(\ref{eq:vshock_cqr}) yields the shock position for the CQR solution: \begin{equation} R_{\rm CQR}(t) = R_0\left(1+\frac{3}{2}\frac{V_{\rm c}\sqrt{GM}}{R_0^{3/2}}\left(t-t_0\right)\right)^{2/3}, \label{eq:rshock_cqr} \end{equation} given a reference position $R_0$ at time $t_0$. Since the self-similar solution is scale invariant, the temporal and spatial origins are arbitrary and so $t_0$ and $R_0$ can be set to zero and one, respectively, without loss of generality. The energy of the post-shock gas is not conserved for the CQR solution because the shock sweeps up gas with finite binding energy and accretion at the origin removes binding energy from the post-shock region. \subsubsection{Strong Shock (SS) Solution} \label{sec:sssss} Self-similar solutions for energy-conserving, strong (i.e., Mach number much greater than one) spherical explosions have been described by numerous authors for $n<3$. The shock velocity and position follow \begin{eqnarray} V_{\rm ST} &=& \frac{2 R_{\rm ST}}{(5 - n)t}, \label{eq:vshock_st}\\ R_{\rm ST}(t) &=& \left(\frac{E_0 t^2}{\beta \rho_a r_a^n} \right)^{\frac{1}{5 - n}}, \label{eq:rshock_st} \end{eqnarray} for spherical geometry, where $E_0$ is the explosion energy, and $\beta$ is a unique value for given $\gamma$ and $n$. For $n>3$, the shock accelerates with a trailing sonic point. The total explosion energy behind the shock diminishes with time because the post-shock gas passes through the sonic point and loses causal contact with the shock. \cite{ws93} derive the self-similar solutions for strong shocks that satisfy the sonic point conditions. The temporal exponent for the shock position, $R\propto t^{\alpha}$, is always above unity (i.e., $\alpha_{\rm WS}>1$) and must be found numerically. On the other hand, for $n<2$, strong shocks in a medium satisfying Eq.\,(\ref{eq:csound1}) (i.e., $c_1(r)\propto r^{-1/2}$) must weaken and eventually depart from the SS solution. The post-shock gas structure depends on the polytropic and adiabatic indices\footnote{A gallery can be found on Frank Timmes' website: \url{http://cococubed.asu.edu/research\_pages/sedov.shtml}}. Material is distributed at all radii behind the shock for $n<n_h=(7-\gamma)/(\gamma+1)$. For $n\ge n_h$, the self-similar solutions have a vacuum interior below a contact discontinuity (CD) that resides at some dimensionless radius, $\xi_{\rm CD}=r/R$ \citep{1990ApJ...358..214G}.
For $n> n_c=6/(\gamma+1)$, the density at the CD becomes infinite. In our setup where $\gamma=1+n\inv$, we have $n_h\simeq2.28$ and $n_c=2.5$. For a certain range of polytropic indices $3<n<n_g(\gamma)$, \cite{2010ApJ...723...10K} state that the solutions from \cite{ws93} do not exist (e.g., $n_g(\gamma\!=\!4/3)\!\simeq\!3.13$, $n_g(\gamma\!=\!5/3)\!\simeq\!3.26$). Instead, there is another class of self-similar solutions where the sonic point does not manifest and the shock does not accelerate (i.e., $R\propto t^1$). As the authors state, the hydrodynamic simulations converge very slowly for this setup. In the interest of our broader goals, we do not include polytropic indices in this range. \subsection{Shock Trajectories} \label{sec:trajectory} A shock propagates along a space-time trajectory, $R(t)$, with velocity $V=dR/dt$. Rewriting this as the instantaneous temporal power-law of the shock position, \begin{equation} \alpha\equiv\frac{d{\rm log}(R)}{d{\rm log}(t)} = \frac{t}{R}V, \label{eq:alpha} \end{equation} the shock trajectory grows as $R\propto t^\alpha$ assuming $\alpha$ varies slowly with time. Since the ambient sound speed follows $c_1\propto R^{-1/2}\propto t^{-\alpha/2}$ at the shock position, the Mach number of the shock trajectory evolves as \begin{equation} M_{\rm s} = \frac{V}{c_1} \propto t^{\frac{3}{2}\alpha -1}. \label{eq:machtrajectory} \end{equation} We use Eq.\,(\ref{eq:machtrajectory}) to measure the growth of a shock's strength. Shocks that have $\alpha>2/3$ strengthen (i.e., increase in Mach number) with time, while shocks satisfying $\alpha<2/3$ weaken. SS solutions have $\alpha_{\rm ST}=2/(5-n)<1$ to conserve total energy ($n<3$) and $\alpha_{\rm WS}>1$ due to constraints imposed by the sonic point ($n>3$). We refer to these values as $\alpha_{\rm SS}$ unless the polytropic index is specified. Suppose that strong shocks have temporal power-laws near $\alpha_{\rm SS}$. Since $\alpha_{\rm ST}<2/3$ for $n<2$, strong shocks decay in strength and will not resemble the Sedov-Taylor solution at late times. For $n>2$, strong shocks continue to strengthen and are expected to resemble the SS solution at late times. A trajectory with finite and constant Mach number must have a constant temporal power-law of $\alpha=2/3$. Therefore, RW and CQR solutions have trajectories that expand as \begin{equation} R\propto t^{2/3}. \label{eq:r_selfsimilar} \end{equation} Eq.\,(\ref{eq:machtrajectory}) is then not useful in measuring the growth of a shock with Mach numbers near the RW and CQR values. In Paper II, we present a detailed linear radial perturbation analysis for the CQR solution as well as the RW and SS solutions. The perturbed CQR solutions can be written as a linear sum of the unperturbed CQR solution and an infinite set of non-standard radial eigenmodes. All of the eigenmodes are found to rapidly decay with time except for the first and only growing eigenmode. This is true for every $n$ where the CQR solution exists. Thus, the CQR solution is always linearly unstable to perturbations. We also find that the RW solution is linearly unstable to perturbations for $n>3.5$ but stable otherwise. A self-similar solution that is linearly unstable is unlikely to be realized at late times; this leads us to question its fate. Do such shocks evolve to another known self-similar solution or to a different solution entirely (self-similar or not)? In Paper II, we find that the Sedov-Taylor solution is linearly stable to radial perturbations.
Yet, Eq.\,(\ref{eq:machtrajectory}) suggests that an initially strong shock must weaken for $n<2$ and, therefore, cannot resemble the Sedov-Taylor solution at late times. These points suggest that the linear stability of a self-similar solution does not necessarily determine its fate. In what follows, we employ hydrodynamic simulations to understand the long-term evolution of shocks that are not exactly self-similar. \section{Numerical Setup} \label{sec:numerics} We employ the hydrodynamic code \texttt{FLASH}\ (\citealt{2000ApJS..131..273F}) and the HLLC Riemann solver for our investigation. The simulation domain is a one-dimensional, spherical grid with uniform grid resolution. We assume an ideal equation of state where $\gamma = 1+n\inv$, $\gamma$ being the adiabatic index of the gas and $n$ the power-law index of the density of the ambient medium (i.e., $\rho \propto r^{-n}$). The initial density and pressure across the domain are defined by Eqs.\,(\ref{eq:rho_ambient}) and (\ref{eq:p_ambient}). We use a constant point mass gravitational field as defined by Eq.\,(\ref{eq:gravity}). The inner boundary is at $r_{\rm min}/r_a=1$ with the Bondi-outflow condition described by \cite{2007ApJ...667..626K}. The fluid variables at the inner ghost zones are linearly extrapolated from the values at the inner boundary. We set the outer boundary at $r_{\rm max}/r_a=10^3$ and prescribe a reflecting boundary condition to maintain hydrostatic equilibrium. The initial pressure between $1\le r/r_a\le 1.5$ is set to a constant $p=(1+\delta p)\times p_a$, which introduces an over-pressured edge at $r/r_a=1.5$. This setup causes a shock to form immediately, near the inner boundary. To simplify the notation, throughout the remainder of the paper we set $\rho_a=p_a=r_a=1$. We run a suite of simulations that spans the only two physical parameters in this problem: $n$ and $\delta p$. A table of simulation parameters is available at the end of the paper along with a brief discussion in Appendix A. The table includes values of the shock Mach number at $t=100$, which is a somewhat more intuitive way of gauging the shock strength than the value of $\delta p$. The `high' resolution simulations from Paper II (see their Table 3) are also included here. For large $n$ and small $\delta p$, we increase the resolution to better resolve the weak shock. The simulations run until $t=10^3$ or until the (strong) shock reaches the outer boundary. A true discontinuity cannot manifest on a discretized grid. We adopt the grid cell with the largest outward velocity as the shock location, $R$, which is always in close proximity to the cells with the highest compression gradient. To quantify the Mach number of the shock, we take the pressure at the cell with maximum velocity as the immediate post-shock pressure, $p_2=p(R)$. The ambient pressure, $p_1$, is given by Eq.\,(\ref{eq:p_ambient}) using the shock location. With the shock jump condition for momentum conservation, \begin{equation} \frac{p_2}{p_1} = \frac{2\gamma M_{\rm s}^2 - (\gamma-1)}{\gamma+1}, \label{eq:shockjump} \end{equation} we can compute the instantaneous Mach number, $M_{\rm s}$, and, with Eq.\,(\ref{eq:csound1}), the velocity of the shock. The temporal power-law of a shock's trajectory, Eq.\,(\ref{eq:alpha}), can be computed from either the time-derivative of $R(t)$ or the instantaneous shock position and velocity. We use the latter method because the former is sensitive to the spatial and temporal resolutions and is generally less precise.
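As a concrete illustration of this diagnostic, the following minimal sketch (our own post-processing outline; the snapshot arrays and their names are illustrative assumptions rather than \texttt{FLASH}\ output conventions) locates the shock cell, inverts Eq.\,(\ref{eq:shockjump}) for $M_{\rm s}$, and evaluates Eq.\,(\ref{eq:alpha}) from the instantaneous position and velocity:
\begin{verbatim}
import numpy as np

def shock_diagnostics(r, v, p, t, gamma, p_a=1.0, rho_a=1.0, r_a=1.0):
    # r, v, p: cell-centered radius, velocity, and pressure of a snapshot.
    i = np.argmax(v)                       # cell with the largest outward velocity
    R, p2 = r[i], p[i]                     # shock location and post-shock pressure
    n = 1.0 / (gamma - 1.0)                # since gamma = 1 + 1/n
    p1 = p_a * (R / r_a) ** (-(n + 1.0))   # ambient pressure at the shock
    rho1 = rho_a * (R / r_a) ** (-n)       # ambient density at the shock
    # Invert the momentum jump condition for the Mach number.
    M_s = np.sqrt(((gamma + 1.0) * p2 / p1 + gamma - 1.0) / (2.0 * gamma))
    c1 = np.sqrt(gamma * p1 / rho1)        # ambient sound speed
    alpha = t * (M_s * c1) / R             # alpha = t * V / R with V = M_s * c1
    return R, M_s, alpha
\end{verbatim}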
The time in Eq.\,(\ref{eq:alpha}) is defined as zero when the shock is at the center. Since we cannot simulate fluids at $r=0$, we approximate the simulation time as starting at zero upon initialization. This approximation improves as time proceeds. \section{Shocks in an $\MakeLowercase{n}=2.5$ Polytrope} \label{sec:results1} We take an $n=2.5$ polytrope (i.e., $\rho_1\propto r^{-2.5}$) as our fiducial model. This is both a typical polytropic index for which SS, CQR, and RW solutions exist, and is a good approximation for red and yellow supergiant hydrogen envelopes over a large range of radii (Paper I). \subsection{Initial and Asymptotic Shock Evolution} \label{sec:initialphase} Fig.\,\ref{fig:timezoom} shows the early hydrodynamic evolution of shocks with intermediate ($\delta p=0.5$, $M_{\rm s}(t\!=\!10^2)\simeq2.2$) and high ($\delta p=100$, $M_{\rm s}(t\!=\!10^2)\simeq29$) strengths in an $n=2.5$ polytrope. A shock forms at the over-pressurized edge, $r=1.5$, and expands outward in radius. A rarefaction wave forms behind the shock, propagates inward, and exits through the inner boundary; it can be seen in the earliest density profile in Fig.\,\ref{fig:timezoom}. Separating the shocked and rarefied materials is a contact discontinuity (CD), which propagates outward at a slower velocity than the shock. A second, weak rarefaction wave propagates outwards toward the shock from the inner boundary, informing the fluid of the `black hole'. The CD is eventually engulfed by the outgoing rarefaction wave and descends toward the inner boundary. In simulations of stronger shocks, the shocked material is unaware of the black hole for a longer time. Information about the black hole may never reach a very strong shock if either a hollow interior ($n\ge n_h$) or sonic point ($n>3$) forms first. We find that the post-shock distributions of physical quantities in our array of simulations do not qualitatively change after $t=10^2$. We label the decade in time between $t=10^2$ and $10^3$ as the `asymptotic' phase of the shock evolution. The distributions of shocked material in the last snapshot in Fig.\,\ref{fig:timezoom} are representative of the distributions seen in the asymptotic phase. For the strong shock in particular, the distribution above the CD remains qualitatively unchanged. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{timezoom.pdf} \caption{Velocity and density structures of the early evolution of shocks with intermediate (black) and strong strengths (blue dashed) for $n=2.5$. Velocities behind the strong shock are divided by a factor of $10$. The shock is immediately in front of the velocity peak. The contact discontinuity is at the density `drop' behind the shock, which eventually leaves through the inner boundary for the shock with intermediate strength. Solutions for the different shock strengths are shown at the same shock radius, which corresponds to different times in each simulation.} \label{fig:timezoom} \end{figure} \subsection{Suite of Shock Simulations} In Paper I, we predict that the CQR solution for $n=2.5$ follows a constant Mach number of $M_{\rm CQR}\simeq1.90$ or log$_{10}(M_{\rm CQR}-1)\simeq-0.045$. Fig.\,\ref{fig:dmach_time} shows the time evolution of the Mach number from our simulations, with each color labeling an initial condition of different shock strength. The black horizontal line in Fig.\,\ref{fig:dmach_time} is the prediction from Paper I and is independent of the simulation results.
We find shocks with larger (smaller) Mach numbers than $M_{\rm s} \simeq1.90$ continue to strengthen (weaken) in time, consistent with the result of Paper II that the CQR solutions are weakly linearly unstable. This is also true for shocks with Mach numbers far from the CQR prediction where the linear analysis of the CQR solutions from Paper II breaks down. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{dmach_time.pdf} \caption{Time evolution of the Mach number for shocks of varying initial strength in an $n=2.5$ polytrope. Each coloured line is one simulation. The horizontal black line is the predicted value, $M_{\rm CQR}\simeq1.90$, from Paper I. Shocks with Mach numbers above or below this predicted value monotonically strengthen or weaken with time, respectively. } \label{fig:dmach_time} \end{figure} \begin{figure}[ht] \centering \subfigure[$n=2.5$ ]{\includegraphics[width=\columnwidth]{alphasigma.pdf}\label{fig:n2p5}} \subfigure[$n=2.9$ ]{\includegraphics[width=\columnwidth]{alphasigma_n2p9.pdf}\label{fig:n2p9}} \caption{Green/yellow points show the temporal power-laws, $\alpha=d{\rm ln}(R)/d{\rm ln}(t)$, of the shock trajectories with respect to a trajectory with constant Mach number: $\alpha-2/3$ (Eq.\,\ref{eq:alpha}). Purple/yellow points show the growth rate of the shock position with respect to the self-similar solution from Paper I (Eq.\,\ref{eq:sigma2}). Time proceeds from yellow to green/purple. Red dashed line is the predicted temporal power-law, $\alpha=2/3$, for the self-similar solutions for weak and infinitely weak shocks. Black vertical and horizontal lines are the predictions for $M_{\rm CQR}$ and $\beta_{\rm CQR}$ from Papers I \& II. Green dashed lines are the expected values for both $\alpha-2/3$ and $\beta$ in the Sedov-Taylor limit. The blue lines are empirical relations for $\alpha-2/3$ from Table\,\ref{table:2}. } \label{fig:alphasigma_dmach} \end{figure} \subsubsection{Shock Trajectories} \label{sec:results_trajectory_n2.5} Each cluster of yellow/green points in Fig.\,\ref{fig:n2p5} corresponds to a simulation spanning $t=10^2\!-\!10^3$, with yellow being the earliest time. Their values are the instantaneous temporal power-laws of the numerical simulations with respect to a trajectory with constant Mach number (i.e., $\alpha-2/3$). Shocks that strengthen (weaken) with time have a higher (lower) Mach number at the green end of the cluster of points. Fig.\,\ref{fig:n2p5} shows that shocks that strengthen with time have $\alpha>2/3$, while shocks that weaken with time have $\alpha<2/3$. Also shown as dashed lines are the predicted values of the RW, CQR, and SS temporal power-laws (\S\ref{sec:trajectory}). Infinitesimally weak shocks are effectively sound waves and, so, we expect and observe that $\alpha$ approaches $2/3$ for very small Mach numbers. Likewise, very strong shocks have temporal power-laws that agree with the predictions for Sedov-Taylor blast waves: $\alpha_{\rm ST}={4/5}$. The values for $\alpha$ and $M_{\rm s}$ from all of our simulations appear to follow a single contour, $\alpha(M_{\rm s})$, that limits to the Sedov-Taylor and RW values. This contour smoothly passes through $\alpha=2/3$ at exactly the Mach number predicted for a CQR solution. The solid blue lines in Fig.\,\ref{fig:alphasigma_dmach} are empirical relations from Table\,\ref{table:2} in Appendix C. All of the calculations of $\alpha$ in Fig.\,\ref{fig:alphasigma_dmach} span a factor of 10 in time, $t=10^2-10^3$. 
Clusters of points near the CQR Mach number span very narrow ranges of $\alpha$ and $M_s$ in this time in comparison to shocks with much larger or smaller Mach numbers. Therefore, shocks near the CQR solution evolve very slowly. In Paper II, we find that the linearly unstable growing mode affects the shock trajectory of a perturbed CQR solution in the following manner: \begin{equation} R(t)\simeq R_{\rm CQR}(t)\left(1+\zeta' (t-t_0)^{2\sigma/3}\right), \label{eq:rshock_perturbedcqr} \end{equation} where $R_{\rm CQR}(t)$ is the unperturbed CQR solution (Eq.\,\ref{eq:rshock_cqr}) and $\zeta'$ encodes the perturbation amplitude. The growth rate, $\sigma(n,\gamma)$, takes on a characteristic value for a given $n$ and $\gamma$. We can compute the growth rate from our simulations using the following expression: \begin{eqnarray} \beta&\equiv&\frac{d{\rm log}}{d{\rm log}(t)}\left(\frac{R(t)}{R_{\rm CQR}(t)}-1 \right) \label{eq:sigma} \\ &=& \left(\alpha- \frac{2}{3}\right) \left(1-\frac{R_{\rm CQR}(t)}{R(t)} \right)\inv, \label{eq:sigma2} \end{eqnarray} and compare this to the predicted growth rate from Paper II. The second equation above is derived with the knowledge that $\alpha_{\rm CQR}=2/3$. From here on, we refer to the predicted values of $\sigma$ as $\beta_{\rm CQR}=2\sigma/3$. For $n=2.5$, these values are $\sigma\simeq0.175$ and $\beta_{\rm CQR}\simeq0.117$. The yellow/purple cluster of points in Fig.\,\ref{fig:n2p5} shows Eq.\,(\ref{eq:sigma2}) as a function of Mach number. The shock location for the unperturbed CQR solution, Eq.\,(\ref{eq:rshock_cqr}), is defined by the numerical shock position at time $t_0=1$. The second factor on the right side of Eq.\,(\ref{eq:sigma2}) approaches either zero or one if the shock is very weak or strong. Thus, we expect $\beta$ is zero in the RW limit and $\alpha_{\rm ST}-2/3\simeq 0.133$ in the Sedov-Taylor limit. Indeed, our numerical results agree with these analytic expectations. We also see that shocks with Mach numbers near the CQR value have trajectories that grow with the expected value of $\beta_{\rm CQR}\simeq0.117$. This result is consistent with our detailed analysis in Paper II of high-resolution simulations of shocks with Mach numbers near the CQR value. The similarity between the values of $\beta_{\rm CQR}\simeq0.117$ and $\alpha_{\rm ST}-2/3\simeq0.133$ is a coincidence, as demonstrated explicitly in Fig.\,\ref{fig:n2p9}, which shows the shock trajectories in an $n=2.9$ polytrope. From Papers I and II, we expect the following properties of the CQR solution for $n=2.9$: $M_{\rm CQR}\simeq1.36$ or log$_{10}(M_{\rm CQR}-1)\simeq-0.44$, $\beta_{\rm CQR}\simeq0.09$, and $\alpha_{\rm ST}-2/3\simeq0.29$. The numerical results in Fig.\,\ref{fig:n2p9} are reasonably consistent with the analytics. Fig.\,\ref{fig:n2p9} does show a small difference between the hydrodynamic simulations and theoretical predictions toward the Sedov-Taylor limit. This discrepancy is related to the formation of a hollow interior with an infinite density at the CD, which is numerically difficult to model, as is discussed further in Appendix B. \subsubsection{Post-Shock Solutions} \label{sec:postshocksolution} Fig.\,\ref{fig:fgh} shows the post-shock solutions for self-similar RW, CQR, and Sedov-Taylor solutions. 
This figure adopts the definitions from Paper II for the velocity, density, and pressure variables: \begin{equation} f(\xi) = \frac{v}{V(t)}, \ \ g(\xi) = \frac{\rho}{\rho_1(R)}, \ \ h(\xi) = \frac{p}{\rho_1(R)V(t)^2}, \label{eq:dimensionless} \end{equation} where $\xi=r/R(t)$ is the dimensionless radial coordinate defined between the shock ($\xi=1$) and black hole ($\xi=0$). Each color in Fig.\,\ref{fig:dmach_time} and \ref{fig:fgh} represents the same simulation. The thin lines in Fig.\,\ref{fig:fgh} are the post-shock solutions from our numerical simulations between $t=10^2\!-\!10^3$, which uniformly sample time every $\Delta t=5$. The thickest line is the solution at $t=10^3$. Populating the gaps between the three self-similar solutions are the post-shock solutions from our suite of simulations. We see that shocks with Mach numbers close to the CQR value have post-shock solutions that resemble the CQR solution. This resemblance is expected and discussed in detail in Paper II. The post-shock solutions behind the weakest shock also resemble the RW solution. For small $\xi$, the post-shock solutions from all of our non-strong shock simulations (i.e., $M_{\rm s} \lesssim3$) accrete in a Bondi fashion since the dynamics there are predominantly set by the black hole. The post-shock solution behind the strongest shock resembles the Sedov-Taylor solution above the CD. As time proceeds, we see that the pressure and density below the CD decrease, presumably to form a hollow interior. These solutions show that shocks weaker than CQR have post-shock solutions that evolve away from the CQR solution and toward the RW solution. Likewise, the post-shock solutions behind shocks stronger than CQR evolve toward the Sedov-Taylor solution. Moreover, a comparison to Fig.\,\ref{fig:dmach_time} shows that the ordering of these solutions is sequential in Mach number, which itself changes monotonically with time. The trend is seen for each individual simulation and also across the suite of simulations, and suggests that our suite of simulations is not solely a collection of individual explosions. Rather, each simulation (at late times) samples a decade in time of a single explosion's long term evolution. One puzzle raised by our results is related to the formation of a hollow interior in the Sedov-Taylor solution. The strongest shock in Fig.\,\ref{fig:fgh} is the late evolution of the strong shock in Fig.\,\ref{fig:timezoom}. The CD in the density distribution from Fig.\,\ref{fig:timezoom} is preserved at late times and rests near the predicted self-similar radial coordinate $\xi_{\rm CD}\simeq0.44$ or log$_{10}(\xi)\simeq-0.36$ for $n=2.5$. If a weaker shock that has lost its original CD strengthens toward the Sedov-Taylor limit with time, how does the solution develop another CD? Our simulations do not span a long enough time to answer this question. \begin{figure}[] \centering \subfigure[Velocity]{\includegraphics[width=0.95\columnwidth]{f_st.png}} \subfigure[Density]{\includegraphics[width=0.95\columnwidth]{log_g_st.png}} \subfigure[Pressure]{\includegraphics[width=0.95\columnwidth]{log_h_st.png}} \caption{Post-shock solutions for $n\!=\!2.5$ between $10^2\le\!t\!\le10^3$ (i.e., velocity [$f$], density [$g$], and pressure [$h$]; Eq.\,\ref{eq:dimensionless}). Thin lines represent an instance in time sampled at every $\Delta t\!=\!5$. The thickest line is the latest time, $t\!=\!10^3$. Solid lines have colours corresponding to Fig.\,\ref{fig:dmach_time}.
Red, black, and blue dashed lines are the self-similar solutions for infinitely weak, weak, and infinitely strong shocks, respectively, for $n\!=\!2.5$. Note that the Sedov-Taylor solution is hollow (i.e., a vacuum) below $r/R\!\lesssim\!0.44$.} \label{fig:fgh} \end{figure} \section{Shocks in Polytropes} \label{sec:results2} \begin{figure*}[ht] \centering \setbox1=\hbox{\includegraphics[width=\textwidth]{mega_alpha_mach_cut250sec.pdf}} \includegraphics[width=\textwidth]{mega_alpha_mach_cut250sec.pdf}\llap{\makebox[\wd1][l]{\hspace{16.5ex}\raisebox{0.495\textheight}{\includegraphics[width=0.425\textwidth]{alpha_dmach_diagram2.pdf}}}} \caption{Each solid line is the temporal power-law of a shock trajectory, $\alpha=d{\rm ln}(R)/d{\rm ln}(t)$, as a function of Mach number for $t\ge250$. The dot represents $t=250$ so that the direction of each line relative to the point shows whether the Mach number increases or decreases with time. Colours correspond to a polytropic index labeled in the right margin. Dashed lines are the self-similar values. Coloured crosses are the solutions for weak self-similar shocks from Paper I. The inset is a schematic of how shocks evolve in time depending on the initial strength and ambient density profile. For $n\le2$ ($n\ge3.5$), shocks in our simulations weaken (strengthen) with time and follow a curve toward the values for infinitely weak (strong) self-similar shocks. For $2<n<3.5$, the weak self-similar solution from Paper I is an unstable equilibrium point (red dot). Shock trajectories migrate away from this solution towards either the infinitely weak or strong limits.} \label{fig:mega_alpha} \end{figure*} \subsection{Phase Portrait of Shock Trajectories} \label{sec:megaalpha} Our results in \S\ref{sec:results1} suggest that the numerical simulations of explosions of different strength in an $n=2.5$ polytrope are all closely related to each other. Each simulation (at late times) effectively samples the long term evolution of a single explosion. The state of this explosion is described by two `phase variables': the shock Mach number, $M_s(t)$, and temporal power-law, $\alpha(t)$. In \S\ref{sec:results_trajectory_n2.5}, we found that the value of $\alpha(t)$ measures whether $M_s(t)$ increases or decreases with time. This suggests that $M_s(t)$ and $\alpha(t)$ are the `phase position' and `phase velocity' of an explosion. This nomenclature mirrors the simple pendulum problem, where the angular position, $\theta(t)$, and velocity, $\dot{\theta}(t)$, are the phase variables for the pendulum's state. Fig.\,\ref{fig:alphasigma_dmach} is an example of a `phase portrait' or a graph of the phase variables for an explosion. Fig.\,\ref{fig:mega_alpha} shows a phase portrait of the shock trajectories from all of our simulations, including those for $n=2.5$, for $t\ge250$. The lines indicate the direction in which $\alpha$ is changing with time. Broadly, we see that all shocks with $\alpha<2/3$ weaken with time (i.e., the Mach number decreases). Likewise, all shocks with $\alpha>2/3$ strengthen in time (i.e., the Mach number increases). These results support the suggestion from \S\ref{sec:trajectory} that $\alpha-2/3$ is a measure of the growth of a shock's strength. More specifically, we see that all shocks, including those that are initially strong, in $n\le2$ polytropes decay toward the RW solution, while for $n\ge3.5$ all shocks -- including those that are initially weak -- strengthen toward the SS solution from \cite{ws93}.
For $2<n<3.5$, all shocks with Mach numbers larger (smaller) than the CQR value strengthen (weaken) with time. For $n=2.9$ and 3.0, there is a small discrepancy in $\alpha$ for high Mach numbers related to the formation of a CD with infinite density; we discuss this further in Appendix B. \subsubsection{Phase Portrait of Stable Equilibrium Solutions} The system of hydrodynamic equations with boundary conditions defined by a shock and sonic point (due to the black hole) describes a dynamical system. Our investigation studies the solutions to this dynamical system that start from an array of initial conditions. In \S\ref{sec:postshocksolution}, we find that the post-shock solutions evolve in a monotonic fashion toward a self-similar solution. Fig.\,\ref{fig:dmach_time} shows that the post-shock evolution varies monotonically with the shock Mach number, which itself changes monotonically at late times. In \S\ref{sec:results_trajectory_n2.5} and \S\ref{sec:megaalpha}, we show a representation of a shock's space-time trajectory in terms of the temporal power-law, $\alpha$, and Mach number, $M_{\rm s}$. At late times, the shock trajectories appear to lose memory of their initial conditions and collectively rest on a single curve in $\alpha$ and $M_s$. This suggests that the system has a stable equilibrium solution, $\alpha_n(M_{\rm s})$, for each $n$, to which shock trajectories are attracted and which they follow at late times. In Fig.\,\ref{fig:mega_alpha}, we embed a schematic phase portrait of the three types of stable equilibrium solutions suggested by our numerical and analytical results. At low and high Mach numbers, the function $\alpha_n(M_{\rm s})$ limits to the RW and SS solutions, respectively. For $2<n<3.5$, there is a crossing where $\alpha_n(M_{\rm s})=2/3$ at exactly the predicted Mach numbers from Paper I. If there were another self-similar solution besides RW, CQR, and SS, then there would have to be five (or some larger odd number of) self-similar solutions in total. Otherwise, the weak and strong limits of $\alpha_n(M_{\rm s})$ would not reach the RW or SS values. The lack of two additional crossings in our numerical experiments in Fig.\,\ref{fig:mega_alpha} suggests \textit{there are no other self-similar solutions besides the RW, CQR, and SS solutions for this physical problem}. This statement is limited to the domain of Mach numbers simulated here. For $n<2$, the stable equilibrium solution is restricted to values of $\alpha_n<2/3$. The Sedov-Taylor solution is an unstable asymptotic limit from which all shock trajectories, no matter how strong initially, migrate away as $t\rightarrow\infty$. Because all of the trajectories approach the RW solution, we refer to this limit as asymptotically stable. Inverted behavior is seen for $n>3.5$: the stable equilibrium solution is restricted to values of $\alpha_n>2/3$ and the stability of the asymptotic limits reverse. All shocks, including initially weak shocks, have trajectories that migrate away from an unstable asymptotic limit, the RW solution, toward a stable asymptotic limit, the solution by \cite{ws93}. For intermediate $2<n<3.5$, an equilibrium point divides the stable equilibrium solution into two parts. Low Mach numbers have properties of the stable equilibrium solution for $n<2$, while high Mach numbers have the properties for $n>3.5$. The equilibrium point at intermediate Mach number is exactly the CQR solution found in Paper I. Our linear perturbation analysis in Paper II is, therefore, a study of trajectories near this equilibrium point.
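To make the phase-flow language concrete, note that Eq.\,(\ref{eq:machtrajectory}) gives $d\,{\rm ln}(M_{\rm s})/d\,{\rm ln}(t) \simeq (3/2)\left(\alpha_n(M_{\rm s})-2/3\right)$ when $\alpha$ varies slowly, so the Mach number obeys a one-dimensional flow with a fixed point at $M_{\rm CQR}$. The sketch below (an illustration only; the smooth interpolation used for $\alpha_n$ is a toy stand-in for the empirical relations in Appendix C, with values for $n=2.5$) integrates this flow for initial Mach numbers bracketing $M_{\rm CQR}\simeq1.90$:
\begin{verbatim}
import numpy as np

ALPHA_ST, M_CQR = 0.8, 1.90   # Sedov-Taylor exponent and CQR Mach number, n = 2.5

def alpha_toy(M):
    # Toy interpolation: equals 2/3 at M = 1 (RW) and at M = M_CQR (CQR),
    # and tends to the Sedov-Taylor value at large M.
    return 2.0 / 3.0 + (ALPHA_ST - 2.0 / 3.0) * (M - M_CQR) * (M - 1.0) / M**2

def evolve(M0, decades=3.0, steps=3000):
    # Euler-integrate d ln(M)/d ln(t) = (3/2) * (alpha(M) - 2/3).
    lnM, ds = np.log(M0), decades * np.log(10.0) / steps
    for _ in range(steps):
        lnM += 1.5 * (alpha_toy(np.exp(lnM)) - 2.0 / 3.0) * ds
    return np.exp(lnM)

for M0 in (1.5, 1.85, 1.95, 3.0):
    print(M0, evolve(M0))   # drift away from M_CQR; slowest near the fixed point
\end{verbatim}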
In our simulations, the equilibrium point is unstable since it repels all shock trajectories toward the stable asymptotic limits (RW or SS) at late times (e.g., Fig.\,\ref{fig:mega_alpha}). A relationship of the form $\alpha_n(M_{\rm s})$ suggests that Eq.\,(\ref{eq:alpha}) is an implicit ordinary differential equation for the shock space-time trajectory, $\mathcal{R}(t)$: \begin{equation} \frac{t\dot{\mathcal{R}}}{\mathcal{R}} = \alpha_n\left( \frac{\dot{\mathcal{R}}}{c_1(\mathcal{R})} \right), \label{eq:ode} \end{equation} where $\dot{\mathcal{R}}=\mathcal{V}$ is the shock velocity. Given the initial shock Mach number and position, we can integrate Eq.\,(\ref{eq:ode}) and solve for the shock trajectory and velocity, $\mathcal{V}_n(\mathcal{R})$, for each $n$. The strong shock solutions from \cite{1999ApJ...510..379M} take the form $\mathcal{V}(R,\rho)$, which has the same functional form. This suggests that there exists a weak-shock extension of the strong shock solution by \cite{1999ApJ...510..379M} that includes a black hole. In Appendix C, we provide empirical relations of our numerical solutions for $\alpha_n(M_{\rm s})$ in Fig.\,\ref{fig:mega_alpha}. \subsection{Accretion of Weakly Shocked Gas Onto a Black Hole} \label{sec:mdot} The rate of mass swept up by a self-similar shock with finite Mach number is proportional to the accretion rate onto the black hole, $\dot{M}$. Therefore, we expect the accretion rate to scale as \begin{equation} \dot{M}=4\pi R^2 \rho_1(R) V \propto t^{1 - 2n/3}, \end{equation} where the proportionality on the right is derived using Eqs.\,(\ref{eq:rho_ambient}), (\ref{eq:alpha}), and (\ref{eq:r_selfsimilar}). This is also the accretion rate for a RW solution for all $n$ since it is a `shock' with a Mach number of one. The accretion rate for a RW solution grows with time for $n<3/2$ and declines otherwise. Since the CQR solution exists only for $2<n<3.5$, its accretion rate always declines with time. Fig.\,\ref{fig:mdot_dmach} shows the fitted temporal power-law exponent of the accretion rate at the inner boundary of the simulations for $2<n<3.5$. The fit is made for $t\ge500$. The figure presents only the simulations where the contact discontinuity has been lost to the inner boundary. We exclude the accretion rates of shocks that retain their CD because the material below the CD is symptomatic of the initial conditions, which is not representative of realistic core-collapse. We see that the power-law exponent is nearly constant for shocks with Mach numbers below or comparable to the CQR solutions. A small maximum exists slightly below the CQR value, which corresponds to the minima in $\alpha$ in Fig.\,\ref{fig:mega_alpha}. The accretion rate declines steeply with time for shocks stronger than the CQR value since the shocked material is less bound and follows longer paths, in a Lagrangian sense, before falling into the black hole. The temporal exponent appears to decrease linearly with log$_{10}(M_{\rm s}-1)$ for shocks stronger than the CQR solution. Assuming the temporal exponent is a function of the shock Mach number, \begin{equation} \frac{d{\rm log}(\dot{M})}{d{\rm log}(t)} = f_n(M_{\rm s}-1), \end{equation} for each polytropic index $n$, the effective accretion rate could be estimated given a solution for Eq.\,(\ref{eq:ode}). \begin{figure}[] \centering \includegraphics[width=\columnwidth]{mega_mdot_mach_cut500sec.pdf} \caption{Each point is the temporal power-law of the accretion rate, measured at the inner boundary, linearly-fitted between $500\le t\le10^3$.
Each colour corresponds to a polytropic index labeled in the right margin. Horizontal lines are the predicted values, $1 - 2n/3$, and the vertical lines mark the Mach numbers of the solutions for weak self-similar shocks from Paper I. For strong shocks, accretion is suppressed with respect to the values for free-fall or weak self-similar shocks because much of the mass is unbound. Simulations with a contact discontinuity in the domain are not included.} \label{fig:mdot_dmach} \end{figure} \section{Summary and Conclusions} \label{sec:conclusion} Simulations of core-collapse supernovae predict a large range of explosion energies. Weak explosions with energies comparable to, or less than, the stellar binding energy generate little ejecta. The majority of the shocked gas, instead, falls back onto the natal neutron star or black hole on timescales that depend on the finite shock strength, the finite sound speed of the progenitor, and the local escape speed. For these reasons, the Sedov-Taylor solution is not applicable to weak explosions. For certain stellar configurations, however, there are other self-similar solutions that describe weak explosions. In stellar supergiants, the gravitational field in the envelope is similar to that of a point mass since the stellar mass is so centrally concentrated. As a result, radial profiles of the envelope density and pressure follow simple power-laws (i.e., polytropic profiles). Under this configuration, we derived two sets of self-similar solutions in Papers I and II. These solutions describe weak and infinitely weak explosions. Unlike the Sedov-Taylor solution, these do not have a constant energy in the post-shock gas because they account for both the energy of the envelope that passes through the shock and accretion induced by core-collapse. We label the three self-similar solutions as SS, CQR, and RW in reference to the strong shock (e.g., Sedov-Taylor), weak shock, and rarefaction wave solutions, respectively. The RW and SS solutions exist for density profiles, $\rho_1\propto r^{-n}$, with any polytropic index, $n$, and correspond to either an infinitely weak or strong shock. The CQR solution has a finite characteristic Mach number and only exists between $2<n<3.5$. All of the self-similar solutions depend on the adiabatic indices, $\gamma_i$, of the pre- and post-shock gas (i.e., $p\propto \rho^{\gamma_i}$). To simplify our study here, we assume the adiabatic indices are identical and related to the polytropic index in the form $\gamma=1+n\inv$. This is a good approximation for low energy explosions in supergiants (\citealt{2018MNRAS.477.1225C}, Paper I). In high energy explosions, however, $\gamma_2=4/3$ in the post-shock gas and is independent of the pre-shock conditions. One application of all of these solutions is a failed supernova, in which the bounce shock from the neutron star is not revived. Instead, the liberation and (nearly instantaneous) escape of $\sim \mathrm{few}\times0.1\,\mathrm{M}_\odot$ of mass-energy in neutrinos from the protoneutron star results in an over-pressurized stellar envelope. Consequently, the envelope expands and triggers a global acoustic pulse. The acoustic pulse either steepens into a weak shock or damps to form a rarefaction wave (i.e., an infinitely weak shock). We expect the self-similar solutions to be good representations of shocks with Mach numbers close to or equal to a self-similar value. In general, however, a failed supernova shock can have any Mach number not equal to a self-similar value.
In this paper, the third in a series, our goal has been to understand the role of self-similar solutions in the long-term evolution of shocks that are not self-similar. We employed hydrodynamic simulations to study explosions in polytropic environments with a central point mass. We presented a suite of simulations that samples a wide range of explosion energies and polytropic indices. An explosion generates a shock that propagates along a trajectory, $R(t)$, in space-time. Expressing the trajectories in terms of the shock Mach number, $M_{\rm s}$, and instantaneous temporal power-law, $\alpha=t\dot{R}/R$, provides a compact summary of the shock properties. Phase portraits of these trajectories are shown in Figs.\,\ref{fig:alphasigma_dmach} and \ref{fig:mega_alpha}. All shocks are seen to either weaken or strengthen with time for $n<2$ or $n>3.5$, respectively. Very strong or weak shocks have temporal power-laws that correspond to the SS and RW solutions, respectively.\footnote{For $n=2.9$ and 3, there is a small discrepancy between our numerics and known analytics for strong shocks due to the slow formation of a hollow interior with an infinitely dense contact discontinuity. See Appendix B.} Between $2<n<3.5$, we find that all shocks with Mach numbers stronger (weaker) than the CQR value continue to strengthen (weaken) with time. Shocks with Mach numbers and temporal power-laws near the CQR solution move away from the CQR solution at an extremely slow rate. Therefore, these shocks are very unlikely to evolve into the RW or SS solutions in astrophysical applications such as failed supernovae, where the radial dynamic range over which the shock propagates is only a few orders of magnitude. Instead, shocks with Mach numbers of order the CQR value can be adequately described by the CQR solution, as shown specifically in Paper I. We study the post-shock solution in detail for $n=2.5$. We find that shocks with Mach numbers near a self-similar value have post-shock density, velocity, and pressure profiles that resemble the self-similar solutions (e.g., Fig.\,\ref{fig:fgh}). A striking result is the post-shock flow structure for simulations with Mach numbers that span the gap between the three self-similar solutions: the post-shock flow structure itself continuously varies between the appropriate self-similar values. Moreover, within a given simulation, the Mach number changes monotonically with time (e.g., Fig.\,\ref{fig:dmach_time}) and the post-shock flow structure changes in time as well, effectively bridging the gap between our simulations with different initial Mach numbers. Taking into account both the results of different simulations and the time evolution within a given simulation, we find that shock trajectories can be described by a single function, $\alpha_n(M_{\rm s})$ (i.e., temporal power-law, Eq.\,\ref{eq:ode}, as a function of Mach number), for each value of the density power-law index, $n$, of the ambient medium. We find that shock trajectories are attracted toward a stable equilibrium solution, $\alpha_n(M_{\rm s})$, for each $n$. The RW and SS solutions are asymptotic limits to the stable equilibrium solution. For $n<2$, shock trajectories migrate away from the SS solution and toward the RW solution. In terms of the evolutionary state at $t\rightarrow\infty$, we refer to the SS solution as an unstable asymptotic limit, while the RW solution is a stable asymptotic limit. For $n>3.5$, we find that the roles reverse: the SS (RW) solution is a stable (an unstable) asymptotic limit. 
Between $2<n<3.5$, the CQR solution is an unstable equilibrium point from which shock trajectories repel; both the RW and SS solutions are stable asymptotic limits. Our linear stability analysis of the CQR solution in Paper II is an analysis of the trajectories near the unstable equilibrium point. The embedded diagram in Fig.\,\ref{fig:mega_alpha} illustrates this interpretation of our analytical and numerical results. \cite{1999ApJ...510..379M} presented a method for combining Sedov-Taylor strong shock solutions with accelerating strong shock solutions for exponential atmospheres \citep{sakurai}. This has the attractive feature of fully describing shock propagation through a star. A similar method may exist for the lower-energy shocks considered in this paper (see Eq.\,\ref{eq:ode} and \S\ref{sec:megaalpha}). Our solutions are limited to shocks in point-mass gravitational fields and polytropic envelopes, but such conditions are satisfied over many decades in radii in supergiant envelopes. The evolution of a shock through an envelope with a more complicated density profile (not a single power-law) can, we suspect, be modeled by an appropriate combination of the numerical solutions presented in this paper. This is an interesting direction for future study because such a solution would be helpful in estimating the ejecta distribution and the rate of accretion onto the central black hole (\S 5.2) in low-energy stellar explosions. \section*{Acknowledgements} We thank Paul Duffell, Chris Matzner, and Frank Timmes for useful discussions. This research was funded by the Gordon and Betty Moore Foundation through Grant GBMF5076. ERC acknowledges support from NASA through the Einstein Fellowship Program, Grant PF6-170170. EQ thanks the theoretical astrophysics group and the Moore Distinguished Scholar program at Caltech for their hospitality and support. This work was supported in part by a Simons Investigator Award from the Simons Foundation. The software used in this work was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago. \software{FLASH \citep{2000ApJS..131..273F}}
\section{The curvature} Let $M$ be a $2n$ dimensional smooth manifold endowed with the symplectic structure $\sigma.$ Let $h : M \rightarrow \mathbb{R}$ be a smooth function on the manifold, let $\vec h$ denote the Hamiltonian vector field associated to it, $d_z h = \sigma (\cdot,\vec h(z))$, and assume that $\vec h$ is a complete vector field; we will denote by $\phi^t := e^{t\vec h}$ the flow generated by $\vec h.$ Let $\Lambda$ be a Lagrangian distribution on $M,$ and let us define, for any $z \in M$, the bilinear mapping $g^h_z: \Lambda_z \times \Lambda_z \rightarrow \mathbb{R}$ as $g^h_z(X,Y) = \sigma ([\vec h,X],Y),\; X,Y \in \Lambda_z.$ \begin{defin} The Hamiltonian vector field $\vec h$ is said to be regular at $z \in M$ w.r.t. the Lagrange distribution $\Lambda$ if the bilinear form $g_z^h$ is nondegenerate. A regular Hamiltonian vector field $\vec h$ is said to be monotone at $z \in M$ w.r.t. $\Lambda$ if the form $g_z^h$ is sign-definite. \end{defin} \vspace{0.3cm} \noindent \textbf{Example} Assume that $\Lambda$ is an involutive Lagrangian distribution; then, by the Darboux-Weinstein Theorem, there exist local coordinates $\{(p,q) : p,q \in \mathbb{R}^n \}$ such that $\sigma = \sum_{i=1}^n dp_i \wedge dq^i$ and $\Lambda_z = \{(p,0)\}$; in these coordinates, the previous requirement about the bilinear form $g_z^h$ is equivalent to asking the matrix $\{\frac{\partial^2 h}{ \partial p_i \partial p_j}\}$ to be nondegenerate and sign-definite. \vspace{0.3 cm} Let us assume that $\vec h$ is regular and monotone. We define a curve in the Lagrange Grassmannian $L(T_zM)$ putting $J_z(0)= \Lambda_z,\;$ $J_z(t) = \phi^{-t} _* \Lambda_{\phi^t z};$ this curve is called the \emph{Jacobi curve}. Using the terminology of \cite{AgGaFee}, the curve is regular, because the bilinear form $g^h_z$ is nondegenerate; we have that, for any $t$ sufficiently close to (but not equal to) $0$, $J_z(t)$ is transversal to $J_z(0)$ \cite{AgGeHam}. Let us denote by $\pi_{J_z(t) J_z(0)}$ the projector of $T_z M$ onto $J_z(0)$ parallel to $J_z(t),$ and note that the space $\{\pi_{\Delta J_z(0)} : \Delta \in G_n(T_z M), \; \Delta \in J_z(0)^{\pitchfork}\}$ is an affine subspace of $gl(T_z M)$ \cite{AgGeHam}; if we compute the Laurent expansion around $0$ of the operator-valued function $t \mapsto \pi_{J_z(t) J_z(0)}$, that is $\pi_{J_z(t) J_z(0)} = \pi_0 + \sum_{i \neq 0} t^i \pi_i,$ we can prove that, for $i \neq 0$, $\pi_i \in gl(T_z M)$, while $\pi_0$ is an element of the affine space and hence there exists a unique $\Delta \in J_z(0)^{\pitchfork}$ such that $\pi_0 = \pi_{\Delta J_z(0)};$ this subspace is called the derivative element to $J_z(0)$ and is denoted by $J_z^{\circ}(0)$. We can apply the same procedure to construct the derivative element to $J_z(t)$ for $t \neq 0$, and hence we can define the \emph{derivative curve} of the curve $J_z(t)$: $t \mapsto J_z^{\circ}(t)$; moreover, we have that $J_z^{\circ}(t) = \phi^{-t}_* J_{\phi^t z}^{\circ} (0).$ Since the Jacobi curve is regular, its derivative curve is smooth and lies in the Lagrange Grassmannian of $T_z M$ \cite{AgGeHam}. 
These two curves form a splitting (called the canonical splitting) of $T_z M$ into two Lagrangian subspaces $T_z M = J_z(t) \oplus J_z^{\circ}(t).$ Let $\Delta_0$ and $\Delta_1$ be two transversal subspaces in the Grassmannian $G_n (T_z M),$ and $\xi_0$ and $\xi_1$ be two tangent vectors to $G_n(T_z M)$ respectively at the points $\Delta_0$ and $\Delta_1;$ let $\gamma_i(t),$ for $i=0,1,$ be two curves in $G_n(T_z M)$ such that $\gamma_i(0) = \Delta_i$ and $\frac{d}{dt} \gamma_i(t) |_{t=0} = \xi_i.$ Let us define the following operator in $gl(\Delta_1)$: $$[\xi_0,\xi_1] := \frac{\partial^2}{\partial t \partial \tau} \pi_{\gamma_0 (t) \gamma_1(0)} \pi_{\gamma_0(0)\gamma_1(\tau)} |_{\Delta_1} \vert_{t=\tau=0};$$ \noindent this operator depends only on $\xi_0$ and $\xi_1$. \begin{defin} The operator $R_{J_z} (t) \in gl (J_z(t))$ defined as $$ R_{J_z}(t) := [\dot{J}_z^{\circ}(t),\dot{J}_z(t)] $$ \noindent is called the (generalized) curvature of the curve $J_z(t)$ at the time $t.$ \end{defin} If we choose local coordinates on the Jacobi curve and its derivative curve putting $J_z(t) \simeq \{(x,S_t x): x \in \mathbb{R}^n\}$ and $J_z^{\circ}(t) \simeq \{(x,S^{\circ}_t x): x \in \mathbb{R}^n\},$ where $S_t$ and $S^{\circ}_t$ are matrices of dimension $n,$ the curvature is then $R_{J_z}(t) = (S^{\circ}_t - S_t)^{-1} \dot{S}^{\circ}_t (S^{\circ}_t - S_t)^{-1} \dot{S}_t.$ \begin{defin} The operator $R_z^h \in gl(J_z(0))$ defined as $$ R_z^h := R_{J_z}(0) $$ \noindent is called the curvature of the Hamiltonian vector field $\vec h$ at the point $z \in M$. \end{defin} Let us call $\Sigma_z = \ker (d_z h) / \mathrm{span} \{\vec h (z)\},$ and let $\psi_z : T_z M \rightarrow T_z M/ \mathrm{span} \{\vec h(z)\} $ be the canonical projection onto the factor space; the space $\Sigma_z$ inherits a symplectic structure given by the restriction of the form $\sigma$. Let us now set $J_z^h (t) =\phi^{-t}_* [\Lambda_{\phi^t z} \: \cap \: \ker (d_{\phi^tz} h) + \mathrm{span}\{\vec h(\phi^t z)\} ]$ (it can be shown that actually $J_z^h (t) = J_z (t) \: \cap \: \ker (d_z h) + \mathrm{span}\{\vec h(z)\}$), and $\bar{J}_z (t) = J_z^h (t) / \mathrm{span} \{\vec h(z)\};$ $\bar{J}_z (t)$ is actually a curve in the Lagrange Grassmannian $L(\Sigma_z ).$ If this Jacobi curve is regular, then its curvature operator $R_{\bar{J}_z} (t)$ is well defined on $\bar{J}_z(t).$ \begin{defin} The operator $\hat{R}_{J_z^h} (t)$ on $J_z^h(t)$ defined as $$\hat{R}_{J_z^h} (t) := (\psi|_{J_z (t) \cap \ker (d_zh)})^{-1} \circ R_{\bar{J}_z} (t) \circ \psi$$ \noindent is called the curvature operator of the $h$-reduction $J_z^h$ at the time $t$. 
\end{defin} As before, we define \begin{defin} The operator $\hat{R}^h_z$ on $J_z^h(0)$ defined as $$ \hat{R}_z^h := \hat{R}_{J_z^h} (0) $$ \noindent is called the reduced curvature of the Hamiltonian vector field $\vec h$ at the point $z \in M.$ \end{defin} \vspace{0.3cm} \noindent \textbf{Examples} \begin{itemize} \item Let $M = \mathbb{R}^n \times \mathbb{R}^n,$ $h(p,q) = \frac{1}{2}|p|^2 + U(q);$ let us consider the Lagrangian distribution $\Lambda_{(p,q)} = (\mathbb{R}^n,0),$ and let us define the Jacobi curve $J_{(p,q)}(t) = \phi^{-t}_* \Lambda_{\phi^t (p,q)};$ then we have that the curvature is given by $R_{(p,q)}^h = \frac{\partial^2 U}{\partial q^2},$ and $\hat{R}_{(p,q)}^h = \frac{\partial^2 U}{\partial q^2} + \frac{3}{|p|^2} (\nabla_q U, 0) \otimes (\nabla_q U,0)^T.$ \item Let $M$ be an $n$ dimensional smooth manifold, and let $h : T^*M \rightarrow \mathbb{R}$ be such that the restriction $h|_{T_{\pi(z)}^*M}$ (where $\pi : T^*M \rightarrow M$ is the canonical projection) is a positive quadratic form, hence it defines a Riemannian structure on $M$. Let $J_z(0) = T_z(T^*_{\pi(z)}M);$ then we have that $R_z^h X = \mathcal{R}(\bar{z},\bar{X})\bar{z}$ for any $X \in T_z(T^*_{\pi(z)}M),$ $z \in T^*M,$ where $\mathcal{R}$ is the Riemann curvature tensor, $\bar{z}$ is a vector in $TM$ obtained from $z$ by the action of the metric tensor, and $X$ is identified with a linear form of $T^*_zM$ via the isomorphism between $T_z(T^*_{\pi (z)}M)$ and $T^*_{\pi (z)}M.$ The curvature operator of the $h$-reduction $J_z^h$ is the same, $\hat{R}_z^h = R_z^h$. \item Let $M$ be as in the previous example, and let the Hamiltonian function $h$ be the sum of the Hamiltonian function of the previous example and the function $U \circ \pi,$ where $U$ is a function on $M$; then $R_z^h X = \mathcal{R}(\bar{z},\bar{X})\bar{z} + D_{X}(\nabla U)$, and $\hat{R}_z^h X = R_z^h X + \frac{3 \scalar{\nabla_{\pi(z)}U}{X}_h}{2 (h(z) - U(\pi(z)))} (\nabla_{\pi(z)} U,0)^T,$ where here we denote by $\scalar{\cdot}{\cdot}_h$ the scalar product defined by the Riemannian structure given by $h$, and where $D_X$ is the Riemannian covariant derivative along $X$. \end{itemize} \section{Results} Let $M$ be a $2n$ dimensional smooth manifold endowed with the symplectic structure $\sigma$, and let $h: M \rightarrow \mathbb{R}$ be a smooth function on the manifold; we restrict ourselves to a regular level set $N$ of the Hamiltonian function $h$, which is then a codimension one submanifold of $M$, and we require this submanifold to be compact; moreover, we ask the Hamiltonian function to satisfy a regularity condition we will specify later. Let us now consider the flow generated by the Hamiltonian vector field $\vec h(z),$ and let us notice that it preserves the level sets of the Hamiltonian, i.e. $h(\phi^t z)=h(z) \; \forall \: t;$ we are interested in computing the dynamical entropy $h_{\mu}(\phi),$ where $\mu$ is the (normalized) Liouville measure restricted to the submanifold $N$; it is defined as $d\mu = \frac{1}{\mathcal{N}} \sigma \wedge \cdots \wedge \sigma \wedge \iota_X \sigma,$ where $\sigma$ is multiplied by itself $n-1$ times, $\iota_X \sigma = \sigma(X,\cdot),$ $X$ is a vector field on a neighborhood of $N$ such that $\scalar{dh}{X} =1$ and $\mathcal{N} = \int_N \sigma \wedge \cdots \wedge \sigma \wedge \iota_X \sigma$; it can be proved that this definition does not depend on the particular choice of such a vector field. 
In order to compute the dynamical entropy, we are going to use the Pesin Theorem \cite{Mane}, which states that the entropy is equal to the integral of the sum of positive Lyapunov exponents, taken with their multiplicities, and hence we shall compute the exponents of the Hamiltonian flow. Let us recall that the Lyapunov exponent at the point $z \in N$ along the direction $X \in T_z N$ is defined as \begin{equation} \label{eq: lyap} \lambda^{\pm} (z,X) = \lim_{t \rightarrow \pm \infty} \frac{1}{|t|} \log \|\phi^t_* X\|, \end{equation} \noindent where $\|\cdot\|$ is the norm of a scalar product defined on $T_z N$ and, since $N$ is compact, this definition does not depend on the choice of the norm. The symplectic form restricted to $N$ has a one-dimensional kernel given by the span of the Hamiltonian vector associated to $h:$ indeed $$\forall \ v \in T_zN \quad \sigma(v,\vec h) = \scalar{d_zh}{v}=0,$$ since $T_zN=\ker(d_zh);$ hence, $\:\forall\: z \in N,$ we can write $T_zN \simeq \Sigma_z \oplus \mathrm{span}\{\vec h(z)\},$ where $\Sigma_z = T_z N / \mathrm{span}\{ \vec h(z)\} $ is a $2n-2$ dimensional vector space and the restriction $\bar{\sigma} = \sigma |_{\Sigma_z}$ induces a symplectic structure on $\Sigma_z$. Since $\mathrm{span}\{\vec h\}$ is preserved by the action of its flow, i.e. $\phi^t_* \vec h(z) = \vec h(\phi^t z),$ we can take the quotient and study the exponential divergence of the trajectories along directions given by vectors lying in $\Sigma_z$, so we will consider the map ${\tilde{\phi}^t}_* : \Sigma_z \rightarrow \Sigma_{\phi^t z},$ where $ {\tilde{\phi}^t}_* = {\phi^t}_*|_{\Sigma_z}.$ Now we can state the result: \begin{theorem} Let $N$ be a compact regular level set of a smooth Hamiltonian function defined on a smooth symplectic manifold of dimension $2n$; let $\Lambda$ be a Lagrangian distribution in $TN/\mathrm{span}\{\vec h\}$ and let the Hamiltonian vector field $\vec h$ be monotone on $N$ w.r.t. $\Lambda$. Consider the Jacobi curve $\bar{J}_z(t)= \tilde{\phi}^{-t}_* \Lambda_{\phi^t z}$ and let the curvature $\hat{R}_z^h$ of $\vec h$ be nonpositive. Then the dynamical entropy $h_{\mu}$ of the Hamiltonian flow on $N$ w.r.t. the normalized Liouville measure on $N$ satisfies $$ h_{\mu} \geq \int_N \mathrm{Tr} \: \sqrt{-\hat{R}_z^h} \;d\mu. $$ \end{theorem} \noindent \textbf{\textit{Proof }} \noindent Due to the sign-definiteness of the bilinear form $g_z^h$, we can endow $\Sigma_z$ with a scalar product; indeed, let us define (for $g_z^h$ positive-definite) the following scalar product on $\bar{J}_z(0):$ $$ \bar{J}_z(0) \ni X,Y \mapsto \scalar{X}{Y}^{\prime}_h:=\bar{\sigma} ([\vec h,X],Y). $$ \noindent By means of the symplectic form we can establish an isomorphism between $\bar{J}_z^{\circ}(0)$ and the dual of $\bar{J}_z(0):$ $\bar{J}_z^{\circ}(0) \ni W \mapsto \bar{\sigma}(W, \cdot) : \bar{J}_z(0) \rightarrow \mathbb{R};$ since there exists a unique $X_W \in \bar{J}_z(0)$ such that $\bar{\sigma}(W,\cdot)=\scalar{X_W}{\cdot}^{\prime}_h,$ we can define the scalar product on $\bar{J}_z^{\circ}(0)$ in this way: $$ \bar{J}_z^{\circ}(0) \ni W,V \mapsto \scalar{W}{V}^{\circ}_h:=\scalar{X_W}{X_V}^{\prime}_h. 
$$ \noindent Now it is possible to define a scalar product on the whole $\Sigma_z:$ for any $X,Y \in \Sigma_z$, we set $$ \scalar{X}{Y}_h := \scalar{\pi_{\bar{J}^{\circ}_z(0) \bar{J}_z(0)}X}{\pi_{\bar{J}^{\circ}_z(0) \bar{J}_z(0)} Y}^{\prime}_h + \scalar{\pi_{\bar{J}_z(0) \bar{J}^{\circ}_z(0)} X}{\pi_{\bar{J}_z(0) \bar{J}^{ \circ}_z(0)} Y}^{\circ}_h; $$ by definition, $\bar{J}_z^{\circ}(0)$ is orthogonal to $\bar{J}_z(0)$ with respect to the scalar product just defined. Since the space $\Sigma_z$ has a symplectic structure and for any $t$ the pair $(\bar{J}_z(t),\bar{J}_z^{\circ}(t))$ forms a splitting of Lagrangian subspaces, given a basis $\{\epsilon^1,\ldots,\epsilon^{ n-1}\}$ of $\bar{J}_z(0)$ there is a unique way to choose a basis $\{e_z^1(t), \ldots,e_z^{n-1}(t)\}$ of $\bar{J}_z(t)$ such that $e_z^i(0)=\epsilon^i$ $\forall\: i=1,\ldots,n-1,$ $\{\dot{e}_z^1(t),\ldots,\dot{e}_z^{n-1}(t)\}$ is a basis for $\bar{J}_z^{\circ}(t)$ and $\{e_z^i(t),\dot{e}_z^i(t)\}_{i=1}^{n-1}$ is a Darboux basis for $\Sigma_z$, and it is called the \emph{canonical moving frame} \cite{AgGeHam}. Moreover, as shown in \cite{AgGeHam}, the vectors $\ddot{e}_z^i(t)$ lie in $\bar{J}_z(t)$ for any $i=1,\dots,n-1$, and $$ \ddot{e}_z^i(t) = \sum_{j=1}^{n-1} (-R_z(t))_{ij} e_z^j(t), $$ \noindent where $R_z(t)$ is the representation of the curvature operator w.r.t. the basis $\{e_z^i(t)\}_{i=1}^{n-1}$, and it is symmetric. Let us define, for any $z \in N,$ the basis $\varepsilon_1(z),\ldots, \varepsilon_{2n-2}(z)$ of $\Sigma_z$ by putting $\varepsilon_i(z) = e_z^i(0), \;\: \varepsilon_{i+n-1}(z) = \dot{e}_z^i(0), \; i=1,\ldots,n-1;$ this basis is indeed orthonormal for any $z.$ Consider a vector $X \in \Sigma_z:$ \begin{equation} \label{eq: base} X = \sum_{i=1}^{2n-2} x_i \, \varepsilon_i(z) = \sum_{i=1}^{n-1} \eta_i (t) \, e_z^i(t) + \xi_i(t) \, \dot{e}_z^i(t), \end{equation} \noindent ($(\eta_i(t),\xi_i(t))$ are the components of the vector w.r.t. the canonical moving frame, and obviously $(\eta(0),\xi(0))=(x_1,\ldots,x_{2n-2})$). By computations we can prove that the pair $(\eta (t), \xi(t))$ satisfies the first-order differential system \begin{equation} \label{eq: system} \Big\lbrace \begin{array}{lcl} \dot{\xi}(t) &=& -\eta(t) \\ \dot{\eta}(t) &=& R_z(t)\xi(t) \end{array} \end{equation} \noindent and hence the vector $\xi(t)$ satisfies the second order differential equation \begin{equation} \label{eq: jacobi} \ddot{\xi}(t) + R_z(t) \xi(t) = 0. \end{equation} \noindent Since the canonical moving frame is defined such that $e_{\phi^t z}^i(0) = {\tilde{\phi}^t}_* e_z^i(t),\;$ $\dot{e}_{\phi^t z}^i(0) ={\tilde{\phi}^t}_* \dot{e}_z^i(t),\;$ $i=1,\ldots,n-1,$ it implies that $e_z^i(t) = {\tilde{\phi}^{-t}} _* e^i_{\phi^t z} (0) = {\tilde{\phi}^{-t}}_* \varepsilon_i(\phi^t z)$ and $\dot{e}_z^i(t) = {\tilde{\phi}^{-t}}_* \dot{e}^i_{\phi^t z} (0) = {\tilde{\phi}^{-t}}_* \varepsilon_{i+n-1}(\phi^t z),\;$ $i=1,\ldots,n-1.$ Hence \begin{eqnarray} {\tilde{\phi}^t}_* X & = & \sum_{i=1}^{n-1} \eta_i (t)\, {\tilde{\phi}^t}_* e_z^i(t) + \xi_i(t) \, {\tilde{\phi}^t}_* \dot{e}_z^i(t) \nonumber\\ & = & \sum_{i=1}^{n-1} \eta_i(t) \, \varepsilon_i(\phi^t z) + \xi_i(t) \, \varepsilon_{i+n-1}(\phi^t z) \nonumber, \end{eqnarray} \noindent which means that the components of ${\tilde{\phi}^t}_* X$ w.r.t. the basis $\{\varepsilon_i(\phi^t z)\}_{i=1}^{2n-2}$ of $\Sigma_{\phi^t z}$ are the same as the components of $X$ w.r.t. the canonical moving frame at time $t$. 
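As a brief numerical aside, the following sketch (whose toy data are chosen purely for illustration and do not come from the text) integrates the system (\ref{eq: system}) for a constant curvature matrix $R = \mathrm{diag}(-k_1^2,-k_2^2)$ with $k_i>0$. Each component of $\xi$ then grows like $e^{k_i t}$ for generic initial data, so that a volume element carried by the flow expands at the rate $k_1 + k_2 = \mathrm{Tr}\,\sqrt{-R}$, which is precisely the quantity appearing in the entropy estimate of the Theorem.

\begin{verbatim}
# Toy check of (eq: system)/(eq: jacobi): constant curvature
# R = diag(-k1^2, -k2^2), so xi'' = -R xi has solutions ~ exp(k_i t).
import numpy as np
from scipy.integrate import solve_ivp

k = np.array([0.5, 1.5])       # assumed values, for illustration only
R = np.diag(-k ** 2)           # nonpositive-definite curvature

def rhs(t, y):
    xi, eta = y[:2], y[2:]
    return np.concatenate([-eta, R @ xi])   # xi' = -eta, eta' = R xi

# Generic initial condition with a nonzero expanding component.
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0, -1.0, -1.0], rtol=1e-10)

T, xi_T = sol.t[-1], sol.y[:2, -1]
rates = np.log(np.abs(xi_T)) / T   # empirical exponential growth rates
print(rates)        # approximately [0.5, 1.5] = k
print(rates.sum())  # approximately 2.0 = Tr sqrt(-R)
\end{verbatim}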
Since the basis $\{\varepsilon_i(z)\}_i$ is orthonormal for any $z,$ we find that $\|\pi_{\bar{J}_{\phi^tz} \bar{J}^{\circ}_{\phi^tz}} \tilde{\phi}^t_*X\|= |\xi(t)|$ and $\|\pi_{\bar{J}^{\circ}_{\phi^tz} \bar{J}_{\phi^tz}} \tilde{\phi}^t_*X\|= |\dot{\xi}(t)|.$ \noindent Now we shall compute the Lyapunov exponents on $N$; by the Multiplicative Ergodic Theorem \cite{Mane} we know that the limit (\ref{eq: lyap}) exists a.e. (w.r.t. the standard Liouville measure normalized on $N$) in $N.$ Hence we can define the following subspaces of $\Sigma_z:$ \begin{eqnarray} E^u_z & = & \{X \in \Sigma_z : \lambda^-(z,X) <0\}, \nonumber\\ E^s_z & = & \{X \in \Sigma_z : \lambda^+(z,X) <0\}, \nonumber\\ E^0_z & = & \{X \in \Sigma_z : \lambda^-(z,X) \leq 0 \; \mathrm{and} \; \lambda^+(z,X) \leq 0\}; \nonumber \end{eqnarray} \noindent these subspaces span $\Sigma_z$. For any subspace $E_z$ of $\Sigma_z$ such that $E^u_z \subset E_z \subset E^u_z \oplus E^0_z$, we have that $\lim_{t \rightarrow \pm \infty} \frac{1} {|t|} \log |\det({\tilde{\phi}^t}_* |_{E_z})|= \pm \chi(z),$ where $\chi(z)$ is the sum of the positive Lyapunov exponents in $z$, taken with their multiplicities. Knowing this, we are now looking for such a subspace $E_z;$ we will see that a good candidate is the graph of a suitable linear operator, which we will call $U_z$, defined from $\bar{J}_z^{\circ}(0)$ to $\bar{J}_z(0)$. Let us now introduce for any $z\in N$ the subset $H(z)$ of $\Sigma_z$ such that $$ H(z)= \{X \in \Sigma_z : \; \frac{d}{dt}\| \pi_{\bar{J}_{\phi^t z}(0) \bar{J}^{\circ}_{\phi^t z}(0)} \tilde{\phi}^t_* X\| \geq 0\: \forall\: t\}; $$ \noindent clearly $H(z)$ is intrinsically defined and it is invariant along the trajectory $\phi^t z$. In the following, for simplicity we will denote $\bar{J}_{\phi^tz}(0)$ by $v(t)$ and $\bar{J}^{\circ}_{\phi^tz}(0)$ by $v^{\circ}(t).$ \begin{lemma} \label{lemma: sottospazio} $H(z)$ is a subspace of $\Sigma_z$. \end{lemma} \noindent \textbf{\textit{Proof }} From the convexity of $\|\pi_{v(t) v^{\circ}(t)} \tilde{\phi}^t_* X\|^2$, which follows from (\ref{eq: jacobi}) and the nonpositivity of the curvature, we deduce that a vector $X \in \Sigma_z$ belongs to $H(z)$ if and only if $\|\pi_{v(t) v^{\circ}(t)} \tilde{\phi}^t_* X\|$ is bounded for negative times. Linear combinations of vectors having this property satisfy this requirement. \hfill $\square$\\ \begin{lemma} $H(z) \cap v(0) = \{0\} $. \label{lemma: verticale} \end{lemma} \noindent \textbf{\textit{Proof }} A vector $X \in H(z)$ belongs to $v(0)$ if $\pi_{v(0) v^{\circ}(0)} X=0,$ i.e. if $\xi(0) = 0$; suppose by contradiction that such a (nonzero) vector is contained in $H(z);$ then $\frac{d^2}{dt^2}|\xi(t)|^2|_{t=0}= 2\left(\scalar{\dot{\xi}(0)}{\dot{\xi}(0)} - \scalar{R_z(0)\xi(0)}{\xi(0)}\right) = 2\, \scalar{\dot{\xi}(0)}{\dot{\xi}(0)} >0$ (note that $\dot{\xi}(0) \neq 0,$ since $X \neq 0$ and $\xi(0)=0$), hence $0$ is a strict minimum for $| \xi(t)|,$ which contradicts the definition of $H(z).$ \hfill $\square$\\ \begin{lemma} $H(z)$ is a Lagrangian subspace. 
\label{lemma: H_Lagrangiano} \end{lemma} \noindent \textbf{\textit{Proof }} Let us define, for any $\tau \in \mathbb{R}$, $H_{\tau} = \{X \in \Sigma_z : \frac{d}{dt}\|\pi_{v (t) v^{\circ}(t)} \tilde{\phi}^t_* X \| \geq 0 \; \forall \:t \geq \tau\};$ we have that $H_{\tau_1} \subseteq H_{\tau_2}$ if $\tau_1 \leq \tau_2$ and that $H(z)= \cap_{\tau} H_{\tau}.$ $H_{\tau}$ contains a Lagrangian subspace for any $\tau.$ Indeed, fix $\tau$ and consider $V_{\tau} = \{X \in \Sigma_z : \pi_{v(\tau) v^{\circ}(\tau)} \tilde{\phi}^{\tau}_* X =0 \};$ we prove using coordinates that this subspace is contained in $H_{\tau}:$ if we write $X = \sum_{i=1}^{n-1} -\dot{\xi}_i(t)\, e_z^i(t) + \xi_i(t)\, \dot{e}_z^i(t),$ we have that $\|\pi_{v (t) v^{\circ}(t)} \tilde{\phi}^t_* X \| = |\xi(t)|$ and hence, since $$ \frac{d}{dt} |\xi(t)|^2|_{t=\tau} =0 \quad \mathrm{and} \quad \frac{d^2}{dt^2}| \xi(t)|^2 \geq 0 \; \forall\:t, $$ \noindent $\frac{d}{dt} \|\pi_{v (t) v^{\circ}(t)} \tilde{\phi}^t_* X \| \geq 0 \; \forall\: t \geq \tau,$ and $V_{\tau} \subset H_{\tau}.$ Now, since $\tilde{\phi}^{\tau}_* V_{\tau} = \bar{J}_{\phi^{\tau} z} (0),$ and this last subspace is Lagrangian, we have proved our claim. $H(z)$ contains a Lagrangian subspace too; indeed, let us define for any $\tau$ $\hat{H}_{\tau} = \{V \in L(\Sigma_z) :V \subset H_{\tau} \},$ which is a compact nonempty subset in the Lagrange Grassmannian $L (\Sigma_z)$. Moreover, since $\hat{H}_{\tau_1} \subseteq \hat{H}_{\tau_2}$ for $\tau_1 \leq \tau_2,$ we have that $\cap_{\tau} \hat{H}_{\tau} \neq \varnothing;$ hence, since every element of $\hat{H}_{\tau}$ is contained in $H_{\tau}$ for any $\tau,$ any $V \in \cap_{\tau} \hat{H}_{\tau}$ is a Lagrangian subspace contained in $\cap_{\tau} H_{\tau} = H(z),$ which means that $H(z)$ contains a Lagrangian subspace. From Lemma \ref{lemma: verticale} we know that $\dim H(z) \leq n-1,$ hence we can conclude that $H(z)$ is a Lagrangian subspace. \hfill $\square$\\ Since the space $H(z)$ is Lagrangian and $H(z) \cap \bar{J}_z(0)=\{0\} \; \forall\: z,$ there exists a symmetric linear operator $U_z : \bar{J}_z^{\circ}(0) \rightarrow \bar{J}_z(0) $ such that for any element $X \in H(z)$ we have that $X= x+ U_z x,$ where $x \in \bar{J}_z^{\circ}(0)$, i.e. $H(z)$ is the graph of the operator $U_z$. Hence we can find a linear operator $V_z : \mathbb{R}^{n-1} \rightarrow \mathbb{R}^{n-1}$ such that if $H(z) \ni X = \sum_{i=1}^{n-1} \eta_i(0) \varepsilon_i(z)+\xi_i(0) \varepsilon_{i+n-1}(z),$ then $\eta(0) = -V_z \xi(0),$ and, by (\ref{eq: system}) we get that $\dot{\xi}(0)= V_z\xi(0),$ and by (\ref{eq: jacobi}) that the operator satisfies the Riccati equation \begin{equation} \label{eq: riccati} \dot{V}_{\phi^tz} + V_{\phi^tz}^2 + R_z(t)=0. \end{equation} \noindent By definition of $H(z),$ the operator $V_z$ is nonnegative definite for any $z$. \begin{lemma} $E^u_z \subset H(z) \subset E^u_z \oplus E^0_z.$ \label{lemma: contenuto} \end{lemma} \noindent \textbf{\textit{Proof }} Let $X \in E^u_z$ and $Y \in E^u_z \oplus E^0_z;$ $\lim_{t \rightarrow -\infty} \frac{1}{|t|} \log|\bar{\sigma}({\tilde{\phi}^t}_* X,{\tilde{\phi}^t}_* Y)| \leq \lim_{t \rightarrow -\infty} [\frac{1}{|t|} \log\|\bar{\sigma}\| + \frac{1}{|t|} \log\| {\tilde{\phi}^t}_* X\| + \frac{1}{|t|} \log\|{\tilde{\phi}^t}_* Y\|] = \lambda^-(z,X) + \lambda^-(z,Y) < 0,$ and this implies that $\bar{\sigma}({\tilde{\phi}^t}_* X,{\tilde{\phi}^t}_* Y) \rightarrow 0$ for $t \rightarrow -\infty .$ Since $\tilde{\phi}^{t*} \bar{\sigma} = \bar{\sigma},$ we get that $\bar{\sigma}(X,Y) = 0$ and hence $E^u_z$ and $E^u_z \oplus E^0_z$ are skew-orthogonal. 
By dimensional computations, we can prove that actually $E^u_z$ and $E^u_z \oplus E^0_z$ are skew-orthogonal complements of each other. Let $X \in E_z^u$, i.e. $\lim_{t \rightarrow -\infty} \frac{1}{|t|} \log\| \tilde{\phi}^t_*X\|< 0$; this means that $\| \tilde{\phi}^t_* X\| \rightarrow 0$ as $t \rightarrow -\infty,$ so that $\tilde{\phi}^t_* X$ is bounded in norm for nonpositive times, and consequently also $\pi_{v(t) v^{\circ}(t)} \tilde{\phi}^t_* X$ is, which implies that $\tilde{\phi}^t_* X \in H(\phi^t z)= \tilde{\phi}^t_*[H(z)] \Rightarrow E^u_z \subset H(z).$ Since $H(z)$ is Lagrangian, we also find that $H(z) \subset E^u_z \oplus E^0_z.$ \hfill $\square$\\ \begin{lemma} Let $X \in H(z);$ then $\pi_{v(0)v^{\circ}(0)} X \in \ker U_z$ if and only if $\|\pi_{v^{\circ}(t)v(t)} \tilde{\phi}^t_* X\|=0$ for any $t\leq 0$. \end{lemma} \noindent \textbf{\textit{Proof }} We prove it in coordinates. Let $X$ be as in (\ref{eq: base}) such that $\xi(0) \in \ker V_z,$ i.e. $\eta(0)=0;$ since by convexity (\ref{eq: jacobi}) $\frac{d^2}{dt^2} |\xi(t)|^2 \geq 0,$ and by hypothesis $\frac{d}{dt} |\xi(t)|^2|_{t=0} = 0,$ we get that $|\xi(t)|^2$ must remain constant for all $t\leq 0,$ which implies, using again convexity, that $|\dot{\xi}(t)|= 0 \ \forall \: t \leq 0.$ Conversely, if $\dot{\xi}(t) =0 \ \forall \: t\leq 0,$ then the claim obviously follows. \hfill $\square$\\ Let us denote by $H_0(z)$ the graph of $U_z$ restricted to the orthogonal complement in $\bar{J}_z^{\circ}(0)$ to $\ker U_z;$ it follows from the above lemma that $\tilde{\phi}^t_*[H_0(z)] \subseteq H_0(\phi^t z) \; \forall\:t\geq 0.$ Indeed, let $X \in H(z)$ such that $\xi(0) \in \ker V_z;$ then, by previous results, $\dot{\xi}(t) =0$ for any $t \leq 0,$ which means that $\xi(t) \in \ker V_{\phi^tz}$ for negative times, i.e. $\pi_{v^{\circ}(t) v(t)} \tilde{\phi}^t_* X \in \ker U_{\phi^tz}.$ Since the dimension of $H_0(z)$ is nondecreasing along the orbits of the Hamiltonian flow, we get that $\dim H_0(z)$ is constant on a $\phi^t-$invariant set of full measure, and hence on this set $\phi^t_*[ H_0(z)]=H_0({\phi^tz}).$ We will work in the space $H_0(z)$ because we need the operator $U_z$ to be strictly positive definite, and we call $U_z^0$ the restriction of $U_z$ to the orthogonal complement to $\ker U_z$ in $\bar{J}_z^{\circ}(0),$ and respectively $V^0_z$ and $R_z^0(t)$ the restrictions of $V_z$ and $R_z(t)$ to the orthogonal complement of $\ker V_z$ in $\mathbb{R}^{n-1}$; to do this, we shall first prove that $H_0(z)$ actually satisfies Lemma \ref{lemma: contenuto}. First, we need the following result: \begin{lemma} $R_z(t)$ vanishes on $\ker V_{\phi^t z}$ and both $R_z(t)$ and $V_{ \phi^t z}$ preserve the orthogonal complement in $\mathbb{R}^{n-1}$ to $\ker V_{\phi^t z}.$ \end{lemma} \noindent \textbf{\textit{Proof }} Call $\Delta_z(t)$ the orthogonal complement in $\mathbb{R}^{n-1}$ to $\ker V_{\phi^t z}.$ Let $X \in H(z),$ let $(-\dot{\xi}(t),\xi(t))$ be its components as in (\ref{eq: base}), and let $\xi(t) \in \ker V_{\phi^t z}$; then, by the previous lemma, $\dot{\xi}(\tau) = 0$ for $\tau \leq t,$ which implies the vanishing of the second derivative too, i.e. 
$R_z(t) \xi(t)= 0.$ Let now $x \in \ker V_{\phi^t z}, x' \in \Delta_z(t);$ since $\scalar{x}{ R_z(t) x'} = \scalar{R_z(t)x}{x'}=0,$ we conclude that $R_z(t)[\Delta_z(t)] \subseteq \Delta_z(t).$ In the same way we can show that $V_{\phi^t z} [\Delta_z(t)] \subseteq \Delta_z(t).$ \hfill $\square$\\ Let $X \in H(z) \setminus H_0(z)$; then, $\tilde{\phi}^t_* X$ is constant in norm w.r.t. $t$ for any nonpositive $t$. Hence $\lambda^-(z,X) = 0 \Rightarrow X \notin E^u_z,$ which implies that $E^u_z \subset H_0(z).$ Moreover, consider $X = X^{(1)} + X^{(2)} \in H(z);$ we call as usual $(-\dot{\xi}^{(i)}(t), \xi^{(i)}(t))$ the components of $\tilde{\phi}^t_* X^{(i)}$ w.r.t. the orthonormal frame $\{\varepsilon_i(\phi^tz)\}_i$, and we assume that $\xi^{(1)}(t) \in \ker V_{\phi^t z}$ and $\xi^{(2)}(t)$ lies in $\Delta_z(t)$ (defined as above). By previous results, we get that $\dot{\xi}^{(1)}(t) = 0 $ for $ t \leq 0,$ and hence $\ddot{\xi}^{(1)}(t) = 0 $ for $ t \leq 0,$ and also $R_z(t)\xi^{(1)}(t) =0,$ which implies that both $\xi^{(1)}$ and $\xi^{(2)}$ satisfy equation (\ref{eq: jacobi}). \noindent Since $H_0(z)$ is the graph of the operator $U_z^0,$ we can express the scalar product on $H_0(z)$ in terms of the scalar product on $\mathbb{R}^{n-1},$ putting $\scalar{X}{Y}_h = \scalar{\xi^X(0)}{A_z(0) \xi^Y(0)}_c,$ where $A_z(t) = \mathbb{I} + {V^0_{\phi^t z}}^2$ ($X$ and $Y$ as above), and $\scalar{\cdot}{\cdot}_c$ denotes the canonical scalar product on $\mathbb{R}^{n-1}.$ We call $a_z(t) = |\det \phi^t_*|_{H_0(z)}|$ the determinant of $\phi^t_*|_{H_0(z)}$ w.r.t. the scalar product defined by $A_z(t)$; hence we have that $$a_z(t) = \sqrt{ \det A_z(t)} |\det e^{\int_0^t V_{\phi^s z}ds}|_{H_0(z)}|_c .$$ We define $r_z(t) := \frac{d}{dt}\log a_z(t) = \frac{1}{2} \mathrm{Tr} \: \dot{A}_z(t) A^{-1}_z(t) + \mathrm{Tr} \: V^0_{\phi^t z},$ and we get by computations that $r_z(t)= \mathrm{Tr} \: [(V^0_{\phi^t z} - R_z^0(t) V^0_{\phi^t z})(\mathbb{I} + {V_{\phi^t z}^0}^2)^{-1}].$ Since $$\chi(z) = \lim_{t \rightarrow \infty} \frac{1}{t} \log |\det(\phi^t_*|_{ H_0(z)})| = \lim_{t \rightarrow \infty} \frac{1}{t} \log a_z(t) = \lim_{t \rightarrow \infty} \frac{1}{t} \int_0^t r_z(s) \: ds,$$ \noindent by the Birkhoff Ergodic Theorem \cite{Mane} we get that, provided that $r_z$ is an integrable function on $N,$ $h_{\mu}(\phi) = \int_N \chi(z) \, d\mu (z) = \int_N r_z (0) \, d\mu.$ Now we are going to compute the dynamical entropy using a different scalar product on $H_0(z),$ after showing that we will get the same value. Call $A_z'(t) = V^0_{\phi^t z},$ and define the scalar product $\scalar{X}{Y}' = \scalar{\xi^X(0)}{A_z'(0) \xi^Y(0)};$ we also get that $r_z'(t) = \frac{1}{2}\mathrm{Tr} \: [V^0_{\phi^t z} - R_z^0(t){ V_{\phi^t z}^0}^{-1}].$ The volume element on $N$ w.r.t. 
the scalar product given by $A'$ is related to the standard volume element in this way: $d\mu' = \sqrt {\frac{\det A'}{\det A}} d\mu.$ If we call $c(t) = \frac{d\mu}{d\mu'} =\sqrt{\frac{\det A (t)}{\det A' (t)}} > 1,$ we find that $0<a'(t)< a(t) c(0).$ We have that: $$ \limsup_{t \rightarrow \infty} \frac{1}{t} \int_0^t r_z'(s) \: ds = \limsup_{t \rightarrow \infty} \frac{1}{t} \log a_z'(t) \leq \lim_{t \rightarrow \infty} \frac{1}{t} \log a_z (t) = \chi(z) $$ $$\liminf_{t \rightarrow -\infty} \frac{1}{|t|} \int_t^0 r_z'(s) \: ds = - \limsup_{t \rightarrow -\infty} \frac{1}{|t|} \log a_z'(t) \geq - \lim_{t \rightarrow -\infty} \frac{1}{|t|} \log a_z (t) = \chi(z) ,$$ \noindent hence $$\limsup_{t \rightarrow \infty} \frac{1}{t} \int_0^t r_z'(s)\: ds \leq \chi(z) \leq \liminf_{t \rightarrow -\infty} \frac{1}{|t|} \int_t^0 r_z'(s) \: ds.$$ \noindent $r_z'$ is measurable on $N,$ since it is continuous. Applying the following Lemma (see \cite{BaWoEnGeo}), we can prove it is also integrable on $N$: \begin{lemma} Let $\phi^t$ be a measure-preserving flow on a probability space $(X,\mu)$ and $f: X \rightarrow \mathbb{R}$ a measurable nonnegative function; if for almost every $x \in X$ $\limsup_{T \rightarrow + \infty} \frac{1}{T} \int_0^T f(\phi^t x)\:dt \leq k(x),$ where $k :X \rightarrow \mathbb{R}$ is a measurable function, then $$\int_X f(x) \:d \mu(x) \leq \int_X k(x) \:d \mu(x).$$ \end{lemma} Hence, we get by the Ergodic Theorem and the equality of time averages in the future and in the past that $\int_N r_z'(0) \, d\mu = \int_N \chi(z) \, d\mu (z) = h_{\mu}(\phi).$ Finally, we use the following result (see \cite{BaWoEnGeo}): \begin{lemma} Given three symmetric linear operators $U,M,N$ on a Euclidean space such that $M$ and $N$ are nonnegative definite and $U$ is strictly positive definite, we get that $\mathrm{Tr} \: [MU + NU^{-1}] \geq 2 \mathrm{Tr} \: \sqrt{M} \sqrt{N},$ where equality holds iff $\sqrt{M}U = \sqrt{N}.$ \end{lemma} \noindent Since we have that $r_z'(t) = \frac{1}{2}\mathrm{Tr} \: [V^0_{\phi^t z} - R_z^0(t){V_{\phi^t z}^0}^{-1}],$ where $V^0_{\phi^t z}$ is (strictly) positive definite and $-R_z^0(t)$ is nonnegative definite, we can apply the previous lemma with $U= V_{\phi^t z}^0,$ $M=\mathbb{I}$ and $N=-R_z^0(t),$ obtaining $\frac{1}{2}\mathrm{Tr} \: [V^0_{\phi^t z} - R_z^0(t){V_{\phi^t z}^0}^{-1}] \geq \mathrm{Tr} \: \sqrt{-R_z^0(t)},$ and hence $$ h_{\mu}(\phi) \geq \int_N \mathrm{Tr} \: \sqrt{-R_z^0(0)} \: d\mu = \int_N \mathrm{Tr} \: \sqrt{-R_z(0)} \: d\mu. $$ \hfill $\square$ \vspace{0.7cm} \noindent \textbf{Remark} The estimate is sharp (i.e., we have equality) if and only if $V_{\phi^t z}^0 = \sqrt{-R_z^0(t)}$ for almost all $z \in N,$ which implies that $V^2_{\phi^t z} = -R_z(t)$ almost everywhere on $N$, and hence, by continuity, for every $z \in N$; this means that $\dot{V}_{\phi^t z} =0$ on $N$, i.e. all Jacobi curves are symmetric \cite{AgGeHam}. \vspace{1cm} \noindent \textbf{Acknowledgements} I would like to thank Prof. A. A. Agrachev for his constant support and fruitful discussions.
\section{Introduction} \label{introduction} The deluge of data accumulated from sensors, social networks, computational sciences, and location-aware services calls for advanced querying and analytics that are often dependent on efficient aggregation and summarization techniques. The SQL group-by operator is one main construct that is used in conjunction with aggregate operations to cluster the data into groups and produce useful summaries. Grouping is usually performed by aggregating into the same groups tuples with equal values on a certain subset of the attributes. However, many applications (e.g., in Section~\ref{section:applications}) are often interested in grouping based on \textit{similar} rather than strictly equal values. Clustering~\cite{BIBExample:han2006data} is a well-known technique for grouping similar data items in the multi-dimensional space. In most cases, clustering is performed outside of the database system. Moving the data outside of the database to perform the clustering and then back into the database for further processing results in a costly impedance mismatch. Moreover, based on the needs of the underlying applications, the output clusters may need to be further processed by SQL to filter out some of the clusters and to perform other SQL operations on the remaining clusters. Hence, it would be greatly beneficial to develop practical and fast similarity group-by operators that can be embedded within SQL to avoid such impedance mismatch and to benefit from the processing power of all the other SQL operators. SQL-based Similarity Group-by (SGB) operators have been proposed in~\cite{BIBExample:silva2013similarity} to support several semantics to group similar but not necessarily equal data. Although several applications can benefit from using existing SGB over Group-by, a key shortcoming of these operators is that they focus on one-dimensional data. Consequently, data can only be approximately grouped based on one attribute at a time. \begin{sloppypar} In this paper, we introduce new similarity-based group-by operators that group multi-dimensional data using various metric distance functions. More specifically, we propose two SGB operators, namely SGB-All and SGB-Any, for grouping multi-dimensional data. SGB-All forms groups such that a tuple or a data item, say $o$, belongs to a group, say $g$, if $o$ is at a distance within a user-defined threshold from all other data items in $g$. In other words, each group in SGB-All forms a clique of nearby data items in the multi-dimensional space. For example, all the two-dimensional points ($a$-$e$) in Figure~\ref{fig:sgbnotion}a are within distance 3 from each other and hence form a clique. They are all reported as members of one group in the output of SGB-All. In contrast, SGB-Any forms groups such that a tuple or a data item, say $o$, belongs to a group, say $g$, if $o$ is within a user-defined threshold from at least one other data item in $g$. For example, all the two-dimensional points in Figure~\ref{fig:sgbnotion}b form one group. Point $a$ is within Distance $3$ from Point $c$, which in turn is within Distance $3$ from Points $b$, $d$, and $f$. Furthermore, Point $e$ is within Distance $3$ from Point $d$, and so on. Therefore, Points $a$-$h$ of Figure~\ref{fig:sgbnotion}b are reported as members of one group in the output of SGB-Any. Notice that in the SGB-All operator, a data item may satisfy the membership criterion of multiple groups. 
For example, data item $c$ in Figure~\ref{fig:sgbnotion}a forms a clique with two groups. In this case, we propose three semantics, namely, \textit{on-overlap join-any}, \textit{on-overlap eliminate}, and \textit{on-overlap form-new-group}, for handling such a case. We provide efficient algorithms for computing the two proposed SGB operators over correlated multi-dimensional data. The proposed algorithms use a filter-refine paradigm. In the filter step, a fast yet conservative check is performed to identify the data items that are candidates to form groups. Some of the data items resulting from the filter step will end up being false-positives that will be discarded. The refinement step eliminates the false-positives to produce the final output groups. Notice that for the case of SGB-Any, a data item cannot belong to multiple groups. For example, consider a data item, say $o$, that is a member of two groups, say $g_1$ and $g_2$, i.e., $o$ is within distance $\epsilon$ from at least one other data item in each of $g_1$ and $g_2$. In this case, based on the semantics of SGB-Any, Groups $g_1$ and $g_2$ merge into one encompassing bigger group that contains all members of $g_1$ and $g_2$ along with the common data item $o$. Specifically, we mainly focus on two- and three-dimensional data spaces, leaving higher dimensions for future work. The contributions of this paper are as follows: \begin{enumerate} \item We introduce two new operators, namely SGB-All and SGB-Any, for grouping multi-dimensional data from within SQL. \item We present an extensible algorithmic framework to accommodate the various semantics of SGB-All and SGB-Any along with various options to handle overlapping data among groups. We introduce effective optimizations for both operators. \item We prototype the two operators inside PostgreSQL and study their performance using the TPC-H benchmark. The experiments demonstrate that the proposed algorithms can achieve up to three orders of magnitude enhancement in performance over the baseline approaches. Moreover, the performance of the proposed SGB operators is comparable to that of the relational Group-by, and outperforms state-of-the-art clustering algorithms (i.e., \textit{K-means}, \textit{DBSCAN}, and \textit{BIRCH}) by one to three orders of magnitude. \end{enumerate} \end{sloppypar} The rest of the paper proceeds as follows. Section~\ref{section:related-work} discusses the related work. Section~\ref{section:background} provides background material. Section~\ref{section: similarity operators} introduces the new SGB operators. Section~\ref{section:applications} presents application scenarios that demonstrate the use and practicality of the various proposed semantics for SGB operators. Sections~\ref{section:sgball-framework} and ~\ref{section:sgb-any-framework} introduce the algorithmic frameworks for SGB-All and SGB-Any operators, respectively. Section~\ref{section:performance-evaluation} describes the in-database extensions to support the two operators and their performance evaluation from within PostgreSQL. Section~\ref{section:conclusion} concludes the paper. 
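\begin{sloppypar} Before proceeding, we make the two grouping semantics concrete with a minimal sketch. The sketch below is an illustration only and is not the in-database algorithm of Sections~\ref{section:sgball-framework} and~\ref{section:sgb-any-framework}: SGB-Any is realized with a simple union-find over all pairs of points within distance $\epsilon$, while SGB-All is realized with an insertion-based variant under the \textit{on-overlap eliminate} semantics. The maximum distance $L_\infty$ is used, and all names and coordinates below are ours, chosen to mimic the five-point configuration used in the examples of Section~\ref{section: similarity operators}. \end{sloppypar}

\begin{verbatim}
# Illustrative sketch of the SGB-Any and SGB-All (ON-OVERLAP ELIMINATE)
# semantics on 2-D points with the L-infinity distance.
import itertools

def linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def sgb_any(points, eps):
    # Union-find: transitively connect all pairs within eps.
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in itertools.combinations(range(len(points)), 2):
        if linf(points[i], points[j]) <= eps:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

def sgb_all_eliminate(points, eps):
    # A point joins a group only if it is within eps of ALL members;
    # a point matching several groups is in the overlap set -> dropped.
    groups = []
    for p in points:               # points processed in arrival order
        hits = [g for g in groups if all(linf(p, q) <= eps for q in g)]
        if len(hits) == 1:
            hits[0].append(p)
        elif not hits:
            groups.append([p])
    return groups

# Coordinates chosen to mimic points a1..a5 with eps = 3.
pts = [(1, 1), (2, 2), (6, 6), (7, 7), (4, 4)]
print(sgb_any(pts, 3))            # one merged group of five points
print(sgb_all_eliminate(pts, 3))  # two groups of two; (4, 4) dropped
\end{verbatim}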
\begin{figure} \centering \includegraphics[width=3in,height=1.2in]{pic/sgbsemantics} \caption{The semantics of the similarity predicates ($\epsilon=3$).} \label{fig:sgbnotion} \end{figure} \section{Related Work} \label{section:related-work} \begin{sloppypar} Previous work on similarity-aware query processing addressed the theoretical foundation and query optimization issues for similarity-aware query operators~\cite{BIBExample:silva2013similarity}. \cite{BIBExample:adali1998multi,BIBExample:atnafu2001similarity} introduce a similarity algebra that extends relational algebra operations, e.g., joins and set operations, with similarity semantics. Similarity queries and their optimizations include algorithms for similarity range search and K-Nearest Neighbor (KNN)~\cite{BIBExample:braunmuller2001multiple}, similarity join~\cite{BIBExample:chen2003similar_join}, and similarity aggregates~\cite{BIBExample:razente2008aggregate}. Most of this work focuses on semantics and transformation rules for query optimization, independently of the actual algorithms that realize similarity-aware operators. In contrast, our focus is on the latter. \end{sloppypar} Clustering forms groups of similar data for the purpose of learning hidden knowledge. Clustering methods and algorithms have been extensively studied in the literature, e.g., see~\cite{BIBExample:berkhin2006survey,BIBExample:han2006data}. The main clustering methods are partitioning, hierarchical, and density-based. \textit{K-means}~\cite{BIBExample:kanungo2002efficient} is a widely used partitioning algorithm that uses several iterations to refine the output clusters. Hierarchical methods build clusters either divisively (i.e., top-down) such as in \textit{BIRCH}~\cite{BIBExample:zhang1996birch}, or agglomeratively (i.e., bottom-up) such as in \textit{CURE}~\cite{BIBExample:guha1998cure}. Density-based methods, e.g., \textit{DBSCAN}~\cite{BIBExample:ester1996density}, cluster data based on local criteria, e.g., density reachability among data elements. The key differences between our proposed SGB operators and clustering are: (1)~the proposed SGB operators are relational operators that are integrated into a relational query evaluation pipeline with various grouping semantics. Hence, they avoid the impedance mismatch experienced by standalone clustering and data mining packages that mandate extracting the data to be clustered out of the DBMS. (2)~In contrast to standalone clustering algorithms, the SGB operators can be interleaved with other relational operators. (3)~Standard relational query optimization techniques that apply to the standard relational group-by are also applicable to the SGB operators as illustrated in~\cite{BIBExample:silva2013similarity}. This is not feasible with standalone clustering algorithms. Also, improved performance can be gained by using database access methods that process multi-dimensional data. An early work on similarity-based grouping appears in~\cite{BIBExample:schallehn2004efficient}. It addresses the inconsistencies and redundancy encountered while integrating information systems with dirty data. However, this work realizes similarity grouping through pairwise comparisons, which incur excessive computations in the absence of a proper index. Furthermore, the introduced extensions are not integrated as first-class database operators. 
The work in~\cite{BIBExample:zhang2007cluster} focuses on overcoming the limitations of the distinct-value group-by operator and introduces the SQL construct ``Cluster~By" that uses conventional clustering algorithms, e.g., \textit{DBSCAN}, to realize similarity grouping. Cluster~By avoids the impedance mismatch caused by moving the data outside the DBMS to perform clustering. Our SGB operators are more generic as they use a set of predicates and clauses to refine the grouping semantics, e.g., the distance relationships among the data elements that constitute the group and how inter-group overlaps are dealt with. \begin{sloppypar} Several DBMSs have been extended to support similarity operations. SIREN~\cite{BIBExample:razente2006siren} is a similarity retrieval engine that allows executing similarity queries over a relational DBMS. POSTGRESQL-IE~\cite{BIBExample:guliato2009postgresql} is an image handling extension of PostgreSQL to support content-based image retrieval capabilities, e.g., supporting the image data type and responding to image similarity queries. While these extensions incorporate various notions of similarity into query processing, they focus on the similarity search operation. SimDB~\cite{BIBExample:silva2013similarity} is a PostgreSQL extension that supports similarity-based queries and their optimizations. Several similarity operations, e.g., join and group-by, are implemented as first-class database operators. However, the similarity operators in SimDB focus on one-dimensional data and do not handle multi-dimensional attributes. \end{sloppypar} \section{Preliminaries} \label{section:background} In this section, we provide background definitions and formally introduce similarity-based group-by operators. \newtheorem{defn}{Definition} \begin{defn} A \textbf{metric space} is a space $\textbf{M} = \langle \mathbb{D}, \delta\rangle$ in which the distance between two data points within a domain $\mathbb{D}$ is defined by a function $\delta : \mathbb{D} \times \mathbb{D} \to \mathbb{R}$ that satisfies the properties of symmetry, non-negativity, and triangular inequality. \end{defn} We use the Minkowski distance $L_p$ as the distance function $\delta$. We consider the following two Minkowski distance functions. Let $p_x$ be a data point in the multi-dimensional space of the form $p_x:\langle x_1, ..., x_d\rangle$ and let $p_{xy}$ be the value of the $y^{th}$ dimension of $p_x$. \begin{itemize} \item The Euclidean distance\\ $L_2 : \delta_2(p_i, p_j) = \sqrt{\displaystyle\sum_y\left(p_{iy}-p_{jy}\right)^2}$ \item The maximum distance \\$L_\infty : \delta_\infty(p_i, p_j) = \displaystyle \max_y\left|p_{iy}-p_{jy}\right|$. \end{itemize} \begin{defn} A \textbf{similarity predicate} $\xi_{\delta, \epsilon}$ is a Boolean expression that returns TRUE for two multi-dimensional points, say $p_i$ and $p_j$, if the distance $\delta$ between $p_i$ and $p_j$ is less than or equal to $\epsilon$, i.e., $\xi_{\delta,\epsilon}(p_i, p_j) : \delta(p_i, p_j) \le \epsilon$. In this case, the two points are said to be similar. \end{defn} \begin{sloppypar} \begin{defn} Let $T$ be a relation of tuples, where each tuple, say $t$, is of the form $t = \left\lbrace GA_{1}, ..., GA_{k}, NGA_{1}, ..., NGA_{l} \right\rbrace$, let the subset $GA_c = \left\lbrace GA_1, ..., GA_k \right\rbrace$ be the grouping attributes, the subset $NGA = \left\lbrace NGA_1, ..., NGA_l \right\rbrace$ be the non-grouping attributes, and ${\xi}_{\delta,\epsilon}$ be a similarity predicate. 
Then, the \textbf{similarity Group-by operator} $ {\mathcal{G}}_{\langle GA_c, ({\xi}_{\delta, \epsilon})\rangle} (R) $ forms a set of answer groups $G_{s}$ by applying ${\xi}_{\delta, \epsilon}$ to the elements of $GA_c$ such that a pair of tuples, say $t_i$ and $t_j$, are in the same group if $\xi_{\delta, \epsilon}(t_i.GA_c, t_j.GA_c)$ holds. \end{defn} \end{sloppypar} \begin{sloppypar} \begin{defn} \label{def:oset} Given a set of groups $G=\{g_1, ..., g_m\}$, the \textbf{Overlap Set} $Oset$ is the set of tuples formed by the union of the intersections of all pairs of groups $(g_1, ..., g_m)$, i.e., $Oset =\cup_{(i,j) \in \{1..m\}} (g_i \cap g_j )$, where $i \neq j$. In other words, $Oset$ contains all the tuples that belong to more than one group. \end{defn} \end{sloppypar} For simplicity, we study the case where the set of grouping attributes, $GA_c$, contains only two attributes. In this case, we can view tuples as points in the two-dimensional space, each of the form $p$:$(x_1,x_2)$. We enclose each group of points by a bounding rectangle $R$:$(p_l, p_r)$, where points $p_l$ and $p_r$ correspond to the upper-left and bottom-right corners of $R$, respectively. \section{Similarity Group-By Operators} \label{section: similarity operators} This section introduces the semantics of the two similarity-based group-by operators, namely, SGB-All and SGB-Any. \subsection{Similarity Group-By ALL (SGB-All)} \begin{sloppypar} Given a set of tuples whose grouping attributes form a set, say $P$, of two-dimensional points, where $P=\left\lbrace p_1, ..., p_n \right\rbrace$, the SGB-All operator $\mathcal{\check{G}}_{all}$ forms a set, say $G_m$, of groups of points from $P$ such that $\forall g\in G_m$, the similarity predicate $\xi _{\delta, \epsilon}$ is TRUE for all pairs of points $\langle p_i ,p_j\rangle \in g$, and $g$ is maximal, i.e., there is no group $g'$ satisfying the same property such that $g \subsetneq g'$. More formally, \[\mathcal{\check{G}}_{all} =\left\lbrace g \;| \; \forall p_{i}, p_{j} \in g, \; \xi _{\delta, \epsilon}(p_i, p_j) \wedge g \; is \;maximal\right\rbrace\] \end{sloppypar} Figure~\ref{fig:sgbnotion} gives an example of two groups ($a$-$e$) and ($c$,$f$,$g$), where all pairs of elements within each group are within distance $\epsilon = 3$ of each other. The proposed SQL syntax for the SGB-All operator is as follows:\\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{column}, aggregate-func(\textit{column}) \newline \hspace*{4ex}FROM \textit{table-name} \newline \hspace*{4ex}WHERE \textit{condition} \newline \hspace*{4ex}{GROUP BY} \textit{column} \textbf{DISTANCE-TO-ALL} [\textit{L2} $ \mid $ \textit{LINF}] \textbf{WITHIN} \textit{$\epsilon$} \newline \hspace*{4ex}\textbf{ON-OVERLAP} [\textit{JOIN-ANY} $ \mid $ \textit{ELIMINATE} $ \mid $\textit{FORM-NEW-GROUP}] } SGB-All uses the following clauses to realize similarity-based grouping: \begin{itemize} \item DISTANCE-TO-ALL: specifies the distance function to be applied by the similarity predicate when deciding the membership of points within a group. \begin{itemize} \item L2: $L_2$ (Euclidean distance). \item LINF: $L_{\infty}$ (maximum distance). \end{itemize} \item ON-OVERLAP: an arbitration clause to decide on a course of action when a data point is within Distance $\epsilon$ from more than one group. 
When a point, say $p_i$, matches the membership criterion for more than one group, say $g_1 \cdots g_w$, one of the following three actions is taken: \begin{itemize} \item JOIN-ANY: the data point $p_i$ is randomly inserted into any one group out of $g_1 \cdots g_w$. \item ELIMINATE: discard the data point $p_i$, i.e., all data points in $Oset$ (see Definition \ref{def:oset}) are eliminated. \item FORM-NEW-GROUP: insert $p_i$ into a separate group, i.e., form new groups out of the points in $Oset$. \end{itemize} \end{itemize} \begin{sloppypar} \begin{example} \label{example-all} The following query performs the aggregate operation $count$ on the groups formed by SGB-All on the two-dimensional grouping attributes \textit{GPSCoor-lat} and \textit{GPSCoor-long}. The $L_{\infty}$ distance is used with Threshold $\epsilon=3$.\\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{count(*)} \newline \hspace*{4ex}FROM \textit{$GPSPoints$} \newline \hspace*{4ex}GROUP BY \textit{GPSCoor-lat,GPSCoor-long} \textbf{DISTANCE-TO-ALL} \textit{LINF} \newline \hspace*{4ex}\textbf{WITHIN} \textit{3} \newline \hspace*{4ex}\textbf{ON-OVERLAP} <\textit{clause}> } Consider Points $a_1$-$a_5$ in Figure~\ref{fig:sgbexample} that arrive in the order $a_1,a_2, \cdots,a_5$. After processing $a_4$, the following groups satisfy the SGB-All predicates: $g_{1} \left\lbrace a_{1}, a_2 \right\rbrace$ and $g_{2} \left\lbrace a_{3},a_{4} \right\rbrace$. However, data point $a_5$ is within $\epsilon$ from $a_1, a_2$ in $g_1$ and from $a_3, a_4$ in $g_2$. Consequently, an arbitration ON-OVERLAP clause is necessary. We consider the three possible semantics below for illustration. With an ON-OVERLAP JOIN-ANY clause, a group is selected at random. If $g_1$ is selected, the resulting groups are $g_1\{a_1, a_2, a_5\}$ and $g_2\{a_3, a_4\}$, and the answer to the query is $\{3,2\}$. With an ON-OVERLAP ELIMINATE clause, the overlapping point $a_5$ gets dropped; the resulting groups are $g_{1} \left\lbrace a_{1}, a_2 \right\rbrace$ and $g_{2} \left\lbrace a_{3},a_{4} \right\rbrace$, and the query output is $\{2,2\}$. With an ON-OVERLAP FORM-NEW-GROUP clause, the overlapping point $a_5$ is inserted into a newly created group; the resulting groups are $g_{1} \left\lbrace a_{1}, a_2 \right\rbrace$, $g_{2} \left\lbrace a_{3},a_{4} \right\rbrace$, $g_3\{a_5\}$ and the query output is $\{2,2,1\}$. \end{example} \end{sloppypar} \begin{figure} \centering \includegraphics[width=1.6in,height=1.25in]{pic/sgbexample} \caption{Data points using $\epsilon = 3$ and $L_{\infty}$.} \label{fig:sgbexample} \end{figure} \subsection{Similarity Group-By Any (SGB-Any)} \begin{sloppypar} Given a set of tuples whose grouping attributes form a set, say $P$, of two-dimensional points, where $P=\left\lbrace p_1, ..., p_n \right\rbrace$, the SGB-Any operator $\mathcal{\check{G}}_{any}$ clusters points in $P$ into a set of groups, say $G_m$, such that, for each group $g \in G_m$, the points in $g$ are all connected by edges to form a graph, where an edge connects two points, say $p_i$ and $p_j$, in the graph if they are within Distance $\epsilon$ from each other, i.e., $\xi _{\delta, \epsilon}(p_i, p_j)$. More formally,\\ $\mathcal{\check{G}}_{any} =\{ g \;| \; \forall p_{i}, p_{j} \in g,( \xi _{\delta, \epsilon}(p_i, p_j) \; \vee \;(\exists \; p_{k1},..., p_{kn}, \; \xi _{\delta, \epsilon} (p_i, p_{k1})\;\wedge ... 
\wedge \xi _{\delta, \epsilon}(p_{kn}, p_{j})) ) \wedge g \; is \;maximal\}$\\ \end{sloppypar} The notion of distance-to-any between elements within a group is illustrated in Figure \ref{fig:sgbnotion}b, where $\epsilon = 3$. All of the points (a-h) form one group. The corresponding SQL syntax of the SGB-Any operator is as follows:\\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{column}, aggregate-func(\textit{column}) \newline \hspace*{4ex}FROM \textit{table-name} \newline \hspace*{4ex}WHERE \textit{condition} \newline \hspace*{4ex}GROUP BY \textit{column} \textbf{DISTANCE-TO-ANY} [\textit{L2} $\mid$ \textit{LINF}] \textbf{WITHIN} \textit{$\epsilon$} \\ } SGB-Any uses the DISTANCE-TO-ANY predicate that applies the chosen distance metric when evaluating the distance between adjacent points. Under the SGB-Any semantics, the case of points overlapping multiple groups does not arise. The reason is that when an input point overlaps multiple groups, the groups merge to form one large group. \begin{sloppypar} \begin{example} \label{example-any} The following query performs the aggregate operation $count$ on the groups formed by SGB-Any on the two-dimensional grouping attributes \textit{GPSCoor-lat} and \textit{GPSCoor-long} using the Euclidean distance with $\epsilon=3$.\\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{count(*)} \newline \hspace*{4ex}FROM \textit{$GPSPoints$} \newline \hspace*{4ex}\textbf{GROUP BY} \textit{\textit{GPSCoor-lat}, \textit{GPSCoor-long}} \ \newline \hspace*{4ex}\textbf{DISTANCE-TO-ANY }\textit{L2} \textbf{WITHIN} \textit{3} } Consider the example in Figure~\ref{fig:sgbexample}. After processing $a_4$, the groups are $g_1\{ a_1, a_2 \}$ and $g_2\{a_3, a_4\}$. Since Point $a_5$ is within $\epsilon$ from both $a_1, a_2$ in $g_1$ and $a_3, a_4$ in $g_2$, the two groups, together with $a_5$, are merged into a single group. Therefore, the output of the query is $\{5\}$. Any overlapping point will cause groups to merge and hence there is no need to add a special clause to handle overlaps. \end{example} \end{sloppypar} \section{Applications} \label{section:applications} \begin{sloppypar} In this section, we present application scenarios that demonstrate the practicality and the use of the various semantics of the proposed Similarity Group-by operators. \begin{example} \label{example-manet} \textbf{Mobile Ad hoc Network (MANET)} is a self-configuring wireless network of mobile devices (e.g., personal digital assistants). A mobile device in a MANET communicates directly with other devices that are within the range of the device's radio signal, or indirectly with distant mobile devices using gateways (i.e., intermediate mobile devices, e.g., $m_1$ and $m_2$ in Figure \ref{fig:manet}a). In a MANET, wireless links among nearby devices are established by broadcasting special messages. Radio signals are likely to overlap. As a result, careless broadcasting may result in redundant messages, contention, and collisions on communication channels. Consider the Mobile Devices table in Figure~\ref{fig:manet}b that maintains the geographic locations of the mobile devices in a MANET. In the following, we give example queries that illustrate how MANETs can benefit from the SGB-All and SGB-Any operators.
\end{example} \begin{figure} \centering \includegraphics[width=3.3in,height=1.15in]{pic/manet} \caption{(a)~A Mobile Ad hoc Network (MANET) containing the devices $m_1\dots m_6$, where the circle around each device is its signal range, (b)~The corresponding Mobile Devices table.} \label{fig:manet} \end{figure} \begin{query} \label{query:manet-any} \textbf{Geographic areas that encompass a MANET.} A mobile device, say $m$, belongs to a MANET if and only if $m$ is within the \textit{signal range} of at least one other mobile device. The SGB-Any semantics identifies a connected group of mobile devices using \textit{signal range} as the similarity grouping threshold.\\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{ST\_Polygon(Device-lat, Device-long)} \newline \hspace*{4ex}FROM \textit{$MobileDevices$} \newline \hspace*{4ex}\textbf{GROUP BY} \textit{\textit{Device-lat}, \textit{Device-long}} \newline \hspace*{4ex}\textbf{DISTANCE-TO-ANY }\textit{L2} \textbf{WITHIN} \textit{SignalRange} } \\ Referring to the mobile devices in Figure \ref{fig:manet}a, the output of Query~\ref{query:manet-any} is a polygon that encompasses mobile devices $m_1$-$m_6$. \end{query} \begin{query} \label{query:gateway} \textbf{Candidate gateway mobile devices.} A gateway is an overlapping mobile device that connects two devices that are not within each other's signal range. SGB-All with FORM-NEW-GROUP inserts the overlapped devices into a new group. Therefore, the devices in the newly formed group are ideal gateway candidates. \\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{COUNT(*)} \newline \hspace*{4ex}FROM \textit{$MobileDevices$} \newline \hspace*{4ex}\textbf{GROUP BY} \textit{\textit{Device-lat}, \textit{Device-long}} \newline \hspace*{4ex}\textbf{DISTANCE-TO-ALL }\textit{L2} \textbf{WITHIN} \textit{SignalRange} \newline \hspace*{4ex}\textbf{ON-OVERLAP FORM-NEW-GROUP} } \\The output of Query~\ref{query:gateway} is the number of candidate gateway mobile devices. Along the same line, identifying mobile devices that cannot serve as a gateway is equally important to a MANET. SGB-All with ELIMINATE identifies mobile devices that cannot serve as a gateway by discarding the overlapping mobile devices. \end{query} \begin{example} \label{example-lSNE} \textbf{Location-based group recommendation in mobile social media}. Several mobile social applications, e.g., \textit{WhatsApp} and \textit{Line}, employ users' frequent geographical locations to form groups that members may like to join. For instance, users who reside in a common area (e.g., within a distance threshold) may share similar interests and are inclined to share news. However, members who overlap several groups may disclose information from one group to another and undermine the privacy of the overlapping groups. Query~\ref{query:social-sgball} demonstrates how SGB-All allows forming location-based groups without compromising privacy. \end{example} \begin{query} \label{query:social-sgball} \textbf{Forming private location-based groups}. The various SGB-All semantics form groups while handling ON-OVERLAP options that restrict members from joining multiple groups. In Query \ref{query:social-sgball}, we assume that Table Users-Frequent-Location maintains the users' data, e.g., user-id and frequent location. The user-defined aggregate function \textit{List-ID} returns a list that contains all the user-ids within a group.
\\ {\ttfamily \scriptsize \hspace*{4ex}SELECT \textit{List-ID(user-id)}, \newline \hspace*{4ex}\textit{ST\_Polygon(User-lat, User-long)} \newline \hspace*{4ex}FROM \textit{$Users-Frequent-Location$} \newline \hspace*{4ex}\textbf{GROUP BY} \textit{\textit{User-lat}, \textit{User-long}} \newline \hspace*{4ex}\textbf{DISTANCE-TO-ALL }\textit{L2} \textbf{WITHIN} \textit{Threshold} \newline \hspace*{4ex}\textbf{[ON-OVERLAP JOIN-ANY | ELIMINATE | FORM-NEW-GROUP]} }\\ The output of Query~\ref{query:social-sgball} is a list of user-ids for each formed group along with a polygon that encompasses the group's geographical location. The JOIN-ANY semantics recommends a single arbitrary group for each overlapping member, who in this case cannot join multiple groups. The ELIMINATE semantics drops overlapping members from the recommendation, while FORM-NEW-GROUP creates dedicated groups for overlapping members. \end{query} \end{sloppypar} \section{Algorithms for SGB-All} \label{section:sgball-framework} In this section, we present an extensible algorithmic framework to realize similarity-based grouping using the distance-to-all semantics with the various options to handle the overlapping data among the groups. \subsection{Framework} \begin{sloppypar} Procedure~\ref{alg:sgball-framework} illustrates a generic algorithm to realize SGB-All. This generic algorithm supports the various data overlap semantics using one algorithmic framework. The algorithm breaks down the SGB-All operator into procedures that can be optimized independently. For each data point, the algorithm starts by identifying two sets (Line 2). The first set, namely $CandidateGroups$, consists of the groups that $p_i$ can join. $p_i$ can join a group, say $g$, in $CandidateGroups$ if the similarity predicate is true for all pairs $\langle p_i, p_i'\rangle~\forall p_i' \in g$. The second set, namely $OverlapGroups$, includes groups that have some (but not all) of their data points satisfying the similarity predicate. A group, say $g$, is in $OverlapGroups$ if there exist at least two points $p$ and $q$ in $g$ such that the similarity predicate holds between $p_i$ and $p$ but not between $p_i$ and $q$. Computing $OverlapGroups$ is a preprocessing step required to handle the ELIMINATE and FORM-NEW-GROUP semantics in later steps. Figure~\ref{fig:sgbpoints} gives four existing groups $g_1$-$g_4$ while Data-point $x$ is being processed. In this case, $CandidateGroups$ contains $\left\lbrace g_2, g_3 \right\rbrace$ and $OverlapGroups$ contains $\left\lbrace g_1 \right\rbrace$. Procedure $ProcessGroupingALL$ (Line 3 of Procedure \ref{alg:sgball-framework}) uses $CandidateGroups$ and the ON-OVERLAP clause $CLS$ to either (i)~place $p_i$ into a new group, (ii)~place $p_i$ into existing group(s), or (iii)~drop $p_i$ from the output, in case of an ON-OVERLAP ELIMINATE clause. Finally, Procedure $ProcessOverlap$ (Line 5) uses $OverlapGroups$ to verify whether additional processing is needed to fulfill the semantics of SGB-All.
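For concreteness, the following is a minimal, self-contained Python sketch of this flow under the JOIN-ANY semantics, folding the candidate search and the grouping step into one loop. The coordinates and the helper name \texttt{sgb\_all\_join\_any} are hypothetical, chosen only to mirror Example~\ref{example-all}; the actual realization is the PostgreSQL-based implementation described in Section~\ref{section:performance-evaluation}.
\begin{verbatim}
import random

def dist_linf(p, q):
    # L-infinity (maximum) distance between two 2-d points.
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def sgb_all_join_any(points, eps, dist=dist_linf):
    # A point joins a group only if it is within eps of
    # *every* point already in that group (distance-to-all).
    groups = []
    for p in points:
        candidates = [g for g in groups
                      if all(dist(p, q) <= eps for q in g)]
        if candidates:
            random.choice(candidates).append(p)  # JOIN-ANY arbitration
        else:
            groups.append([p])                   # start a new group
    return groups

# Hypothetical coordinates shaped after Example 1 (eps = 3):
pts = [(0, 0), (1, 1), (5, 5), (6, 6), (3, 3)]
print([len(g) for g in sgb_all_join_any(pts, 3)])  # [3, 2] or [2, 3]
\end{verbatim}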
\end{sloppypar} \begin{algorithm}[t]\small \KwIn{$P$: set of data points, $\epsilon$: similarity threshold, $\delta$: distance function, $CLS$: ON-OVERLAP clause, $G$: set of existing groups} \KwOut{Set of output groups } \For{ each data element $p_{i}$ in $P$} { $(CandidateGroups, OverlapGroups) \leftarrow FindCloseGroups(p_i, G, \epsilon, \delta, CLS)$\\ $ProcessGroupingALL(p_i, CandidateGroups, CLS)$\\ \If{CLS is not JOIN-ANY And sizeOf(OverlapGroups) != 0} {$ProcessOverlap(p_i, OverlapGroups, CLS)$} } \caption{\small{Similarity Group-By ALL Framework}} \label{alg:sgball-framework} \end{algorithm} \subsection{Finding Candidate and Overlap Groups} \label{section: Finding Candidate and Overlap Groups } \begin{sloppypar} In this section, we present a straightforward approach to identify $CandidateGroups$ and $OverlapGroups$. In Section~\ref{subsection:boundschecking}, we propose a new two-phase filter-refine approach that utilizes a conservative check in the filter phase to efficiently identify the member groups in $CandidateGroups$. Then, in Section~\ref{section:false-positive}, we introduce the refine phase that is applied only if $L_2$ is used as the distance metric, to detect the groups that falsely pass the filter step. Procedure~\ref{alg:findcloseall-nested} gives the pseudocode for \textit{Naive FindCloseGroups}, which evaluates the distance-to-all similarity predicate between $p_i$ and all the points that have been previously processed (Lines 6-15). The grouping semantics (Lines 16-20) identify how the two sets $CandidateGroups$ and $OverlapGroups$ are populated. \end{sloppypar} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $\epsilon$: similarity threshold, $\delta$: distance function, $CLS$: ON-OVERLAP clause, $G$: set of existing groups } \KwOut{$CandidateGroups$, $OverlapGroups$} $CandidateGroups\leftarrow NULL$\\ $OverlapGroups\leftarrow NULL$\\ \For{ each group $g_{j}$ in $G$} { CandidateFlag = True\\ OverlapFlag = False\\ \For{ each $p_{k}$ in $g_j$} {\uIf { (Distance($p_i$, $p_k$, $\delta$)$\leqslant \epsilon$)}{OverlapFlag = True }\Else {CandidateFlag = False \\ \If{CLS == JOIN-ANY}{break}} } \uIf{CandidateFlag is True}{ insert $g_j$ into $CandidateGroups$} \ElseIf {CLS is not JOIN-ANY and CandidateFlag is False and OverlapFlag is True}{insert $g_j$ into $OverlapGroups$} } \caption{\small{Naive FindCloseGroupsALL}} \label{alg:findcloseall-nested} \end{algorithm} \subsubsection{Processing New Points} The second step of the SGB-All Algorithm in Procedure \ref{alg:sgball-framework} places $p_i$, the data point being processed, into a new group or into an existing group, or drops $p_i$ from the output, depending on the semantics of SGB-All specified in the query. \begin{sloppypar} Procedure~\ref{alg:processgroupingall} (ProcessGroupingALL) proceeds as follows. First, it identifies the cases where $CandidateGroups$ is empty or consists of a single group. In these cases, $p_i$ is inserted into a newly created group or into the single candidate group, respectively. Procedure $ProcessInsert$ places the data point $p_i$ into a group. Otherwise, when $p_i$ overlaps multiple candidate groups, the ON-OVERLAP clause $CLS$ is consulted to determine the proper course of action. The JOIN-ANY clause arbitrates among the overlapping groups by inserting $p_i$ into a randomly chosen group. The procedure $ProcessEliminate$ (Line 13) handles the details of processing the ELIMINATE clause.
Consider the example illustrated in Figure \ref{fig:sgbpoints}, where $CandidateGroups$ consists of $ \left\lbrace g_2, g_3 \right\rbrace$. $ProcessEliminate$ drops Point $x$. Finally, Procedure $ProcessNewGroup$ (Line 15) processes the FORM-NEW-GROUP clause. It inserts $p_i$ into a temporary set termed $S'$ for further processing. SGB-All with the FORM-NEW-GROUP option forms groups out of $S'$ by calling SGB-All recursively until $S'$ is empty. \end{sloppypar} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $CLS$: ON-OVERLAP clause, $CandidateGroups$ } \KwOut{updates $CandidateGroups$ based on $CLS$ semantics} \uIf{sizeof(CandidateGroups) == 0 }{ create a new group $g_{new}$ \\ $ProcessInsert(p_i, g_{new})$} \uElseIf {sizeof(CandidateGroups) == 1 }{ $g_{out} \leftarrow$ the only group in $CandidateGroups$\\ $ProcessInsert(p_i, g_{out})$ } \Else{\Switch{CLS}{ \uCase{JOIN-ANY}{ $g_{out} \leftarrow GetRandomGroup(CandidateGroups)$\\ $ProcessInsert(p_i, g_{out})$ } \uCase{ELIMINATE}{ ProcessEliminate($p_i$, $CandidateGroups$) } \uCase{FORM-NEW-GROUP}{ ProcessNewGroup($p_i$, $CandidateGroups$) } }} \caption{\small{ProcessGroupingALL}} \label{alg:processgroupingall} \end{algorithm} \begin{figure}[t] \centering \includegraphics[width=2.8in,height=1.3in]{pic/sgb-clq} \caption{Processing the point x using $L_{\infty}$ with $\epsilon = 4$. } \label{fig:sgbpoints} \end{figure} \subsubsection{Handling Overlapped Points} \begin{sloppypar} The final step of SGB-All in Procedure~\ref{alg:sgball-framework} processes the groups in the Set $OverlapGroups$. $OverlapGroups$ consists of groups, where each group has some data points (but not all of them) that satisfy the similarity predicate with the new input point $p_i$. This step is required by the ELIMINATE and FORM-NEW-GROUP semantics. Procedure $ProcessOverlap$ handles the ELIMINATE semantics as follows. It iterates over $OverlapGroups$ and deletes the overlapped data points. Consider the example illustrated in Figure \ref{fig:sgbpoints}. Set $OverlapGroups$ consists of $ \left\lbrace g_1 \right\rbrace$ with overlapped Data-Point $a_3$. Finally, $ProcessOverlap$ handles the FORM-NEW-GROUP semantics by inserting the overlapped data points into a temporary set termed $S'$ and deleting these points from their original groups. \end{sloppypar} \begin{sloppypar} The time complexity of SGB-All according to the algorithmic framework in Procedure \ref{alg:sgball-framework} is dominated by the time complexity of $FindCloseGroups$. The time complexity of $ProcessGrouping$ and $ProcessOverlap$ (Lines 3-6) is linear in the sizes of $CandidateGroups$ and $OverlapGroups$. Consequently, given an input set of size $n$, Procedure \textit{Naive FindCloseGroups} incurs ${n \choose 2}$ distance computations, which makes the upper-bound time complexity of SGB-All quadratic, i.e., $O(n^2)$. Section \ref{subsection:boundschecking} introduces a filter-refine paradigm to optimize over Procedure \textit{Naive FindCloseGroups}. \end{sloppypar} \subsection{The Bounds-Checking Approach} \label{subsection:boundschecking} In this section, we introduce a Bounds-Checking approach to optimize over Procedure \textit{Naive FindCloseGroups}. Consider the data points of Group $g$ illustrated in Figure~\ref{fig:bounds-check-new}a. Procedure~\textit{Naive FindCloseGroups} performs six distance computations to determine whether a new data point $x$ can join Group $g$.
To reduce the number of comparisons, we introduce a bounding rectangle for each Group $g$ in conjunction with the similarity threshold $\epsilon$ so that all data points that are bounded by the rectangle satisfy the distance-to-all similarity predicate. For example, Data Element $x$ in Figure \ref{fig:bounds-check-new}b is located inside $g$'s bounding rectangle. Therefore, $g$ is a candidate group for $x$. \begin{defn} Given a set of multi-dimensional points and a similarity predicate $\xi_{\delta_\infty, \epsilon}$, the \textbf{$\epsilon$-All Bounding Rectangle} $R_{\epsilon-All}$ is a bounding rectangle such that for any two points $x_i$ and $y_i$ bounded by $R_{\epsilon-All}$, the similarity predicate $\xi_{\delta_{\infty},\epsilon}(x_i, y_i)$ is true. \end{defn} Consider Figure~\ref{fig:bounds-check-new}c, where the bounding rectangle $R_{\epsilon-All}$ is constructed for a group that consists of a single Point $a_1$, where $\epsilon = 2$; the sides of the rectangle are $2\epsilon$ by $2\epsilon$, centered at $a_1$. After inserting the second Point $a_2$ into $g$, as in Figure~\ref{fig:bounds-check-new}d, $R_{\epsilon-All}$ is shrunk to include only the area where the similarity predicate is true for both Points $a_1$ and $a_2$. The invariant that $R_{\epsilon-All}$ maintains varies depending on the distance metric used. For the $L_{\infty}$ distance metric, $R_{\epsilon-All}$ is updated such that if a Point, say $x_i$, is inside $R_{\epsilon-All}$, then $x_i$ is guaranteed to be within Distance $\epsilon$ from all the points that form Group $g$. For the Euclidean distance, the invariant that $R_{\epsilon-All}$ maintains is that if a point, say $x_i$, is outside $R_{\epsilon-All}$, then $x_i$ cannot belong to Group $g$. In this case, if $x_i$ is inside $R_{\epsilon-All}$, it is likely, but not guaranteed, that $x_i$ is within distance $\epsilon$ from all the points inside $R_{\epsilon-All}$. Hence, for the Euclidean distance, $R_{\epsilon-All}$ is a conservative representation of the group $g$ and serves as a filter step to save needless comparisons for points that end up being outside of the group. We illustrate in Figures~\ref{fig:bounds-check-new}c-\ref{fig:bounds-check-new}e how to maintain these invariants when a new point joins the group. We use the case of $L_{\infty}$ for illustration. When a new point $x_i$ is inside the bounding rectangle $R_{\epsilon-All}$ of Group $g$, then $x_i$ is within Distance $\epsilon$ from all the points in the group, and hence will join Group $g$. Once $x_i$ joins Group $g$, the bounds of Rectangle $R_{\epsilon-All}$ are updated to preserve $R_{\epsilon-All}$'s invariant. The sides of $R_{\epsilon-All}$ shrink as illustrated in Figures \ref{fig:bounds-check-new}d-\ref{fig:bounds-check-new}e. Notice that deciding the membership of a point in the group requires a constant number of comparisons regardless of the number of points inside Group $g$. Furthermore, the maintenance of the bounding rectangle of the group takes constant time for every point inserted into $g$. Also, notice that $R_{\epsilon-All}$ stops shrinking if its dimensions reach $\epsilon \times \epsilon$, which is a lower bound on the size of $R_{\epsilon-All}$. Figure~\ref{fig:bounds-check-new}e gives the updated $R_{\epsilon-All}$ after Point $a_3$ is inserted into the group.
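The $O(1)$ membership test and rectangle maintenance under $L_{\infty}$ can be summarized by the following Python sketch. It is a minimal illustration, assuming two-dimensional points; the function names (\texttt{inside}, \texttt{shrink}) are hypothetical and simply mirror the invariant described above: the group rectangle is the intersection of the $2\epsilon \times 2\epsilon$ squares of all its members.
\begin{verbatim}
def rect_for_point(p, eps):
    # Initial 2*eps x 2*eps rectangle centered at the first point.
    return (p[0] - eps, p[1] - eps, p[0] + eps, p[1] + eps)

def inside(rect, p):
    # O(1) membership test: p is within eps (L-inf) of all members.
    xlo, ylo, xhi, yhi = rect
    return xlo <= p[0] <= xhi and ylo <= p[1] <= yhi

def shrink(rect, p, eps):
    # O(1) maintenance: intersect the group's rectangle with the
    # new member's own 2*eps x 2*eps square to keep the invariant.
    xlo, ylo, xhi, yhi = rect
    return (max(xlo, p[0] - eps), max(ylo, p[1] - eps),
            min(xhi, p[0] + eps), min(yhi, p[1] + eps))

eps = 2.0
rect = rect_for_point((0.0, 0.0), eps)  # group {a1}: (-2, -2, 2, 2)
p = (1.0, 1.0)
if inside(rect, p):                     # p joins the group
    rect = shrink(rect, p, eps)         # now (-1, -1, 2, 2)
print(rect)
\end{verbatim}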
\begin{figure}[t] \centering \includegraphics[width=3.26in,height=2.15in]{pic/bounds-check-new} \caption{The $\epsilon$-All Bounding Rectangle approach.} \label{fig:bounds-check-new} \end{figure} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $\epsilon$: similarity threshold, $\delta$: distance function, $CLS$: ON-OVERLAP clause, $G$: set of existing groups } \KwOut{$CandidateGroups$, $OverlapGroups$} $CandidateGroups\leftarrow NULL$\\ $OverlapGroups\leftarrow NULL$\\ \For{ each group $g_{j}$ in $G$}{ \uIf {PointInRectangleTest($p_i$, $g_j$) is True}{insert $g_j$ into $CandidateGroups$} \ElseIf {CLS is not JOIN-ANY \textbf{and} OverlapRectangleTest($p_i$, $g_j$) is True}{ \For{ each $p_k$ in $g_{j}$}{\If { (Distance($p_i$, $p_k$, $\delta$)$\leqslant \epsilon$)}{insert $g_j$ into $OverlapGroups$\\ break}} }} \caption{\small{Bounds-Checking FindCloseGroups}} \label{alg:findcloseboundchecking} \end{algorithm} \begin{sloppypar} Procedure~\ref{alg:findcloseboundchecking} gives the pseudocode for \textit{Bounds-Checking FindCloseGroups}. The procedure uses the $\epsilon$-All bounding rectangle to reduce the number of distance computations needed to realize $FindCloseGroups$ using the $L_{\infty}$ distance metric. Procedure $PointInRectangleTest$ (Line 4) uses the $\epsilon$-All rectangle to determine in constant time whether $g_j$ is a candidate group for the input point. Procedure $OverlapRectangleTest$ (Line 6) tests whether the $\epsilon$-All rectangle of $p_i$ overlaps Group $g_j$'s bounding rectangle. In case of an overlap, the data points in $g_j$ are inspected to verify that at least one of them satisfies the similarity predicate with $p_i$. The correctness of the $\epsilon$-All bounding rectangle for the $L_{\infty}$ distance metric follows from the fact that rectangles are closed under intersection, i.e., the intersection of two rectangles is also a rectangle. \end{sloppypar} \begin{sloppypar} A major bottleneck of the bounding-rectangle approach is the need to linearly scan all the existing bounding rectangles that represent the groups in order to identify the sets $CandidateGroups$ and $OverlapGroups$, which is costly. To speed up Procedure \textit{Bounds-Checking FindCloseGroups}, we use a spatial access method (e.g., an R-tree~\cite{BIBExample:guttman1984r}) to index the $R_{\epsilon-All}$ bounding rectangles of the existing groups. Procedure~\ref{alg:findclose-boundindex} gives the pseudocode for \textit{Index Bounds-Checking FindCloseGroups}. The procedure performs a window query on the index $Groups\_IX$ (Line 4) to retrieve the set $GSet$ of all groups that intersect the bounding rectangle $R_{p_i}$ of the newly inserted point $p_i$. Next, it iterates over $GSet$ (Lines 5-11) and executes $PointInRectangleTest$ to determine whether the inspected group belongs to either one of the two sets $CandidateGroups$ or $OverlapGroups$. Finally, the elements of $OverlapGroups$ are inspected to retrieve the subset of elements that satisfy the similarity predicate. Refer to Figure~\ref{fig:windowquery} for illustration. An R-tree index, termed $Groups\_IX$, is used to index the bounding rectangles of the groups discovered so far. In this case, $Groups\_IX$ contains bounding rectangles for Groups $g_1$-$g_4$. Given the newly arriving Point $x$, a window query of the $\epsilon$-All rectangle for $x$ is performed on $Groups\_IX$, which returns all the intersecting rectangles; in this case, $g_1$, $g_2$, and $g_3$. The outcome of the query is used to construct the sets $CandidateGroups$ and $OverlapGroups$.
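As a minimal illustration of the filter step, the following Python sketch performs the window query with a plain dictionary standing in for the R-tree $Groups\_IX$ (the index changes only the lookup cost, not the logic). The group rectangles and identifiers are hypothetical values chosen to mimic Figure~\ref{fig:windowquery}.
\begin{verbatim}
def rects_intersect(r, s):
    # Rectangles are (xlo, ylo, xhi, yhi); two rectangles are
    # disjoint iff they are separated along some axis.
    return not (r[2] < s[0] or s[2] < r[0] or
                r[3] < s[1] or s[3] < r[1])

def window_query(p, eps, group_rects):
    # Return ids of groups whose eps-All rectangles intersect the
    # 2*eps x 2*eps query rectangle around the new point p.
    q = (p[0] - eps, p[1] - eps, p[0] + eps, p[1] + eps)
    return [gid for gid, r in group_rects.items()
            if rects_intersect(q, r)]

rects = {"g1": (0, 0, 3, 3), "g2": (4, 4, 6, 6),
         "g3": (5, 0, 8, 2), "g4": (10, 10, 12, 12)}
print(window_query((4.0, 3.0), 2.0, rects))  # ['g1', 'g2', 'g3']
\end{verbatim}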
\end{sloppypar} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $\epsilon$: similarity threshold, $\delta$: distance function, $CLS$: ON-OVERLAP clause, $G$: set of existing groups, $Groups\_IX$: index on $G$'s bounding rectangles } \KwOut{$CandidateGroups$, $OverlapGroups$} $CandidateGroups\leftarrow NULL$\\ $OverlapGroups\leftarrow NULL$\\ $R_{p_i} \leftarrow$ CreateBoundingRectangle($p_i$, $\epsilon$)\\ $GSet\leftarrow WindowQuery(p_i, R_{p_i}, Groups\_IX)$\\ \For{ each group $g_{j}$ in $GSet$}{ \uIf {PointInRectangleTest($p_i$, $g_j$) is True}{insert $g_j$ into $CandidateGroups$} \ElseIf {CLS is not JOIN-ANY}{\For{ each $p_{k}$ in $g_j$} {\If { (Distance($p_i$, $p_k$, $\delta$)$\leqslant \epsilon$)} {insert $g_j$ into $OverlapGroups$\\ break}}}} \caption{\small{Index Bounds-Checking FindCloseGroups}} \label{alg:findclose-boundindex} \end{algorithm} \begin{figure}[t] \centering \includegraphics[width=2.5in,height=2.2in]{pic/sgb-clq4} \caption{SGB-All: performing a window query on $Groups\_IX$ using $\epsilon = 4$ and $L_{\infty}$.} \label{fig:windowquery} \end{figure} \subsection{Handling False Positives under $L_2$} \label{section:false-positive} In this section, we study the effect of using $L_2$ as the similarity distance function on the SGB-All operator. Refer to Figure \ref{fig:falsepositive}a for illustration. In contrast to the $L_{\infty}$ distance, the set of points that are exactly $\epsilon$ away from $a_1$ in the $L_2$ metric space forms a circle. Inserting $a_2$ (Figure \ref{fig:falsepositive}b) is correct using the $L_{\infty}$ distance since $a_2$ is inside the $\epsilon$-All rectangle of $a_1$'s group. However, under the $L_2$ distance, $a_2$ is more than $\epsilon$ away from $a_1$ since $a_2$ lies outside $a_1$'s $\epsilon$-circle. As a result, all points that are inside $a_1$'s $\epsilon$-All group rectangle but are outside the $\epsilon$-circle (i.e., the grey-shaded area in Figure \ref{fig:falsepositive}b) falsely pass the bounding rectangle test. Procedure \textit{Naive FindCloseGroups} (Procedure \ref{alg:findcloseall-nested}) inspects all input data points; therefore, the problem of false-positive points does not occur there. On the other hand, the Bounds-Checking approach introduced in Procedures \ref{alg:findcloseboundchecking} and \ref{alg:findclose-boundindex} uses the $\epsilon$-All rectangle technique to identify the sets \textit{CandidateGroups} and \textit{OverlapGroups} and hence must address the issue of false-positive points for the $L_2$ distance metric. We introduce a \textbf{Convex Hull Test} to refine the data points that pass the Bounds-Checking filter step. Given a group of points, the convex hull \cite{BIBExample:de2008computational} is the smallest convex polygon that encloses all the points of the group. In Figure \ref{fig:falsepositive}c, the points $a_1$-$a_5$ form the convex hull set for Group $g$. Based on the SGB-All semantics, the diameter of the convex hull (i.e., the distance between the two farthest points) satisfies the similarity predicate. \begin{figure}[t] \centering \includegraphics[width=2.6in,height=2.3in]{pic/fig5b} \caption{(a) The $\epsilon$-radius circle in $L_2$, (b) The problem of false positives for $L_2$, (c) The $\epsilon$-convex hull test for $\epsilon = 6$.
} \label{fig:falsepositive} \end{figure} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $g$: existing group } \KwOut{True if $p_i$ is not a false positive, False otherwise} \SetAlgoNoLine $ConvexHullSet\leftarrow$ $getConvexHull(g)$\\ \uIf {$p_i$ inside convex hull}{return True} \Else { $farthestPoint\leftarrow$ $getMaxDistElem(ConvexHullSet,p_{i})$\\ \If{$distance(farthestPoint,p_{i}) \leqslant \epsilon$}{ return True}} return False \caption{\small{Convex Hull Test}} \label{alg:convexhulltest} \end{algorithm} The \textit{Convex Hull Test}, illustrated in Procedure \ref{alg:convexhulltest}, verifies whether a point is a false positive. This additional test can be inserted immediately after Line 4 in Procedure~\ref{alg:findcloseboundchecking} or immediately after Line 6 in Procedure~\ref{alg:findclose-boundindex}. Consequently, any new point that lies inside a group's convex hull (e.g., Point $y$ in Figure~\ref{fig:falsepositive}c) satisfies the similarity predicate. In addition, in order to verify points that are outside the convex hull (e.g., Point $x$ in Figure \ref{fig:falsepositive}c), it is enough to evaluate the similarity predicate between $p_i$ and the points of the convex hull. The correctness of the convex hull test follows from the fact that the convex hull set contains the farthest point from $p_i$, say $p_{f}$. Therefore, it is sufficient to evaluate the similarity predicate between $p_i$ and $p_f$ (e.g., Point $x$ and Point $a_3$ in Figure \ref{fig:falsepositive}c). Section \ref{section:complexity analysis} discusses the complexity of the convex hull approach. \section{Algorithms for SGB-Any} \label{section:sgb-any-framework} \begin{sloppypar} In this section, we present an algorithmic framework to realize similarity-based grouping using the distance-to-any semantics. The generic SGB-Any framework in Procedure~\ref{alg:SGBAny} proceeds as follows. For each data point, say $p_i$, Procedure $FindCandidateGroups$ (Line 2) uses the distance-to-any similarity predicate to identify the set $CandidateGroups$ that consists of all the existing groups that $p_i$ can join. In contrast to SGB-All, in the distance-to-any semantics, a point, say $p_i$, can join a candidate group, say $g$, when $p_i$ is within the predefined similarity threshold from at least one other point in $g$. Procedure $ProcessGroupingANY$ (Line 3) inserts $p_i$ into a new or an existing group. \end{sloppypar} \begin{algorithm}[t]\small \KwIn{$P$: set of data points, $\epsilon$: similarity threshold, $\delta$: distance function, $Points\_IX$: spatial index} \KwOut{Set of groups G} \For{ each data element $p_{i}$ in $P$} { $CandidateGroups \leftarrow FindCandidateGroups(p_i, $Points\_IX$, \epsilon, \delta)$\\ $ProcessGroupingANY(p_i, CandidateGroups)$\\ } \caption{\small{Similarity Group-By ANY Framework}} \label{alg:SGBAny} \end{algorithm} \subsection{Finding Candidate Groups} \begin{sloppypar} A naive \textit{FindCandidateGroups} approach similar to Procedure \ref{alg:findcloseall-nested} can identify the set $CandidateGroups$. However, this solution incurs many distance computations, and brings the upper-bound time complexity of the SGB-Any framework to $O(n^2)$. The filter-refine paradigm using an $\epsilon$-group bounds-checking approach while applying a distance-to-any predicate (i.e., similar to Procedures \ref{alg:findcloseboundchecking}-\ref{alg:convexhulltest}) suffers from two main challenges.
First, drawing squares of size $\epsilon\times \epsilon$ around each input point and forming a bounding rectangle that encloses all these squares results in a consecutive chain-like region, and the false-positive area progressively increases in size as new data points are added. Second, the convex hull approach to test for false-positive points cannot be applied in SGB-Any because it suffers from false negatives: the length of the diameter of the convex hull can actually be more than $\epsilon$ in the case of SGB-Any. Details are omitted here for brevity. Consequently, \textit{FindCandidateGroups} in Procedure~\ref{alg:findcloseany} uses an R-tree index, termed $Points\_IX$. $Points\_IX$ maintains the previously processed data points to efficiently find $CandidateGroups$. Refer to Figure~\ref{fig:sgbanyindex-unionfind} for illustration. For an incoming point, say Point $x$, an $\epsilon$-rectangle (Line 2 of Procedure~\ref{alg:findcloseany}) is created to perform a window query on $Points\_IX$ to retrieve $PointsSet$ (Line 3). $PointsSet$ corresponds to the points that are within $\epsilon$ from $x$, e.g., $\{a_3, c_1, c_2, c_3, b_1, b_2\}$. Based on the semantics of SGB-Any, $CandidateGroups$ contains the groups that cover the points in $PointsSet$. For instance, point $a_3$ belongs to $g_1$, points $c_1$-$c_3$ belong to $g_2$, and points $b_1$-$b_2$ belong to group $g_3$. Hence, $CandidateGroups = \{g_1, g_2, g_3\}$. Procedure $GetGroups$ (Line 7) employs a Union-Find data structure~\cite{BIBExample:Tarjan} to keep track of existing, newly created, and merged groups (see Figure~\ref{fig:sgbanyindex-unionfind}b) to efficiently construct $CandidateGroups$ given $PointsSet$. \end{sloppypar} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $Points\_IX$: spatial index, $\delta$: distance function, $\epsilon$: similarity threshold} \KwOut{$CandidateGroups$} $CandidateGroups\leftarrow NULL$\\ $R_{p_i} \leftarrow$ CreateBoundingRectangle($p_i$, $\epsilon$)\\ $PointsSet\leftarrow WindowQuery(p_i, R_{p_i}, Points\_IX)$\\ \If{$\delta$ is $L_2$}{$PointsSet\leftarrow VerifyPoints(PointsSet, p_i, \delta, \epsilon)$} $CandidateGroups\leftarrow GetGroups(PointsSet)$\\ insert $p_i$ into $Points\_IX$ \caption{\small{FindCandidateGroups}} \label{alg:findcloseany} \end{algorithm} \begin{figure}[t] \centering \includegraphics[width=3.3in,height=2.2in]{pic/sgb-any-rtee-unionfind} \caption{(a) SGB-Any: performing a window query on $Points\_IX$ with $\epsilon=4$ using $L_\infty$, (b) The disjoint-set (Union-Find) data structure used to maintain existing groups.} \label{fig:sgbanyindex-unionfind} \end{figure} \subsection{Processing New Points} \begin{sloppypar} Procedure~\ref{alg:ProcessGroupingANY} gives the pseudocode for \textit{ProcessGroupingANY}. Lines~1-6 identify the cases when $CandidateGroups$ is empty, or when it consists of one group. In these cases, $p_i$ is inserted into a newly created group or into the existing group, respectively. Next, the procedure handles the case where $p_i$ is close to more than one group. In the SGB-Any semantics, all candidate groups that $p_i$ can join are merged into one group. Therefore, Procedure $MergeGroupsInsert$ (Line 8) handles merging the candidate groups and then inserts $p_i$ into the merged group. Referring to Figure~\ref{fig:sgbanyindex-unionfind}b, Point $x$ overlaps groups $g_1, g_2,$ and $g_3$.
Based on the semantics of SGB-Any, the overlapped groups $g_1$, $g_2$, and $g_3$ are merged into one encompassing bigger group, termed $G_{new}$. In this case, the root pointers of $g_1$, $g_2$, and $x$ in the Union-Find data structure are redirected to Point $a_1$. \end{sloppypar} \begin{algorithm}[t]\small \KwIn{$p_i$: data point, $CandidateGroups$ } \KwOut{updates $CandidateGroups$} \uIf{$CandidateGroups$ is Empty} {create a new group $g_{new}$ \\ $ProcessInsert(p_i, g_{new})$} \uElseIf {sizeof(CandidateGroups) == 1 }{ $g_{out} \leftarrow$ the only group in $CandidateGroups$\\ $ProcessInsert(p_i, g_{out})$ } \Else {MergeGroupsInsert($CandidateGroups$, $p_i$)} \caption{\small{ProcessGroupingANY} } \label{alg:ProcessGroupingANY} \end{algorithm} \section{Complexity, Realization, and \\ Evaluation} \label{section:performance-evaluation} \subsection{Complexity Analysis} \label{section:complexity analysis} \begin{sloppypar} Table \ref{SGBAll_table_complexity} summarizes the average-case running times of SGB-All using the proposed optimizations for the $L_{\infty}$ distance metric. The \textit{All-Pairs} algorithm corresponds to the framework in Procedure \ref{alg:sgball-framework} using \textit{Naive FindCloseGroups}. Similarly, \textit{Bounds-Checking} and \textit{On-the-fly Indexing} correspond to the \textit{Bounds-Checking} and \textit{Index Bounds-Checking} optimizations, where $|G|$ is the number of output groups and $m$ is the recursion depth for ON-OVERLAP FORM-NEW-GROUP. In addition, the average-case running time of SGB-Any when using the index is $O(n\log n)$. The worst-case and best-case running times, and a detailed analysis, are given in the Appendix. \end{sloppypar} \begin{sloppypar} \begin{table}[h] \centering \scriptsize\addtolength{\tabcolsep}{-5pt} \begin{tabular}{|c|c|c|c|} \hline & JOIN-ANY & ELIMINATE & FORM-NEW-GROUP \\\hline All-Pairs & $O(n^2)$& $O(n^2)$& $O(n^3)$ \\\hline Bounds-Checking & $O(n|G|)$& $O(n|G|)$& $O(mn|G|) $ \\\hline On-the-fly Index & $O(n\log|G|)$& $O(n\log|G|)$& $O(mn\log|G|)$ \\\hline \end{tabular} \caption{SGB-All complexity for the $L_{\infty}$ distance} \label{SGBAll_table_complexity} \end{table} \end{sloppypar} \begin{sloppypar} \begin{table}[h] \tiny \centering \scriptsize\addtolength{\tabcolsep}{-6pt} \begin{tabular}{|c|l|} \hline \multicolumn{2}{|l|}{\textbf{Business Question}: Retrieve large-volume customers} \\ \hline GB1 & Same as TPC-H Q18 \\ \hline \multicolumn{2}{|l|}{\tabincell{l} {\textbf{Business Question}: Retrieve customers with similar buying power and account balance}} \\ \hline \tabincell{l}{SGB1 \\or \\ SGB2} & \tabincell{l} { SELECT max(ab), min(tp), max(tp), avg(ab), array\_agg(R1.c\_custkey) \\ \quad \ \quad FROM (SELECT c\_custkey, c\_acctbal as ab FROM Customer \\ \quad \ \quad WHERE c\_acctbal $>$100 ) as R1 \\ \quad \ \quad (SELECT o\_custkey, sum(o\_totalprice) as tp FROM Orders, Lineitem \\ \quad \ \quad WHERE o\_orderkey in (SELECT l\_orderkey FROM lineitem \\ \quad \ \quad GROUP BY l\_orderkey having sum(l\_quantity) $>$3000) \\ \quad \ \quad and o\_orderkey =l\_orderkey and o\_totalprice $>$ 30000) as R2 \\ \quad \ \quad WHERE R1.c\_custkey=R2.o\_custkey \\ GROUP BY ab,tp \textbf{DISTANCE-ALL} WITHIN $\epsilon$ USING lone/ltwo \\ on\_overlap join-any/form-new/eliminate \\ \textbf{or} GROUP BY ab,tp \textbf{DISTANCE-ANY} WITHIN $\epsilon$ USING lone/ltwo } \\ \hline \hline \multicolumn{2}{|l|}{\tabincell{l} {\textbf{Business Question}: \\ Report profit on a given line of parts (by supplier nation and year)}}\\ \hline GB2 & Same as TPC-H Q9\\
\hline \multicolumn{2}{|l|}{\tabincell{l}{\textbf{Business Question}: \\Report profit and shipment time of parts that share similar profit and shipment date}} \\ \hline \tabincell{l}{SGB3 \\or \\ SGB4} & \tabincell{l} { SELECT count(\*),sum(tprof), sum(stime) FROM \\ \quad \ \quad (SELECT ps\_partkey as partkey, sum(l\_extendedprice * (1 - l\_discount) \\ \quad \ \quad - ps\_supplycost *l\_quantity) as tprof, sum(l\_receiptdate-l\_shipdate) \\ \quad \ \quad as stime FROM lineitem, partsupp,supplier WHERE ps\_partkey = \\ \quad \ \quad l\_partkey and s\_suppkey=ps\_suppkey GROUP BY ps\_partkey) as profit \\ GROUP BY tprof, stime \textbf{DISTANCE-ALL} WITHIN $\epsilon$ USING lone/ltwo \\ on\_overlap join-any/form-new/eliminate \\ \textbf{or} GROUP BY tprof, stime \textbf{DISTANCE-ANY} WITHIN $\epsilon$ USING lone/ltwo } \\ \hline \hline \multicolumn{2}{|l|}{\tabincell{l} {\textbf{Business Question}: \\ Determine the top suppliers who contributed the most to the overall revenue for parts}}\\ \hline GB3 & Same as TPC-H Q15\\ \hline \multicolumn{2}{|l|}{\tabincell{l}{\textbf{Business Question}: \\Report suppliers who contributed similar revenue and have similar account balance}} \\ \hline \tabincell{l}{SGB5 \\or \\ SGB6} & \tabincell{l} { SELECT array\_agg(s\_suppkey), sum(r.trevenue), sum(s\_acctbal) \\ \quad \ \quad FROM (SELECT l\_suppkey as suppkey, sum(l\_extendedprice * (1 - \\ \quad \ \quad l\_discount)) as trevenue , sum(s\_acctbal) as acctbal FROM Lineitem \\ \quad \ \quad WHERE l\_shipdate $>$ date '[1995-01-01]' and l\_shipdate $<$ date \\ \quad \ \quad '[1996-01-01]'+ interval '10' month GROUP BY l\_suppkey )as r \\ GROUP BY r.trevenue, s\_acctbal \textbf{DISTANCE-ALL} WITHIN $\epsilon$ \\ USING lone/ltwo on\_overlap join-any/form-new/eliminate \textbf{or} GROUP \\ BY r.trevenue, s\_acctbal \textbf{DISTANCE-ANY} WITHIN $\epsilon$ USING lone/ltwo } \\ \hline \end{tabular} \caption{Performance evaluation queries on TPC-H} \label{tab-queries} \end{table} \end{sloppypar} \subsection{Implementation} \begin{sloppypar} We realize the proposed SGB operators inside PostgreSQL. In the \textit{parser}, the grammar rules and actions related to the ``SELECT'' statement syntax are updated with similarity keywords (e.g., DISTANCE-TO-ALL and DISTANCE-TO-ANY) to support the SGB query syntax. The parse and query trees are augmented with parameters that capture the similarity semantics (e.g., the threshold value and the overlap action). The \textit{Planner and Optimizer} routines use the extended query tree to create a similarity-aware plan tree. In this extension, the optimizer is directed to choose a hash-based SGB plan. In the executor, the hash-based aggregate Group-by routine is modified. Typically, an aggregate operation is carried out by the incremental evaluation of the aggregate function on the processed data. However, the semantics of ON-OVERLAP ELIMINATE and ON-OVERLAP FORM-NEW-GROUP can realize the final groupings only after processing the complete dataset. Therefore, the aggregate hash table keeps track of the existing groups in the following way. First, the aggregate hash table entry (AggHashEntry) is extended with a TupleStore data structure that serves as temporary storage for the previously processed data points. Next, referring to the Bounds-Checking FindCloseGroups presented in Procedure \ref{alg:findcloseboundchecking}, each group's bounding rectangle is mapped into an entry inside the hash directory.
Bounds-Checking FindCloseGroups linearly iterates over the hash table directory to build the sets $CandidateGroups$ and $OverlapGroups$. The Index Bounds-Checking in Procedure \ref{alg:findclose-boundindex} employs a spatial index to efficiently look up all the existing groups a data point can join. Consequently, we extend the executor with an in-memory R-tree that efficiently indexes the existing groups' bounding rectangles. In the implementation of FindCandidateGroups in Procedure \ref{alg:findcloseany}, a spatial index is created to maintain the set of points that have been processed and assigned to groups. Moreover, we extend the executor with a Union-Find (disjoint-set forest) data structure to support the operations $GetGroups$ and $MergeGroupsInsert$. \end{sloppypar} \begin{figure*} \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgba_eps_any} \caption{\tiny{SGB-All:JOIN-ANY}} \label{fig:SGBALL-JOINANY_EPS} \end{subfigure}% ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgba_eps_eliminate} \caption{\tiny{SGB-All:ELIMINATE}} \label{fig:SGBALL-ELIMINATE_EPS} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgba_eps_newgroup} \caption{\tiny{SGB-All:FORM-NEW-GROUP} } \label{fig:SGBALL-FORMNEW_EPS} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgb_eps_any} \caption{\tiny{SGB-ANY}} \label{fig:SGB-ANY_EPS} \end{subfigure} \caption{The effect of the similarity threshold $\epsilon$ on the SGB-All variants and SGB-Any}\label{fig:effect-eps} \end{figure*} \begin{figure*} \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgba_dsize_any} \caption{\tiny{SGB-All:JOIN-ANY}} \label{fig:SGBALL-JOINANY_DS} \end{subfigure}% ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgba_dsize_eliminate} \caption{\tiny{SGB-All:ELIMINATE}} \label{fig:SGBALL-ELIMINATE_DS} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgba_dsize_newgroup} \caption{\tiny{SGB-All:FORM-NEW-GROUP} } \label{fig:SGBALL-newgroup_DS} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\textwidth]{pic/sgb_dsize_any} \caption{\tiny{SGB-ANY}} \label{fig:SGB_ANY_DS} \end{subfigure} \caption{The effect of increasing the data size on the SGB-All variants and SGB-Any}\label{fig:effect-datasize} \end{figure*} \subsection{Datasets} \begin{sloppypar} The goal of the experimental study is to validate the effectiveness of the proposed SGB-All and SGB-Any operators using the optimization methods discussed in Sections~\ref{section:sgball-framework} and~\ref{section:sgb-any-framework}. The datasets used in the experiments are based on the TPC-H benchmark\footnote{http://www.tpc.org/tpch/}~\cite{BIBExample:softonline}, and two real-world social check-in datasets, namely Brightkite\footnote{https://snap.stanford.edu/data/loc-brightkite.html} and Gowalla\footnote{https://snap.stanford.edu/data/loc-gowalla.html}~\cite{standford_socialdata_2011}. Table~\ref{tab-queries} shows the queries used for the performance evaluation experiments on TPC-H data. The multi-dimensional grouping attributes are drawn from combinations of different tables. For example, the SGB queries SGB1 and SGB2 combine the Customer and Orders tables, and the number of tuples in the Customer and Orders tables is $150K\times SF$ and $1500K\times SF$, respectively, where the scale factor $SF$ ranges from 1 to 60.
For the Brightkite and Gowalla data, the SGB queries follow Queries~\ref{query:manet-any} and~\ref{query:social-sgball} to cluster users into groups by the corresponding users' check-in information (i.e., latitude and longitude). The experiments are performed on an Intel(R) Xeon(R) E5320 1.86 GHz 4-core processor with 8GB of memory running Linux, using the default configuration parameters of PostgreSQL. We first focus on the time taken by SGB and hence disregard the data preprocessing time (e.g., the inner join and filter predicates in TPC-H Query 18). Furthermore, to understand the overhead of the new SGB queries, we measure the SGB response time for more complex queries (e.g., SGB Queries 3 to 6). In the paper, we only report execution times for the $L_{2}$ distance metric because the performance when using the $L_\infty$ distance metric exhibits similar behavior. \end{sloppypar} \subsection{Effect of the Similarity Threshold $\epsilon$} \begin{sloppypar} The effect of the similarity threshold $\epsilon$ on the query runtime is given in Figure~\ref{fig:effect-eps} for SGB-Any and all three overlap variants of SGB-All: JOIN-ANY, ELIMINATE, and FORM-NEW-GROUP. The experimental data consists of 0.5 million records. The similarity threshold $\epsilon$ varies from 0.1 to 0.9. For an unskewed dataset, performing SGB-All using a small value of $\epsilon$ (e.g., 0.1 or 0.2) forms many output groups because the similarity predicate evaluates to true only on small groups of the data. Increasing the value of $\epsilon$ forms larger groups and decreases the expected number of output groups. Thus, we observe in Figures~\ref{fig:SGBALL-JOINANY_EPS},~\ref{fig:SGBALL-ELIMINATE_EPS}, and~\ref{fig:SGBALL-FORMNEW_EPS} that the runtime of SGB-All under the various semantics decreases as the value of $\epsilon$ approaches 0.9, with the exception of $\epsilon = 0.7$. The slight increase in runtime for the JOIN-ANY and FORM-NEW-GROUP semantics can be attributed to the distribution of the experimental data. The runtime and speedup in Figures~\ref{fig:SGBALL-JOINANY_EPS},~\ref{fig:SGBALL-ELIMINATE_EPS}, and~\ref{fig:SGBALL-FORMNEW_EPS} validate the advantage of the \textit{Bounds-Checking} and \textit{on-the-fly Index} optimizations over \textit{All-Pairs}. The \textit{on-the-fly Index} approach shows a two-orders-of-magnitude speedup over \textit{All-Pairs}, and the \textit{Bounds-Checking} approach is one order of magnitude faster than \textit{All-Pairs}. The reason is that \textit{All-Pairs} realizes similarity grouping by inspecting all pairs of data points in the input, and its runtime is bounded by the input size. In contrast, \textit{Bounds-Checking} defines group bounds in conjunction with the similarity threshold to avoid excessive runtime while grouping. Therefore, the runtime of \textit{Bounds-Checking} is bounded by the number of output groups. Lastly, indexing the output groups using the \textit{on-the-fly Index} alleviates the effect of the number of output groups on the overall runtime and makes it steady across the various ON-OVERLAP options. The effect of the similarity threshold $\epsilon$ on the query runtime of the SGB-Any query is given in Figure~\ref{fig:SGB-ANY_EPS}. The experiment illustrates that the runtime of \textit{All-Pairs} SGB-Any decreases as the value of $\epsilon$ increases. Furthermore, the runtime of the \textit{on-the-fly Index} method changes only slightly. As a result, the speedup between the \textit{All-Pairs} and the \textit{on-the-fly Index} methods slightly decreases.
The runtime results validate that the performance of the \textit{on-the-fly Index} method is stable as we vary the value of $\epsilon$. The reason is that the Union-Find data structure efficiently finds and merges the candidate groups. Figure~\ref{fig:SGB-ANY_EPS} verifies that, for all values of $\epsilon$, the runtime performance of the \textit{on-the-fly Index} method for SGB-Any is two orders of magnitude faster than \textit{All-Pairs} SGB-Any. \end{sloppypar} \subsection{Speedup} \begin{sloppypar} Figures~\ref{fig:SGBALL-JOINANY_DS},~\ref{fig:SGBALL-ELIMINATE_DS}, and~\ref{fig:SGBALL-newgroup_DS} give the performance and speedup of the \textit{Bounds-Checking} and \textit{on-the-fly Index} methods for large datasets with scale factors up to 60. The similarity threshold $\epsilon$ is fixed to 0.2. We do not show the results for the naive \textit{All-Pairs} approach because its runtime increases quadratically as the data size increases. From Figures~\ref{fig:SGBALL-JOINANY_DS},~\ref{fig:SGBALL-ELIMINATE_DS}, and~\ref{fig:SGBALL-newgroup_DS}, we observe that the runtime of the \textit{Bounds-Checking} method increases as the number and size of the groups increase. The \textit{on-the-fly Index Bounds-Checking} method finds the sets $CandidateGroups$ and $OverlapGroups$ efficiently using the R-tree index, and its runtime increases steadily and is consistently lower than that of the \textit{Bounds-Checking} method. We observe that the speedup of the \textit{on-the-fly Index Bounds-Checking} method is one order of magnitude better than that of \textit{Bounds-Checking}. Figure~\ref{fig:SGB_ANY_DS} gives the effect of varying the data size on the runtime of SGB-Any when $\epsilon$ is fixed to 0.2. The TPC-H scale factor (SF) ranges from 1 to 32. We observe that, as the data size increases, the runtime of the \textit{All-Pairs} method increases quadratically, while the runtime of the \textit{on-the-fly Index} method grows nearly linearly. Moreover, the speedup results in the figure demonstrate that the \textit{on-the-fly Index} method is approximately three orders of magnitude faster than \textit{All-Pairs} SGB-Any as the data size increases. \end{sloppypar} \subsection{Runtime Comparison with Clustering Algorithms} We compare the runtime of our SGB operators with three clustering algorithms, namely, \textit{K-means}~\cite{BIBExample:kanungo2002efficient}, \textit{DBSCAN}~\cite{BIBExample:ester1996density}, and \textit{BIRCH}~\cite{BIBExample:zhang1996birch}. Specifically, we use the state-of-the-art implementation of \textit{DBSCAN} with an R-tree from~\cite{Achtert2013v6}, the similarity threshold $\epsilon$ for both \textit{DBSCAN} and SGB is set to $0.2$, and the parameter \textit{K} of \textit{K-means} is set to $20$ and $40$ on the two datasets, respectively. Figure~\ref{fig:SGB_clustering} shows that the proposed SGB operations significantly outperform \textit{DBSCAN}, \textit{BIRCH}, and \textit{K-means} by one to three orders of magnitude on the real-world datasets. The main reason is that the clustering algorithms scan the data more than once to reach convergence. In contrast, the SGB operations compute groups on the fly, and use group bounds and a spatial index to reduce the overhead of distance computations against previously processed tuples. In addition, the clustering algorithms have to read the data out of the database system, making them slower than our built-in SGB operations.
\subsection{Overhead of SGB} Figure~\ref{fig:SGB_overhead} illustrates the effect of the data size on the runtime of similarity-based groupings and traditional Group-By queries while varying the scale factor from 1 to 20 (i.e., 1GB to 20GB of data). The similarity threshold $\epsilon$ is fixed to 0.2. The semantics of the ON-OVERLAP clause plays a key role in the runtime of SGB-All. For instance, the JOIN-ANY variant achieves the best runtime among the SGB-All variants as it places overlapped elements into arbitrarily chosen groups. In contrast, FORM-NEW-GROUP incurs additional runtime cost while placing overlapped elements into new groups. The ELIMINATE semantics drops all overlapped elements, causing the size of the output groups to shrink. Furthermore, the performance of the traditional Group-by operator is comparable to that of the SGB-All and SGB-Any variants when using the \textit{on-the-fly Index}. For instance, SGB-All ON-OVERLAP JOIN-ANY shows better performance than the traditional Group-By, while SGB-All ON-OVERLAP ELIMINATE, SGB-All ON-OVERLAP FORM-NEW-GROUP, and SGB-Any show 15 percent, 40 percent, and 20 percent overhead over the traditional Group-By, respectively. \begin{figure} \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\columnwidth]{pic/SGB-Cluster-Brightkite} \caption{Runtime on Brightkite} \label{fig:Brightkite} \end{subfigure}% ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\columnwidth]{pic/SGB-Cluster-Gowalla} \caption{Runtime on Gowalla} \label{fig:Gowalla} \end{subfigure} \caption{SGB vs. clustering algorithms} \label{fig:SGB_clustering} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\columnwidth]{pic/sgb_dsize_34} \caption{ GBY2 vs SGB3 and SGB4} \label{fig:GBY234} \end{subfigure}% ~ \begin{subfigure}[b]{0.23\textwidth} \includegraphics[width=\columnwidth]{pic/sgb_dsize_56} \caption{ GBY3 vs SGB5 and SGB6} \label{fig:GBY356} \end{subfigure} \caption{The effect of the data size on SGB vs. SQL GBY.} \label{fig:SGB_overhead} \end{figure} \section{Conclusion} \label{section:conclusion} In this paper, we address the problem of similarity-based grouping over multi-dimensional data. We define new similarity grouping operators with a variety of practical and useful semantics to handle overlap. We provide an extensible algorithmic framework to efficiently implement these operators inside a relational database management system under a variety of semantic flavors. SGB-All performs up to three orders of magnitude better than the naive \textit{All-Pairs} grouping method. Moreover, the optimized SGB-Any performs more than three orders of magnitude better than the naive approach. Finally, the performance of the proposed SGB operators is comparable to that of the standard relational Group-by. { \scriptsize \bibliographystyle{IEEEtran}
\section{Supporting Information} We construct the rate equations for three fractions: 1) the untethered fraction represented by the probability $P_b(t)$, 2) the fraction of rolling colloids of age $\tau$, whose probability to be found between the ages $\tau$ and $\tau + d \tau$ is $P_r(t,\tau )\, d\tau$, and 3) the population of colloids in arrest, with the probability \begin{equation} P_a(t)=1-P_b(t)-\int _0^t P_r(t,\tau) d\tau \; . \label{roll1} \end{equation} The time zero is chosen to coincide with the onset of the attractive interactions. Consequently, rolling particles cannot be older than $t$. The rate equation for $P_r(t,\tau)$ writes: \begin{eqnarray} \frac{d P_r(t,\tau)}{dt}&=&K(\tau ) \frac{\partial P_r(t,\tau)}{\partial \tau} + \delta(\tau ) \kappa P_b(t)+\nonumber \\ &-& a(\tau) P_r(t,\tau)-b(\tau) P_r(t,\tau)\; , \label{roll2} \end{eqnarray} where $K(\tau )$ is the ageing speed, $\delta(\tau )$ is Dirac function, $\kappa$ is the free sinking rate, corresponding to the fraction of particles absorbed per unit time by a totally absorbing bottom. It is determined from the stationary solution of Fokker-Planck equation with $z$-dependent viscous drag\cite{wall_perpendicular_brenner1961slow} and the potential energy given by a sum of Van der Waals and gravitational terms. $a(\tau)$ is the re-dispersion rate of rolling colloids of age $\tau$ back into bulk, and $b(\tau)$ is the irreversible stopping rate of rolling colloids of age $\tau$. The first term on the RHS reproduces the homogenous ageing for $K=1$, while the second term ensures that any newly sunk particle is a "just born" rolling one ($\tau=0$). It is essential to allow $\tau$ dependence in re-dispersion and arrest rates $a(\tau)$ and $b(\tau)$: it is plausible that older particles, with more sticking contacts, have lower re-dispersion and higher stopping rates. The rate equation for untethered colloids is \begin{equation} \dot{P_b}=-\kappa P_b+\int _0^t a(\tau) P_r(t,\tau) \, d\tau \; , \label{roll3} \end{equation} which, together with Eqs. \ref{roll1} and \ref{roll2}, determines completely the evolution of the system. Since in our experiment the particles are unresolved over $\tau$, starting from Eq.\ref{roll2} a simplified equation is derived, in which all rolling particles are represented by \begin{equation} P_r(t)\equiv \int _0^t P_r(t,\tau) d\tau \; . \label{def} \end{equation} We will suppose that the ageing is homogenous ($K(\tau)=1$) and that re-dispersion rate $a(\tau)$ and arrest rate $b(\tau)$ depend on $\tau$ over characteristic aging time scale $\tau ^*$, allowing us to write: $a(\tau) = k_{\mbox{\tiny off}}\Phi (\tau /\tau ^*)$ and $b(\tau) = k \chi (\tau /\tau ^*)$, where $\Phi$ is a monotonically decreasing and $\chi$ monotonically increasing function limited between 0 and 1. In order to obtain Eqs. (1) from the main text, we introduce the effective ageing functions $\rho$ and $g$ as follows: \begin{equation} \int _0^t \Phi (\tau /\tau ^*)P_r(t,\tau) d\tau= \rho(t/\tau ^*) P_r(t) \label{rho} \end{equation} and \begin{equation} \int _0^t \chi (\tau /\tau ^*)P_r(t,\tau) d\tau= g(t/\tau ^*) P_r(t) \; . \label{g} \end{equation} These relations are purely formal and do not allow to obtain the functions $\rho$ and $g$ from the original ageing functions $a$ and $b$. In this regard the present theory is merely a phenomenology because the ageing functions are not given explicitly by model parameters. 
However, the relations \ref{rho} and \ref{g} show that it is always possible to recast the system of rate equations for $P_r(t,\tau)$ into a system for the total fraction of rolling beads $P_r(t)$, and that the effective ageing functions $\rho$ and $g$ vary on the same ageing time scale $\tau ^*$ as the original functions $a$ and $b$. Equations (1) of the main text are readily obtained by integrating Eq.~\ref{roll2} over $\tau$ and using the definition \ref{def}.
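For reference, here is a minimal sketch of the closed system that this integration yields, assuming homogeneous ageing ($K=1$) and that the ageing-flux boundary terms vanish (no rolling particle can be older than $t$); the precise statement is Eqs. (1) of the main text:
\begin{eqnarray}
\dot{P_b}&=&-\kappa P_b+k_{\mbox{\tiny off}}\, \rho (t/\tau ^*)\, P_r\; ,\nonumber \\
\dot{P_r}&=&\kappa P_b-\left[ k_{\mbox{\tiny off}}\, \rho (t/\tau ^*)+k\, g(t/\tau ^*)\right] P_r\; ,\nonumber
\end{eqnarray}
with the arrested fraction recovered from the normalization of Eq.~\ref{roll1}, so that $\dot{P_a}=k\, g(t/\tau ^*)\, P_r$. The numerical sketch below integrates this reduced system; the parameter values and the illustrative forms $\rho (x)=e^{-x}$ and $g(x)=1-e^{-x}$ are assumptions chosen only to respect the stated monotonicity and bounds, not fits to the experiment.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed, not fitted to data):
kappa, k_off, k, tau_star = 0.5, 0.3, 0.2, 10.0
rho = lambda x: np.exp(-x)        # effective re-dispersion ageing (decreasing)
g   = lambda x: 1.0 - np.exp(-x)  # effective arrest ageing (increasing)

def rhs(t, y):
    Pb, Pr = y
    dPb = -kappa*Pb + k_off*rho(t/tau_star)*Pr
    dPr =  kappa*Pb - (k_off*rho(t/tau_star) + k*g(t/tau_star))*Pr
    return [dPb, dPr]

# All colloids start untethered: P_b(0) = 1, P_r(0) = 0.
sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0], max_step=0.1)
Pb, Pr = sol.y
Pa = 1.0 - Pb - Pr   # arrested fraction, from the normalization above
\end{verbatim}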
\section{Introduction} Nematic liquid crystals are composed of rod-like molecules characterized by an average alignment of the long axes of neighboring molecules; they have the simplest structure among the various types of liquid crystals. The dynamic theory of nematic liquid crystals was first proposed by Ericksen \cite{ericksen62} and Leslie \cite{leslie68} in the 1960's; it is a macroscopic continuum description of the time evolution of both the flow velocity field and the orientation order parameter of rod-like liquid crystals. In this paper, we will study the compressible Ericksen-Leslie system of liquid crystal flows (see \cite{Morro09}, \cite{Anna-Liu} for modeling). Let $\Omega\subset \R^3$ be a bounded domain with smooth boundary, and $\mathbb S^2$ be the unit sphere in $\R^3$. The compressible Ericksen-Leslie system is given as follows \begin{equation}\label{comlce} \begin{cases} \rho_t+\nabla\cdot(\rho {\bf u})=0,\\ \rho\dot {\bf u}+\nabla P=\nabla\cdot\sigma-\nabla\cdot\left(\frac{\partial W}{\partial\nabla {\bf n}}\otimes\nabla {\bf n}\right), \\ {\bf g}+\frac{\partial W}{\partial {\bf n}}-\nabla\cdot\left(\frac{\partial W}{\partial\nabla {\bf n}}\right)=\lambda{\bf n}. \end{cases} \end{equation} Here, $\rho(\mathbf{x},t):\Omega\times(0,\infty)\rightarrow \R$ is the density, ${\bf u}(\mathbf{x},t):\Omega\times(0,\infty)\rightarrow \R^3$ is the fluid velocity field, and ${\bf n}(\mathbf{x},t):\Omega\times(0,\infty)\rightarrow \mathbb S^2$ is the orientation order parameter of the nematic material. $\lambda$ is the Lagrangian multiplier of the constraint $|{\bf n}|=1$, $\dot f=f_t+{\bf u}\cdot\nabla f$ is the material derivative of the function $f$, and $\mathbf{a}\otimes \mathbf{b}=\mathbf{a}\, \mathbf{b}^T$ for column vectors $\mathbf{a}$ and $\mathbf{b}$ in $\mathbb{R}^3$. The macrostructure of the crystals is described by the Oseen-Frank energy density (cf. \cite{oseen33,frank58}). One may take the Oseen-Frank energy density in the compressible case as \begin{align}\label{OFE}\begin{split} 2W(\rho, {\bf n},\nabla {\bf n})=&\frac{2}{\gamma-1}\rho^{\gamma}+K_1(\di {\bf n})^2+K_2({\bf n}\cdot\mbox{curl\,} {\bf n})^2+K_3|{\bf n}\times\mbox{curl\,} {\bf n}|^2\\ &+(K_2+K_4)[\mbox{tr}(\nabla {\bf n})^2-(\di {\bf n})^2 ], \end{split} \end{align} where $\gamma>1$, and $K_j$, $j=1,2,3$, are positive constants representing the splay, twist, and bend effects, respectively, with $K_2\geq |K_4|$, $2K_1\geq K_2+K_4$. Then the pressure is given by the Maxwell relation $$P(\rho)=\rho W_{\rho}(\rho, {\bf n},\nabla {\bf n})-W(\rho, {\bf n},\nabla {\bf n}).$$ For simplicity, we only consider the case $K_1=K_2=K_3=1$, $K_4=0$ in this paper. The Oseen-Frank energy in the compressible case then becomes $$ 2W(\rho, {\bf n},\nabla {\bf n})=\frac{2}{\gamma-1}\rho^{\gamma}+|\nabla {\bf n}|^2. $$ Therefore $$ \nabla\cdot\left(\frac{\partial W}{\partial\nabla {\bf n}}\otimes\nabla {\bf n}\right)=\nabla\cdot\left(\nabla{\bf n}\odot\nabla{\bf n}\right),\quad \frac{\partial W}{\partial {\bf n}}=0,\quad \nabla\cdot\left(\frac{\partial W}{\partial\nabla {\bf n}}\right)=\Delta {\bf n},\quad P=\rho^{\gamma}-\frac12|\nabla {\bf n}|^2. 
$$ Let $$ D= \frac12(\nabla {\bf u}+\nabla^{T} {\bf u}),\quad \omega= \frac12(\nabla {\bf u}-\nabla^{T}{\bf u})=\frac{1}{2}\left( \frac{\partial u^i}{\partial x_j}-\frac{\partial u^j}{\partial x_i}\right),\quad N=\dot {\bf n}-\omega {\bf n}, $$ represent the rate-of-strain tensor, the skew-symmetric part of the velocity gradient, and the rotation rate of the director relative to the fluid vorticity, respectively. The kinematic transport ${\bf g}$ is given by \begin{align}\label{g} {\bf g}=\gamma_1 N +\gamma_2D{\bf n}-\gamma_2({\bf n}^TD{\bf n}){\bf n} \end{align} which represents the effect of the macroscopic flow field on the microscopic structure. The material coefficients $\gamma_1$ and $\gamma_2$ reflect the molecular shape and the slippage between the fluid and the particles. The first term of ${\bf g}$ represents the rigid rotation of the molecules, while the second term stands for the stretching of the molecules by the flow. The viscous (Leslie) stress tensor $\sigma$ has the following form (cf. \cite{Les}, \cite{Anna-Liu}) \begin{align}\label{sigma}\begin{split} \sigma=& \alpha_0({\bf n}^TD{\bf n})\mathbb I+ \alpha_1 ({\bf n}^TD{\bf n}){\bf n}\otimes {\bf n} +\alpha_2N\otimes {\bf n}+ \alpha_3 {\bf n}\otimes N\\ & + \alpha_4D + \alpha_5(D{\bf n})\otimes {\bf n}+\alpha_6{\bf n}\otimes (D{\bf n}) +\alpha_7(\mbox{tr}\,D)\,\mathbb I+\alpha_8(\mbox{tr}\,D)\,{\bf n}\otimes{\bf n}. \end{split} \end{align} These coefficients $\alpha_j$ $(0 \leq j \leq 8)$, which depend on the material and temperature, are called the Leslie coefficients. The following relations are often assumed in the literature. \begin{align}\label{a2g} \gamma_1 =\alpha_3-\alpha_2,\quad \gamma_2 =\alpha_6 -\alpha_5,\quad \alpha_2+ \alpha_3 =\alpha_6-\alpha_5. \end{align} The first two relations are compatibility conditions, while the third relation is called Parodi's relation; it is derived from the Onsager reciprocal relations, which express the equality of certain relations between flows and forces in thermodynamic systems out of equilibrium (cf. \cite{Parodi70}). They also satisfy the following empirical relations (cf. \cite{Les}, \cite{Anna-Liu}) \begin{align}\label{alphas} &\alpha_4>0,\quad 2\alpha_1+3\alpha_4+2\alpha_5+2\alpha_6>0,\quad \gamma_1=\alpha_3-\alpha_2>0,\\ & 2\alpha_4+\alpha_5+\alpha_6>0,\quad 4\gamma_1(2\alpha_4+\alpha_5+\alpha_6)>(\alpha_2+\alpha_3+\gamma_2)^2,\notag\\ &\alpha_4+\alpha_7>\alpha_1+\frac{\gamma_2^2}{\gamma_1}\geq 0,\quad \notag\\ &2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}>\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8\geq 0.\notag \end{align} It is easy to see that an example of coefficients $\alpha_0, \cdots, \alpha_8$ satisfying \eqref{a2g} and \eqref{alphas} can be taken as follows $$ \alpha_0=\alpha_1=\alpha_5=\alpha_6=\alpha_7=\alpha_8=0,\quad \alpha_2=-1,\quad \alpha_3= \alpha_4=1, $$ so that $$ \gamma_1=\alpha_3-\alpha_2=2>0,\quad \gamma_2=\alpha_6-\alpha_5=\alpha_2+\alpha_3=0. $$ A simplified compressible Ericksen-Leslie system has recently been studied. The idea of simplification was first proposed for the incompressible system by Lin in \cite{Lin89}. In dimension one, global strong and weak solutions have been constructed in \cite{dlww12} and \cite{dww11}. In dimension two, under the assumption that the initial data of ${\bf n}$ is contained in $\mathbb S^2_+$, global weak solutions have been constructed in \cite{jsw13}. 
In dimension three, the local existence of strong solutions has been studied in \cite{hww121} and \cite{hww122}, and when the initial data of ${\bf n}$ is contained in $\mathbb S^2_+$, global weak solutions have been constructed in \cite{llw15}. The incompressible limit of compressible nematic liquid crystal flows has been studied in \cite{dhxwz13}. We also mention the related work \cite{jlt19}, in which the parabolic--hyperbolic Ericksen--Leslie liquid crystal model is studied; for small initial data, the authors show the existence of global solutions in dimension three. \subsection{One dimensional model and statement of main results} One of the main motivations of this paper is to investigate the impact of general Leslie stress tensors on the solutions of the compressible Ericksen-Leslie system, with coefficients satisfying the algebraic conditions \eqref{a2g} and \eqref{alphas} that ensure the energy dissipation property. Because of the technical complexity of the Ericksen-Leslie system in higher dimensions, we will only consider the following simpler case in one dimension, in which the director field ${\bf n}$ is assumed to map into the equator $\mathbb S^1$, $$ {\bf u}=\big(u(x,t),\ v(x,t), 0\big)^T, \quad {\bf n}=\big(\cos n(x,t),\ \sin n(x,t),0 \big)^T $$ for any $x\in [0,1]$ and $t\in (0,\infty)$. From the derivation given in Section 2 below, the system \eqref{comlce} becomes \begin{equation}\label{comlce1d} \begin{cases} \rho_t+(\rho u)_x=0,\\ (\rho u)_t+(\rho u ^2)_x+\big(\rho^{\gamma}\big)_x=J^1-n_{xx}n_x, \\ (\rho v)_t+(\rho u v)_x=J^2,\\ \gamma_1\left(\dot n-\frac12v_x\right) -\gamma_2\left(u_x\cos n\sin n+\frac12v_x(1-2\cos^2 n)\right) =n_{xx}. \end{cases} \end{equation} Here \beq\notag \begin{split} J^1=&(\alpha_0+\alpha_5+\alpha_6+\alpha_8)\big(u_x\cos^2 n\big)_x+\alpha_1\big(u_x\cos^4n\big)_x-(\alpha_2+\alpha_3)\big(\dot n\cos n\sin n\big)_x+(\alpha_4+\alpha_7)u_{xx}\\ &+\alpha_0\big(v_x\cos n\sin n\big)_x+\alpha_1\big(v_x\cos^3n\sin n\big)_x+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)\big(v_x\cos n\sin n\big)_x, \end{split} \eeq and \beq\notag \begin{split} J^2=&\alpha_1\big(u_x\cos^3n\sin n\big)_x+\alpha_2\big(\dot n\cos^2 n\big)_x-\alpha_3\big(\dot n\sin^2 n\big)_x+(\alpha_6+\alpha_8)\big(u_x\cos n\sin n\big)_x\\ &+\alpha_1\big(v_x\cos^2n\sin^2 n\big)_x+\frac12(-\alpha_2+\alpha_5)\big(v_x\cos^2 n\big)_x+\frac12(\alpha_3+\alpha_6)\big(v_x\sin^2 n\big)_x+\frac12\alpha_4v_{xx}. \end{split} \eeq For this system, we consider the following initial and boundary values \beq\label{1dinitial} (\rho,\, \rho u,\, \rho v,\, n)(x,0)=(\rho_0,\, m_0,\, l_0,\, n_0)(x), \eeq \beq\label{1dbdyvalue} u(0,t)=v(0,t)=u(1,t)=v(1,t)=0,\quad n_x(0,t)=n_x(1,t)=0. \eeq Denote the energy of the system \eqref{comlce1d} by \beq\notag \mathcal E(t):=\frac{1}{2}\int_0^1\rho (u^2+v^2) +\frac{1}{\gamma-1}\int_0^1\rho^{\gamma}+\frac12\int_0^1n_x^2. 
\eeq For any smooth solution $(\rho, u, v, n)$, the energy functional satisfies the following energy inequality, whose proof will be provided in Section 3, \beq\label{enest1} \begin{split} \frac{d}{dt}\mathcal E(t) &=-\mathcal{D}\\ &:=-\int_0^1\left[\sqrt{\gamma_1}\dot n-\frac12\left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)\right]^2\\ &-\int_0^1\left[\frac14\left(-\alpha_1-\frac{\gamma_2^2}{\gamma_1}\right)u_x^2+(\alpha_4+\alpha_7)u_{x}^2\right] -\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)v_x^2\\ &-\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right) \int_0^1\left(u_x\cos(2n)+v_x\sin(2n)\right)^2\\ &-(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1\left[\big(u_x\cos n+\frac12v_x\sin n\big)^2-\frac14 v_x^2\sin^2 n\right]. \end{split} \eeq By the assumptions \eqref{alphas} on the coefficients, the system \eqref{comlce1d} is dissipative. \begin{definition}\label{defweaksol} For any time $0<T<\infty$, a collection of functions $(\rho, u, v, n)(x,t)$ is a global weak solution to the initial and boundary value problem \eqref{comlce1d}-\eqref{1dbdyvalue} if \begin{itemize} \item[(1)] $$\rho\geq 0,\ \mbox {a.e.}, \quad \rho\in L^{\infty}(0,T; L^{\gamma}), \quad \rho u^2, \rho v^2 \in L^{\infty}(0,T; L^{1}), \quad u, v \in L^{2}(0,T; H^{1}_0)$$ $$ n\in L^{\infty}(0,T; H^{1})\cap L^{2}(0,T; H^{2}),\quad n_t\in L^{2}(0,T; L^{2}). $$ \item[(2)] The equations of $\rho$, $u$, $v$ are satisfied in the weak sense, while the equation of $n$ is valid a.e. The initial condition \eqref{1dinitial} is satisfied in the weak sense. \item[(3)] The energy inequality is valid for a.e. $t\in(0,T)$ $$ \mathcal E(t)+\int_0^t\mathcal{D}\leq \mathcal E_0 = \frac{1}{2}\int_0^1 \frac{m_0^2+l_0^2}{\rho_0} +\frac{1}{\gamma-1}\int_0^1\rho_0^{\gamma}+\frac12\int_0^1(n_0)_x^2. $$ \end{itemize} \end{definition} The following is the main result of this paper. \begin{theorem}\label{mainth1} Assume that the coefficients of the Leslie stress tensor satisfy the algebraic conditions \eqref{a2g} and \eqref{alphas}. Then, for any $0<T<\infty$ and any initial data \beq\label{asmp-initial} 0\le \rho_0\in L^{\gamma}, \quad \frac{m_0}{\sqrt{\rho_0}}, \quad \frac{l_0}{\sqrt{\rho_0}}\in L^2, \quad n_0\in H^1, \eeq there is a global weak solution $(\rho, u, v, n)(x,t)$ on $(0,1)\times (0,T)$ to the initial and boundary value problem \eqref{comlce1d}-\eqref{1dbdyvalue}. Furthermore, $\rho\in L^{2\gamma}((0,1)\times (0,T))$. \end{theorem} \medskip The main ideas of the proof utilize and extend those from \cite{fnp01}, \cite{jiangzhang01}, and \cite{feireisl04} in the study of the compressible Navier-Stokes equations, where the quantity called {\it effective viscous flux} has played a crucial role in controlling the oscillation of the density function $\rho$. However, the general Leslie stress tensors in the compressible Ericksen-Leslie system \eqref{comlce1d} induce two complicated second-order terms $J^1$ and $J^2$ that prevent a direct application of the method of effective viscous flux. In this paper, we observe that with the algebraic conditions \eqref{a2g} and \eqref{alphas}, the system of ${\bf u}=(u,\, v)^T$ can still be shown to be uniformly parabolic (see \eqref{postiveA} and \eqref{vec1dlce} below), i.e. the coefficient matrix of the second-order terms is uniformly elliptic. 
Using the inverse of the coefficient matrix of the second-order terms, we can then define a modified form of the effective viscous flux as in Lemma \ref{lemma5.3}, which yields the desired estimates that are necessary in the limiting process of the approximated solutions. \medskip The paper is organized as follows. In Section 2, we will sketch a derivation of the system \eqref{comlce1d}. In Section 3, we will derive some a priori estimates for smooth solutions of \eqref{comlce1d}. In Section 4, an approximated system will be introduced, and the existence of global regular solutions of this approximated system will be proven. In Section 5, we will prove the existence of global weak solutions through some delicate analysis of the convergence process. \section{Derivation of the model in one dimension} This section is devoted to the derivation of the system \eqref{comlce1d} in dimension one. If a solution takes the form (suppressing the trivial third components) $$ {\bf u}=\big(u(x,t),\ v(x,t)\big)^T, \quad {\bf n}=\big(\cos n(x,t),\ \sin n(x,t) \big)^T, \ (x,t)\in (0,1)\times (0,T), $$ then $$ \nabla {\bf u}= \left[ \begin{array}{cc} u_x & 0\\ v_x &0 \end{array} \right], \quad \nabla^T {\bf u}= \left[ \begin{array}{cc} u_x & v_x\\ 0 &0 \end{array} \right], $$ so that $$ D= \left[ \begin{array}{cc} u_x & \frac12v_x\\ \frac12v_x &0 \end{array} \right], \quad \omega= \left[ \begin{array}{cc} 0 & -\frac12v_x\\ \frac12v_x &0 \end{array} \right],$$ $$ \mbox{tr}\,D=u_x,\quad N=\dot {\bf n}-\omega{\bf n}=\left(\dot n-\frac12v_x\right) \big(-\sin n,\ \cos n \big)^T. $$ Direct calculations imply that $$ D{\bf n}=\left(u_x\cos n+\frac12v_x\sin n,\ \frac12v_x\cos n\right)^T,\quad {\bf n}^TD{\bf n}=u_x\cos^2 n+v_x\cos n\sin n, $$ $$ {\bf n}\otimes{\bf n}=\left[ \begin{array}{cc} \cos^2 n& \cos n\sin n\\ \cos n\sin n&\sin^2 n \end{array} \right], $$ $$ ({\bf n}^TD{\bf n}){\bf n}\otimes {\bf n}= (u_x\cos^2 n+v_x\cos n\sin n) \left[ \begin{array}{cc} \cos^2 n& \cos n\sin n\\ \cos n\sin n&\sin^2 n \end{array} \right], $$ $$ N\otimes{\bf n}=\left(\dot n-\frac12v_x\right) \left[ \begin{array}{cc} -\cos n\sin n& -\sin^2 n\\ \cos^2 n&\cos n\sin n \end{array} \right], $$ $$ {\bf n}\otimes N=\left(\dot n-\frac12v_x\right) \left[ \begin{array}{cc} -\cos n\sin n& \cos^2 n\\ -\sin^2 n&\cos n\sin n \end{array} \right], $$ $$ (D{\bf n})\otimes {\bf n}=\left[ \begin{array}{cc} u_x\cos ^2n+\frac12v_x\cos n\sin n& u_x\cos n\sin n+\frac12v_x\sin^2 n\\ \frac12v_x\cos^2 n&\frac12v_x\cos n\sin n \end{array} \right],$$ $$ {\bf n}\otimes (D{\bf n})=\left[ \begin{array}{cc} u_x\cos ^2n+\frac12v_x\cos n\sin n&\frac12v_x\cos^2 n\\ u_x\cos n\sin n+\frac12v_x\sin^2 n&\frac12v_x\cos n\sin n \end{array} \right]. $$ Hence $$ \nabla \cdot\sigma=\big(J^1, J^2\big)^T $$ where \beq\notag \begin{split} J^1=&(\alpha_0+\alpha_5+\alpha_6+\alpha_8)\big(u_x\cos^2 n\big)_x+\alpha_1\big(u_x\cos^4n\big)_x-(\alpha_2+\alpha_3)\big(\dot n\cos n\sin n\big)_x+(\alpha_4+\alpha_7)u_{xx}\\ &+\alpha_0\big(v_x\cos n\sin n\big)_x+\alpha_1\big(v_x\cos^3n\sin n\big)_x+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)\big(v_x\cos n\sin n\big)_x, \end{split} \eeq and \beq\notag \begin{split} J^2=&\alpha_1\big(u_x\cos^3n\sin n\big)_x+\alpha_2\big(\dot n\cos^2 n\big)_x-\alpha_3\big(\dot n\sin^2 n\big)_x+(\alpha_6+\alpha_8)\big(u_x\cos n\sin n\big)_x\\ &+\alpha_1\big(v_x\cos^2n\sin^2 n\big)_x+\frac12(-\alpha_2+\alpha_5)\big(v_x\cos^2 n\big)_x+\frac12(\alpha_3+\alpha_6)\big(v_x\sin^2 n\big)_x+\frac12\alpha_4v_{xx}. 
\end{split} \eeq The terms related to ${\bf n}$ can be computed as follows $$ {\bf n}_t=n_t \big(-\sin n,\ \cos n \big)^T, $$ $$ {\bf n}_x=n_x \big(-\sin n,\ \cos n \big)^T,\quad |{\bf n}_x|^2=(n_x)^2 $$ $$ {\bf u}\cdot\nabla{\bf n}=u\,{\bf n}_x=un_x \big(-\sin n,\ \cos n \big)^T $$ $$ {\bf n}_{xx}=n_{xx} \big(-\sin n,\ \cos n \big)^T+(n_x)^2 \big(-\cos n,\ -\sin n \big)^T, $$ \beq\notag \begin{split} \nabla\cdot\left(\nabla{\bf n}\odot\nabla{\bf n}\right)-\frac12\nabla|\nabla {\bf n}|^2 =&\Delta {\bf n}\nabla {\bf n}=\big(n_{xx}n_x,\ 0\big)^T. \end{split} \eeq Therefore, $u(x,t)$ satisfies \beq \rho u_t+\rho u u_x+\big(\rho^{\gamma}\big)_x=J^1-n_{xx}n_x, \eeq and $v(x,t)$ satisfies \beq \rho v_t+\rho u v_x=J^2. \eeq Now we can calculate the equation of $n$ as follows. \beq\notag \begin{split} {\bf g}=&\gamma_1 N +\gamma_2D{\bf n}-\gamma_2({\bf n}^TD{\bf n}){\bf n} \\ &=\gamma_1\left(\dot n-\frac12v_x\right) \big(-\sin n,\ \cos n \big)^T+\gamma_2\left(u_x\cos n+\frac12v_x\sin n,\ \frac12v_x\cos n\right)^T\\ &-\gamma_2 \big(u_x\cos^2 n+v_x\cos n\sin n\big)\big(\cos n,\ \sin n\big)^T\\ &=\gamma_1\left(\dot n-\frac12v_x\right) \big(-\sin n,\ \cos n \big)^T\\ &+\gamma_2\left(u_x\cos n\sin^2 n+\frac12v_x\sin n(1-2\cos^2 n),\ -u_x\cos^2n\sin n+\frac12v_x\cos n(1-2\sin^2 n)\right)^T\\ &=\gamma_1\left(\dot n-\frac12v_x\right) \big(-\sin n,\ \cos n \big)^T\\ &-\gamma_2\left(u_x\cos n\sin n+\frac12v_x(1-2\cos^2 n)\right) \big(-\sin n,\ \cos n \big)^T, \end{split} \eeq $$ \lambda {\bf n}=\left(|\nabla {\bf n}|^2+\gamma_1 N\cdot{\bf n}\right){\bf n}=(n_x)^2 \big(\cos n,\ \sin n\big)^T. $$ Therefore $n(x,t)$ satisfies \beq \gamma_1\left(\dot n-\frac12v_x\right) -\gamma_2\left(u_x\cos n\sin n+\frac12v_x(1-2\cos^2 n)\right) =n_{xx}. \eeq Thus the system \eqref{comlce} reduces to \eqref{comlce1d}. \section{A priori estimates} In this section, we will prove several useful a priori estimates for smooth solutions of system \eqref{comlce1d}. \begin{lemma}\label{lemma1} Any smooth solution to the system \eqref{comlce1d} satisfies the following energy inequality \beq\label{enest2} \begin{split} \frac{d}{dt}\mathcal E(t) =&-\int_0^1\left[\sqrt{\gamma_1}\dot n-\frac12\left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)\right]^2\\ &-\int_0^1\left[\frac14\left(-\alpha_1-\frac{\gamma_2^2}{\gamma_1}\right)u_x^2+(\alpha_4+\alpha_7)u_{x}^2\right] -\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)v_x^2\\ &-\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right) \int_0^1\left(u_x\cos(2n)+v_x\sin(2n)\right)^2\\ &-(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1\left[\big(u_x\cos n+\frac12v_x\sin n\big)^2-\frac14 v_x^2\sin^2 n\right]. \end{split} \eeq \end{lemma} \noindent{\bf Proof.\quad} Multiplying the second equation by $u$, the third equation by $v$, and integrating over $[0,1]$, we have \beq\notag \begin{split} \frac{1}{2}\frac{d}{dt}\int_0^1\rho (u^2+v^2) +\frac{1}{\gamma-1}\frac{d}{dt}\int_0^1\rho^{\gamma} =\int_0^1\left(J^1u+J^2v-un_{xx}n_x\right). \end{split} \eeq Multiplying the last equation by $\dot n$ and integrating over $[0,1]$, we obtain \beq\notag \begin{split} \frac{d}{dt}\frac12\int_0^1(n_x)^2 +\gamma_1\int_0^1\dot n^2 =\int_0^1\left[\frac12\gamma_2u_x\sin(2n)\dot n+\frac12(\gamma_1-\gamma_2\cos(2n))v_x\dot n+un_{xx}n_x\right]. 
\end{split} \eeq Adding these two equations together, we have \beq\label{pfsec3.1} \begin{split} &\frac{1}{2}\frac{d}{dt}\int_0^1\rho (u^2+v^2) +\frac{1}{\gamma-1}\frac{d}{dt}\int_0^1\rho^{\gamma} +\frac12\frac{d}{dt}\int_0^1(n_x)^2 \\ =&\int_0^1\left(J^1u+J^2v\right)-\gamma_1\int_0^1\dot n^2+\int_0^1\frac12\left[\gamma_2u_x\sin(2n)\dot n+(\gamma_1-\gamma_2\cos(2n))v_x\dot n\right]. \end{split} \eeq By integrating by parts, we can estimate the terms related to $J^1$ and $J^2$ as follows \beq\label{pfsec3.2} \begin{split} &\int_0^1J^1u\\ =&-\int_0^1\left[(\alpha_0+\alpha_5+\alpha_6+\alpha_8)u_x^2\cos^2 n+\alpha_1u_x^2\cos^4n+(\alpha_4+\alpha_7)u_{x}^2\right]\\ &-\int_0^1\left[\alpha_0u_xv_x\cos n\sin n+\alpha_1u_xv_x\cos^3n\sin n+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)u_xv_x\cos n\sin n\right]\\ &+\int_0^1(\alpha_2+\alpha_3)u_x\dot n\cos n\sin n, \end{split} \eeq \beq\label{pfsec3.3} \begin{split} &\int_0^1J^2v\\ =&-\int_0^1\left[\alpha_1v_x^2\cos^2n\sin^2 n+\frac12(-\alpha_2+\alpha_5)v_x^2\cos^2 n+\frac12(\alpha_3+\alpha_6)v_x^2\sin^2 n+\frac12\alpha_4v_{x}^2\right]\\ &-\int_0^1\left[\alpha_1u_xv_x\cos^3n\sin n+(\alpha_6+\alpha_8)u_xv_x\cos n\sin n\right]\\ &-\int_0^1\left[\alpha_2v_x\dot n\cos^2 n-\alpha_3v_x\dot n\sin^2 n\right]. \end{split} \eeq First notice that all the terms related to $\alpha_1$ in \eqref{pfsec3.2} and \eqref{pfsec3.3} can be written as \beq\label{pfsec3.4} \begin{split} -\alpha_1\int_0^1\left[u_x^2\cos^4n+2u_xv_x\cos^3n\sin n+v_x^2\cos^2n\sin^2 n\right]\\ =-\alpha_1\int_0^1\left[u_x\cos^2n+v_x\cos n\sin n\right]^2. \end{split} \eeq The other terms related to $u_xv_x$ in \eqref{pfsec3.2} and \eqref{pfsec3.3} (without the $\alpha_1$ terms) can be written as \beq\label{pfsec3.5} \begin{split} &-\int_0^1\left[\alpha_0u_xv_x\cos n\sin n+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)u_xv_x\cos n\sin n+(\alpha_6+\alpha_8)u_xv_x\cos n\sin n\right]\\ =&-\int_0^1u_xv_x\cos n\sin n\left[\alpha_0+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)+(\alpha_6+\alpha_8)\right]\\ =&-\int_0^1\left(\alpha_0+2\alpha_6+\alpha_8\right)u_xv_x\cos n\sin n, \end{split} \eeq where we have used $\alpha_2+\alpha_3=\alpha_6-\alpha_5$. The terms related to $u_x^2$ and $v_x^2$ in \eqref{pfsec3.2} and \eqref{pfsec3.3} (without the $\alpha_1$ terms) can be written as \beq\label{pfsec3.6} \begin{split} &-\int_0^1\left[(\alpha_0+\alpha_5+\alpha_6+\alpha_8)u_x^2\cos^2 n+(\alpha_4+\alpha_7)u_{x}^2\right]\\ &-\int_0^1\left[\frac14(2\alpha_4-\alpha_2+\alpha_5+\alpha_3+\alpha_6)v_x^2-\frac12\gamma_2v_x^2\cos (2n)\right]. \\ \end{split} \eeq What is left in \eqref{pfsec3.1}-\eqref{pfsec3.3} are the terms related to $u_x\dot n$ and $v_x\dot n$: \beq\label{pfsec3.7} \begin{split} &\int_0^1\left[\frac12\gamma_2u_x\sin(2n)\dot n+(\alpha_2+\alpha_3)u_x\dot n\cos n\sin n\right]\\ &+\int_0^1\left[\frac12(\gamma_1-\gamma_2\cos(2n))v_x\dot n-\alpha_2v_x\dot n\cos^2 n+\alpha_3v_x\dot n\sin^2 n\right]\\ =&\int_0^1\gamma_2u_x\dot n\sin(2n) +\int_0^1(\gamma_1-\gamma_2\cos(2n))v_x\dot n, \end{split} \eeq where we have used $\gamma_1=\alpha_3-\alpha_2$ and $\gamma_2=\alpha_2+\alpha_3=\alpha_6-\alpha_5$. 
Therefore, putting \eqref{pfsec3.4}-\eqref{pfsec3.7} into \eqref{pfsec3.1}, we obtain \beq\label{pfsec3.8} \begin{split} &\frac{1}{2}\frac{d}{dt}\int_0^1\rho (u^2+v^2) +\frac{1}{\gamma-1}\frac{d}{dt}\int_0^1\rho^{\gamma}+\frac{d}{dt}\frac12\int_0^1(n_x)^2 \\ =&-\alpha_1\int_0^1\left[u_x\cos^2n+v_x\cos n\sin n\right]^2-\int_0^1u_xv_x\cos n\sin n\left(\alpha_0+2\alpha_6+\alpha_8\right) \\ &-\int_0^1\left[(\alpha_0+\alpha_5+\alpha_6+\alpha_8)u_x^2\cos^2 n+(\alpha_4+\alpha_7)u_{x}^2\right]\\ &-\int_0^1\left[\frac14(2\alpha_4+\alpha_5+\alpha_6+\gamma_1)v_x^2-\frac12\gamma_2v_x^2\cos (2n)\right]\\ &-\gamma_1\int_0^1\dot n^2+\int_0^1\gamma_2u_x\dot n\sin(2n) +\int_0^1(\gamma_1-\gamma_2\cos(2n))v_x\dot n. \end{split} \eeq We first complete the square for all terms with $\dot n$ in \eqref{pfsec3.8} \beq\label{pfsec3.9} \begin{split} &\gamma_1\int_0^1\dot n^2-\int_0^1\gamma_2u_x\dot n\sin(2n) -\int_0^1(\gamma_1-\gamma_2\cos(2n))v_x\dot n\\ =&\gamma_1\int_0^1\dot n^2-2\cdot\frac12\int_0^1\sqrt{\gamma_1}\dot n \left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)\\ =&\int_0^1\left[\sqrt{\gamma_1}\dot n-\frac12\left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)\right]^2\\ &-\frac{1}{4}\int_0^1 \left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)^2. \end{split} \eeq The last term in \eqref{pfsec3.9} can also be rewritten as follows \beq\label{pfsec3.10} \begin{split} &\left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)^2\\ =&\frac{\gamma_2^2}{\gamma_1}u_x^2\sin^2(2n) +2\frac{\gamma_2}{\gamma_1}u_xv_x\sin(2n)(\gamma_1-\gamma_2\cos(2n)) + \frac{1}{\gamma_1}(\gamma_1-\gamma_2\cos(2n))^2v_x^2\\ =&\frac{\gamma_2^2}{\gamma_1}u_x^2\sin^2(2n) +2u_xv_x\sin(2n)\left(\gamma_2-\frac{\gamma_2^2}{\gamma_1}\cos(2n)\right)\\ &+ \left(\gamma_1-2\gamma_2\cos(2n)+\frac{\gamma_2^2}{\gamma_1}\cos^2(2n)\right)v_x^2. \end{split} \eeq To complete the square for the remaining terms, we first investigate the terms containing $u_xv_x$ in \eqref{pfsec3.8} and \eqref{pfsec3.10}: \beq\label{pfsec3.11} \begin{split} &\frac12\alpha_1\int_0^1u_xv_x\sin(2n)(1+\cos(2n))+\frac12\int_0^1\left(\alpha_0+2\alpha_6+\alpha_8\right)u_xv_x\sin (2n)\\ &-\frac{1}{2}\int_0^1 u_xv_x\sin(2n)\left(\gamma_2-\frac{\gamma_2^2}{\gamma_1}\cos(2n)\right)\\ =&\frac12\int_0^1\left(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8\right)u_xv_x\sin (2n) +\frac12\int_0^1\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)u_xv_x\sin(2n)\cos(2n)\\ =&\int_0^1\left(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8\right)u_xv_x\sin n\cos n +\frac12\int_0^1\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)u_xv_x\sin(2n)\cos(2n). 
\end{split} \eeq Thus we can calculate the terms containing $u_x^2$ in \eqref{pfsec3.8} and \eqref{pfsec3.10} as follows \beq\label{pfsec3.12} \begin{split} &\frac14\int_0^1\left[\alpha_1u_x^2(1+\cos(2n))^2-\frac{\gamma_2^2}{\gamma_1}u_x^2\sin^2(2n)\right] \\ &+\int_0^1\left[(\alpha_0+\alpha_5+\alpha_6+\alpha_8)u_x^2\cos^2 n+(\alpha_4+\alpha_7)u_{x}^2\right]\\ =&\frac14\int_0^1\left[\alpha_1u_x^2(1+2\cos(2n)+\cos^2(2n))-\frac{\gamma_2^2}{\gamma_1}u_x^2+\frac{\gamma_2^2}{\gamma_1}u_x^2\cos^2(2n)\right] \\ &+\int_0^1\left[(\alpha_0+\alpha_5+\alpha_6+\alpha_8)u_x^2\cos^2n +(\alpha_4+\alpha_7)u_{x}^2\right]\\ =&\frac14\int_0^1\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)u_x^2\cos^2(2n)+\int_0^1(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)u_x^2\cos^2n \\ &+\int_0^1\left[\frac14\left(-\alpha_1-\frac{\gamma_2^2}{\gamma_1}\right)u_x^2+(\alpha_4+\alpha_7)u_{x}^2\right]. \end{split} \eeq Similarly, the terms involving $v_x^2$ in \eqref{pfsec3.8} and \eqref{pfsec3.10} can be calculated as follows \beq\label{pfsec3.13} \begin{split} &\frac14\int_0^1\alpha_1v_x^2\sin^2(2n)+\int_0^1\left[\frac14(2\alpha_4+\alpha_5+\alpha_6+\gamma_1)v_x^2-\frac12\gamma_2v_x^2\cos (2n)\right]\\ &-\frac14\int_0^1 \left(\gamma_1-2\gamma_2\cos(2n)+\frac{\gamma_2^2}{\gamma_1}\cos^2(2n)\right)v_x^2\\ =&\frac14\int_0^1\alpha_1v_x^2\sin^2(2n)+\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\cos^2(2n)\right)v_x^2\\ =&\frac18\int_0^1\big(2\alpha_1+3\alpha_4+2\alpha_5+2\alpha_6\big)v_x^2\sin^2(2n)+\frac18\int_0^1\alpha_4v_x^2\sin^2(2n)\\ &+\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)v_x^2\cos^2(2n). \end{split} \eeq For the terms with coefficient $\alpha_1+\frac{\gamma_2^2}{\gamma_1}$ in \eqref{pfsec3.11} and \eqref{pfsec3.12}, we have \beq\label{pfsec3.14} \begin{split} &\frac14\int_0^1\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)u_x^2\cos^2(2n)+\frac12\int_0^1\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)u_xv_x\sin(2n)\cos(2n)\\ =&\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right) \int_0^1\big[\left(u_x\cos(2n)+v_x\sin(2n)\right)^2-v_x^2\sin^2(2n)\big]. \end{split} \eeq The terms with coefficient $\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8$ in \eqref{pfsec3.11} and \eqref{pfsec3.12} can be written as \beq\label{pfsec3.15} \begin{split} &(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1\big(u_x^2\cos^2n+u_xv_x\sin n\cos n\big)\\ =&(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1\left[\big(u_x\cos n+\frac12v_x\sin n\big)^2-\frac14 v_x^2\sin^2 n\right]. \end{split} \eeq Collecting all the terms involving $v_x^2$ in \eqref{pfsec3.13}-\eqref{pfsec3.15}, we have \beq\label{pfsec3.16} \begin{split} &\frac18\int_0^1\big(2\alpha_1+3\alpha_4+2\alpha_5+2\alpha_6\big)v_x^2\sin^2(2n)+\frac18\int_0^1\alpha_4v_x^2\sin^2(2n)\\ &+\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)v_x^2\cos^2(2n) -\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)\int_0^1v_x^2\sin^2(2n)\\ &-\frac14(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1v_x^2\sin^2 n\\ =&\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)v_x^2-\frac14(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1v_x^2\sin^2 n. 
\end{split} \eeq Therefore, putting the identities \eqref{pfsec3.9} and \eqref{pfsec3.14}-\eqref{pfsec3.16} into \eqref{pfsec3.8} yields \beq\notag \begin{split} &\frac{1}{2}\frac{d}{dt}\int_0^1\rho (u^2+v^2) +\frac{1}{\gamma-1}\frac{d}{dt}\int_0^1\rho^{\gamma}+\frac{d}{dt}\frac12\int_0^1(n_x)^2 \\ =&-\int_0^1\left[\sqrt{\gamma_1}\dot n-\frac12\left( \frac{\gamma_2}{\sqrt{\gamma_1}}u_x\sin(2n) + \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))v_x \right)\right]^2\\ &-\int_0^1\left[\frac14\left(-\alpha_1-\frac{\gamma_2^2}{\gamma_1}\right)u_x^2+(\alpha_4+\alpha_7)u_{x}^2\right] -\frac14\int_0^1\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)v_x^2\\ &-\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right) \int_0^1\left(u_x\cos(2n)+v_x\sin(2n)\right)^2\\ &-(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\int_0^1\left[\big(u_x\cos n+\frac12v_x\sin n\big)^2-\frac14 v_x^2\sin^2 n\right], \end{split} \eeq which completes the proof of the lemma. \endpf From the energy inequality above, we can obtain the following estimates for $n$. \begin{lemma}\label{lemma2} For any smooth solution to the system \eqref{comlce1d}, it holds that \beq\label{estnxxnt} \|n_{xx}\|_{L^2(0,T;L^2)}+\| n_t\|_{L^2(0,T;L^2)}\leq C(\mathcal E_0,T). \eeq \end{lemma} \noindent{\bf Proof.\quad} First notice that the equation of $n$ is \beq \gamma_1\left(\dot n-\frac12v_x\right) -\gamma_2\left(u_x\cos n\sin n+\frac12v_x(1-2\cos^2 n)\right) =n_{xx}. \eeq It is not hard to see that \beq\notag \begin{split} &\gamma_1\left(\dot n-\frac12v_x\right) -\gamma_2\left(u_x\cos n\sin n+\frac12v_x(1-2\cos^2 n)\right)\\ =&\gamma_1\dot n-\frac12\gamma_2u_x\sin(2n)-\frac12(\gamma_1-\gamma_2\cos(2n))v_x. \end{split} \eeq By the energy inequality, we obtain the estimate for $n_{xx}$. Next, by the equation of $n$ and the energy inequality, we obtain the estimate for $ n_t$. \endpf We also need to show the higher integrability of $\rho$, which is inspired by the argument in \cite{dww11}. \begin{lemma}\label{lemma3} For any smooth solution to the system \eqref{comlce1d}, it holds that \beq\label{estrho2g} \|\rho\|_{L^{2\gamma}([0,1]\times[0,T])}\leq C(\mathcal E_0,T). \eeq \end{lemma} \noindent{\bf Proof.\quad} First set $$G(x,t):=\int_0^x\rho^\gamma-x\int_0^1\rho^\gamma.$$ It is easy to see that $$ \frac{\partial G}{\partial x}=\rho^\gamma-\int_0^1\rho^\gamma,\quad G(0,t)=G(1,t)=0. $$ Notice that the equation of $u$ can be written as \beq\notag (\rho u)_t+(\rho u^2)_x+\big(\rho^{\gamma}\big)_x=J^1-\frac{1}{2}((n_x)^2)_x \eeq where \beq\notag \begin{split} J^1=&(\alpha_0+\alpha_5+\alpha_6+\alpha_8)\big(u_x\cos^2 n\big)_x+\alpha_1\big(u_x\cos^4n\big)_x-(\alpha_2+\alpha_3)\big(\dot n\cos n\sin n\big)_x+(\alpha_4+\alpha_7)u_{xx}\\ &+\alpha_0\big(v_x\cos n\sin n\big)_x+\alpha_1\big(v_x\cos^3n\sin n\big)_x+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)\big(v_x\cos n\sin n\big)_x. \end{split} \eeq Multiplying this equation by $G(x,t)$, integrating over $[0,1]\times(0,T)$, and using integration by parts, we obtain that \beq\label{rho2-1} \begin{split} \int_0^T\int_0^1\rho^{2\gamma}=&\int_0^T\left(\int_0^1\rho^{\gamma}\right)^2+\int_0^T\int_0^1(\rho u)_tG(x,t)-\int_0^T\int_0^1 \rho u^2\frac{\partial G(x,t)}{\partial x}\\ &-\int_0^T\int_0^1J^1G(x,t)-\frac12\int_0^T\int_0^1|n_x|^2\frac{\partial G(x,t)}{\partial x}\\ =&\sum_{i=1}^5I_i. \end{split} \eeq For the first term, it is easy to estimate using the energy inequality \beq\notag I_1\leq C(\mathcal E_0,T). 
\eeq For the second term, we integrate by parts with respect to $t$ to obtain \beq\notag \begin{split} I_2=&\int_0^1\rho uG(x,T)-\int_0^1\rho uG(x,0)-\int_0^T\int_0^1\rho uG_t(x,t)\\ &\leq C\sup\limits_{0\leq t\leq T}\left(\int_0^1\rho|u|\int_0^1\rho^\gamma\right)-\int_0^T\int_0^1\rho uG_t(x,t)\\ &\leq C\sup\limits_{0\leq t\leq T}\left(\int_0^1\rho|u|^2\int_0^1\rho^\gamma+\int_0^1\rho\int_0^1\rho^\gamma\right)-\int_0^T\int_0^1\rho uG_t(x,t)\\ &\leq C(\mathcal E_0,T)-\int_0^T\int_0^1\rho uG_t(x,t). \end{split} \eeq To estimate the last term here, we multiply the equation of $\rho$ by $\gamma\rho^{\gamma-1}$ to get \beq\notag (\rho^{\gamma})_t+(\rho^{\gamma} u)_x+(\gamma-1)\rho^{\gamma} u_x=0. \eeq Then it holds \beq\notag \begin{split} &-\int_0^T\int_0^1\rho uG_t(x,t)\\ =&-\int_0^T\int_0^1\rho u\left(\int_0^x\rho^\gamma_t-x\int_0^1\rho^\gamma_t\right)\\ =&\int_0^T\int_0^1\rho u\int_0^x\left((\rho^{\gamma} u)_x+(\gamma-1)\rho^{\gamma} u_x\right)-\int_0^T\int_0^1x\rho u\int_0^1\left((\rho^{\gamma} u)_x+(\gamma-1)\rho^{\gamma} u_x\right)\\ =&\int_0^T\int_0^1\rho^{\gamma+1} u^2+(\gamma-1)\int_0^T\int_0^1\rho u\left(\int_0^x\rho^{\gamma} u_x-x\int_0^1\rho^{\gamma} u_x\right)\\ \leq &\int_0^T\int_0^1\rho^{\gamma+1} u^2+C\int_0^T\int_0^1\rho |u|\int_0^1\rho^{\gamma} |u_x|\\ \leq &\int_0^T\int_0^1\rho^{\gamma+1} u^2+C\int_0^T\left(\int_0^1(\rho+ \rho|u|^2)\left(\int_0^1\rho^{2\gamma}\right)^{\frac12} \left(\int_0^1|u_x|^2\right)^{\frac12}\right)\\ \leq &\int_0^T\int_0^1\rho^{\gamma+1} u^2+C(\mathcal E_0,T)\int_0^T\left(\left(\int_0^1\rho^{2\gamma}\right)^{\frac12} \left(\int_0^1|u_x|^2\right)^{\frac12}\right)\\ \leq &\int_0^T\int_0^1\rho^{\gamma+1} u^2+\frac14\int_0^T\int_0^1\rho^{2\gamma}+C(\mathcal E_0,T)\int_0^T\int_0^1|u_x|^2\\ \leq &\int_0^T\int_0^1\rho^{\gamma+1} u^2+\frac14\int_0^T\int_0^1\rho^{2\gamma}+C(\mathcal E_0,T), \end{split} \eeq where we have used the Cauchy inequality, the H\"older inequality, the Young inequality, and the energy inequality. Hence we obtain \beq\notag \begin{split} I_2\leq \int_0^T\int_0^1\rho^{\gamma+1} u^2+\frac14\int_0^T\int_0^1\rho^{2\gamma}+C(\mathcal E_0,T). \end{split} \eeq For the third term in \eqref{rho2-1}, it holds \beq\notag \begin{split} I_3=-\int_0^T\int_0^1 \rho u^2\left(\rho^\gamma-\int_0^1\rho^\gamma\right)\leq-\int_0^T\int_0^1 \rho^{\gamma+1} u^2+C(\mathcal E_0,T). \end{split} \eeq Then \beq\notag I_2+I_3\leq \frac14\int_0^T\int_0^1\rho^{2\gamma}+C(\mathcal E_0,T). \eeq For the fourth term in \eqref{rho2-1}, by integration by parts it holds \beq\notag \begin{split} I_4\leq &\int_0^T\int_0^1 \left(|u_x|+|\dot{n}|+|v_x|\right)\rho^{\gamma}+\int_0^T\int_0^1\left(|u_x|+|n_t|+|v_x|\right)\int_0^1\rho^\gamma\\ \leq &\frac14\int_0^T\int_0^1\rho^{2\gamma} +C\int_0^T\int_0^1\left(|u_x|^2+|\dot{n}|^2+|v_x|^2\right)\\ &+C(\mathcal E_0,T)\int_0^T\int_0^1\left(|u_x|^2+|n_t|^2+|v_x|^2\right)+C(\mathcal E_0,T)\\ \leq &\frac14\int_0^T\int_0^1\rho^{2\gamma}+C(\mathcal E_0,T). \end{split} \eeq For the last term in \eqref{rho2-1}, it holds \beq\notag \begin{split} I_5= -\frac12\int_0^T\int_0^1 |n_x|^2\left(\rho^\gamma-\int_0^1\rho^\gamma\right)\leq C(\mathcal E_0,T). \end{split} \eeq Therefore, by adding all the estimates together in \eqref{rho2-1} we obtain \beq\notag \begin{split} \int_0^T\int_0^1\rho^{2\gamma} \leq\frac12\int_0^T\int_0^1\rho^{2\gamma} +C(\mathcal E_0,T), \end{split} \eeq which implies the estimate \eqref{estrho2g}. 
\endpf \section{Approximated solutions} In this section, we first consider the case where the initial values are smooth enough, i.e. $\rho_0\in C^{1}$, $u_0, v_0, n_0\in C^{2}$, with $0< c_0^{-1}\leq \rho_0\leq c_0$, $u_0=\frac{m_0}{\rho_0}$, and $v_0=\frac{l_0}{\rho_0}$, and then construct the Galerkin approximation of $\rho$, $u$, $v$ and $n$. \bigskip \noindent \textbf{Step 1}. Recall that $$ \phi_j(x)=\sin\left(j\pi x\right), \quad j=1,2,... $$ is an orthogonal basis of $L^2(0,1)$. For any positive integer $k$, set $$ \mathcal X_k=\mbox{span}\{\phi_1,\,\phi_2,\,\cdots,\, \phi_k\} $$ and \beq\notag u_0^k=\sum_{j=1}^k \bar c_j^k\phi_j(x),\quad v_0^k=\sum_{j=1}^k\bar d_j^k\phi_j(x), \eeq for some constants $$ \bar c_j^k=\int_0^1u_0\phi_j,\quad \bar d_j^k=\int_0^1v_0\phi_j. $$ Then $(u_0^k,\,v_0^k)\rightarrow (u_0,\,v_0)$ in $C^2$ as $k\rightarrow \infty$. Let \beq\notag u_k=\sum_{j=1}^kc_j^k(t)\phi_j(x),\quad v_k=\sum_{j=1}^kd_j^k(t)\phi_j(x) \eeq be the finite dimensional approximations of $u$ and $v$; we want to solve the approximate system: \begin{equation}\label{applce} \begin{cases} (\rho_k)_t+(\rho_k u_k)_x=0,\\ \rho_k (u_k)_t+\rho_k u_k (u_k)_x+\big(\rho_k^{\gamma}\big)_x=J^1_k-(n_k)_{xx}(n_k)_x, \\ \rho_k (v_k)_t+\rho_k u_k (v_k)_x=J^2_k,\\ \gamma_1\left(\dot n_k-\frac12(v_k)_x\right) -\gamma_2\left((u_k)_x\cos n_k\sin n_k+\frac12(v_k)_x(1-2\cos^2 n_k)\right) =(n_k)_{xx}. \end{cases} \end{equation} Here $J_k^1$, $J_k^2$ have the same forms as $J^1$, $J^2$, but with $u, v$ replaced by $u_k$, $v_k$. For this system, we consider the following initial and boundary values \beq\label{appinitial} (\rho_k,\, u_k,\, v_k,\, n_k)(x,0)=(\rho_0,\, u^k_0,\, v^k_0,\, n_0)(x), \eeq \beq\label{appbdyvalue} u_k(0,t)=v_k(0,t)=u_k(1,t)=v_k(1,t)=0,\quad (n_k)_x(0,t)=(n_k)_x(1,t)=0. \eeq \bigskip \noindent \textbf{Step 2}. In this step, we solve for $\rho_k$ and $n_k$, assuming $u_k, v_k\in C^{0}(0,T; C^{2})$ for a fixed $k$. To this end, we rewrite the equations of $\rho_k$ and $n_k$ in the Lagrangian coordinate system. Without loss of generality, in this section, we assume that \beq\label{exrho_0} \int_0^1\rho_0(x)\,dx=1. \eeq For any $T>0$, we introduce the Lagrangian coordinate $(X,\tau)\in (0,1)\times [0,T)$ by \beq\notag X(x,t)=\int_0^x\rho_k(y,t)\,dy, \quad \tau(x,t)=t. \eeq If $\rho_k(x,t)\in C^1((0,1)\times[0,T))$ is positive and $\int_0^1\rho_k(x,t)\,dx=1$ for all $t\in [0,T)$, then the map $(x,t)\rightarrow (X,\tau):(0,1)\times (0,T) \to (0,1)\times (0,T) $ is a $C^1$-bijection such that $X(0,t)=0,\ X(1,t)=1$. By the chain rule, we have \beq\notag \frac{\partial}{\partial t}=-\rho_k u_k\frac{\partial}{\partial X}+\frac{\partial}{\partial \tau},\quad \frac{\partial}{\partial x}=\rho_k\frac{\partial}{\partial X}. \eeq The equation of $\rho_k$ can be rewritten as \begin{equation}\label{applceL1} (\rho_k)_{\tau}+\rho_k^2 (u_k)_X=0, \end{equation} along with the initial condition \beq\label{Linitial1} \rho_k(X,0)=\rho_0. \eeq Suppose $u_k\in C^{0}(0,T;C^{2})$ with $\|u_k\|_{C^{0}(0,T;C^{2})}\leq M_0$. Then $\rho_k$ can be solved explicitly by \beq\label{rhoXT} \rho_k(X,\tau)=\frac{\rho_0(X)}{1+\rho_0(X)\int_0^{\tau}(u_k)_X(X,s)\,ds}. \eeq Hence, for any $T\leq \frac{1}{2c_0M_0}$, we have \beq\label{2c} \rho_k(X,\tau) \leq \frac{\rho_0(X)}{1-\left|\rho_0(X)\int_0^{\tau}(u_k)_X(X,s)\,ds\right|} \leq \frac{c_0}{1-c_0M_0T}\leq 2c_0, \eeq \beq\label{2c1} \rho_k(X,\tau) \geq \frac{\rho_0(X)}{1+\left|\rho_0(X)\int_0^{\tau}(u_k)_X(X,s)\,ds\right|} \geq \frac{c_0^{-1}}{1+c_0M_0T}\geq \frac{c_0^{-1}}{2}. 
\eeq Similarly, since $\rho_0\in C^{1}$, $u_k\in C^{0}(0,T; C^{2})$, we conclude that for sufficiently small $T(c_0, M_0)>0$, \beq\label{M1} \|\rho_k\|_{C^{0}(0,T; C^{1})}+\| (\rho_k)_t\|_{C^{0}((0,1)\times(0,T))}\leq M_1, \eeq for some positive constant $M_1$. Furthermore, suppose that $\rho^1_k, \rho^2_k$ are solutions of equation \eqref{applceL1} corresponding to $u_k^1, u^2_k \in C^{0}(0,T; C^{2})$, with the same initial condition. Then we can conclude from \eqref{applceL1} that \beq\notag \left(\frac{1}{\rho_k^1}-\frac{1}{\rho_k^2}\right)_\tau=\left(u^1_k-u^2_k\right)_X. \eeq Integrating with respect to $\tau$, we obtain \beq\notag \rho^1_k-\rho^2_k=\rho^1_k\rho^2_k\int_0^\tau\left(u^2_k-u^1_k\right)_X \eeq which, combined with \eqref{M1}, implies that \beq\label{Liprho} \|\rho^1_k-\rho^2_k\|_{C^{0}(0,T; C^{1})} \leq C(M_1, T)T\|u^1_k-u^2_k\|_{C^{0}(0,T; C^{2})}. \eeq \bigskip \noindent \textbf{Step 3}. Similarly, we can rewrite the equation of $n$ in the Lagrangian coordinates as \begin{equation}\label{applceL2} \gamma_1\left((n_k)_\tau-\frac12\rho_k(v_k)_X\right) -\frac{\gamma_2} {2}\left(\rho_k(u_k)_X\sin (2n_k)-\rho_k(v_k)_X\cos (2n_k)\right) =\rho_k\big(\rho_k(n_k)_{X}\big)_X. \end{equation} For this system, we consider the following initial and boundary values \beq\label{Linitial2} n_k(X,0)=n_0(X), \eeq \beq\label{Lbdyvalue2} (n_k)_X(0,\tau)=(n_k)_X(1,\tau)=0. \eeq By the standard Schauder theory of parabolic equations, we conclude that \beq\label{M2} \begin{split} \|n_k\|_{C^{1}(0,T; C^{2})} \leq&C\|n_0\|_{C^{2}}+C\|\rho_k (v_k)_X\|_{C^{0}((0,1)\times(0,T))}+C\|\rho_k (u_k)_X\|_{C^{0}((0,1)\times(0,T))} \leq M_2, \end{split} \eeq for some positive constant $M_2$. Furthermore, suppose that $n^1_k, n^2_k$ are solutions of equation \eqref{applceL2} corresponding to $\rho^1_k, \rho^2_k \in C^{1}((0,1)\times(0,T))$ and $u_k^1, u^2_k \in C^{0}(0,T; C^{2})$, subject to the same initial condition. Denote $$ \bar n_k=n^1_k-n^2_k,\quad \bar \rho_k=\rho^1_k-\rho^2_k,\quad \bar u_k=u^1_k-u^2_k. $$ Then from \eqref{applceL2} we have that \beq\notag \begin{split} &\gamma_1 (\bar n_k)_\tau-(\rho_k^1)^2(\bar n_k)_{XX}\\ &= \bar\rho_k(\rho^1_k+\rho^2_k)(n_k^2)_{XX}+\bar\rho_k(\rho_k^1)_X(n_k^1)_{X}+\rho_k^2(\bar\rho_k)_X(n_k^1)_{X}+\rho_k^2(\rho_k^2)_X(\bar n_k)_{X}\\ &+\frac{\gamma_1}{2}\left(\bar\rho_k(v_k^1)_X+\rho_k^2(\bar v_k^1)_X\right)\\ &-\frac{\gamma_2} {2}\left(\bar\rho_k(v_k^1)_X\cos (2n_k^1)+\rho_k^2(\bar v_k)_X\cos (2n_k^1)-2\rho_k^2(v_k^2)_X\sin (\bar n_k)\sin (n_k^1+n_k^2)\right) \\ &+\frac{\gamma_2} {2}\left(\bar\rho_k(u_k^1)_X\sin (2n_k^1)+\rho_k^2(\bar u_k)_X\sin (2n_k^1)+2\rho_k^2(u_k^2)_X\sin (\bar n_k)\cos (n_k^1+n_k^2)\right). \end{split} \eeq By the standard $W^{2,1}_2$-estimate of parabolic equations, we conclude that \beq\notag \begin{split} &\|\bar n_k\|_{W^{2,1}_2([0,1]\times (0,T))}\\ &\leq C\|\bar \rho_k\|_{L^2(0,T; H^{1})}+C\|\bar n_k\|_{L^{2}(0,T; L^{2})}+C\|\bar v_k\|_{L^2(0,T; H^{1})}+C\|\bar u_k\|_{L^2(0,T; H^{1})}\\ &\leq CT^{\frac12}\|\bar \rho_k\|_{C^{0}(0,T; C^{1})}+C\|\bar n_k\|_{L^{2}(0,T; L^{2})}+CT^{\frac12}\|\bar v_k\|_{C^{0}(0,T; C^{1})}+CT^{\frac12}\|\bar u_k\|_{C^{0}(0,T; C^{1})}\\ &\leq CT^{\frac12}\|\bar u_k\|_{C^{0}(0,T; C^{2})}+CT^{\frac12}\|\bar v_k\|_{C^{0}(0,T; C^{1})}+C\|\bar n_k\|_{L^{2}(0,T; L^{2})}. \end{split} \eeq Since $\bar n_k(X,0)=0$, we obtain that \beq\notag \|\bar n_k\|_{L^{2}(0,T; L^{2})}\leq CT\|\bar n_k\|_{W^{2,1}_2([0,1]\times (0,T))}. 
\eeq If we choose $T>0$ small enough, we obtain \beq\label{Lipn} \begin{split} \|\bar n_k\|_{W^{2,1}_2([0,1]\times (0,T))} \leq & C(M_1, M_2, T)T^{\frac12}\left(\|\bar u_k\|_{C^{0}(0,T; C^{2})}+\|\bar v_k\|_{C^{0}(0,T; C^{1})}\right). \end{split} \eeq \bigskip \noindent \textbf{Step 4}. To obtain the estimates for $u_k$ and $v_k$, first notice that the equations of $u_k$ and $v_k$ can be understood in the weak sense, i.e., for any $\phi(x)\in \mathcal X_k$ and $t\in [0,T]$, it holds \beq\label{applceL3} \int_0^1\rho_k u_k\phi-\int_0^1\rho_0 u_0^k\phi=\int_0^t\int_0^1 \mathcal P^1(\rho_k, u_k, v_k, n_k)\phi+(\alpha_2+\alpha_3)\int_0^t\int_0^1 \dot n_k\cos n_k\sin n_k\phi_x, \eeq \beq\label{applceL4} \int_0^1\rho_k v_k\phi-\int_0^1\rho_0 v_0^k\phi=\int_0^t\int_0^1 \mathcal P^2(\rho_k, u_k, v_k, n_k)\phi-\int_0^t\int_0^1 \big(\alpha_2\dot n_k\cos^2 n_k-\alpha_3\dot n_k\sin^2 n_k\big)\phi_x, \eeq where \beq\notag \begin{split} &\mathcal P^1(\rho_k, u_k, v_k, n_k)\\ =&(\alpha_0+\alpha_5+\alpha_6+\alpha_8)\big((u_k)_x\cos^2 n_k\big)_x+\alpha_1\big((u_k)_x\cos^4n_k\big)_x+(\alpha_4+\alpha_7)(u_k)_{xx}\\ &+\alpha_0\big((v_k)_x\cos n_k\sin n_k\big)_x+\alpha_1\big((v_k)_x\cos^3n_k\sin n_k\big)_x\\ &+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)\big((v_k)_x\cos n_k\sin n_k\big)_x-(\rho_k u_k u_k)_x-\big(\rho_k^{\gamma}\big)_x-(n_k)_{xx}(n_k)_x, \end{split} \eeq and \beq\notag \begin{split} \mathcal P^2(\rho_k, u_k, v_k, n_k)&=\alpha_1\big((v_k)_x\cos^2n_k\sin^2 n_k\big)_x+\frac12(-\alpha_2 +\alpha_5)\big((v_k)_x\cos^2 n_k\big)_x\\ &+\frac12(\alpha_3+\alpha_6)\big((v_k)_x\sin^2 n_k\big)_x+\frac12\alpha_4(v_k)_{xx}\\ &+\alpha_1\big((u_k)_x\cos^3n_k\sin n_k\big)_x+(\alpha_6+\alpha_8)\big((u_k)_x\cos n_k\sin n_k\big)_x-(\rho_k v_k v_k)_x. \end{split} \eeq Similarly to the energy inequality \eqref{enest1}, we can obtain the same form of energy estimate for the system \eqref{applce}, so that \beq\label{M3} \|u_k\|_{C^0(0,T;C^2)}+\|v_k\|_{C^0(0,T;C^2)} \leq C\|u_k\|_{C^0(0,T;L^2)}+C\|v_k\|_{C^0(0,T;L^2)}\leq M_3, \eeq provided $\inf\limits_{(x,t)}\rho_k(x,t)>0$. Here we have used the fact that the dimension of $\mathcal X_k$ is finite. To apply the contraction mapping theorem, we define the linear operator $\mathcal N[\rho_k]:\, \mathcal X_k\rightarrow \mathcal X_k^*$ by \beq\notag \langle\mathcal N[\rho_k]\psi, \, \phi\rangle=\int_0^1\rho_k\psi\phi, \ \psi, \phi\in \mathcal X_k. \eeq It is easy to see that \beq\notag \|\mathcal N[\rho_k]\|_{\mathcal L(\mathcal X_k, \mathcal X_k^*)} \leq C(k)\|\rho_k\|_{L^1}. \eeq If $\inf\limits_{x}\rho_k>0$, the operator $\mathcal N[\rho_k]$ is invertible and \beq\notag \|\mathcal N^{-1}[\rho_k]\|_{\mathcal L(\mathcal X_k^*, \mathcal X_k)} \leq \left(\inf\limits_{x}\rho_k\right)^{-1}. \eeq Furthermore, for any $\rho_k^i\in L^1$ with $\inf\limits_{x}\rho_k^i>0$, $i=1,2$, it is easy to see that \beq\notag \mathcal N^{-1}[\rho_k^1]-\mathcal N^{-1}[\rho_k^2] =\mathcal N^{-1}[\rho_k^2]\left(\mathcal N[\rho_k^2]-\mathcal N[\rho_k^1]\right)\mathcal N^{-1}[\rho_k^1] \eeq which implies that \beq\label{LipN} \left\|\mathcal N^{-1}[\rho_k^1]-\mathcal N^{-1}[\rho_k^2]\right\|_{\mathcal L(\mathcal X_k^*, \mathcal X_k)} \leq C\left\|\mathcal N[\rho_k^1]-\mathcal N[\rho_k^2]\right\|_{\mathcal L(\mathcal X_k, \mathcal X_k^*)} \leq C\|\rho_k^1-\rho_k^2\|_{L^1}. 
\eeq Hence by the estimates \eqref{Liprho}, \eqref{Lipn} and \eqref{LipN}, we can apply the standard contraction mapping theorem to obtain the local existence of a unique solution $u_k, v_k\in C(0,T_k; \mathcal X_k)$ to \eqref{applceL3} and \eqref{applceL4} for some $T_k>0$. Then by the equations \eqref{applceL1} and \eqref{applceL2}, we can solve for $\rho_k, n_k$, which provides a unique local solution to the approximated system \eqref{applce} for any fixed $k$. \bigskip \noindent \textbf{Step 5}. In this step, we will establish a uniform estimate of the local solution up to $T_k$ in order to extend the solution beyond $T_k$ to any time $T>0$, which implies the existence of a unique global solution of the system \eqref{applce} for any fixed $k$. We first show the following uniform estimate for $\rho_k$. \medskip \noindent{\it Claim: For any $x\in [0,1]$ and $t\in [0,T_k]$, it holds \beq\label{ufmrhok} \frac{1}{c_1e^{t}}\leq\rho_k(x,t)\leq c_1e^{t} \eeq for some constant $c_1>0$.} \medskip \noindent Indeed, similar to the energy inequality \eqref{enest1}, we can obtain the same form of energy estimate for system \eqref{applce} so that \beq\label{M4} \begin{split} &\|(u_k)_x\|_{L^2(0,T_k;H^2)}+\|(v_k)_x\|_{L^2(0,T_k;H^2)}\\ \leq& C\|(u_k)_x\|_{L^2((0,1)\times(0,T_k))}+C\|(v_k)_x\|_{L^2((0,1)\times(0,T_k))}\leq M_4. \end{split} \eeq By the first equation of \eqref{applce}, we can find $x_0(t)\in (0,1)$ such that \beq\notag \rho_k(x_0(t),t)=\int_0^1\rho_k=\int_0^1\rho_0=1. \eeq Then \beq\notag \begin{split} \frac{1}{\rho_k(x,t)} =&\frac{1}{\rho_k(x_0(t),t)}+\int_{x_0(t)}^x\left(\frac{1}{\rho_k}\right)_y \leq 1+\frac12\left\|\frac{1}{\rho_k(x,t)}\right\|_{L^\infty} +\frac12\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2 \end{split} \eeq which implies \beq\label{pfclm1} \begin{split} \left\|\frac{1}{\rho_k(x,t)}\right\|_{L^\infty} \leq 2+\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2. \end{split} \eeq By the first equation of \eqref{applce}, we have \beq\label{pfclm3} \begin{split} &\frac{d}{dt}\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2\\ =&\int_0^1(\rho_k)_t\left|\left(\frac{1}{\rho_k}\right)_x\right|^2+2\int_0^1\rho_k\left(\frac{1}{\rho_k}\right)_x\left(\frac{1}{\rho_k}\right)_{xt}\\ =&-\int_0^1(\rho_ku_k)_x\left|\left(\frac{1}{\rho_k}\right)_x\right|^2+2\int_0^1\rho_k\left(\frac{1}{\rho_k}\right)_x\left(\frac{(\rho_ku_k)_x}{\rho_k^2}\right)_{x}. \end{split} \eeq The last term on the right hand side can be computed by \beq\label{pfclm4} \begin{split} &2\int_0^1\rho_k\left(\frac{1}{\rho_k}\right)_x\left(\frac{(\rho_ku_k)_x}{\rho_k^2}\right)_{x}\\ =&2\int_0^1\rho_k\left(\frac{1}{\rho_k}\right)_x\left[\left(\left(-\frac{1}{\rho_k}\right)_xu_k\right)_x+\left(\frac{(u_k)_x}{\rho_k}\right)_x\right]\\ =&-\int_0^1\rho_ku_k\frac{\partial}{\partial x}\left|\left(\frac{1}{\rho_k}\right)_x\right|^2+2\int_0^1\left(\frac{1}{\rho_k}\right)_x(u_k)_{xx}. \end{split} \eeq Combining \eqref{pfclm4} with \eqref{pfclm3}, we conclude that \beq\label{pfclm5} \frac{d}{dt}\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2=2\int_0^1\left(\frac{1}{\rho_k}\right)_x(u_k)_{xx}. 
\eeq The right hand side can be estimated as follows \beq\notag \begin{split} &\left|\int_0^1\left(\frac{1}{\rho_k}\right)_x(u_k)_{xx}\right|\\ \leq&\int_0^1\rho_k^{\frac12}\left|\left(\frac{1}{\rho_k}\right)_x\right|\rho_k^{-\frac12}|(u_k)_{xx}|\\ \leq &\frac12\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2+\frac12\left\|\frac{1}{\rho_k(x,t)}\right\|_{L^\infty}\int_0^1|(u_k)_{xx}|^2\\ \leq&\frac12\left(1+\int_0^1|(u_k)_{xx}|^2\right)\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2+\int_0^1|(u_k)_{xx}|^2, \end{split} \eeq where we have used \eqref{pfclm1} in the last inequality. Denote $$ \mathcal Q(\rho_k)=\int_0^1\rho_k\left|\left(\frac{1}{\rho_k}\right)_x\right|^2, \quad a(t)=1+\int_0^1|(u_k)_{xx}|^2. $$ Then by \eqref{pfclm5} and the above estimate, we have \beq\notag\frac{d}{dt}\mathcal Q(\rho_k)\leq a(t)\mathcal Q(\rho_k) +2\int_0^1|(u_k)_{xx}|^2 \eeq which, after integration in time, gives \beq\notag \mathcal Q(\rho_k)-\mathcal Q(\rho_0)\leq 2\int_0^t\int_0^1|(u_k)_{xx}|^2+\int_0^t a(s)\mathcal Q(\rho_k)\,ds \leq 2M_4+\int_0^t a(s)\mathcal Q(\rho_k)\,ds, \eeq where we have used \eqref{M4} in the last step. By the Gronwall inequality, we obtain \beq\label{pfclm2} \mathcal Q(\rho_k) \leq \big(\mathcal Q(\rho_0)+2M_4\big)\exp\left(\int_0^ta(s)\right) \leq \big(\mathcal Q(\rho_0)+2M_4\big)\exp\left(t+M_4\right)\leq Ce^t. \eeq Combining \eqref{pfclm1} and \eqref{pfclm2}, we obtain the lower bound in \eqref{ufmrhok}. Denote $\gamma=1+2\delta$ for some $\delta>0$. Then it holds \beq\notag \begin{split} \|\rho_k^\delta\|_{L^\infty} \leq \int_0^1 \rho_k^{\delta}+\delta\int_0^1\rho_k^{\delta-1}|(\rho_k)_x| \leq \left(\int_0^1 \rho_k^{\gamma}\right)^{\frac{\delta}{\gamma}}+C\left(\int_0^1 \rho_k^{\gamma}\right)^{\frac12}\left(\mathcal Q(\rho_k)\right)^{\frac12}\leq Ce^t, \end{split} \eeq which gives the upper bound and completes the proof of the Claim. \medskip By using the uniform estimate \eqref{ufmrhok} for $\rho_k$ and the energy inequality, we can show \beq\notag \|u_k\|_{C^0(0,T_k;\mathcal X_k)}+\|v_k\|_{C^0(0,T_k;\mathcal X_k)} \leq C\|u_k\|_{C^0(0,T_k;L^2)}+C\|v_k\|_{C^0(0,T_k;L^2)}\leq M_5. \eeq Therefore, we can extend the solution beyond $T_k$ to any time $T>0$, which implies the existence of a unique smooth solution of the system \eqref{applce} for any fixed $k$. \section{Existence of global weak solutions} \noindent \textbf{Step 1.} Taking $k\rightarrow \infty$ in the approximated system \eqref{applce}, we may obtain the existence of a global weak solution with smooth initial and boundary values and $\rho_0>\delta>0$. Since the limiting process of this step is similar to that of the next step, where $\delta\rightarrow 0$, we omit the details. \bigskip \noindent \textbf{Step 2.} We first approximate the general initial and boundary data in Theorem \ref{mainth1} by smooth functions. We may extend $n_0$ to $\tilde n_0\in H^1(\R)$ such that $n_0=\tilde n_0$ on $(0,1)$, and obtain the smooth approximation of the initial data by standard mollification as follows \beq\notag \rho_0^\delta=\eta_\delta*\hat\rho_0+\delta,\quad u_0^\delta=\frac{1}{\sqrt{\rho_0^\delta}}\eta_{\delta}*\left(\widehat{\frac{m_0}{\sqrt{\rho_0}}}\right),\quad v_0^\delta=\frac{1}{\sqrt{\rho_0^\delta}}\eta_{\delta}*\left(\widehat{\frac{l_0}{\sqrt{\rho_0}}}\right),\quad n_0^\delta=\frac{\eta_\delta*\tilde n_0}{\big|\eta_\delta*\tilde n_0\big|} \eeq where, for small $\delta>0$, $\eta_\delta=\frac{1}{\delta}\eta\left(\frac{\cdot}{\delta}\right)$ is the standard mollifier, and $\hat f$ is the zero extension of $f$ from $(0,1)$ to $\R$. 
Therefore $\rho_0^\delta, u_0^\delta, v_0^\delta, n_0^\delta \in C^{2+\alpha}([0,1])$ for $0<\alpha<1$, and it holds \beq\label{appinitial1} \rho_0^\delta\geq \delta>0, \quad \rho_0^\delta\rightarrow \rho_0 \mbox{ in } L^{\gamma}, \quad n_0^\delta\rightarrow n_0 \mbox{ in } H^{1}, \eeq \beq\label{appinitial2} \sqrt{\rho_0^\delta}u_0^\delta\rightarrow \frac{m_0}{\sqrt{\rho_0}} \mbox{ in } L^{2}, \quad \sqrt{\rho_0^\delta}v_0^\delta\rightarrow \frac{l_0}{\sqrt{\rho_0}} \mbox{ in } L^{2},\quad \rho_0^\delta u_0^\delta\rightarrow m_0 \mbox{ in } L^{\frac{2\gamma}{\gamma+1}},\quad \rho_0^\delta v_0^\delta\rightarrow l_0 \mbox{ in } L^{\frac{2\gamma}{\gamma+1}} \eeq as $\delta\rightarrow 0$. Note that $\sqrt{\rho_0^\delta}\,u_0^\delta=\eta_{\delta}*\big(\widehat{m_0/\sqrt{\rho_0}}\big)$ by construction, so the convergences in \eqref{appinitial2} follow from standard properties of mollifiers. Let $(\rho_\delta, u_\delta, v_\delta, n_\delta)$ be a sequence of global weak solutions to \begin{equation}\label{lcedelta} \begin{cases} (\rho_\delta)_t+(\rho_\delta u_\delta)_x=0,\quad \rho_\delta>0,\\ (\rho_\delta u_\delta)_t+(\rho_\delta u_\delta ^2)_x+\big(\rho_\delta^{\gamma}\big)_x=J^1_\delta-(n_\delta)_{xx}(n_\delta)_x, \\ (\rho_\delta v_\delta)_t+(\rho_\delta u_\delta v_\delta)_x=J^2_\delta,\\ \gamma_1\left(\dot n_\delta-\frac12(v_\delta)_x\right) -\gamma_2\left((u_\delta)_x\cos n_\delta\sin n_\delta+\frac12(v_\delta)_x(1-2\cos^2 n_\delta)\right) =(n_\delta)_{xx}, \end{cases} \end{equation} with the initial and boundary values \beq\label{deltainitial} (\rho_\delta,\, u_\delta,\, v_\delta,\, n_\delta)(x,0)=(\rho_0^\delta,\, u_0^\delta,\, v_0^\delta,\, n_0^\delta)(x), \eeq \beq\label{deltabdyvalue} u_\delta(0,t)=v_\delta(0,t)=u_\delta(1,t)=v_\delta(1,t)=0,\quad (n_\delta)_x(0,t)=(n_\delta)_x(1,t)=0. \eeq Here $J^1_\delta$ and $J^2_\delta$ have the same forms as $J^1$ and $J^2$, but with $(u,v,n)$ replaced by $(u_\delta, v_\delta, n_\delta)$. By Lemma \ref{lemma1}--Lemma \ref{lemma3}, we can find a subsequence $(\rho_{\delta}, u_{\delta}, v_{\delta}, n_{\delta})$, still denoted by $(\rho_{\delta}, u_{\delta}, v_{\delta}, n_{\delta})$, such that for any $T>0$, as $\delta\rightarrow 0$, \beq\label{pfsec5.1} \rho_{\delta}\overset{*}{\rightharpoonup} \rho, \mbox{ in } L^\infty(0,T;L^{\gamma}), \qquad \rho_{\delta}\rightharpoonup \rho,\mbox{ in } L^{2\gamma}([0,1]\times[0,T]), \eeq \beq\label{pfsec5.2} \rho_{\delta}^{\gamma}\rightharpoonup \overline{\rho^{\gamma}},\mbox{ in } L^{2}([0,1]\times[0,T]), \eeq \beq\label{pfsec5.3} u_{\delta}\rightharpoonup u,\mbox{ in } L^{2}(0,T; H_0^1),\quad v_{\delta}\rightharpoonup v,\mbox{ in } L^{2}(0,T; H_0^1), \eeq \beq\label{pfsec5.4} n_{\delta}\overset{*}{\rightharpoonup} n,\mbox{ in } L^{\infty}([0,1]\times[0,T]),\quad (n_{\delta})_x\overset{*}{\rightharpoonup} n_x,\mbox{ in } L^{\infty}(0,T; L^2), \eeq \beq\label{pfsec5.5} (n_{\delta})_t\rightharpoonup n_t,\mbox{ in } L^{2}([0,1]\times[0,T]), \quad (n_{\delta})_{xx}\rightharpoonup n_{xx},\mbox{ in } L^{2}([0,1]\times[0,T]). \eeq Since $\rho_{\delta}>0$, for any nonnegative function $f\in C^\infty_0((0,1)\times (0,T))$ it holds that \beq\notag \int_0^T\int_0^1 \rho f =\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 \rho_{\delta} f\geq 0. \eeq Since $f$ is arbitrary, we conclude that $\rho\geq 0$ a.e. in $(0,1)\times (0,T)$. We need to show that the limit $(\rho, u, v, n)$ is a weak solution to the system \eqref{comlce1d}. We first state several compactness results that will be used in our proof. \begin{lemma}(\cite{simon90})\label{lemma5.1} Assume $X\subset E\subset Y$ are Banach spaces and $X \hookrightarrow\hookrightarrow E$ is compact. 
Then the following embeddings are compact $$ \left\{f:\, f\in L^{q}(0,T;X),\, \frac{\partial f}{\partial t}\in L^1(0,T; Y)\right\}\hookrightarrow\hookrightarrow L^q(0,T;E), \mbox{ for any } 1\leq q\leq \infty, $$ $$ \left\{f:\, f\in L^{\infty}(0,T;X),\, \frac{\partial f}{\partial t}\in L^r(0,T; Y)\right\}\hookrightarrow\hookrightarrow C([0,T];E), \mbox{ for any } 1<r<\infty. $$ \end{lemma} \begin{lemma}(\cite{feireisl04})\label{lemma5.2} Let $\bar O\subset \mathbb R^n$ be compact and $X$ be a separable Banach space. Assume that $f_{\delta}:\bar O\rightarrow X^*$ is a sequence of measurable functions such that, uniformly in $\delta$, $$ \mbox{ess}\sup\limits_{\bar O}\|f_{\delta}\|_{X^*}\leq N<\infty. $$ Moreover, the family of functions $\langle f_{\delta}, \Phi \rangle$ is equi-continuous for any $\Phi$ belonging to a dense subset of $X$. Then $f_{\delta}\in C(\bar O; X-w)$ for any $\delta$, i.e., for any $g\in X^*$, $\langle f_{\delta},\, g\rangle\in C(\bar O)$. Furthermore, there exists $f\in C(\bar O; X-w)$ such that (after passing to a subsequence) $$ f_{\delta}\rightarrow f,\quad \mbox{in } C(\bar O; X-w) $$ as $\delta\rightarrow 0$. \end{lemma} First observe that $\rho_{\delta}\in L^{2\gamma}([0,1]\times[0,T])$ and $u_{\delta}\in L^2(0,T;H^1_0)\subset L^2(0,T;L^{\infty})$ imply \beq\notag \rho_{\delta}u_{\delta}\in L^{\frac{2\gamma}{\gamma+1}}(0,T;L^{2\gamma}),\quad (\rho_{\delta})_t=-(\rho_{\delta}u_{\delta})_x\in L^{\frac{2\gamma}{\gamma+1}}(0,T;H^{-1}). \eeq By Lemma \ref{lemma5.1} and Lemma \ref{lemma5.2}, together with $\frac{2\gamma}{\gamma+1}>1$, $\rho_{\delta}\in L^\infty(0,T;L^{\gamma})$, and the compact embedding $L^{\gamma} \hookrightarrow\hookrightarrow H^{-1}$, we conclude \beq\label{pfsec5.6} \rho_{\delta}\rightarrow \rho, \mbox{ in } C(0,T; L^{\gamma}-\omega),\quad \rho_{\delta}\rightarrow \rho, \mbox{ in } C(0,T; H^{-1}), \eeq where $f\in C(0,T; X-\omega)$ if for any $g\in X^*$, $\langle f(t),\, g\rangle\in C([0,T])$. Hence \beq\label{pfsec5.12} \rho_{\delta}u_{\delta}\rightarrow \rho u,\mbox{ in }\mathcal D'((0,1)\times(0,T)),\quad \rho_{\delta}v_{\delta}\rightarrow \rho v,\mbox{ in }\mathcal D'((0,1)\times(0,T)), \eeq and furthermore \beq\label{pfsec5.7} \rho_t+(\rho u)_x=0,\mbox{ in }\mathcal D'((0,1)\times(0,T)). \eeq By \eqref{pfsec5.6}, it also holds that \beq\label{pfsec5.8} \rho(x,0)=\rho_0(x),\mbox{ weakly in } L^{\gamma}([0,1]). \eeq By the fact that $(n_{\delta})_t \in L^2(0,T;L^2)$, \eqref{pfsec5.4} and \eqref{pfsec5.5}, we can apply Lemma \ref{lemma5.1} to obtain \beq\label{pfsec5.9} n_{\delta}\rightarrow n, \mbox{ in } C([0,1]\times[0,T]),\quad n_{\delta}\rightarrow n, \mbox{ in } L^2(0,T; C^{1}). \eeq Combining this with \eqref{pfsec5.3}-\eqref{pfsec5.5}, we can show that the limit $n$ satisfies the following equation: \beq\label{pfsec5.10} \gamma_1\left(\dot n-\frac12v_x\right) -\gamma_2\left(u_x\cos n\sin n+\frac12v_x(1-2\cos^2 n)\right) =n_{xx}. \eeq By \eqref{pfsec5.9}, it also holds that \beq\label{pfsec5.11} n(x,0)=n_0(x),\mbox{ in } [0,1]. \eeq Since $\rho_{\delta}\in L^{\infty}(0,T;L^{\gamma})$ gives $\sqrt{\rho_{\delta}}\in L^{\infty}(0,T;L^{2\gamma})$, and $\sqrt{\rho_{\delta}}u_{\delta}\in L^{\infty}(0,T;L^2)$, it holds \beq\notag \rho_{\delta}u_{\delta}\in L^{\infty}(0,T;L^{\frac{2\gamma}{\gamma+1}}). \eeq Combining with \eqref{pfsec5.3}, we have \beq\label{pfsec5.13} \rho_{\delta}u_\delta^2\rightharpoonup \rho u^2, \mbox{ in } L^2(0,T; L^{\frac{2\gamma}{\gamma+1}}). 
\eeq for some limit function $\overline{\rho u^2}$, which will be identified with $\rho u^2$ in \eqref{pfsec5.15} below. By the second equation of system \eqref{lcedelta}, we have \beq\notag (\rho_\delta u_\delta)_t=-(\rho_\delta u_\delta^2)_x-\big(\rho_\delta^{\gamma}\big)_x+J^1_\delta-(n_\delta)_{xx}(n_\delta)_x \in L^2(0,T;W^{-1, \frac{2\gamma}{\gamma+1}}). \eeq By using Lemma \ref{lemma5.1} and Lemma \ref{lemma5.2}, we conclude \beq\label{pfsec5.14} \rho_{\delta}u_\delta\rightarrow \rho u, \mbox{ in } C(0,T; L^{\frac{2\gamma}{\gamma+1}}-\omega),\quad \rho_{\delta}u_\delta\rightarrow \rho u, \mbox{ in } C(0,T; H^{-1}). \eeq Combining with \eqref{pfsec5.3}, we conclude that \beq\label{pfsec5.15} \rho_{\delta}u_{\delta}^2\rightarrow \rho u^2,\mbox{ in }\mathcal D'((0,1)\times(0,T)). \eeq Therefore \beq\label{pfsec5.16} (\rho u)_t+(\rho u^2)_x+\big(\overline{\rho^{\gamma}}\big)_x=J^1-n_{xx}n_x,\mbox{ in }\mathcal D'((0,1)\times(0,T)). \eeq By \eqref{pfsec5.14}, it holds that \beq\label{pfsec5.17} \rho u(x,0)=m_0(x),\mbox{ weakly in } L^{\frac{2\gamma}{\gamma+1}}([0,1]). \eeq Similarly, we can also prove that \beq\label{pfsec5.18} (\rho v)_t+(\rho u v)_x=J^2,\mbox{ in }\mathcal D'((0,1)\times(0,T)), \eeq \beq\label{pfsec5.19} \rho v(x,0)=l_0(x),\mbox{ weakly in } L^{\frac{2\gamma}{\gamma+1}}([0,1]). \eeq By \eqref{pfsec5.15}, for any $t\in (0,T)$ and small $\epsilon>0$, it holds \beq\notag \frac{1}{\epsilon}\int_t^{t+\epsilon}\int_0^1\rho u^2 =\frac{1}{\epsilon}\int_t^{t+\epsilon}\lim\limits_{\delta\rightarrow 0}\int_0^1\rho_\delta u^2_\delta \leq \frac{1}{\epsilon}\int_t^{t+\epsilon}\overline{\lim\limits_{\delta\rightarrow 0}}\int_0^1\rho_\delta u^2_\delta. \eeq Sending $\epsilon\rightarrow 0^+$ and using the Lebesgue Differentiation Theorem, we obtain \beq\notag \int_0^1\rho u^2 \leq \overline{\lim\limits_{\delta\rightarrow 0}}\int_0^1\rho_\delta u^2_\delta, \eeq for a.e. $t\in (0,T)$. Combining this with the lower semicontinuity of the norms under weak convergence, we can prove that the energy inequality holds. \bigskip It remains to show that $\overline{\rho^\gamma}=\rho^\gamma$. To this end, we denote \beq\notag A(n)=(A_{ij}(n))_{2\times 2} \eeq where the entries $A_{ij}$ are given as follows \beq\notag A_{11}(n)=(\alpha_0+\alpha_5+\alpha_6+\alpha_8)\cos^2 n+\alpha_1\cos^4n+(\alpha_4+\alpha_7) \eeq \beq\notag A_{12}(n)= \alpha_0\cos n\sin n+\alpha_1\cos^3n\sin n+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)\cos n\sin n \eeq \beq\notag A_{21}(n)=\alpha_1\cos^3n\sin n+(\alpha_6+\alpha_8)\cos n\sin n \eeq \beq\notag A_{22}(n)=\alpha_1\cos^2n\sin^2 n+\frac12(-\alpha_2+\alpha_5)\cos^2 n+\frac12(\alpha_3+\alpha_6)\sin^2 n+\frac12\alpha_4. \eeq By the relations \eqref{alphas}, direct computations imply that there exist two positive constants $\lambda,\Lambda<\infty$ such that for any ${\bf y}\in \R^2$ \beq \label{postiveA} \lambda|{\bf y}|^2\leq {\bf y}^TA(n){\bf y}\leq \Lambda|{\bf y}|^2.
\eeq In fact \beq\notag \begin{split} {\bf y}^TA(n){\bf y}=&A_{11}(n)y_1^2+(A_{12}(n)+A_{21}(n))y_1y_2+A_{22}(n)y_2^2\\ =&\big[(\alpha_0+\alpha_5+\alpha_6+\alpha_8)\cos^2 n+\alpha_1\cos^4n+(\alpha_4+\alpha_7)\big]y_1^2\\ &+\big[(\alpha_0+\alpha_6+\alpha_8)\cos n\sin n+2\alpha_1\cos^3n\sin n+\frac12(\alpha_2+\alpha_3+\alpha_5+\alpha_6)\cos n\sin n\big]y_1y_2\\ &+\big[\alpha_1\cos^2n\sin^2 n+\frac12(-\alpha_2+\alpha_5)\cos^2 n+\frac12(\alpha_3+\alpha_6)\sin^2 n+\frac12\alpha_4\big]y_2^2\\ =&\frac14\left( \frac{\gamma_2}{\sqrt{\gamma_1}}y_1\sin(2n)+ \frac{1}{\sqrt{\gamma_1}}(\gamma_1-\gamma_2\cos(2n))y_2 \right)^2\\ &+\frac14\left(-\alpha_1-\frac{\gamma_2^2}{\gamma_1}\right)y_1^2+(\alpha_4+\alpha_7)y_1^2 +\frac14\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)y_2^2\\ &+\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right)\left(y_1\cos(2n)+y_2\sin(2n)\right)^2\\ &+(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\left[\big(y_1\cos n+\frac12y_2\sin n\big)^2-\frac14 y_2^2\sin^2 n\right]. \end{split} \eeq Therefore \beq\notag \begin{split} {\bf y}^TA(n){\bf y}\geq& \frac14\left(-\alpha_1-\frac{\gamma_2^2}{\gamma_1}\right)y_1^2+(\alpha_4+\alpha_7)y_1^2 +\frac14\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)y_2^2\\ &-\frac14(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8) y_2^2\sin^2 n. \end{split} \eeq If we take \beq\notag \lambda=\min\left\{ (\alpha_4+\alpha_7)-\frac14\left(\alpha_1+\frac{\gamma_2^2}{\gamma_1}\right),\ \frac14\left[\left(2\alpha_4+\alpha_5+\alpha_6-\frac{\gamma_2^2}{\gamma_1}\right)-(\alpha_0+\alpha_1+\alpha_5+\alpha_6+\alpha_8)\right] \right\}, \eeq then by the relation \eqref{alphas}, we know that $\lambda>0$ and we have shown the lower bound in \eqref{postiveA}; the upper bound follows from the boundedness of the entries $A_{ij}(n)$. By the definition of $A(n)$, we see that the matrix-valued function $A(\cdot)\in C^{\infty}$. By the estimate \eqref{postiveA}, the inverse matrix function $A^{-1}$ exists, and differentiating the identity $A(n)A^{-1}(n)=I$ gives $$ \frac{d}{dn}\big(A^{-1}(n)\big)=-A^{-1}\frac{d}{dn}\big(A(n)\big)A^{-1}. $$ The equations for ${\bf u}=(u,\, v)^T$ can be written as \beq\label{vec1dlce} \rho{\bf u}_t+\rho u{\bf u}_x+ {\bf P}_x=\big(A(n){\bf u}_x\big)_x+(B_1(n))_x-B_2(n) \eeq where \beq\notag {\bf P}=(\overline{\rho^{\gamma}},\, 0)^T, \eeq \beq\notag B_1(n)=\big((\alpha_2+\alpha_3)\dot n\cos n\sin n,\ \alpha_2\dot n\cos^2 n-\alpha_3\dot n\sin^2 n\big)^T, \eeq \beq\notag B_2(n)=\big(n_{xx}n_x,\, 0\big)^T. \eeq Similarly, with ${\bf P}_\delta=(\rho_\delta^{\gamma},\, 0)^T$, the equations for ${\bf u}_\delta=(u_\delta,\, v_\delta)^T$ can be written in the same form \beq\label{Dvec1dlce} \rho_\delta({\bf u}_\delta)_t+\rho_\delta u_\delta({\bf u}_\delta)_x+ ({\bf P_\delta})_x=\big(A(n_\delta)({\bf u}_\delta)_x\big)_x+(B_1(n_\delta))_x-B_2(n_\delta). \eeq Denote \beq\notag {\mathcal H}={\bf u}_x-A^{-1}(n){\bf P},\quad {\mathcal H}_\delta=({\bf u}_\delta)_x-A^{-1}(n_\delta){\bf P}_\delta. \eeq We have the following lemma. \begin{lemma}\label{lemma5.3} As $\delta\rightarrow 0$, it holds \beq\label{pfsec6.1} \rho_{\delta}{\mathcal H}_\delta\rightarrow \rho {\mathcal H},\mbox{ in }\mathcal D'((0,1)\times(0,T)). \eeq \end{lemma} \noindent{\bf Proof.}\quad The main difficulty of the proof arises from the fact that $\rho u\not\in L^2$. To overcome it, we mollify the density $\rho$ by $\langle\hat\rho\rangle_\sigma=\eta_\sigma*\hat\rho$, where $\eta_\sigma=\frac{1}{\sigma}\eta\left(\frac{\cdot}{\sigma}\right)$ is the standard mollifier and $\hat f$ denotes the zero extension of $f$ from $(0,1)$ to $\R$.
By Lemma 3.3 in \cite{feireisl04}, the zero extension $\hat\rho$ still satisfies the same equation \beq\label{rhohat} (\hat\rho)_t+(\hat\rho \hat u)_x=0, \quad \mbox{in }\mathcal D'(\R\times(0,T)). \eeq Denote $\tau^{\sigma}=(\langle\hat\rho\rangle_\sigma\hat u)_x-\langle(\hat \rho\hat u)_x\rangle_\sigma$. By Lemma 2.3 in \cite{lions96}, we know that $\tau^{\sigma}\in L^{\frac{2\gamma}{\gamma+1}}(\R\times(0,T))$, and as $\sigma\rightarrow 0$ \beq\label{taus} \tau^{\sigma}\rightarrow 0,\quad \mbox{in }L^1(\R\times(0,T)). \eeq Mollifying \eqref{rhohat} with $\eta_\sigma$, we obtain \beq\label{rhosigma} (\langle\hat\rho\rangle_\sigma)_t+(\langle\hat\rho\rangle_\sigma \hat u)_x=\tau^\sigma, \quad \mbox{in }\mathcal D'(\R\times(0,T)). \eeq Similarly, it also holds for the approximate solutions that \beq\label{rhods} (\langle\hat\rho_\delta\rangle_\sigma)_t+(\langle\hat\rho_\delta\rangle_\sigma \hat u_\delta)_x=\tau^\sigma_\delta, \quad \mbox{in }\mathcal D'(\R\times(0,T)), \eeq where $\tau^\sigma_\delta$ has the same form as $\tau^\sigma$, but with $\rho, u$ replaced by $\rho_\delta, u_\delta$. We also know that, for any $\delta>0$, $\tau^{\sigma}_\delta\in L^{\frac{2\gamma}{\gamma+1}}(\R\times(0,T))$, and as $\sigma\rightarrow 0$ \beq\label{tauds} \tau^{\sigma}_\delta\rightarrow 0,\quad \mbox{in }L^1(\R\times(0,T)). \eeq Multiplying the equation \eqref{Dvec1dlce} from the left by $\varphi\phi A^{-1}(n_\delta)\int_0^x\langle\hat\rho_\delta\rangle_\sigma$ for any $\varphi\in C^\infty_0(0,T)$ and $\phi\in C^\infty_0(0,1)$, and integrating by parts, we obtain \beq\notag \begin{split} &\int_0^T\int_0^1\varphi\phi{\mathcal H}_\delta\langle\hat\rho_\delta\rangle_\sigma\\ =&\int_0^T\int_0^1\varphi'\phi\rho_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma+\int_0^T\int_0^1\varphi\phi\rho_\delta A^{-1}(n_\delta){\bf u}_\delta\left(\int_0^x\langle\hat\rho_\delta\rangle_\sigma\right)_t\\ &+\int_0^T\int_0^1\varphi\phi\rho_\delta \big(A^{-1}(n_\delta)\big)_t{\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma+\int_0^T\int_0^1\varphi\phi'\rho_\delta u_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma\\ &+\int_0^T\int_0^1\varphi\phi\rho_\delta\langle\hat\rho_\delta\rangle_\sigma u_\delta A^{-1}(n_\delta){\bf u}_\delta+\int_0^T\int_0^1\varphi\phi\rho_\delta u_\delta \big(A^{-1}(n_\delta)\big)_x{\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma\\ &+\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)(B_1(n_\delta))_x\int_0^x\langle\hat\rho_\delta\rangle_\sigma-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)B_2(n_\delta)\int_0^x\langle\hat\rho_\delta\rangle_\sigma\\ &-\int_0^T\int_0^1\varphi\phi'{\mathcal H}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)\big(A(n_\delta)\big)_x{\mathcal H}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma. \end{split} \eeq The equation \eqref{rhods} implies $$ \frac{\partial}{\partial t}\left(\int_0^x\langle\hat\rho_\delta\rangle_\sigma\right)=-\langle\hat\rho_\delta\rangle_\sigma \hat u_\delta+\int_0^x\tau^\sigma_\delta.
$$ Using this fact, we have \beq\notag \begin{split} &\int_0^T\int_0^1\varphi\phi{\mathcal H}_\delta\langle\hat\rho_\delta\rangle_\sigma\\ =&\int_0^T\int_0^1\varphi'\phi\rho_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma+\int_0^T\int_0^1\varphi\phi\rho_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\tau^\sigma_\delta\\ &+\int_0^T\int_0^1\varphi\phi\rho_\delta \big(A^{-1}(n_\delta)\big)_t{\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma+\int_0^T\int_0^1\varphi\phi'\rho_\delta u_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma\\ &+\int_0^T\int_0^1\varphi\phi\rho_\delta u_\delta \big(A^{-1}(n_\delta)\big)_x{\bf u}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma\\ &+\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)(B_1(n_\delta))_x\int_0^x\langle\hat\rho_\delta\rangle_\sigma-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)B_2(n_\delta)\int_0^x\langle\hat\rho_\delta\rangle_\sigma\\ &-\int_0^T\int_0^1\varphi\phi'{\mathcal H}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)\big(A(n_\delta)\big)_x{\mathcal H}_\delta\int_0^x\langle\hat\rho_\delta\rangle_\sigma. \end{split} \eeq By the Lebesgue Dominated Convergence theorem and \eqref{tauds}, we may take the limit $\sigma\rightarrow 0$ and get \beq\label{pfsec6.3} \begin{split} &\int_0^T\int_0^1\varphi\phi{\mathcal H}_\delta\rho_\delta\\ =&\int_0^T\int_0^1\varphi'\phi\rho_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\rho_\delta +\int_0^T\int_0^1\varphi\phi\rho_\delta \big(A^{-1}(n_\delta)\big)_t{\bf u}_\delta\int_0^x\rho_\delta\\ &+\int_0^T\int_0^1\varphi\phi'\rho_\delta u_\delta A^{-1}(n_\delta){\bf u}_\delta\int_0^x\rho_\delta+\int_0^T\int_0^1\varphi\phi\rho_\delta u_\delta \big(A^{-1}(n_\delta)\big)_x{\bf u}_\delta\int_0^x\rho_\delta\\ &+\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)(B_1(n_\delta))_x\int_0^x\rho_\delta-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)B_2(n_\delta)\int_0^x\rho_\delta\\ &-\int_0^T\int_0^1\varphi\phi'{\mathcal H}_\delta\int_0^x\rho_\delta-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)\big(A(n_\delta)\big)_x{\mathcal H}_\delta\int_0^x\rho_\delta. \end{split} \eeq By the definition of $B_2(n_\delta)$ and integration by parts, we obtain \beq\label{pfsec6.6} \begin{split} &-\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)B_2(n_\delta)\int_0^x\rho_\delta\\ =&\frac12\int_0^T\int_0^1\varphi\phi' A^{-1}(n_\delta)\big(|(n_\delta)_x|^2,\, 0\big)^T\int_0^x\rho_\delta +\frac12\int_0^T\int_0^1\varphi\phi\big(A^{-1}(n_\delta)\big)_x\big(|(n_\delta)_x|^2,\, 0\big)^T\int_0^x\rho_\delta\\ &+\frac12\int_0^T\int_0^1\varphi\phi \rho_\delta A^{-1}(n_\delta)\big(|(n_\delta)_x|^2,\, 0\big)^T. \end{split} \eeq By the definition of $B_1(n_\delta)$, we obtain \beq\notag \begin{split} &\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)\big(B_1(n_\delta)\big)_x\int_0^x\rho_\delta\\ =&\int_0^T\int_0^1\varphi\phi \left(A^{-1}(n_\delta)B_1(n_\delta)\right)_x\int_0^x\rho_\delta -\int_0^T\int_0^1\varphi\phi \left(A^{-1}(n_\delta)\right)_xB_1(n_\delta)\int_0^x\rho_\delta. \end{split} \eeq It is not hard to see that there is a vector function $\mathcal F(n_\delta)$ (smooth in $n_\delta$) such that \beq\notag A^{-1}(n_\delta)B_1(n_\delta)=\mathcal F_t(n_\delta)+u_\delta\mathcal F_x(n_\delta). 
\eeq Then \beq\label{pfsec6.7} \begin{split} &\int_0^T\int_0^1\varphi\phi A^{-1}(n_\delta)\big(B_1(n_\delta)\big)_x\int_0^x\rho_\delta\\ =&-\int_0^T\int_0^1\varphi'\phi \mathcal F_x(n_\delta)\int_0^x\rho_\delta-\int_0^T\int_0^1\varphi\phi' u_\delta\mathcal F_x(n_\delta)\int_0^x\rho_\delta\\ &-\int_0^T\int_0^1\varphi\phi \left(A^{-1}(n_\delta)\right)_xB_1(n_\delta)\int_0^x\rho_\delta. \end{split} \eeq To estimate the second term on the right side of \eqref{pfsec6.3}, we use $\varphi\phi n_\delta$ as a test function in the first equation of \eqref{lcedelta} to obtain \beq\notag \int_0^T\int_0^1\varphi\phi\rho_\delta (n_\delta)_t =-\int_0^T\int_0^1\varphi'\phi\rho_\delta n_\delta -\int_0^T\int_0^1\varphi\rho_\delta u_\delta (n_\delta\phi)_x. \eeq Similarly, it holds \beq\notag \int_0^T\int_0^1\varphi\phi\rho n_t =-\int_0^T\int_0^1\varphi'\phi\rho n -\int_0^T\int_0^1\varphi\rho u (n\phi)_x. \eeq Taking the difference, and using \eqref{pfsec5.1}, \eqref{pfsec5.12} and \eqref{pfsec5.9}, we have \beq\label{pfsec6.8} \rho_{\delta}(n_{\delta})_t\rightarrow \rho n_t,\mbox{ in }\mathcal D'((0,1)\times(0,T)). \eeq Furthermore, since $$ \int_0^x\rho_\delta\in L^\infty(0,T;W^{1,\gamma}), \quad \frac{\partial}{\partial t}\left(\int_0^x\rho_\delta\right)=-\rho_\delta u_\delta\in L^\infty\left(0,T;L^{\frac{2\gamma}{\gamma+1}}\right) $$ we obtain by Lemma \ref{lemma5.1} and \eqref{pfsec5.1} \beq\label{pfsec6.4} \int_0^x\rho_\delta\rightarrow \int_0^x\rho, \quad \mbox{in }C([0,1]\times[0,T]),\quad \mbox{as }\delta\rightarrow 0. \eeq Now we are ready to take the limit in \eqref{pfsec6.3}. Letting $\delta\rightarrow 0$ in \eqref{pfsec6.3}, \eqref{pfsec6.6} and \eqref{pfsec6.7}, and using the facts \eqref{pfsec6.4}, \eqref{pfsec6.8}, \eqref{pfsec5.1}-\eqref{pfsec5.3}, \eqref{pfsec5.12}, \eqref{pfsec5.9} and \eqref{pfsec5.15}, we obtain \beq\label{pfsec6.5} \begin{split} &\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1\varphi\phi{\mathcal H}_\delta\rho_\delta\\ =&\int_0^T\int_0^1\varphi'\phi\rho A^{-1}(n){\bf u}\int_0^x\rho +\int_0^T\int_0^1\varphi\phi\rho \big(A^{-1}(n)\big)_t{\bf u}\int_0^x\rho\\ &+\int_0^T\int_0^1\varphi\phi'\rho u A^{-1}(n){\bf u}\int_0^x\rho+\int_0^T\int_0^1\varphi\phi\rho u \big(A^{-1}(n)\big)_x{\bf u}\int_0^x\rho\\ &+\int_0^T\int_0^1\varphi\phi A^{-1}(n)(B_1(n))_x\int_0^x\rho-\int_0^T\int_0^1\varphi\phi A^{-1}(n)B_2(n)\int_0^x\rho\\ &-\int_0^T\int_0^1\varphi\phi'{\mathcal H}\int_0^x\rho-\int_0^T\int_0^1\varphi\phi A^{-1}(n)\big(A(n)\big)_x{\mathcal H}\int_0^x\rho. \end{split} \eeq We may go through the same arguments for $\rho$ and $u$, and show that the right side of \eqref{pfsec6.5} is exactly $$ \int_0^T\int_0^1\varphi\phi{\mathcal H}\rho, $$ which completes the proof of the lemma. \hfill$\Box$ We also need the following result. \begin{lemma}(\cite{feireisl04})\label{lemma5.5} Let $O\subset \mathbb R^n$ be a measurable set and $f_k \in L^1(O;\R^N)$ for $k\in \mathbb Z_+$ such that $$ f_k\rightharpoonup f, \quad \mbox{in }\ L^1(O;\R^N). $$ Let $\Phi:\R^N\rightarrow (-\infty,\infty]$ be a lower semi-continuous convex function such that $\Phi(f_k)\in L^1(O)$ for any $k$ and $$ \Phi(f_k)\rightharpoonup \overline{\Phi(f)}, \quad \mbox{in }\ L^1(O). $$ Then $$ \Phi(f)\leq \overline{\Phi(f)}, \quad a.e.\ \mbox{in }\ O. $$ Moreover, if $\Phi$ is strictly convex on an open convex set $U\subset \R^N$ and $$ \Phi(f)= \overline{\Phi(f)}, \quad a.e.\ \mbox{in }\ O, $$ then $$ f_k\rightarrow f, \quad \mbox{for }\ a.e.\ y\in \{y\in O\,|\, f(y)\in U\}.
$$ \end{lemma} \medskip The proof of Theorem \ref{mainth1} will be completed by the following lemma. \begin{lemma}\label{lemma5.4} As $\delta\rightarrow 0$, it holds \beq\label{pfsec6.2} \lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1\rho_\delta\log(\rho_\delta) =\int_0^T\int_0^1\rho \log\rho. \eeq \end{lemma} \noindent{\bf Proof.}\quad By Proposition 4.2 in \cite{fnp01}, if $\rho\in L^2((0,1)\times(0,T))$ and $u\in L^2(0,T;H_0^1)$ solve the equation \beq\notag \rho_t+(\rho u)_x=0, \quad \mbox{in } \mathcal D'((0,1)\times(0,T)) \eeq then \beq\label{pssec8.1} (b(\rho))_t+(b(\rho)u)_x+(b'(\rho)\rho-b(\rho))u_x=0, \quad \mbox{in } \mathcal D'((0,1)\times(0,T)) \eeq for any $b\in C^1(\mathbb R)$ such that $b'(x)\equiv 0$ for all large enough $x\in \mathbb R$. For any positive integers $j, K$, we may take a family of functions $b_K^j\in C^1(\R)$ with \beq\notag b_K^j(x)=\left\{ \begin{array}{ll} \displaystyle\left(x+\frac{1}{j}\right)\log\left(x+\frac{1}{j}\right),&\quad\mbox{if }0\leq x\leq K,\\ \displaystyle\left(K+1+\frac{1}{j}\right)\log\left(K+1+\frac{1}{j}\right),&\quad\mbox{if }x\geq K+1. \end{array} \right. \eeq Since $\rho\in L^\infty(0,T;L^{\gamma})$, we have $\rho<\infty$ a.e. in $(0,1)\times(0,T)$. This implies that $b_K^j(\rho)\rightarrow (\rho+\frac1j)\log(\rho+\frac1j)$ a.e. in $(0,1)\times(0,T)$ as $K\rightarrow \infty$. Hence, by using the Lebesgue Dominated Convergence theorem, we conclude \beq\label{pfsec8.2} \left(\left(\rho+\frac1j\right)\log\left(\rho+\frac1j\right)\right)_t+\left(\left(\rho+\frac1j\right)\log\left(\rho+\frac1j\right)u\right)_x+\Big(\rho -\frac1j\log\big(\rho+\frac1j\big)\Big)u_x=0, \eeq in $\mathcal D'((0,1)\times(0,T))$. It is easy to see that $\left(\rho+\frac1j\right)\log\left(\rho+\frac1j\right)\in L^2((0,1)\times (0,T))$ since $\rho\in L^{2\gamma}((0,1)\times (0,T))$. By Lemma 3.3 in \cite{feireisl04}, the zero-extension of $\rho$ outside $(0,1)$ satisfies the same equation. By mollification, integration by parts and a limiting process, we may take the test function to be the constant $1$, so that \beq\label{pfsec8.3} \begin{split} &\int_0^T\int_0^1\rho u_x\\ =&\int_0^1\left(\rho_0+\frac1j\right)\log\left(\rho_0+\frac1j\right)-\int_0^1\left(\rho+\frac1j\right)\log\left(\rho+\frac1j\right)(T)\\ &+\frac1j\int_0^T\int_0^1u_x\log\left(\rho+\frac1j\right). \end{split} \eeq Similar estimates are valid for the approximate solutions $\rho_\delta$, $u_\delta$. More precisely, we have \beq\label{pfsec8.4} \left(\rho_\delta\log\left(\rho_\delta\right)\right)_t+\left(\rho_\delta\log\left(\rho_\delta\right)u_\delta\right)_x+\rho_\delta (u_\delta)_x=0, \eeq in $\mathcal D'((0,1)\times(0,T))$, and \beq\label{pfsec8.5} \int_0^T\int_0^1\rho_\delta (u_\delta)_x =\int_0^1\rho_0^\delta\log\left(\rho_0^\delta\right) -\int_0^1\rho_\delta\log\left(\rho_\delta\right)(T). \eeq Since $\rho_\delta\in L^\infty(0,T;L^\gamma),$ we have \beq\notag \rho_\delta\log\left(\rho_\delta\right)\in L^\infty(0,T;L^{\tilde\gamma}) \eeq for any $1<\tilde{\gamma}<\gamma$. By the equation \eqref{pfsec8.4}, we obtain \beq\notag \left(\rho_\delta\log\left(\rho_\delta\right)\right)_t\in L^{\frac{2\gamma}{\gamma+1}}(0,T;W^{-1,\frac{2\gamma}{\gamma+1}}). \eeq By Lemma \ref{lemma5.2}, we conclude that, as $\delta\rightarrow 0$, \beq\notag \rho_\delta\log\left(\rho_\delta\right) \rightarrow \overline{\rho\log\left(\rho\right)},\quad \mbox{in } C([0,T]; L^{\tilde\gamma}-\omega).
\eeq This implies \beq\label{pfsec8.10} \lim\limits_{\delta\rightarrow 0}\int_0^1\rho_\delta\log\left(\rho_\delta\right)(T) = \int_0^1\overline{\rho\log\left(\rho\right)}(T). \eeq Since the function $x\log\left(x\right)$ is convex for $x>0$, Lemma \ref{lemma5.5} implies that \beq\label{pfsec8.6} \rho\log\left(\rho\right)\leq \overline{\rho\log\left(\rho\right)}, \quad \mbox{a.e. in } (0,1)\times(0,T). \eeq Subtracting \eqref{pfsec8.5} from \eqref{pfsec8.3} and sending $\delta\rightarrow 0$, we have \beq\label{pfsec8.7} \begin{split} &\int_0^1\overline{\rho\log\left(\rho\right)}(T)-\int_0^1\left(\rho+\frac1j\right)\log\left(\rho+\frac1j\right)(T)\\ =&\int_0^1\rho_0\log\left(\rho_0\right)-\int_0^1\left(\rho_0+\frac1j\right)\log\left(\rho_0+\frac1j\right)\\ &+\int_0^T\int_0^1\rho (u)_x-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1\rho_\delta (u_\delta)_x-\frac1j\int_0^T\int_0^1u_x\log\left(\rho+\frac1j\right). \end{split} \eeq The difference of the two terms involving $u_x$ and $(u_\delta)_x$ on the right-hand side can be estimated as follows \beq\label{pfsec8.8} \begin{split} &\int_0^T\int_0^1\rho (u)_x-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1\rho_\delta (u_\delta)_x\\ =&\int_0^T\int_0^1\rho (u)_x-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1\rho_\delta {\mathcal H}_\delta^1-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n_\delta)\rho_\delta^{\gamma+1}\\ =&\int_0^T\int_0^1\rho (u)_x-\int_0^T\int_0^1\rho {\mathcal H}^1-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n_\delta)\rho_\delta^{\gamma+1}\\ =&\int_0^T\int_0^1\rho A^{-1}_{11}(n)\overline{\rho^{\gamma}}-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n)\rho_\delta^{\gamma+1}-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 \left(A^{-1}_{11}(n_\delta)-A^{-1}_{11}(n)\right)\rho_\delta^{\gamma+1}\\ =&\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n)\left(\rho\overline{\rho^{\gamma}}- \rho_\delta^{\gamma+1}\right), \end{split} \eeq where we have used Lemma \ref{lemma5.3} in the second equality, and \eqref{pfsec5.9}, $\gamma>1$, and \eqref{estrho2g} in the last step. Here ${\mathcal H}^1$ is the first element of ${\mathcal H}$, and $A^{-1}_{11}(\cdot)$ is the $(1,1)$ element of the inverse matrix $A^{-1}(\cdot)$. By the estimate \eqref{postiveA} and the properties of positive definite $2\times 2$ matrices, $A^{-1}_{11}(\cdot)>0$. Since $\rho, \rho_\delta\geq 0$, it is not hard to verify that \beq\notag |\rho-\rho_\delta|^{\gamma+1}= |\rho-\rho_\delta|^{\gamma}\,|\rho-\rho_\delta| \leq \left(\rho^\gamma-\rho_\delta^\gamma\right)(\rho-\rho_\delta). \eeq Thus \beq\label{pfsec8.9} \begin{split} &\overline{\lim\limits_{\delta\rightarrow 0}}\int_0^T\int_0^1 A^{-1}_{11}(n)|\rho-\rho_\delta|^{\gamma+1}\\ \leq &\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n) \left(\rho^\gamma-\rho_\delta^\gamma\right)(\rho-\rho_\delta)\\ =&\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n) \left(\rho^{\gamma+1}-\rho^{\gamma}\rho_\delta-\rho_\delta^\gamma\rho+\rho_\delta^{\gamma+1}\right)\\ =&\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n) \left(\rho_\delta^{\gamma+1}-\rho\overline{\rho^{\gamma}}\right)+\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n) \left(\rho^{\gamma+1}-\rho^{\gamma}\rho_\delta-\rho_\delta^\gamma\rho+\rho\overline{\rho^{\gamma}}\right)\\ =&\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1 A^{-1}_{11}(n) \left(\rho_\delta^{\gamma+1}-\rho\overline{\rho^{\gamma}}\right).
\end{split} \eeq Substituting \eqref{pfsec8.9} into \eqref{pfsec8.8}, we have \beq\notag \int_0^T\int_0^1\rho (u)_x-\lim\limits_{\delta\rightarrow 0}\int_0^T\int_0^1\rho_\delta (u_\delta)_x\leq 0. \eeq Combining this inequality with \eqref{pfsec8.7}, we conclude that \beq\notag \begin{split} &\int_0^1\overline{\rho\log\left(\rho\right)}(T)-\int_0^1\left(\rho+\frac1j\right)\log\left(\rho+\frac1j\right)(T)\\ \leq &\int_0^1\rho_0\log\left(\rho_0\right)-\int_0^1\left(\rho_0+\frac1j\right)\log\left(\rho_0+\frac1j\right)-\frac1j\int_0^T\int_0^1u_x\log\left(\rho+\frac1j\right). \end{split} \eeq Sending $j\rightarrow \infty$, we obtain that \beq\notag \begin{split} \int_0^1\overline{\rho\log\left(\rho\right)}(T)-\int_0^1\rho\log\left(\rho\right)(T)\leq 0. \end{split} \eeq This and \eqref{pfsec8.6} imply that $\overline{\rho\log\left(\rho\right)}=\rho\log\left(\rho\right)$, which, combined with \eqref{pfsec8.10}, implies \eqref{pfsec6.2}. \hfill$\Box$ \bigskip Combining Lemma \ref{lemma5.4} with Lemma \ref{lemma5.5}, and using the strict convexity of $\rho\log\rho$ for $\rho> 0$, we know that \beq\notag \rho_\delta\rightarrow\rho, \quad \mbox{a.e. in } (0,1)\times(0,T). \eeq It follows from the Egorov theorem that for any $\epsilon>0$, there is $I_{\epsilon}\subset (0,1)\times(0,T)$ such that $|\big((0,1)\times(0,T)\big)\setminus I_{\epsilon}|<\epsilon$ and $$ \sup\limits_{(x,t)\in I_{\epsilon}} |\rho_\delta(x,t)-\rho(x,t)|\rightarrow 0, \quad \mbox{as }\delta\rightarrow 0. $$ Since $\rho_\delta$ is uniformly bounded in $L^{2\gamma}$, we can estimate \begin{eqnarray*} \notag \int_0^T\int_0^1|\rho_\delta-\rho|^{\gamma} &\leq& \sup\limits_{(x,t)\in I_{\epsilon}} |\rho_\delta(x,t)-\rho(x,t)|^{\gamma}\,|I_\epsilon| +C\big|\big((0,1)\times(0,T)\big)\setminus I_{\epsilon}\big|^{\frac12}\|\rho_\delta-\rho\|_{L^{2\gamma}}^{\gamma}. \end{eqnarray*} The first term on the right tends to $0$ as $\delta\rightarrow 0$ for fixed $\epsilon$, while the second is bounded by $C\epsilon^{\frac12}$; since $\epsilon>0$ is arbitrary, we conclude that $\rho_\delta\rightarrow\rho$ in $L^{\gamma}((0,1)\times(0,T))$ as $\delta\rightarrow 0$. This implies that $\overline{\rho^\gamma}=\rho^\gamma$ a.e. in $(0,1)\times (0,T)$. This completes the proof of Theorem \ref{mainth1}. \hfill$\Box$ \bigskip
\section{Introduction} \label{sec:intro} Dark matter could be a single species of particles with only gravitational interactions as in the cosmological standard model, $\Lambda$CDM. Alternatively, it might have multiple components. If there is a dominant non-interacting component then other components can have interesting non-gravitational interactions. Recent observations of the Cosmic Microwave Background (CMB) and matter power spectrum (MPS) are already sensitive to non-standard dark matter components which comprise only a few \% of the total; Stage 4 experiments will be able to push the sensitivity below the percent level. Interestingly, precision fits with current cosmological data show some tension with predictions of $\Lambda$CDM for the expansion rate of the universe $H_0$ \cite{Riess:2016jrr,Bonvin:2016crt} and the amplitude of fluctuations in the MPS on galaxy cluster scales, $\sigma_8$ \cite{Heymans:2013fya,Joudaki:2016mvz,Ade:2015fva,Ade:2013lmv,Kohlinger:2017sxk,Joudaki:2017zdt}\footnote{For recent work motivated by these discrepancies see \cite{Buen-Abad:2015ova,Lesgourgues:2015wza,Chacko:2016kgg,Poulin:2016nat,MacCrann:2014wfa,Canac:2016smv,Bernal:2016gxb,Chudaykin:2016yfk,Archidiacono:2016kkh,Joudaki:2016kym,Buen-Abad:2017gxg,Raveri:2017jto,Lancaster:2017ksf,Oldengott:2017fhy,Ko:2016fcd,Ko:2016uft,Ko:2017uyb,Chacko:2018vss,Poulin:2018zxs,Pan:2018zha}.}. Motivated by the significant projected improvement in measurements of the MPS, we propose and explore the possibility that a small component of dark matter is ``cannibalistic''. Cannibal dark matter consists of massive particles with an efficient number-changing self-interaction \cite{Carlson:1992fn}. The most important process that such interactions mediate is from three particles in the initial state to two particles in the final state. In such a \ensuremath{3\rightarrow 2\ } process mass is turned into kinetic energy of the outgoing particles, which heats the gas\footnote{Cannibals cannot constitute the entirety of the dark matter in the Universe precisely because they are heated up by their self-interactions and that interferes with the formation of structure \cite{Machacek:1994vg,deLaix:1995vi}. Proposed solutions to this problem are to let cannibalism end much before matter domination \cite{Soni:2016gzf,Pappadopulo:2016pkp,Dey:2016qgf,Bernal:2015ova} or to cool it through couplings to the Standard Model, like in the ELDER \cite{Kuflik:2015isi} or SIMP \cite{Hochberg:2014dra} paradigms. The SIMP mechanism has been the object of intense study in recent years \cite{Hochberg:2014kqa,Bernal:2015bla,Bernal:2015xba,Bernal:2017mqb,Lee:2015gsa,Kamada:2016ois,Choi:2017mkk,Choi:2017zww,Choi:2018iit}.}. If there are also rapid \ensuremath{2\rightarrow 2\ } interactions, the cannibalizing particle gas remains in thermal and chemical equilibrium, and can be described by the Boltzmann distribution with a temperature $T(a)$ and vanishing chemical potential. Because of the cannibalization process the temperature drops only logarithmically with the scale factor, $T/m \sim 1/\log a$. This is very different from the case of non-relativistic matter, which cools very quickly, $T/m \sim 1/a^2$. Cannibal matter also has an unusual scaling of its number and energy densities. The number density dilutes like $n_\mathrm{can}\sim 1/(a^3 \log a)$, where the $1/a^3$ is the usual volume dilution while the $1/\log a$ comes from the cannibalization.
Ignoring kinetic energy, the energy density is then simply $\rho_\mathrm{can} \approx m\, n_\mathrm{can}$. Thus the energy density of cannibals dilutes at a rate intermediate between that of ordinary matter, for which $\rho_{m} \sim 1/a^3$, and that of radiation, for which $\rho_{r} \sim 1/a^4$. Note that for these scalings to hold it is necessary that the cannibal particles are isolated from all other sectors, \textit{i.e.}\ have no significant interactions with them, so that any heat produced by cannibalization does not dissipate to other sectors. We now discuss the impact of the cannibal fluid on cosmology with particular attention to the MPS. First, note that since the cannibal temperature decays very slowly, the cannibal fluid has significant pressure, $P/\rho \approx T/m$. This pressure prevents growth of density perturbations in the cannibal fluid; instead one obtains ``cannibal acoustic oscillations''. Overdensities in the cannibal fluid remain small and make only negligible contributions to the gravitational potential. On the other hand, the cannibal fluid does contribute to the overall energy density of the universe, which determines the Hubble expansion rate. Since the gravitational potential drives the growth of structure whereas the Hubble expansion acts to slow it (``Hubble friction''), the net effect of the cannibal fluid is to suppress the MPS. This is the main result of our paper. In Section \ref{sec:cosmo} we derive this result quantitatively. The connection to the physical explanation in the previous paragraph will become clear after we derive the M\'{e}sz\'{a}ros equation for the growth of cold dark matter (CDM) perturbations $\delta_\mathrm{cdm}$ in the presence of the cannibal fluid: \begin{equation}\label{eq:cdmperts} a^2 \delta_\mathrm{cdm}'' + \frac{3}{2} a \delta_\mathrm{cdm}' - \frac{3}{2} \frac{\rho_\mathrm{cdm}}{\rho_\mathrm{cdm}+\rho_\mathrm{can}} \delta_\mathrm{cdm} = 0 \ . \end{equation} This equation is valid during matter domination and for perturbations which are deep inside the horizon.% \footnote{We have simplified further by dropping terms which are suppressed by $T/m$ of the cannibals.} Here the derivatives are with respect to the scale factor $a$, and $\rho_\mathrm{cdm}$ and $\rho_\mathrm{can}$ are the background (average) energy densities of the cold dark matter and the cannibals, respectively. For zero cannibal energy density this has the usual linear growth of the matter perturbations $\delta_\mathrm{cdm} \sim a$ as a solution. Expanding for a small energy density in cannibals, $\rho_\mathrm{can} \ll \rho_\mathrm{cdm}$, one finds a suppressed rate of growth: $\delta_\mathrm{cdm} \sim a^{1-\gamma}$ with $\gamma = \frac35\, \rho_\mathrm{can}/ \rho_\mathrm{cdm}$. Given that current data suggest a suppression of matter perturbations by $\sim 5\%$ and that the universe expands by a factor of $a_{\rm today}/a_{\rm equality} \sim 10^3$ during matter domination, we see that the preferred parameter space should have on the order of 1\% of matter in cannibals, \textit{i.e.}\ a fraction $\rho_\mathrm{can}/ \rho_\mathrm{cdm} \sim 1\%$, which slowly changes in time due to the extra $1/\log a$ in $\rho_\mathrm{can}$. The minimal field theoretic model which exhibits cannibalism has a real scalar field with the Lagrangian \begin{eqnarray}\label{eq:lagrange} \mathcal{L}=\frac12 (\partial \phi)^2 - \frac12 m^2 \phi^2 - \kappa_3 m \lambda\frac{\phi^3}{3!} - \kappa_4 \lambda^2 \frac{\phi^4}{4!} \ .
\end{eqnarray} In this minimal cannibal (MC) model $m$ is the mass of the particle, $\lambda$ denotes the overall strength of $\phi$-interactions and $\kappa_{3,4}$ are numbers which we will take to be of order 1. The interactions mediate $\phi$-number preserving $\phi\phi \rightarrow \phi \phi$ processes as well as $\phi$-number changing processes such as $\phi\phi\phi \rightarrow \phi\phi$ (with a rate proportional to $\lambda^6$). At temperatures above the $\phi$ mass the $\phi$ particles can be described by an interacting relativistic fluid in equilibrium. Once the $\phi$-fluid cools below the mass of the particles, the \ensuremath{3\rightarrow 2\ } cannibalism interaction starts converting mass into heat. This slows the cooling of the fluid. The fluid remains in thermal equilibrium during cannibalization because the \ensuremath{2\rightarrow 2\ } interactions are very rapid compared with the cannibal interactions and with the expansion rate of the universe, and they rethermalize the fluid. Furthermore, since the $\phi$ particles are isolated from all other fluids (such as the Standard Model and the cold dark matter) and heat cannot be dissipated to the other sectors, the comoving entropy in the $\phi$-fluid is conserved. Eventually, at late times, the number density of $\phi$ particles becomes too small for the \ensuremath{3\rightarrow 2\ } interactions to compete with the expansion rate and they turn off, bringing cannibalism to an end. At that point the surviving particles become cold dark matter, their number density diluting with the volume and their temperature dropping rapidly in proportion to $1/a^2$. This thermal history is summarized in the following table: the $\phi$-fluid cools like radiation while its temperature is above the $\phi$ mass; at $a\sim a_\mathrm{can}$ it enters the cannibalistic phase, where the temperature drops logarithmically; and at $a\sim a_\mathrm{nr}$ the \ensuremath{3\rightarrow 2\ } interactions decouple and it cools like ordinary non-relativistic matter. \begin{eqnarray} \begin{array}{c|c|c}\label{tab:scaling} {\rm relativistic} & {\rm cannibal} & {\rm non\!-\!relativistic} \\ a<a_\mathrm{can} & a_\mathrm{can}<a<a_\mathrm{nr} & a_\mathrm{nr}<a \\ \hline T\sim 1/a & T\sim {1}/{\log a } & T\sim 1/a^2 \\ \rho \sim 1/a^4 & \rho \sim {1}/(a^3 \log a) & \rho \sim 1/a^3 \end{array} \nonumber \end{eqnarray} In \Fig{fig:temp} we plot the temperature-to-mass ratio as a function of the scale factor for an example point in the parameter space of the minimal cannibal model. Note the transition from relativistic behavior to cannibalism at $T/m\sim1/3 \ \leftrightarrow \ a_\mathrm{can} \sim 10^{-6}$ and the decoupling transition to non-relativistic matter at $a_\mathrm{nr} \sim 10^{-1}$. The ratio of scale factors between the start and end of the cannibalistic phase, $a_\mathrm{nr}/a_\mathrm{can}\sim 10^5$, depends on the strength of the interaction $\lambda$. We will be interested in models where $\lambda$ is strong (between 1 and $4\pi$); then the duration of cannibalism $a_\mathrm{nr}/a_\mathrm{can}$ is between $10^{4}$ and $10^{5}$, with only a mild dependence on other model parameters. \begin{figure}[!htbp]% \centering \includegraphics[width=0.55\textwidth]{temp.pdf} \caption{Temperature-to-mass ratio as a function of the scale factor $a$ for the minimal cannibal (MC) model.
The temperature drops like $1/a$ while the particles are relativistic, it drops logarithmically in $a$ while the particles cannibalize, and it drops like $1/a^2$ after the cannibalizing interaction decouples and the particles cool like ordinary non-relativistic matter. The temperature curve shown here was found by solving the background equations (\ref{eq:A32}) numerically and includes the decoupling of \ensuremath{3\rightarrow 2\ } interactions.}% \label{fig:temp} \end{figure} \begin{figure}[!htbp]% \centering \includegraphics[width=0.75\textwidth]{3rhos.pdf} \caption{Energy densities for MC models with mass and entropy chosen such that $\rho_\mathrm{can} < \rho_{\Lambda\mathrm{CDM}}$. An MC model for which cannibalism occurs throughout matter domination is shown in green, with its characteristic $\rho_\mathrm{can} \sim 1/(a^3 \log a)$ dilution. The orange model has a late onset of cannibalism, making the $\phi$-fluid behave like radiation throughout most of the history of the Universe. In the blue model the cannibalism phase is shifted to very early times, so that cannibalism stops before matter domination. Then the $\phi$-fluid behaves like cold dark matter. For comparison, we also show the total energy density in the components of $\Lambda$CDM (black).} \label{fig:3rhos} \end{figure} From the preceding discussion it is clear that we can choose parameters in the cannibal sector such that the cannibalistic phase overlaps with the matter-dominated era of the universe. This choice of parameters is the most interesting because then the cannibals suppress the matter power spectrum. We dedicate most of this paper to its study. In \Fig{fig:3rhos} we show the evolution of the energy density of the cannibal fluid (green) in a model where the cannibal transition happens at $a_\mathrm{can} \sim 10^{-5}$ and decoupling at $a_\mathrm{nr} \sim 1$. For comparison we show the total energy density in the $\Lambda$CDM components (black) with its radiation-, then matter-, and finally cosmological constant-dominated scale dependence. We also show the energy densities for two different MC models: one where the cannibal transition happens well after matter-radiation equality (orange), so that the cannibals act as radiation while they have significant energy densities. Such a model is indistinguishable from a model with extra neutrinos $\Delta N_\mathrm{eff}$. The other model (blue) is one in which the cannibal transition happens so early that the cannibal interactions already decouple before matter-radiation equality. Then the cannibals behave like ordinary cold dark matter. The MC model in \Eq{eq:lagrange} is ugly because the cannibal mass is unprotected from quadratically divergent quantum corrections and has a naturalness problem. Fortunately, natural UV completions are easy to construct. Our favorite is a simple non-Abelian gauge sector without matter (\textit{i.e.}\ pure-glue). Such a model has a single coupling constant, the gauge coupling. The theory is asymptotically free in the UV. The gauge coupling becomes strong in the IR, the theory confines, and the spectrum is one of glueball resonances. The effective low-energy description below the confinement scale is the MC model \Eq{eq:lagrange}, where $\phi$ is the lightest glueball, $m$ is its mass, and $\lambda \sim 4\pi$. In addition to the renormalizable interactions shown in \Eq{eq:lagrange} one also obtains higher-dimensional couplings of the form $\lambda^{n-2} \phi^n/m^{n-4}$, which contribute to scattering with the same parametrics as the renormalizable couplings.
The cannibalism phase is not sensitive to the precise form of the interactions: what matters is that the number-changing transitions are faster than the Hubble expansion. Then the cannibal fluid remains in thermal and chemical equilibrium and its evolution becomes independent of the details of the spectrum of glueballs and interactions. Note also that this UV completion very naturally explains the absence of couplings between $\phi$ and the Standard Model. In the UV theory gauge invariance forbids any renormalizable coupling between the two sectors. We describe such UV completions and study the dependence of our results on the UV completion in Section \ref{sec:simplest}. Finally, we do not consider, but cannot resist mentioning, the possibility that the cold dark matter required in our model might be ``Baryons'' or ``Mesons'' made of heavy dark quarks charged under the dark gauge group \cite{Boddy:2014qxa}, although important details of the confining phase transition and entry into the cannibal phase would change from what we study in this paper. We study the MC model of \Eq{eq:lagrange} and its thermal history in \Sec{sec:thermo}, where we also estimate the boundaries of the preferred parameter space. Within these boundaries we compute the effects of cannibalism on the matter power spectrum in \Sec{sec:cosmo}. \Sec{sec:simplest} gives possible UV realizations of the MC model in terms of simple confining (pure-glue) non-Abelian gauge theories. We also study the dependence of our results on the UV completion of the MC model. In the Conclusions (\Sec{sec:conc}) we discuss the shape of the predicted MPS as a function of model parameters. We review the derivation of the background and perturbation equations for the cannibal fluid starting from the Boltzmann equation in an Appendix; our results agree with those given in \cite{Ma:1995ey}. \section{The minimal cannibal: thermal history and parameters} \label{sec:thermo} In this Section we study the thermal history of the MC model fluid, identify the most useful parameters to describe it, and explore the resulting parameter space. In order to do this, we need to consider the properties of the cannibal fluid. During its relativistic and cannibalistic phases the $\phi$-fluid is in both thermal and chemical equilibrium. This means that its phase space distribution function $f(p,a)$ is entirely parameterized by the mass of the particles $m$ and the temperature $T$ of the fluid\footnote{Here $T(a)$ denotes the temperature of the cannibal fluid which may be different from the temperature of the Standard Model (e.g. photons).}: \begin{equation} f(p,a)=\frac{1}{e^{E/T(a)}-1} \ , \label{eq:distribution} \end{equation} where $E=\sqrt{m^2 + p^2}$ is the energy of the $\phi$ particles. Here we only consider the homogeneous and isotropic background of the cannibal fluid, which means that $f$ does not depend on position. We will study $x$-dependent perturbations about this background in the following Section. The time dependence of $f$, encoded in the scale factor $a(t)$, arises solely from that of the temperature. All other background quantities that describe the $\phi$-fluid (such as energy and number densities) are momentum integrals of $f$, and therefore they depend on the two parameters $m$ and $T(a)$. Since the cannibal fluid has no interactions with other fluids its (comoving) entropy $S_\mathrm{can}$ is conserved. This makes $S_\mathrm{can}$ a useful parameter of the MC model.
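As an illustration of this point (a minimal numerical sketch, not the code used for our results), the momentum integrals defining the number density, energy density and pressure of a single bosonic degree of freedom can be evaluated directly from \Eq{eq:distribution}; the equation of state then interpolates between $w=1/3$ for $T\gg m$ and $w\approx T/m$ deep in the cannibal regime. The integration cutoff and the sample temperatures below are illustrative choices. \begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Background quantities of the phi-fluid as momentum integrals of the
# equilibrium distribution f(p) = 1/(exp(E/T) - 1), for one bosonic
# degree of freedom. Units: m = 1. Illustrative sketch only.

def background(T, m=1.0):
    E = lambda p: np.sqrt(p**2 + m**2)
    f = lambda p: 1.0 / np.expm1(E(p) / T)
    pmax = 30.0 * max(T, m)          # integrand is negligible beyond this
    n   = quad(lambda p: p**2 * f(p),        0, pmax)[0] / (2 * np.pi**2)
    rho = quad(lambda p: p**2 * E(p) * f(p), 0, pmax)[0] / (2 * np.pi**2)
    P   = quad(lambda p: p**4 / E(p) * f(p), 0, pmax)[0] / (6 * np.pi**2)
    return n, rho, P

for T in [10.0, 1.0, 0.1]:   # relativistic -> cannibalistic regime
    n, rho, P = background(T)
    print(f"T/m = {T:5.2f}   w = P/rho = {P / rho:.4f}")
\end{verbatim} In particular, the numerical equation of state approaches $w\approx T/m$ once $T\ll m$, a fact used repeatedly below.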
We now derive formulae for the temperature and energy density of the $\phi$-fluid in terms of the model parameters $m$ and $S_\mathrm{can}$. From standard thermodynamics, at vanishing chemical potential the comoving entropy is: \begin{equation}\label{eq:entropy_def} S_\mathrm{can} = a^3 \frac{\rho_\mathrm{can} + P_\mathrm{can}}{T} \ . \end{equation} In the relativistic limit, $T\gg m$, the phase space distribution function, \Eq{eq:distribution}, is easily integrated to obtain expressions for the energy density $\rho = \frac{\pi^2}{30} T^4$ and pressure $P = \rho/3$, so that: \begin{equation}\label{eq:entropy_uv} S_\mathrm{can} = a^3 \frac{2 \pi^2}{45} T^3 \ . \end{equation} Solving for $T$ we find: \begin{equation} T = \left( \frac{45}{2 \pi^2} \right)^{1/3} \frac{S_\mathrm{can}^{1/3}}{a} \ , \quad \rho_\mathrm{can} = \frac{3}{4} \left( \frac{45}{2 \pi^2} \right)^{1/3} \frac{S_\mathrm{can}^{4/3}}{a^4} \ . \label{eq:temp_rho_uv} \end{equation} Note that $T \sim 1/a$ and $\rho_\mathrm{can} \sim 1/a^4$, as expected for radiation components.\footnote{In general, \Eq{eq:entropy_uv} carries a factor of $g$ that counts the degrees of freedom of the dark sector. This factor is 1 in the $\phi$ cannibal model but will be different in UV completions.} Once $T \sim m$, the $\phi$-fluid enters its cannibalistic phase. After the temperature drops sufficiently far below the mass, an expansion in $T/m$ becomes appropriate, and the dominant contribution to the energy density comes from the mass of the particles, $\rho_\mathrm{can} \approx m n_\mathrm{can}$, where $n_\mathrm{can} = (\frac{mT}{2 \pi})^{3/2} e^{-m/T}$ is the equilibrium number density of $\phi$. The contribution of the pressure $P_\mathrm{can} \approx T n_\mathrm{can}$ to the entropy in \Eq{eq:entropy_def} is suppressed by $T/m$ relative to that of $\rho_\mathrm{can}$, so that: \begin{eqnarray}\label{eq:entropy_ir} S_\mathrm{can} \simeq a^3 \frac{\rho_\mathrm{can}}{T} \simeq \frac{a^3 m^3}{(2 \pi)^{3/2}} \left( \frac{T}{m} \right)^{1/2} e^{-m/T} \ , \end{eqnarray} and solving for $T$ and $\rho_\mathrm{can}$ in a leading-log approximation we have: \begin{equation} T \simeq \frac{m}{3 \log \left( \frac{m \, S_\mathrm{can}^{-1/3}}{\sqrt{2 \pi}} \, a \right)} \ , \quad \rho_\mathrm{can} \simeq \frac{m \, S_\mathrm{can}}{3 a^3 \log\left( \frac{m \, S_\mathrm{can}^{-1/3}}{\sqrt{2 \pi}} \, a \right)} \ . \label{eq:temp_rho_ir} \end{equation} Note that $T\sim 1/\log a$ and $\rho_\mathrm{can} \sim 1/(a^3 \log a)$ as stated in the previous Section. Having written $T$ and $\rho_\mathrm{can}$ as functions of $a$ and the parameters $m$ and $S_\mathrm{can}$, we now study the parameter space. Our goal is to estimate the values of the parameters for which the cannibal sector suppresses the matter power spectrum by about the amount that is preferred by the $\sigma_8$ measurements. As shown in the Introduction, this requires a fraction of dark matter energy density in the $\phi$-fluid $f_\mathrm{can} \equiv \rho_\mathrm{can}/\rho_\mathrm{cdm} \sim \mathcal{O}(1\%)$. Of course, since $\rho_\mathrm{cdm} \sim 1/a^3$ but $\rho_\mathrm{can} \sim 1/(a^3 \log a)$, this fraction evolves as $f_\mathrm{can} \sim 1/\log a$. But the change in $f_\mathrm{can}$ during matter domination is mild (a factor of order a few), so we ignore it for the purpose of estimating the rough region of $m$ - $S_\mathrm{can}$ parameter space where we can expect to find good fits.
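As a check on the leading-log expressions \Eq{eq:temp_rho_ir}, one can invert the entropy relation \Eq{eq:entropy_ir} numerically. The minimal sketch below (with $m=1$, an illustrative value of $S_\mathrm{can}$, and a root bracket that assumes we are in the cannibal regime) shows agreement at the level expected from the neglected $\frac12\log(T/m)$ term. \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

m = 1.0
S_can = 1e-18   # comoving entropy in units of m^3; illustrative value

def entropy(T, a):
    # Non-relativistic entropy a^3 rho/T with rho = m n, cf. Eq. (entropy_ir)
    return a**3 * m**3 * (T / m)**0.5 * np.exp(-m / T) / (2 * np.pi)**1.5

def T_exact(a):
    # entropy(T, a) is monotonic in T, so bracket and bisect
    return brentq(lambda T: entropy(T, a) - S_can, 1e-4 * m, m)

def T_leading_log(a):
    return m / (3.0 * np.log(m * S_can**(-1.0/3.0) * a / np.sqrt(2 * np.pi)))

for a in [1e-4, 1e-2, 1.0]:
    print(f"a = {a:7.0e}   T_exact = {T_exact(a):.4f}"
          f"   T_leading_log = {T_leading_log(a):.4f}")
\end{verbatim}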
The good region of parameter space is the one in which the cannibalism phase overlaps with matter domination, which corresponds to conditions on $a_\mathrm{can}$ and $a_\mathrm{nr}$, and in which $f_\mathrm{can} \sim \mathcal{O}(1\%)$. In the remainder of this Section we use these conditions to derive that \begin{eqnarray}\label{eq:parameter_bounds} \mathrm{eV} \lsim m \lsim \mathrm{keV}\, , \quad \frac{S_\mathrm{can}}{S_\mathrm{SM}} \sim\, 0.1\, \left[\frac{1\mathrm{eV}}{m}\right]\ . \end{eqnarray} A reader who is not interested in the following somewhat tedious derivation of these boundaries of the relevant parameter space may skip ahead to \Sec{sec:cosmo}, where we derive and solve the density perturbation equations. We first derive the lower bound on $m$. Define the scale factor $a=a_\mathrm{can}$ at which $T(a_\mathrm{can}) \equiv m/3$, \textit{i.e.}\ where the $\phi$-fluid stops being relativistic and starts cannibalizing. From \Eq{eq:temp_rho_ir} we obtain $a_\mathrm{can} \sim 10\, S_\mathrm{can}^{1/3}/m$. Since we want cannibalism to act during matter domination, we require the start of cannibalism to be before matter-radiation equality, \textit{i.e.}\ $a_\mathrm{can} < a_\mathrm{eq}$. Ignoring the $\log a$ dependence (for simplicity) and using $a_\mathrm{can} \sim 10\, S_\mathrm{can}^{1/3}/m$, we express $\rho_\mathrm{can}$ in \Eq{eq:temp_rho_ir} in terms of $a_\mathrm{can}$ \begin{equation}\label{eq:rho_ac} \rho_\mathrm{can} \sim \frac{m^4}{10^3 (a/a_\mathrm{can})^3} \ . \end{equation} We solve this for $m$, substitute $\rho_\mathrm{can} = f_\mathrm{can} \rho_\mathrm{cdm}$, evaluate it today ($a=1$) and impose $a_\mathrm{can} < a_\mathrm{eq}$ to obtain: \begin{equation}\label{eq:mlower1} m^4 \sim 10^3 \times \frac{f_{\mathrm{can},\, 0} \, \rho_{\mathrm{cdm},\, 0}}{a_\mathrm{can}^3} > 10^3 \times \frac{f_{\mathrm{can},\, 0} \, \rho_{\mathrm{cdm},\, 0}}{a_\mathrm{eq}^3} \ , \end{equation} which for $a_\mathrm{eq} \approx 3\times 10^{-4}$ gives the lower bound: \begin{equation}\label{eq:mlower2} m \gsim \,1\, \mathrm{eV} \times \left[ \frac{f_{\mathrm{can},\, 0}}{0.01} \right]^{1/4} \left[ \frac{\rho_{\mathrm{cdm},\, 0}}{10^{-11}\, \mathrm{eV}^4} \right]^{1/4} \ . \end{equation} At the edge of the preferred parameter space, when $m$ saturates the bound, the $\phi$-fluid enters its cannibalistic phase right at matter-radiation equality. Then the UV completion of the $\phi$ model is needed to determine the cannibal sector energy density for $a < a_\mathrm{eq}$. Thus in this case the matter power spectrum is sensitive to details of the UV completion such as the glueball spectrum and the size of the UV gauge group. We will study this model dependence in \Sec{sec:simplest}. For masses much smaller than the bound, the cannibal sector is still relativistic at $a_\mathrm{eq}$. In that case the cannibal fluid behaves like extra radiation ($\Delta N_\mathrm{eff}$) at the time of the CMB. Imposing observational bounds on $\Delta N_\mathrm{eff}$ bounds the energy density in the cannibal fluid at $a_\mathrm{eq}$, and by the time cannibalism turns on at $a_\mathrm{can} > a_\mathrm{eq}$ its energy density has already become negligible compared with that in $\Lambda$CDM (orange curve of \Fig{fig:3rhos}). Thus this is not a region in parameter space that we are interested in. We can also derive an upper bound on $m$.
To do so we first solve for the scale factor $a_\mathrm{nr}$ when the \ensuremath{3\rightarrow 2\ } interactions decouple and the $\phi$-fluid transitions from cannibal behavior to standard non-relativistic behavior. Dimensional analysis allows us to estimate the non-relativistic \ensuremath{2\rightarrow 2\ } and \ensuremath{3\rightarrow 2\ } scattering cross sections in the $\phi$ theory from \Eq{eq:lagrange}: \begin{eqnarray}\label{eq:scatter} \sigma_{22} v \approx \frac{\alpha^2}{m^2} & \Rightarrow & \Gamma_{22} \equiv n_\mathrm{can} \langle \sigma_{22} v \rangle \approx \frac{\alpha^2}{m^{3}}\, \rho_\mathrm{can} \ , \\ \ \sigma_{32} v^2 \approx \frac{\alpha^3}{m^5} & \Rightarrow & \Gamma_{32} \equiv n_\mathrm{can}^2 \langle \sigma_{32} v^2 \rangle \approx \frac{\alpha^3}{m^{7}}\, \rho_\mathrm{can}^2 \ ;\label{eq:scatter32} \end{eqnarray} where $\alpha \sim \lambda^2/(4\pi)$, $\Gamma_{ij}$ are the $i \rightarrow j$ interaction rates, and we have been cavalier with factors of order 1 and $\pi$. Keeping in mind a strongly coupled UV completion of the cannibal sector, we expect $\alpha$ somewhere between 1 and $4\pi$. Eventually $\Gamma_{32}$ cannot keep up with the rate of expansion of the Universe $H$; then the \ensuremath{3\rightarrow 2\ } interactions decouple and cannibalism stops at $a_\mathrm{nr}$. Setting $\Gamma_{32} = H$ and using \Eqs{eq:scatter32}{eq:rho_ac} we can solve for the duration of the cannibalistic phase \begin{eqnarray}\label{eq:duration1} \frac{a_\mathrm{nr}}{a_\mathrm{can}} \approx \frac{\alpha^{1/2}}{10} \left( \frac{m}{H(a_\mathrm{nr})} \right)^{1/6} \ . \end{eqnarray} Note the small exponent of 1/6. This shows that the duration of the cannibalism phase is only weakly dependent on the model parameters $m$ and $S_\mathrm{can}$. In particular, the duration of the cannibalism phase is rather insensitive to when the decoupling occurs. For example, if cannibalism ends at matter-radiation equality ($a_\mathrm{nr} = a_\mathrm{eq}$) then $(H(a_\mathrm{eq})/\mathrm{eV})^{1/6} \sim 10^{-5}$; whereas if it ends today ($a_\mathrm{nr} = 1$), then $(H_0/\mathrm{eV})^{1/6} \sim 10^{-6}$: a change of only one order of magnitude. The duration of the cannibalistic phase is therefore between 4 and 5 decades in the scale factor: \begin{equation}\label{eq:duration2} \frac{a_\mathrm{nr}}{a_\mathrm{can}} \approx 10^5 \times \left[ \frac{\lambda}{4 \pi} \right] \left[ \frac{m}{1\mathrm{eV}} \right]^{1/6} \left[ \frac{10^{-33}\, \mathrm{eV}}{H(a_\mathrm{nr})} \right]^{1/6} \ . \end{equation} We will use the approximation $a_\mathrm{can} \sim 10^{-5} a_\mathrm{nr}$. Substituting this in \Eq{eq:rho_ac} yields: \begin{equation}\label{eq:mupper1} m^4 \sim 10^{18} \times \frac{f_{\mathrm{can},\, 0} \, \rho_{\mathrm{cdm},\, 0}}{a_\mathrm{nr}^3} \ . \end{equation} In order to find an upper bound on the interesting range of $m$, we impose a condition on $a_\mathrm{nr}$, the scale factor when cannibalism stops. One possibility is to demand that cannibalism lasts throughout matter domination and does not end before today, so as to maximize the suppression of the MPS. But this is too aggressive, because even when cannibalism stops midway through matter domination the MPS is suppressed relative to $\Lambda$CDM. We impose (admittedly somewhat arbitrarily) that $a_\mathrm{nr}\gsim 10^{-2}$.
This together with \Eq{eq:mupper1} implies \begin{equation}\label{eq:mupper3} m \lsim \,1\, \mathrm{keV} \times \left[ \frac{f_{\mathrm{can},\, 0}}{0.01} \right]^{1/4} \left[ \frac{\rho_{\mathrm{cdm},\, 0}}{10^{-11}\, \mathrm{eV}^4} \right]^{1/4} \ . \end{equation} For masses much larger than this bound the end of cannibalism occurs too close to (or before) matter-radiation equality, so that the $\phi$-fluid clusters like cold dark matter during matter domination, as discussed in the previous section (blue curve in \Fig{fig:3rhos}). Comparing \Eq{eq:mupper3} and \Eq{eq:mlower2}, we see the range of masses, $\mathrm{eV} < m < \mathrm{keV}$, which satisfies both constraints. Having restricted the mass of the $\phi$ particles to a range for which cannibalization has an interesting effect on the MPS, we now focus our attention on the other parameter of the MC model, the entropy. Starting again from the relationship between the energy density and the entropy in \Eq{eq:temp_rho_ir}, approximating $\log a_\mathrm{can}^{-1}\sim 8$, demanding that the energy density in cannibals be a small fraction $f_\mathrm{can}$ of that in the $\Lambda$CDM sector, and evaluating energy densities today, we obtain \begin{equation}\label{eq:entropyofm} S_\mathrm{can} \sim \frac{S_\mathrm{SM}}{10} \left[ \frac{2.2\times 10^{-11}\, \mathrm{eV}^3}{S_\mathrm{SM}} \right] \left[ \frac{f_{\mathrm{can},\, 0}}{0.01} \right] \left[ \frac{\rho_{\mathrm{cdm},\, 0}}{10^{-11} \, \mathrm{eV}^4} \right] \left[ \frac{1\, \mathrm{eV}}{m} \right] \ , \end{equation} where we have chosen to write the comoving cannibal sector entropy $S_\mathrm{can}$ in terms of the comoving entropy in the Standard Model sector today, $S_\mathrm{SM} =2.2 \times 10^{-11} \mathrm{eV}^3$. One sees that the values of $S_\mathrm{can}$ which give the correct suppression of the MPS are inversely proportional to $m$. Finally, let us verify that thermal (kinetic) equilibrium is maintained until today in the region of parameter space we have obtained. We must check that the rate of \ensuremath{2\rightarrow 2\ } interactions is faster than the expansion rate of the Universe. From \Eq{eq:scatter} \begin{equation} \Gamma_{22, 0} \approx 10^{22} \left[ 10^{-33} \, \mathrm{eV} \right] \left[ \frac{\alpha}{4\pi}\right]^2 \left[ \frac{1 \, \mathrm{eV}}{m} \right]^{3} \left[ \frac{f_{\mathrm{can},0}}{0.01} \right] \left[ \frac{\rho_{\mathrm{cdm},0}}{10^{-11} \, \mathrm{eV}^4} \right] \ , \end{equation} clearly bigger than $H_0 \sim 10^{-33} \ \mathrm{eV}$. This is not surprising because \ensuremath{2\rightarrow 2\ } interactions are much more rapid than \ensuremath{3\rightarrow 2\ } interactions, which are suppressed by an additional power of the particle number density. In summary, in order for the cannibalistic phase to overlap with matter domination and suppress the matter perturbations at galaxy cluster scales by about 5\%, we need $f_{\mathrm{can},\, 0} \sim 0.01$, $a_\mathrm{can} \lsim a_\mathrm{eq}$, and $a_\mathrm{nr} \gsim 10^{-2}$. This corresponds to the parameter range in \Eq{eq:parameter_bounds}. \section{Density perturbations in the cannibal model} \label{sec:cosmo} With the thermal history and parameter space of the MC model determined, we now study the effects of the cannibal fluid on density perturbations. In particular, we derive the suppression of the matter power spectrum (MPS) and solve for the region in parameter space with the correct amount of suppression to address the large-scale structure (LSS) discrepancy on $\sigma_8$.
We start from the equations governing the evolution of the cosmological perturbations in the energy density and velocity of the different components of the Universe, focusing on the dark matter and cannibal fluids. In this Section we simply state the equations and study their solutions, first numerically and then analytically using simplifying approximations. We review the derivation of the perturbation equations in \App{appA}. The equations for the cannibal and CDM perturbations in Fourier space are \cite{Ma:1995ey}: \begin{eqnarray} \dot\delta_\mathrm{can} & = & -(1+w_\mathrm{can}) \left( \theta_\mathrm{can} - 3 \dot\varphi \right) - 3 \mathcal{H} \left( c_s^2 - w_\mathrm{can} \right)\delta_\mathrm{can} \ , \label{eq:cannperts1a}\\ \dot\theta_\mathrm{can} & = & - \mathcal{H} \left( 1-3 c_s^2 \right)\theta_\mathrm{can} + k^2 \left( \psi + \frac{c_s^2}{1+w_\mathrm{can}}\delta_\mathrm{can} \right) \ ; \label{eq:cannperts1b} \\ \dot\delta_\mathrm{cdm} & = & -\theta_\mathrm{cdm} + 3 \dot\varphi \ , \label{eq:cdmperts1a}\\ \dot\theta_\mathrm{cdm} & = & - \mathcal{H} \theta_\mathrm{cdm} + k^2 \psi \ , \label{eq:cdmperts1b} \end{eqnarray} where the dots represent derivatives with respect to conformal time $\eta$; $k$ is the Fourier momentum mode, $\mathcal{H} \equiv aH = \dot a/a$, $\delta \equiv \delta \rho /\rho$ and $\theta$ are the density contrast and the velocity divergence perturbations, while $\varphi$ and $\psi$ are the scalar perturbations of the metric.\footnote{$\delta$ and $\theta$ are part of the stress-energy-momentum tensor $T_{\mu\nu}$ of their corresponding fluid, and their equations are obtained from the continuity equation $\nabla_\mu T^{\mu\nu} = 0$. For details see \App{appA}.} Finally, $w_\mathrm{can} \equiv P_\mathrm{can}/\rho_\mathrm{can}$ is the equation of state of the $\phi$-sector, while $c_s^2 \equiv \dot P_\mathrm{can} / \dot \rho_\mathrm{can} = w_\mathrm{can} - \frac{\dot w_\mathrm{can}}{3\mathcal{H}(1+w_\mathrm{can})}$ is the speed of sound of the $\phi$-fluid. Recall that during the cannibalistic phase $\rho_\mathrm{can} \approx m n_\mathrm{can}$ and $P_\mathrm{can} \approx T n_\mathrm{can}$ and therefore $w_\mathrm{can} \approx T/m \sim 1/\log a$. For the rest of this Section we make the following simplifications: {\it i.} ignore the baryons, adding their energy density to that of CDM, {\it ii.} ignore the anisotropic stress of the neutrinos, taking $\varphi = \psi$, and {\it iii.} add the neutrino energy density to that of the photons. Since we are only interested in the effects of cannibals on the MPS, we will compare the MPS in the theory with cannibals to the MPS in $\Lambda$CDM, evaluated today, and denote the ratio by $R(k)$: \begin{eqnarray}\label{eq:ratio} R(k) & \equiv & \frac{\mathrm{MPS}(k)_c}{\mathrm{MPS}(k)_\Lambda} \bigg\vert_\mathrm{today} = \frac{(\rho_\mathrm{cdm} \delta_\mathrm{cdm} + \rho_\mathrm{can} \delta_\mathrm{can})_c^2}{(\rho_\mathrm{cdm} \delta_\mathrm{cdm})_\Lambda^2} \bigg\vert_\mathrm{today} \nonumber\\ & = & \left( \frac{\delta_{\mathrm{cdm},\,c}}{\delta_{\mathrm{cdm},\,\Lambda}} + f_\mathrm{can} \frac{\delta_\mathrm{can}}{\delta_{\mathrm{cdm},\,\Lambda}} \right)^2\bigg\vert_\mathrm{today} \ , \end{eqnarray} where the index $c$ denotes the value in the theory with cannibals, while $\Lambda$ means $\Lambda$CDM. With the assumptions mentioned above, we solved \Eqst{eq:cannperts1a}{eq:cdmperts1b} numerically and calculated $R(k)$. 
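Before describing these solutions, the rough size of the effect can be recovered from the simplified M\'{e}sz\'{a}ros equation \Eq{eq:cdmperts} of the Introduction. The minimal sketch below (again not the code used for our figures) integrates it through matter domination under the simplifying assumptions of a constant cannibal fraction $f_\mathrm{can}$ and pure $\delta_\mathrm{cdm}\sim a$ initial conditions at $a_\mathrm{eq}$: \begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sub-horizon Meszaros equation during matter domination,
#   a^2 d'' + (3/2) a d' - (3/2) d / (1 + f_can) = 0,
# rewritten in s = log(a); y = [delta, d(delta)/ds]. We hold
# f_can = rho_can/rho_cdm fixed, ignoring its slow 1/log(a) drift.

f_can = 0.01
a_eq, a_0 = 3e-4, 1.0

def rhs(s, y):
    d, dp = y
    return [dp, -0.5 * dp + 1.5 * d / (1.0 + f_can)]

sol = solve_ivp(rhs, [np.log(a_eq), np.log(a_0)], [1.0, 1.0],
                rtol=1e-10, atol=1e-12)

growth = sol.y[0, -1]
lcdm = a_0 / a_eq                      # delta ~ a in LambdaCDM
print("suppression delta/delta_LCDM :", growth / lcdm)
print("analytic (a_0/a_eq)^(-3f/5)  :", lcdm**(-0.6 * f_can))
\end{verbatim} For $f_\mathrm{can}=0.01$ both numbers give a suppression of about $5\%$ in $\delta_\mathrm{cdm}$, \textit{i.e.}\ roughly $10\%$ in the MPS, consistent with the full results described below.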
We now describe the solutions for $\delta_\mathrm{can}$ and $\delta_\mathrm{cdm}$, and the resulting $R(k)$. \begin{figure}[!htbp]% \centering \includegraphics[width=0.55\textwidth]{discussion_perts.pdf} \caption{The cannibal perturbations for three choices of the MC model parameters, compared with the CDM perturbation from $\Lambda$CDM (black curve). The choice with an early end of cannibalism is shown in blue, that with a late start of cannibalism in orange, while in green is that with the cannibalistic phase overlapping with matter domination. We have chosen $k=0.2h \, \mathrm{Mpc}^{-1}$ with $h=0.68$; this corresponds to perturbations at the wavelength to which $\sigma_8$ is most sensitive.}% \label{fig:perts} \end{figure} The evolution of the $\delta_\mathrm{can}$ perturbations can be seen in \Fig{fig:perts}, for different choices of the parameters $m$ and $S_\mathrm{can}$ of the MC model, having fixed $\alpha = 4 \pi$. One choice of the parameters corresponds to early decoupling (blue curve), where the cannibalistic phase ends well before equality and the perturbations behave just like CDM. Another choice shows late cannibalization (orange line), in which the $\phi$-sector behaves just like radiation throughout most of the history of the Universe. In this case $\delta_\mathrm{can}$ oscillates like radiation perturbations do. Since in this case the cannibalistic phase only starts when $\rho_\mathrm{can}$ is already a negligible contribution to the total energy density, the cannibalism itself has no impact on the MPS. The green curve corresponds to the case of most interest: the cannibalistic phase overlaps with matter domination. The early part of the curve shows that cannibal perturbations perform acoustic oscillations after entering the horizon. The oscillations are due to the pressure term proportional to the speed of sound $c_s^2$ during cannibalism. Once the cannibalistic phase ends at $a_\mathrm{nr}$ the $\phi$ particles become non-relativistic and the speed of sound quickly drops as $c_s^2 \approx T/m \sim a^{-2}$. This causes the $\delta_\mathrm{can}$ perturbations to stop oscillating and to start growing by falling into the gravitational potentials sourced by the already clustered dark matter. This can be seen in the large-$a$ behavior of the green curve in \Fig{fig:perts}. The cannibal fluid affects the perturbation equations for the CDM in two ways: through its contribution to the gravitational potential term $k^2 \psi$ in \Eq{eq:cdmperts1b} and through its contribution to the energy densities in the Hubble friction term $-\mathcal{H} \theta_\mathrm{cdm}$ in \Eq{eq:cdmperts1b}; the $\dot\varphi$ term in \Eq{eq:cdmperts1a} is negligible for the modes of interest. Since $\delta_\mathrm{can}$ oscillates and does not grow during the cannibalistic phase its contributions to the gravitational potential $\psi$ remain negligible and do not enhance the growth of CDM perturbations. On the other hand, the contribution of $\rho_\mathrm{can}$ to the Hubble expansion rate during matter domination and therefore to the Hubble friction term is significant. The net effect, no enhancement of the potential but more friction, is to slow the growth of CDM perturbations relative to $\Lambda$CDM. Thus the MPS is suppressed in theories with cannibals. This is the main result of our paper. \Fig{fig:growthratio} illustrates this result.
We plot the ratio of $\delta_\mathrm{cdm}$ in the presence of cannibals to its value in $\Lambda$CDM as a function of the scale factor $a$ for the mode $k=0.2h \, \mathrm{Mpc}^{-1}$. The three curves correspond to three models with parameters $m$ and $S_\mathrm{can}$ chosen such that the MPS today for that mode is suppressed by 10\% (\textit{i.e.}\ $R(0.2h \, \mathrm{Mpc}^{-1})=0.9$). Note that, after some transitory behavior when the mode first enters the horizon, the suppression increases monotonically during matter domination. This shows that the rate of growth in the presence of cannibals is smaller than in $\Lambda$CDM. This ratio behaves approximately like a power law in $a$, with a slight decrease of its slope which comes from the time dependence of $f_\mathrm{can} \equiv \rho_\mathrm{can}/\rho_\mathrm{cdm}$. In \Fig{fig:mps_contour} we show the $m$--$S_\mathrm{can}$ parameter space, with $S_\mathrm{can}$ normalized to the entropy of the standard model today $S_\mathrm{SM}$. The black contour lines show $R(0.2h \, \mathrm{Mpc}^{-1})$. In all the calculations for this plot we chose $\alpha = 4\pi$. We will study the (very small) dependence of the suppression $R(k)$ on the choice of $\alpha$ at the end of this Section. The brown dotted lines show the fraction $f_{\mathrm{can},\, 0}$ of cannibal dark matter today. The green band in \Fig{fig:mps_contour} represents the region of parameter space that yields a suppression in the value of the MPS today within 1$\sigma$ of the preferred value of $\sigma_8$ according to \cite{Joudaki:2016mvz}, about a 10\% suppression ($R(0.2h \, \mathrm{Mpc}^{-1})=0.9$). We see that this roughly corresponds to $f_{\mathrm{can},\,0} \sim 1\%$. The orange region corresponds to the lower bound on $m$ we estimated in \Sec{sec:thermo}, made up of those parameter values for which $a_\mathrm{can} > a_\mathrm{eq}$. Deep inside this region the $\phi$-fluid behaves just like radiation. The blue region corresponds to the upper bound also estimated in \Sec{sec:thermo}, for which $a_\mathrm{nr} < 10^{-2}$. Deep inside this region the $\phi$-fluid behaves like ordinary CDM. Finally, the red band corresponds to a region in parameter space in which the $\phi$-fluid would contribute too much radiation ($\Delta N_\mathrm{eff} > 0.66$) to the energy density of the Universe at the time of Big Bang Nucleosynthesis \cite{Steigman:2012ve}. However, as we will show in \Sec{sec:simplest} this constraint is relaxed in UV completions of the MC model because the energy density in radiation in the UV is reduced in such models. \begin{figure}[!htbp]% \centering \includegraphics[width=0.8\textwidth]{delta_cdm_supp.pdf} \caption{Evolution of the perturbation $\delta_\mathrm{cdm}$ for wave number $k=0.2h \, \mathrm{Mpc}^{-1}$ in the presence of cannibals compared to its value in $\Lambda$CDM, for three different choices of model parameters. Models were chosen to give a $10\%$ suppression in the MPS today (\textit{i.e.}\ $R=0.9$). The three choices of $m$ and $S_\mathrm{can}$ are also indicated as red, green, and blue points in \Fig{fig:mps_contour}.}% \label{fig:growthratio} \end{figure} \begin{figure}[!htbp]% \centering \includegraphics[width=0.6\textwidth]{mps.pdf} \caption{$m$ versus $S_\mathrm{can}/S_\mathrm{SM}$ parameter space where $S_\mathrm{SM} = 2.2\times 10^{-11}\, \mathrm{eV}^3$ is the entropy in the Standard Model today. The black lines are contours of the ratio of the MPS in the presence of cannibal dark matter to that of $\Lambda$CDM.
The brown dotted curves correspond to constant $f_{\mathrm{can},\,0}$. The green band is an estimate for the suppression that gives a $\sigma_8$ within 1$\sigma$ of the value quoted in \cite{Joudaki:2016mvz}. The orange region corresponds to MC models that enter the cannibalistic phase after matter-radiation equality, while the blue one corresponds to those for which cannibalism ends before $a=10^{-2}$. In red are those models whose $\rho_\mathrm{can}$ contributes to $\Delta N_\mathrm{eff} \vert_\mathrm{BBN}>0.66$ \cite{Steigman:2012ve} when they are in their radiation phase. The red, green, and blue points correspond to the three choices of $m$ and $S_\mathrm{can}$ in \Fig{fig:growthratio}.}% \label{fig:mps_contour} \end{figure} The black contours showing the values for $R(k)$ were calculated for $\alpha=4\pi$. Since the value of $\alpha$ determines the scale factor at which the \ensuremath{3\rightarrow 2\ } interactions decouple and cannibalism ends, we expect some dependence of the predicted MPS on $\alpha$. However, within the range of parameters in \Fig{fig:mps_contour} this dependence is very weak. The first effect is that cannibal perturbations stop oscillating and start catching up to the dark matter perturbation after decoupling. If they have enough time to grow they can have a non-negligible impact on the MPS via the second term in \Eq{eq:ratio} and they contribute to the gravitational potential. However, for the points that we are interested in the cannibal perturbations remain too small to be important. A numerically more significant second effect is that when the cannibal fluid stops cannibalizing its energy density transitions from scaling like $1/(a^3 \log a)$ to $1/a^3$. Thus a model in which the $\phi$ particles stop cannibalizing earlier will have more energy density in cannibals and therefore more Hubble friction. This effect is somewhat more important but still small. For example, choosing $m$ and $S_\mathrm{can}$ as for the blue dot in \Fig{fig:mps_contour} but choosing $\alpha=1$ and $\alpha=\infty$ (\textit{i.e.}\ no decoupling of the \ensuremath{3\rightarrow 2\ } interactions) we obtain $R=0.92$ and $R=0.902$ for the MPS ratio respectively, a very small effect. Having shown that the presence of cannibals suppresses the MPS by numerically solving the equations for the perturbations, we devote the rest of this Section to understanding this result from \Eqst{eq:cannperts1a}{eq:cdmperts1b}. We will only be interested in $k$ modes which are well inside the horizon during matter domination, \textit{i.e.}\ modes for which $k \gg 1/ \eta_\mathrm{eq} \sim 0.01\, \mathrm{Mpc}^{-1}$. Let us start with the cannibal perturbations. For modes deep inside the horizon the gravitational potential is approximately constant so that we can ignore derivatives of $\psi$. In addition, we can use $w_\mathrm{can} \ll 1$, $c_s^2 \ll 1$ to drop all subleading terms in \Eqs{eq:cannperts1a}{eq:cannperts1b}. Then differentiating \Eq{eq:cannperts1a} and substituting \Eq{eq:cannperts1b} yields: \begin{equation}\label{eq:cannperts2} \ddot \delta_\mathrm{can} + \mathcal{H} \dot \delta_\mathrm{can} + k^2c_s^2 \delta_\mathrm{can} = - k^2 \psi \ , \end{equation} where the term on the right-hand side is determined by the Poisson equation \begin{equation}\label{eq:poisson} -k^2 \psi = \frac{3}{2}\, \frac{a^2}{3 M_\mathrm{Pl}^2} \sum_i \rho_i \delta_i \ .
\end{equation} Anticipating that the CDM contribution dominates the sum during matter domination, and that perturbations in the CDM fluid grow linearly, $\delta_\mathrm{cdm} \sim a$, one sees explicitly that $\psi$ is constant during matter domination. Thus \Eq{eq:cannperts2} is a simple harmonic oscillator with friction, and the gravitational potential corresponds to a constant shift of the zero point. The solutions are oscillatory as long as $k c_s > \mathcal{H} \sim 1/\eta$, \textit{i.e.}\ as long as the $k$-modes are small compared to the sound horizon, $2\pi/k \ll c_s \eta$. Recalling that $c_s^2 \approx w_\mathrm{can} \approx T/m \sim 1/\log a$ for cannibals and $\eta \sim \sqrt{a}$ during matter domination it is clear that modes which are inside the Hubble horizon also enter the growing sound horizon $c_s \eta \sim \sqrt{a/\log a}$ and oscillate. However, once cannibalism ends, $c_s \sim 1/a$. Then the sound horizon $c_s \eta \sim 1/\sqrt{a}$ shrinks and the mode eventually exits the sound horizon, stops oscillating and starts growing. Nevertheless, for the region of parameter space that we are interested in, the cannibal perturbations do not catch up to the CDM perturbations, thus justifying our approximation to only keep the CDM term in the gravitational potential, \Eq{eq:poisson}. We now turn our attention to the CDM perturbations. Following the same procedure as before, combining \Eqs{eq:cdmperts1a}{eq:cdmperts1b} gives: \begin{equation}\label{eq:cdmperts2} \ddot \delta_\mathrm{cdm} + \mathcal{H} \dot \delta_\mathrm{cdm} + k^2 \psi =0 \ , \end{equation} where $\psi$ is given by \Eq{eq:poisson} but only keeping the CDM contribution $\rho_\mathrm{cdm} \delta_\mathrm{cdm}$ in the sum. Using this, rewriting the Hubble parameter in terms of the energy density during matter domination $\rho_\mathrm{tot} \simeq \rho_\mathrm{cdm} + \rho_\mathrm{can}$, and changing variables from $\eta$ to $a$ we can write: \begin{equation}\label{eq:cdmperts3} (\rho_\mathrm{cdm} + \rho_\mathrm{can}) \, a^2 \delta_\mathrm{cdm}'' + \frac{3}{2}\left( \rho_\mathrm{cdm} + \rho_\mathrm{can} \right) a \delta_\mathrm{cdm}' - \frac{3}{2}\rho_\mathrm{cdm} \delta_\mathrm{cdm} = 0 \ . \end{equation} Were it not for the cannibals, this would be the M\'{e}sz\'{a}ros equation during matter domination, whose growing solution is $\delta_\mathrm{cdm} \sim a$. \Eq{eq:cdmperts3} shows that cannibal dark matter increases the Hubble friction ($\delta_\mathrm{cdm}'$ term) felt by the CDM perturbations but does not contribute to the gravitational pull from the Poisson term. This explains the smaller rate of growth of $\delta_\mathrm{cdm}$ we discovered in our numerical solutions. To get a rough idea of what this change in the growth rate is, let us further simplify \Eq{eq:cdmperts3} by taking $\rho_\mathrm{can}/\rho_\mathrm{cdm} \ll 1$ and dividing by $\rho_\mathrm{cdm} + \rho_\mathrm{can}$ to arrive at \Eq{eq:cdmperts}. This is easily integrated in an approximation where we neglect the slow $\log a$ dependence of $\rho_\mathrm{can}$. In fact, this equation for the growth of perturbations without the $\log a$ dependence applies to a model with CDM and a subdominant component of dark plasma \cite{Chacko:2016kgg,Buen-Abad:2017gxg}. The solution for the growing mode is the power law $\delta_\mathrm{cdm} \sim a^{1- \frac{3}{5} \rho_\mathrm{can}/\rho_\mathrm{cdm}}$ \cite{Lesgourgues:1519137,Chacko:2016kgg,Buen-Abad:2017gxg}, a growth rate smaller than the linear one from the usual M\'{e}sz\'{a}ros equation.
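The modified growth exponent is easy to verify numerically. The sketch below (Python with scipy; it assumes a constant $f_\mathrm{can}=\rho_\mathrm{can}/\rho_\mathrm{cdm}$, i.e., it neglects the slow logarithmic decay) integrates the simplified growth equation, rewritten in $x=\ln a$ as $\delta'' + \tfrac{1}{2}\delta' - \tfrac{3}{2}\,\delta/(1+f_\mathrm{can}) = 0$, and recovers the exponent $1-\tfrac{3}{5}f_\mathrm{can}$ at late times:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

f_can = 0.05  # constant cannibal-to-CDM ratio (log-decay neglected)

# Meszaros-like equation with extra Hubble friction, in x = ln(a):
#   delta'' + (1/2) delta' - (3/2) delta / (1 + f_can) = 0
def rhs(x, y):
    delta, ddelta = y
    return [ddelta, -0.5 * ddelta + 1.5 * delta / (1.0 + f_can)]

sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 1.0],
                dense_output=True, rtol=1e-10)

# The late-time logarithmic slope d(ln delta)/d(ln a) is the
# growing-mode exponent; the decaying mode has long since died away.
delta, ddelta = sol.sol(10.0)
print("numerical exponent:", ddelta / delta)   # ~0.971
print("1 - (3/5) f_can   :", 1 - 0.6 * f_can)  # 0.97
\end{verbatim}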
For the decaying mode, one finds $a^{-\frac32+\frac{3}{5} \rho_\mathrm{can}/\rho_\mathrm{cdm}}$. In the cannibal case the exponent is a slowly varying integral function of $f_\mathrm{can}$ that depends on $a$ (because of the slow logarithmic decay of $f_\mathrm{can}$), which explains the change in the slope of the suppression we saw in \Fig{fig:growthratio}. \section{Natural UV completions from secluded gauge sectors} \label{sec:simplest} In this Section we discuss our favorite UV completion of the MC model, a simple non-Abelian ``pure-glue'' gauge sector which confines at low energies and produces cannibalistic glueballs. Consider an $SU(N)$ gauge theory with no light matter fields. Such a theory has two marginal operators, the gauge kinetic term \begin{eqnarray} -\frac{1}{4g_D^2} F_{\mu\nu}^2 \label{eq:kineticterm} \end{eqnarray} and the CP-violating $\theta F\tilde F$ term. We set $\theta=0$ mostly because it makes no qualitative difference but also because it is zero if the dark sector preserves CP. All other operators as well as couplings to the SM are irrelevant (in the sense of their scaling with energy) and therefore do not impact the confining dynamics and cannibalism. The dark sector could be coupled to the SM in the UV by heavy matter fields which are charged under both the SM and dark $SU(N)$ gauge group. Then it would be natural for the two sectors to have a common temperature in the UV. However, if inflation and reheating occur at temperatures below the coupling of the two sectors, or if there is a phase transition or there are heavy particles with associated entropy production, then the two sectors may end up with very different temperatures. We take the temperature of the cannibal sector to be a free parameter $T$. Assuming that the $SU(N)$ gauge coupling in the UV is not too small, the coupling runs strong in the IR and the theory confines at temperatures below some scale $\Lambda_c$. The confining gauge theory has a spectrum of stable glueball states with varying spin and parity quantum numbers \cite{Cornwall:1982zn,Morningstar:1999rf,Kribs:2009fy,Forestell:2016qhc}. The most important of these glueballs for cosmology is the lightest glueball $\phi$ with mass $m \sim \Lambda_c$, which is a parity even scalar and carries no conserved quantum number. It has number-changing interactions and its low energy effective description is the Lagrangian \Eq{eq:lagrange} plus higher-dimensional operators of the form $\phi^n/m^{n-4}$. The important parameters of this low-energy theory are the glueball mass $m$ and the entropy in the glueballs $S_\mathrm{can}$. There is also a dependence on the coupling $\lambda$ which determines the end of the cannibalism phase when $\Gamma_{32} = H$. For a strongly coupled $SU(N)$ theory naive dimensional analysis predicts $\lambda \simeq 4 \pi/\sqrt{N}$. Changing this coupling by a factor of 2 would change the duration of the cannibalism phase by a factor of 1/2 (see \Eq{eq:duration2}); this has very little impact on the cosmology. Note that the number density of heavier glueballs $\phi_H$ is exponentially suppressed relative to $\phi$ at low temperatures even if they are stable, because they can efficiently annihilate, $\phi_H + \overline \phi_H \rightarrow \phi + \phi$. Since the $\phi$ particles carry no conserved quantum number they are unstable to decay. $\phi$ has no other particles to decay to in the dark sector but it can decay to gravitons or SM particles through higher dimensional operators.
For example, the width to decay into gravitons is roughly $m^5/M_\mathrm{Pl}^4 \sim 10^{-108}\, \mathrm{eV} [m/\mathrm{eV}]^5$. This is much smaller than the Hubble constant today for the masses we consider. In fact, even decays mediated by a dimension 6 operator suppressed by a scale of 1 GeV are too slow to be cosmologically relevant for $m \sim 1$ eV. This justifies treating the $\phi$ particles as stable. This completes our description of the UV completion of the MC model. In most of the interesting parameter space, \Fig{fig:mps_contour}, the UV completion is not needed for the computation of the MPS. This is because either {\it i.} the confining transition happens well before matter-radiation equality and the energy density in the cannibal sector is negligible during and before the transition, or {\it ii.} the confining transition happens well after matter-radiation equality. In the latter case the cannibal sector is ``gluon'' radiation well into matter domination and its energy density redshifts to being negligible before cannibalism even starts. Thus only in models where the confinement transition happens close to matter-radiation equality (red dot in \Fig{fig:mps_contour}) is the UV completion needed for the computation of the MPS. We study this special case in the remainder of this Section. Computing the cosmological evolution of the cannibal fluid through the confining phase transition exactly is very difficult, as one would have to solve for the dynamics of a strongly coupled thermal gauge theory \cite{Witten:1984rs}. We take a simplified approach and match the UV theory with $N^2-1$ weakly interacting gluons onto the confined theory of the lightest glueball $\phi$. This matching depends on the size of the gauge group, $N$, the details of the phase transition (it can be 1st or 2nd order), the glueball spectrum, and couplings in the strongly coupled regime. It is believed that for $N=2$ the phase transition is 2nd order, so that entropy is conserved in the phase transition; for $N=3$ it is probably weakly 1st order, and for higher $N$ strongly 1st order \cite{Lucini:2003zr,Lucini:2005vg}. Note that in the presence of extra matter with mass near the confinement scale the order of the phase transition can change. Thus we treat the order of the phase transition as an additional uncertainty. In the case of a strongly 1st order phase transition the gluon plasma super-cools below the confinement scale before critical bubbles of the confined phase appear. In such a scenario, the entropy increases during the phase transition, and because of the super-cooling only the lightest glueballs are abundant after the phase transition. The model dependence due to unknown physics of the phase transition enters into the matching onto the UV theory. A single IR theory, specified by giving $S_\mathrm{can}$ and $m$, can match onto different UV theories, with different values of $N$ and possibly different phase transitions. To study the sensitivity of the MPS predictions to this model dependence we look at two simplified cases: a smooth 2nd order phase transition with conserved entropy and a simplified glueball spectrum, and a very strongly 1st order phase transition with a jump in entropy and temperature (see for example \cite{Megevand:2016lpr}). To model the 2nd order phase transition we assume that the full theory is described by $g_*=2(N^2-1)$ bosonic degrees of freedom. The lightest, $\phi$, has mass $m$, and the others have a common mass $M$ which we vary from $1.25m$ to $3m$.
We also assume entropy conservation and that the theory remains in chemical and thermal equilibrium throughout the phase transition. Then all distributions are simply given by Boltzmann distribution functions for the $2(N^2-1)$ degrees of freedom. In the UV, when all masses can be ignored, this reproduces the physics of free $SU(N)$ gluons. In the transition region where $T\sim m$ the ``heavy glueballs'' of mass $M$ pair annihilate into the lightest glueballs $\phi$. In the IR, when the temperature drops below $m$, only the cannibals remain. \begin{figure}[!htbp]% \centering \includegraphics[width=0.4\textwidth]{uv2_bckg.pdf} \includegraphics[width=0.55\textwidth]{uv2_perts.pdf} \caption{Plots of $a^3\rho_\mathrm{can}$ (left) and the $\delta_\mathrm{cdm}$ ratio (right) for different UV completions with 2nd order phase transitions compared to the MC model that gives $R(0.2h \, \mathrm{Mpc}^{-1})=0.9$ and $a_\mathrm{can} = a_\mathrm{eq}$ (\textit{i.e.}\ $m=1.8 \, \mathrm{eV}$, $S_\mathrm{can}/S_\mathrm{SM} = 0.04$, corresponding to the red dot in \Fig{fig:mps_contour}). The black lines correspond to $\Lambda$CDM while the colored lines to the cannibal fluid in different models. The energy densities are continuous in $a$, because entropy is conserved throughout the transition. For the different UV completions we vary the number of UV degrees of freedom $g_*=2(N^2-1)$ corresponding to dark gauge groups $SU(N)$ as well as the masses $M$ of the heavier glueball states. The MPS is slightly less suppressed: the ratio changes from $R=0.9$ to $R=0.905$ for $N=2$ and $M/m=3$ (solid blue) and to $R=0.925$ for $N=7$ and $M/m = 1.25$ (dashed green), respectively.}% \label{fig:uv2} \end{figure} For the very strongly 1st order phase transition we match a UV theory of $N^2-1$ massless gluons onto the IR theory with a jump in entropy at a scale factor $a_\mathrm{can}$. We choose the matching scale factor such that the temperature evaluated in the IR theory (the theory of the cannibal $\phi$) equals $m/3$ at the matching scale. There we match onto the UV theory with $g_*=2(N^2-1)$ massless bosonic degrees of freedom and a jump in entropy (increasing from the UV to the IR) by a multiplicative factor which we vary from 1 to 2. The discontinuity in degrees of freedom and entropy at the matching point also implies a discontinuity in other background quantities. \begin{figure}[!htbp]% \centering \includegraphics[width=0.4\textwidth]{uv1_bckg.pdf} \includegraphics[width=0.53\textwidth]{uv1_perts.pdf} \caption{Plots of $a^3\rho_\mathrm{can}$ (left) and the $\delta_\mathrm{cdm}$ ratio (right) for different UV completions with 1st order phase transitions compared to an MC model that gives $R(0.2h \, \mathrm{Mpc}^{-1})=0.9$ and $a_\mathrm{can} = a_\mathrm{eq}$ (\textit{i.e.}\ $m=1.8 \, \mathrm{eV}$, $S_\mathrm{can}/S_\mathrm{SM} = 0.04$, corresponding to the red dot in \Fig{fig:mps_contour}). For the different UV completions we vary the strength of the discontinuity in the entropy at the matching scale $a_\mathrm{can}$, parametrized by the multiplicative factor $r_S\equiv S_\mathrm{can}/S_{UV}|_{a_\mathrm{can}}$, and the size of the UV gauge group $SU(N)$. The MPS is slightly less suppressed: the ratio changes from $R=0.9$ to $R=0.912$ for $N=3$ and $r_S = 1$ (solid blue) and to $R=0.916$ for $N=7$ and $r_S=2$ (dashed green), respectively.}% \label{fig:uv1} \end{figure} \section{Conclusions} \label{sec:conc} We have studied the possibility that a subdominant component of the dark matter might possess a cannibalistic phase.
If this phase overlaps with matter domination then the most significant impact is on the matter power spectrum. This is particularly interesting because there is a 2--3$\sigma$ tension between direct observations of the matter power spectrum at $8\, h^{-1}\,\mathrm{Mpc}$ scales and the matter power spectrum inferred from $\Lambda$CDM and the precision fit to the CMB data from Planck \cite{Heymans:2013fya,Joudaki:2016mvz,Ade:2015fva,Ade:2013lmv,Kohlinger:2017sxk,Joudaki:2017zdt}. Even if one dismisses the hints for new physics from this source, observations of the matter power spectrum are going to improve significantly in the coming years, with much more precision on the full spectral shape (as a function of $k$) expected. Thus we find it interesting to explore what impact different types of new physics may have on the shape of the matter power spectrum. The simple cannibal model of \Eq{eq:lagrange} has three parameters which characterize its fluid description. Given our preference for strongly coupled UV completions of the simple model, one of them is more or less fixed: $\alpha \sim 4\pi$. Its significance is to determine the scale factor at which the \ensuremath{3\rightarrow 2\ } interactions decouple and the $\phi$ particles stop cannibalizing and turn into cold dark matter. Smaller values of $\alpha$ would lead to a shorter period of cannibalization. The other two parameters characterizing the cannibal fluid are its entropy $S_\mathrm{can}$ and the mass $m$ of the cannibal particle. We conclude this Section with two plots which show the impact of these two parameters on the predicted matter power spectrum shape. \Fig{fig:mps_anr} shows the dependence of the MPS on the decoupling scale $a_\mathrm{nr}$. For fixed $\alpha = 4\pi$ we have roughly $a_\mathrm{nr} \sim 10^5\, a_\mathrm{can} \sim 10^6\, S_\mathrm{can}^{1/3}/m$; thus $a_\mathrm{nr}$ depends on the ratio of $S_\mathrm{can}^{1/3}$ and $m$. This scale marks when cannibalism stops; therefore, any wave mode $k$ which enters the (sound) horizon after this scale cannot be affected by the cannibal fluid oscillations and will take on the same value as in $\Lambda$CDM. Thus $a_\mathrm{nr}$ can be understood to determine the smallest values of $k$ which are suppressed by cannibalism. Therefore changing the ratio $S_\mathrm{can}^{1/3}/m$, which changes $a_\mathrm{nr}$, is equivalent to shifting the MPS suppression curve in the horizontal $k$ direction. For the purposes of this plot we fixed the fraction of the energy density in the cannibal fluid today relative to the ordinary dark matter energy density to $f_{\mathrm{can},\,0} =0.01$ for all models. The $\Lambda$CDM reference power spectrum which we compare to (the denominator of $R$) has $1\%$ of additional dark matter instead of the cannibal fluid so that all models being compared have the same value of $H_0$. This removes the background effect of the additional energy density in the cannibal fluid. \begin{figure}[!htbp]% \centering \includegraphics[width=0.7\textwidth]{rescR_f001.pdf} \caption{MPS ratio $R(k)$ for different values of $a_\mathrm{nr}$ and fixed $f_{\mathrm{can},\,0} =1\%$, normalized such that there is $1\%$ of extra CDM in the $\Lambda$CDM theory in order to cancel some background effects. The later $a_\mathrm{nr}$ is, the more small-$k$ modes can enter the sound horizon and undergo cannibal acoustic oscillations, suppressing the MPS. Note that even though $f_{\mathrm{can},\,0}$ is fixed the large-$k$ MPS suppression is not the same for different $a_\mathrm{nr}$.
This is because if cannibalism is still happening during matter domination (\textit{i.e.}\ $a_\mathrm{nr} > a_\mathrm{eq}$) then $f_\mathrm{can}$ is bigger earlier in the Universe, because of its logarithmic scaling, and this enhances the suppression.} \label{fig:mps_anr} \end{figure} \Fig{fig:mps_mS} shows the dependence on the orthogonal combination of parameters, \textit{i.e.}\ varying $S_\mathrm{can}^{1/3}$ and $m$ while holding their ratio fixed. This keeps the scales in $k$ at which the suppression occurs fixed, but it changes the overall energy density in the cannibal fluid and therefore changes mostly the amplitude of the suppression. \begin{figure}[!htbp]% \centering \includegraphics[width=0.8\textwidth]{rescR_S13m.pdf} \caption{MPS ratio $R(k)$ for different values of the product $m S_\mathrm{can}$ but fixed ratio $S_\mathrm{can}^{1/3}/m$ (\textit{i.e.}\ fixed $a_\mathrm{can}$). This corresponds to different fractions $f_{\mathrm{can},\,0}$ of cannibals, from $1\%$ (purple) to $10\%$ (red). We have normalized $R$ such that there is a corresponding extra amount of CDM in the $\Lambda$CDM theory, in order to cancel out some background effects. With a fixed $a_\mathrm{nr}$ it is clear that the same $k$ modes are suppressed, but the amount of suppression is set by $f_{\mathrm{can},\,0}$.}% \label{fig:mps_mS} \end{figure} Note that this second dependence is similar to that of the MPS on neutrino mass \cite{Lesgourgues:2006nd}. However, the smallest $k$ affected by non-zero neutrino masses is constrained to within a factor of a few of $k_{NR} \sim 0.01\, \mathrm{Mpc}^{-1}$, whereas for cannibals the onset of the suppression in the MPS can lie anywhere within $k\sim 0.001 - 0.1\, \mathrm{Mpc}^{-1}$ (see \Fig{fig:mps_anr}). Finally, we wish to mention the other ``anomaly'' in cosmological precision fits: the discrepancy between the value of $H_0$ inferred from the Planck CMB data (and BAO) within $\Lambda$CDM and the direct measurement of $H_0$ from \cite{Riess:2016jrr,Bonvin:2016crt}. To see if cannibals could also help with this anomaly while remaining consistent with everything else would require a global fit of the cannibal model. \section*{Acknowledgments} \label{sec:ack} We wish to thank Ami Katz and William Sheperd for discussions on phase transitions and David E. Kaplan and Neil Weiner for suggestions which led to the birth of this project. We are also thankful to Julien Lesgourgues and Deanna Hooper for catching a typo in one of our equations. The work of MB and MS is supported by DOE grant DE-SC0015845. The work of RE was supported by Hong Kong University through the CRF Grants of the Government of the Hong Kong SAR under HKUST4/CRF/13. RE acknowledges the members of the particle theory group at Boston University for their very warm hospitality and support during her visit to BU when most of this work was done. MS appreciated the hospitality of the IAS at HKUST where his stay was just amazing.
\section{Introduction} Initial synchronization, or acquisition, of a direct sequence (DS) signal appears to be a quite common first step that a communication receiver has to perform after switching the power on, because many wireless standards either use DS signaling or their preamble, used for synchronization purposes, is a DS signal. These standards include GSM, LTE, UMTS, GPS, GALILEO, WIMAX, Zigbee, and many other wireless standards \cite{ergen2009,halonen2004,farahani2011,prasad2005}. For example, LTE systems use two DS signals, i.e., a 62-length Zadoff-Chu sequence and a 31-length M-sequence, as primary and secondary synchronization signals \cite{karami2015}. On the other hand, the performance of channel estimation, equalization, and data detection algorithms is significantly affected by the accuracy of the initial synchronization \cite{karami2007blind,karami2004maximum,bennis2007performance,karami2013performance,karami2004joint,karami2006decision,karami2007equalization,estarki2007joint}. One solution to improve the synchronization robustness is to use interference cancellation (IC) signal processing \cite{karami2006very,karami2003new,karami2007near,karami2008near}. Notch filters are well-known examples of these. Another application for these IC units is spectrum sensing in cognitive radios. A notch filter may be a separate stand-alone unit in front of a conventional receiver, but it may also be integrated into a frequency-domain receiver, which reduces the complexity, because the required transformations may be shared. Frequency-domain receivers require less complexity and hence have found many applications. One particularly interesting type of filtering is matched filtering, which allows fast acquisition \cite{torrieri2015,SIMON02}. In traditional frequency-domain filtering, where the filter is in one piece, overlap-save (OLS) or overlap-add (OLA) methods have to be applied to properly handle the convolution process \cite{ingle2016}. Moreover, frequency-domain processing may be of interest in multipurpose or universal receivers, because it can be naturally used not only to receive multi-carrier signals such as orthogonal frequency division multiplexing (OFDM) and its variants such as MC-CDMA \cite{pintelon2012} and generalized multi-carrier (GMC) \cite{bica2016} signals, but also to receive single-carrier signals \cite{hassanieh2012}. \par Some systems employ long DS codes and consequently require long filters which are difficult to implement \cite{karami2002efficient,karami2012novel}. In such cases, the filtering has to be divided into blocks and the required filtering process has to be performed using a process known as block or partitioned filtering \cite{rao2014}. This technique is well-known in audio signal processing \cite{gay2012, smith2013}. Even overlapping blocks may be used \cite{KUK05, smith2013}. Block filters may also be adapted to acquire larger Doppler shifts than a single filter; see \cite{BETZ04,saarnisaari2008frequency,mohammadkarimi2017number,mohammadkarimi2015novel} for a time-domain approach. Block filtering is equivalent to DFT filter banks (multi-rate filters) and linear periodic time varying (LPTV) filtering \cite{rao2014}, but also to short time Fourier transform (STFT) based filtering \cite{le2013}. The STFT adds windows, not used in DFT filter banks or LPTV filters, to the overall picture.
The windows may be used to perform the pulse shape filtering, i.e., to match the filter frequency response to that of the signal, and to improve the performance of notch filters by reducing the spectral leakage. However, although essential for proper performance of notch filters, windowing is known to cause signal-to-noise ratio (SNR) losses which are up to 3 dB for good windows. This loss may be reduced almost down to zero dB using overlapping segments, which are also fundamental to STFT-based processing \cite{CAPOZZA00}. An STFT-based correlator DS receiver is presented in \cite{QUYANG01}. In addition to the data demodulation investigated in \cite{QUYANG01}, it may be used for serial search acquisition, which is known to result in a slower acquisition than the matched filtering acquisition investigated herein. This paper presents a frequency-domain, windowed, overlapped block filtering approach for DS signal acquisition. In addition to introducing the filtering and the acquisition concept, its other possible applications in radio communications will be briefly discussed. These include i) addition of a particular notch filter method \cite{SAARNISAARI05} into the receiver chain, ii) processing of different signals like conventional DS, constant envelope DS, OFDM (WIMAX), MC-CDMA and GMC, iii) adapting the receiver to handle large Doppler frequency uncertainties, and iv) possible changes when the receiver turns to the demodulation phase after acquisition. Furthermore, the paper includes an analysis of the computational complexity of the receiver compared to the conventional (non-block) matched filter implementation in the time or frequency domain, as well as an analysis of acquisition probabilities in additive white Gaussian noise (AWGN) and Rayleigh flat fading channels, of which the latter are novel results. The probabilities include conventional detection and false alarm probabilities, maximum-search-based probabilities, and probabilities for maximum search followed by threshold detection, offering a very comprehensive picture of the receiver's performance. These probabilities may be used to set the detection threshold and to predict the receiver's performance in practice. In summary, the paper introduces a flexible baseband architecture that may be used with most existing and future signals and which offers spectrum sensing or narrowband interference rejection capability at a low additional cost. Therefore, the proposed receiver structure is a candidate receiver architecture for future multi-waveform platforms. The rest of the paper is organized as follows. Section \ref{blockfiltering} introduces the filtering concept whereas applications and modifications are discussed in section \ref{applications}. The acquisition process is analyzed in section \ref{analysis} and simulation results confirming the analysis are shown in section \ref{simulations}. Finally, conclusions will be drawn in section \ref{conclusions}. \section{Block Filtering}\label{blockfiltering} This section first discusses block-wise convolution to provide an insight into how the block filtering works and then presents its mathematical frequency-domain basis, the STFT-based time-varying filtering. \subsection{An Example}\label{convolution} A simple example is probably the best way to explain how the block filtering differs from the conventional one. Let $x_1,x_2,x_3,x_4$ be the signal block to be filtered.
In the conventional filtering, the signal is continuously fed into the filter whose impulse response is $h_1,h_2,h_3,h_4$. As a consequence, the response sequence is $x_1h_1, x_1h_2+x_2h_1, x_1h_3+x_2h_2+x_3h_1,x_1h_4+x_2h_3+x_3h_2+x_4h_1\, (\text{desired phase in acquisition}),\, x_2h_4+x_3h_3+x_4h_2,x_3h_4+x_4h_3,x_4h_4$. The block-wise convolution should end up with the same result. In the block processing, the signal and the filter are divided into blocks using equal divisions. In the example, the division of the signal could be (the filter is divided correspondingly) \begin{equation*} \label{eq:example} \begin{bmatrix} x_3 & x_1 \\ x_4 & x_2\end{bmatrix}, \end{equation*} where the block size is $M=2$ and the totality is a $2\times 2$ matrix. In the absence of noise, the signal stream includes zero blocks on both sides. In other words, the signal matrix stream is \begin{equation*} \begin{matrix} 0 & x_3 & x_1 & 0 \\ 0 & x_4 & x_2 & 0 \\ \end{matrix}. \end{equation*} This is divided into $2\times 2$ matrices by discarding the oldest data and taking a new block in. The input matrices, the first on the right, are therefore \begin{equation*} \begin{matrix} \begin{bmatrix} x_1 & 0 \\ x_2 & 0 \\ \end{bmatrix} & \begin{bmatrix} x_3 & x_1 \\ x_4 & x_2 \\ \end{bmatrix} & \begin{bmatrix} 0 & x_3 \\ 0 & x_4 \\ \end{bmatrix} \end{matrix}. \end{equation*} In the block processing, only one input matrix is processed at each time instant, called a filtering cycle. Each block (column) of an input matrix is convolved with the corresponding block (column) of the filter and the results are added together. Then, the next input matrix in the next cycle is received and the operations are repeated. Therefore, $M$ responses are calculated in one time cycle. To obtain the whole response, the operation has to be repeated for all $L$ possible cycles. Since the length of a block convolution is $2M-1$, the tails have to be added to the corresponding convolutions in the next cycle. This is clarified next. It is assumed that each block (column) of the signal (matrix) passes a filter block from bottom to top. The corresponding convolution results are added together from each filtering cycle. The cycles are separated by bars and the tails are below the dotted lines. This results in \begin{equation*} \begin{matrix} x_1h_1 & | & x_1h_3 + x_3h_1 & | & x_3h_3 \\ x_1h_2+x_2h_1 & | & x_1h_4+x_2h_3 & | & x_3h_4+x_4h_3 \\ & | & + x_3h_2+x_4h_1 & | &\\ \ldots & | & \ldots & | & \ldots \\ x_2h_2 & | & x_2h_4 + x_4h_2 & | & x_4h_4 \\ \end{matrix}. \end{equation*} The tails of the convolution have to be added to the head of the convolution in the next cycle. More precisely, let $c_k=[h_k\ t_k]$ denote the convolution result in cycle $k$, where $h_k$ is the head (first $M$ samples) and $t_k$ the tail. In the next cycle $c_{k+1}=[h_{k+1}+t_k\ t_{k+1}]$. Therefore, the response of the block convolution becomes equivalent to the conventional convolution. As a summary: the signal stream is fed block by block through the filter, the column-wise convolution between the signal and the filter is performed, the convolution results are added column-wise together, and the tails have to be added to the head of the next cycle. Since the convolution in the time-domain might equally well be performed in the frequency-domain, in each cycle the FFT of the signal (matrix) can be element-wise multiplied by the FFT of the filter (matrix) and the product is then inverse transformed to obtain the time-domain convolution.
After that, the convolution results are added together and OLA processing is performed. However, only one FFT per incoming signal block has to be calculated, because these transformations flow matrix-wise through the filter. \par By using a similar example, one can easily see that in the overlapping segments case (like $x_1,x_2;\, x_2,x_3;\, x_3,x_4$), the response of the block-wise convolution is not equal to that of the conventional convolution. Instead, the original signal and its overlapped version have to be processed separately and the results have to be added afterwards. The filter has to be overlapped correspondingly. \subsection{STFT-Based Block-Filtering} All this is put into the STFT framework as follows. Let $x(n),\ n=0,\ldots,N-1$ be a discrete signal. Its STFT is \cite{le2013} \begin{equation} \label{eq:analysis} X_{lm}=\sum_{n=0}^{N-1} x(n)w(n-lR)e^{-j2\pi mn/M}, \end{equation} where the analysis window $w(n)$ has length $M$ with non-zero values being in the interval $n=0,\ldots,M-1$. It is obvious that the signal is divided into blocks of $M$ samples and the blocks may overlap depending on the parameter $R$; if $R=M$ there is no overlapping, but just consecutive blocks. As a result of the analysis process \eqref{eq:analysis}, the signal is represented by an $M\times LM/R$ array of coefficients $X_{lm}$. For simplicity, assume that $N=LM$ and $M/R=1,2,4,\ldots$. The case $M=R$ is called the critical sampling case. The selected restrictions yield a simple implementation through the FFT, but are still quite flexible. A more general case is studied in \cite{XIQI06}, but without considering signal acquisition. There are several alternatives to recover the signal \cite{le2013}. One particularly interesting form is \begin{equation} \label{eq:synthesis} x(n)=\sum_{l=0}^{L-1}g(n-lR)\sum_{m=0}^{M-1} X_{lm}e^{j2\pi mn/M}, \end{equation} where $g(n)$ is the synthesis window of length $M$. Assuming that $w(n)$ and $g(n)$ satisfy some restrictions \cite{le2013}, the signal $x(n)$ can be perfectly reconstructed (synthesized) from its STFT coefficients $X_{lm}$. In other words, the STFT columns are first inverse-Fourier-transformed (rightmost sum in \eqref{eq:synthesis}), then windowed and finally added together in OLA fashion. Note that since the (I)FFT is a linear operator the order of addition and (I)FFT can be changed. Thus, if the synthesis window is rectangular, the complexity may be reduced by performing the addition before the IFFT. This naturally is a sensible operation only if partial filtering results are not required, as in Doppler processing or in filtering of several symbols during a filtering cycle. \par Let $H_{lm}$ be the STFT of the filter. It can be shown \cite{le2013} that the output of the filter is the inverse STFT of $X_{lm}H_{lm}$ (element-wise product). Thus, the filtering includes multiplication of the signal's STFT by that of the filter, and inverse transformation of the product. In the paper's case, the frequency response of the filter is zero outside an interval. Therefore, the output is computed by multiplying a finite portion of the signal's STFT with the filter's STFT. Furthermore, to handle the heads and tails properly, the FFT size has to be $2M$. The overlapping effect is taken into account by stepping the input signal STFT stream in steps of size $M/R$, the number of overlapping segments. The filtering process is illustrated in Fig. \ref{fi:STFTfilter}.
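As a concrete cross-check of the example in the previous subsection, the following minimal sketch (Python; rectangular windows and non-overlapping blocks, i.e., $R=M$, and for clarity the partial results are combined directly by their output offsets rather than streamed cycle by cycle) performs the partitioned FFT convolution and verifies that it matches the ordinary linear convolution:
\begin{verbatim}
import numpy as np

def block_fft_convolve(x, h, M):
    """Partitioned overlap-add FFT convolution (len(x) = len(h) = L*M)."""
    L = len(x) // M
    # One zero-padded FFT of size 2M per signal and per filter block.
    Xb = np.fft.fft(x.reshape(L, M), n=2 * M, axis=1)
    Hb = np.fft.fft(h.reshape(L, M), n=2 * M, axis=1)
    y = np.zeros(2 * L * M - 1, dtype=complex)
    for k in range(L):        # signal block k ...
        for l in range(L):    # ... against filter block l
            seg = np.fft.ifft(Xb[k] * Hb[l])[: 2 * M - 1]
            # heads and tails overlap-added at output offset (k+l)*M
            y[(k + l) * M : (k + l) * M + 2 * M - 1] += seg
    return y

x = np.array([1., 2., 3., 4.])
h = np.array([4., 3., 2., 1.])
print(np.allclose(block_fft_convolve(x, h, 2), np.convolve(x, h)))  # True
\end{verbatim}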
Obviously, if $N=M=R$ the described block FFT filtering method reduces to the conventional FFT OLA filtering \cite{gay2012}. \begin{figure*} \centering \includegraphics[width=16cm]{Stft} \caption{An illustration of the windowed and overlapped block filtering approach.}\label{fi:STFTfilter} \end{figure*} \subsection{Complexity Comparison} Herein, the complexities of generic time-domain, conventional frequency-domain, and block filtering are compared in terms of complex multiplications (CM). In the time-domain each output needs $N$ CM and there are $N$ outputs, so that the total complexity is $N^2$ CM. The conventional frequency-domain OLA processing needs an FFT and an IFFT of size $2N$ and multiplication by the filter's frequency response of size $2N$, yielding a total complexity of $2N(\log_2N+1)$ CM. The proposed block filtering (assuming rectangular windows) requires an FFT of size $2M$, which has to be repeated $LM/R$ times, and multiplication by the filter of size $2M\times LM/R$, which has to be repeated $LM/R$ times. The conventional form still needs $LM/R$ IFFTs of size $2M$, whereas the simpler form has only $L$ IFFTs. Therefore, the total complexity of the conventional form is $\tfrac{2MN}{R}\big(\log_2 2M+\tfrac{LM}{R}\big)$ CM and that of the simpler form $\tfrac{2MN}{R}\big(\tfrac{1}{2}(1+\tfrac{R}{M})\log_2 2M+\tfrac{LM}{R}\big)$ CM. It can be observed that the complexity of the block filtering becomes equal to that of the conventional OLA filtering if $N=M=R$, as it should. The complexity comparison leads to the conclusion that the block filtering is more complex than the conventional one, but without overlapping the complexity increase is marginal. In addition, both frequency-domain versions are simpler than the generic time-domain implementation. However, possible windowing increases the complexity. \section{Applications}\label{applications} \subsection{Matched Filtering for Acquisition} Symbol or chip synchronization is conventionally performed by correlation, but this results in a slow synchronization phase, see, e.g., \cite{torrieri2015}. A way to speed it up is to implement several correlators in parallel to simultaneously compute a number of test variables (search cells) \cite{BRAASCH07}. If the signal to be synchronized consists of $N$ symbols or chips, then the receiver usually has $qN$ search cells in the time domain, where $q$ is the oversampling factor. Additionally, there might be search cells in frequency, as will be seen later, but in this section only time uncertainty will be investigated. It is reminded that the receiver conventionally first includes a pulse shape filter whose response is fed into the correlator on a one-sample-per-symbol or -chip basis. In the oversampling case, the response has to be split into $q$ streams and each stream is separately processed \cite{CHAPMAN01}. Alternatively, the pulse shape may be taken into account in the correlation \cite{MILLER06}. In this case, the receiver does not include a separate pulse shaping filter. This processing may be called waveform-based correlation, whereas the other processing may be called training-symbol- or chip-sequence-based correlation. The STFT-based block-filtering may adopt both ways. In the former, the analysis window, indeed, may be matched to form a suitable response. Another way to speed acquisition up is to calculate the test variables through matched filtering either in the time or frequency-domain \cite{SAARNISAARI04_PLANS2,BRAASCH07,MILLER06}.
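For illustration, the matched-filter test variables themselves can be computed with a single full-length frequency-domain correlation. The sketch below (Python; the preamble, delay, and noise level are hypothetical toy values, and for brevity this is the non-block form) recovers the inserted code phase:
\begin{verbatim}
import numpy as np

def mf_outputs(y, s):
    """Matched-filter outputs (one test variable per lag) via FFT."""
    N = len(s)
    Y = np.fft.fft(y, 2 * N)   # zero-pad to avoid circular wrap-around
    S = np.fft.fft(s, 2 * N)
    return np.fft.ifft(Y * np.conj(S))[:N]

rng = np.random.default_rng(1)
s = np.sign(rng.standard_normal(127))                 # +/-1 toy DS preamble
y = np.roll(s, 40) + 0.5 * rng.standard_normal(127)   # delayed + noise
print(np.argmax(np.abs(mf_outputs(y, s)) ** 2))       # -> 40, the delay
\end{verbatim}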
In the serial search matched filter (MF) acquisition the outputs of the MF (computed in any possible way) are compared to a threshold in a serial fashion. In the maximum search a period of outputs is calculated and the maximum is found. This maximum is then compared to the threshold. In both cases, it is claimed that the signal is present and symbol or chip synchronization has been acquired if the threshold is exceeded. This will be considered in more detail later in section \ref{analysis}. \subsection{Different Modulations} It is obvious that OFDM systems are a special case of the synthesized signal \eqref{eq:synthesis}, i.e., the window is rectangular and $L=1$. Conventional WIMAX synchronization is performed in the time domain. In WIMAX the preamble DS code is put into the even subcarriers whereas the odd ones are zero. This makes the time-domain signal periodic with two periods of size $N/2$, where $N$ is the number of subcarriers. The acquisition unit performs sliding correlation between two consecutive $N/2$ blocks \cite{SCHMIDL97}. There are also other variants and also a possibility that the matched filter and frequency-domain processing are used \cite{PUSKA07}. It should also be noticed that frequency-domain processing may be used to compute the sliding correlation. However, sample-by-sample sliding results in a high complexity both in the time and frequency domains. Therefore, the sliding step may be larger, e.g., $N/4$ or $N/8$. This is closely related to the overlap processing. The generalized multi-carrier (GMC) transmission technique presented, e.g., in \cite{hassanieh2012,kliks2011,HUNZIKER03,XIQI06}, is a possible candidate for future wireless communication systems. This is due to its better time-frequency localization properties, which may reduce intersymbol and intercarrier interference, and remove the need for the cyclic prefix needed in conventional OFDM systems \cite{bica2016}. The referenced papers consider different aspects of the GMC signal but not synchronization. In \cite{hassanieh2012} it was just mentioned that synchronization may be performed on a subcarrier basis. The GMC signal may be explained as follows. The STFT coefficients $X_{lm}$ are the transmitted data symbols. In this case the signal \eqref{eq:synthesis} is called the GMC signal, or, if considered during the interval $0,\ldots,N-1$, a GMC symbol, corresponding to the OFDM symbol definition. If a GMC system uses a known preamble symbol or symbols, its acquisition can be performed just as described here. To the authors' knowledge, this is the first paper considering acquisition of GMC signals. Linearly modulated single carrier signals can be obtained by setting $M=1$. However, to keep the receiver universal one might want to receive these, too, using frequency-domain processing instead of conventional time-domain processing. This is possible since filtering, essential to all receivers, may be done either in the time or frequency-domain. This paper has readily shown how this is performed using frequency-domain block filtering. It is worth noting that also constant envelope DS signals may be received using a conventional matched filter \cite{BAIER84}, and thus the proposed frequency-domain block filter. \subsection{Doppler Processing} In the ideal Doppler processing, the input signal is transformed to different frequency offsets corresponding to the possible Doppler values.
In a simpler solution, the filter is divided (partitioned) into blocks and the outputs of these blocks are then Fourier transformed as shown in \cite{BETZ04} (and references therein). The reference uses time-domain processing, but as already shown, this partitioned matched filtering can be done also in the frequency-domain. If the Doppler is changing (due to accelerated motion) between the blocks, one may possibly search over all possible (but sensible) Doppler tracks in the resulting time-frequency uncertainty grid. The acquisition probabilities concerning the Doppler processing are analyzed in \cite{BETZ04} and not repeated in this paper. Another way to increase Doppler resistance is to combine the partial responses either non-coherently or in a differentially coherent way \cite{MOON05}. \subsection{Long Codes} Some systems have a basic long code and direct implementation of a filter matched to it may be infeasible. Block filtering is a possible solution, with shorter elements that are feasible to implement. Another case where block matched filters may be needed is when a long code is divided into subintervals and each subinterval contains a symbol. In this case the responses of the partitioned matched filters are the variables used for symbol demodulation (naturally sampled at the symbol synchro position). This may be needed in a long code system where the data rate is adjusted using code partitioning, but where for some reason short DS codes are not desired. A possible example where block filtering may be applied is the UMTS system, where the uplink preamble consists of several scrambled repeats of a short code \cite{sesia2015}. \subsection{Spectrum Sensing} Frequency-domain processing allows easy adoption of spectrum sensing algorithms since the FFT is readily included in the processing chain. Spectrum sensing may be applied in cognitive radios to find available spectrum holes \cite{karami2011cluster}. Another application is interference cancellation (IC), needed especially in military systems. In these cases the process is known as notch filtering, but in both cases the technique is basically the same. The window, inherent to the proposed receiver, is helpful since it reduces spectral leakage. However, a drawback of the windowing is the SNR loss, which may be 3 dB. Luckily, overlapping, also inherent to the receiver, reduces this loss almost down to zero dB. \par An important aspect to note when doing spectrum sensing or IC is that if the desired underlying signal is not flat, or white, in the frequency domain, it may also be detected (if the SNR is high enough) or, what is worse, canceled. To avoid this unpleasant phenomenon, the receiver should be designed using one sample per symbol/chip processing, i.e., the receiver should have a traditional pulse shaping filter at the front and parallel processing of the over-sampled streams. In this case we may lose an advantage of windows, but the complexity remains (almost) the same. \subsection{Demodulation} Once the acquisition is performed, the receiver turns its attention to tracking and data demodulation. In this phase the receiver may continue matched filtering if the signal has a DS component. The block filtering allows different code lengths; short codes are needed at high data rates and long codes are used at low data rates or when DS processing gain is needed for interference tolerance. In addition, the time varying nature \cite{le2013} of the filter allows de-spreading of scrambled signals.
However, in this case the filter's or correlator's frequency response has to be updated frequently. Alternatively, the receiver uses correlation in the DS component case, a pure FFT in the OFDM case, or frequency-domain pulse shape filtering in the single carrier case. In the latter the filter may filter several symbols in one filtering cycle and the filter's frequency response is just the pulse shape. This pulse shaping goal may also be achieved using a suitable analysis window. \section{Acquisition Analysis}\label{analysis} One usually requires detectors insensitive to signal level variations, called constant false alarm rate (CFAR) detectors. These CFAR detectors may also be derived using the generalized likelihood ratio principle \cite{kay2013}. A CFAR detector is presented in \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH}. Let $\vec{y}_k=a_k\vec{s}+\vec{n}_k$ denote the $k$th received signal including $N$ samples, where $a_k$ is a channel amplitude, $\vec{s}$ a preamble signal such that $\norm{\vec{s}}=1$ ($\norm{\ }$ denotes the Euclidean norm) and $\vec{n}_k$ a complex white Gaussian noise with variance $\sigma^2$. In addition, let $r(n)$ be an output of the MF (a test variable). If the signal is not present $a_k=0$. The detector is \begin{equation} \abs{r(n)}^2 > \gamma \norm{\vec{s}}^2\norm{\vec{y}_k}^2, \end{equation} where $\gamma$ is a parameter depending on the desired false alarm rate. The average signal power on the right-hand side makes the detector a CFAR detector. It basically is an estimator of the thermal noise level, but it also makes the detector insensitive to interference. Note that if an IC unit is used, the average signal power has to be measured after the IC unit, i.e., after mitigation. This is so because mitigation may remove the interference that otherwise could prevent detection. In other words, the mean signal power would otherwise be too high. Another concern is the effect of the window on the threshold, since windowing affects the signal power. This is more important if the input signal is windowed but the reference (filter) is not. Let $\vec{w}_a$ and $\vec{w}_r$ denote the window vectors used for the signal and filter analysis. Then, one has to use the power difference of the windows as a normalizing factor. Furthermore, overlapping means that the computed response is replicated $M/R$ times. As a consequence, the signal power should be modified as \begin{equation} \norm{\vec{s}}\equiv \frac{M}{R}\frac{\norm{\vec{w}_a}}{\norm{\vec{w}_r}}\norm{\vec{s}}. \end{equation} It can be shown \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH} that the false alarm probability $P_{\text{FA}}$, i.e., the probability that the threshold is exceeded even though the signal is not present, can be approximated as \begin{equation} P_{\text{FA}}=e^{-\gamma N}, \end{equation} from which $\gamma$ can easily be obtained as $\gamma=-\frac{1}{N}\ln(P_{\text{FA}})$. Another useful probability is the probability that the maximum of the test variables exceeds the threshold when the signal is not present. It can be shown to be \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH} \begin{equation} P_{\text{FA,M}}=1-(1-P_{\text{FA}})^N.
The probability of detection $P_{\text{D}}$, i.e., the probability that the test cell exceeds the threshold when the actual synchro position is investigated, can be approximated \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH} as
\begin{equation} \label{eq:PDawgn}
P_{\text{D}}=Q_0\big(\sqrt{2\mu},\sqrt{2\gamma(N+\mu)}\big),
\end{equation}
where $\mu=\abs{a_k}^2/ \sigma^2$ is the signal-to-noise ratio (SNR) of the preamble signal and $Q_m(a,b)$ is the generalized Marcum Q-function \cite{ingle2016}. Another useful probability is the probability $P_{\text{m}}$ that the maximum occurs at the actual synchro position. The approximation in \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH} is not too accurate. Therefore, another attempt that results in a closer approximation is provided. Briefly explained, the analysis tool in \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH} considers the distribution of $r(n)$ as it is and assumes that $\norm{\vec{y}_k}^2$ converges to its average. This simplifies the analysis since only one random variable has to be considered, but the method still has its roots in probability and statistics \cite{ROHATGI2015}. Now, at the synchro position $r(n)$ is a complex Gaussian variable with mean $a_k$ and variance $\sigma^2$. Thus, $\abs{r(n)}^2$ has a non-central chi-square distribution. Assuming insignificant sidelobes on the autocorrelation function of the preamble signal, the non-synchro positions are zero mean Gaussian variables with variance $\sigma^2$. Now, the probability of interest is $P_{\text{m}}=P(\abs{r_{\text{synchro}}(n)}^2\ > \abs{r_{\text{non-synchro}}(i)}^2, \ \forall i)$. A direct application of the analysis principle yields the result in \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH}. However, this probability is equivalent to the probability that the decision variable at the synchro position is larger than the largest of the non-synchro variables. It is well-known that 98 \% of Gaussian variables are within 2.33 standard deviations from the mean. Thus, the novel approximation is
\begin{equation}\label{eq:pm}
P_{\text{m}}=Q_0\big(\sqrt{2\mu},\sqrt{2(2.33)^2}\big).
\end{equation}
For very long (large $N$) preamble signals the confidence probability may be higher, e.g., 99.5 \%, since it is natural that then, on average, the largest test variable at the non-synchro positions may be larger than with short signals. See \cite{TURUNEN07} for another solution to this problem. Still another probability of interest is the probability that the maximum exceeds the threshold, regardless of whether or not it occurs at the synchro position. This is \cite{SAARNISAARI06-DS-FH}
\begin{equation}
P_{\text{D,M}}=1-(1-P_{\text{FA}})^{N-1}(1-P_{\text{D}}).
\end{equation}
Finally, the probability that the maximum is at the synchro position and exceeds the threshold is $P_{\text{M}}=P_{\text{m}}P_{\text{D,M}}$.
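These expressions are straightforward to evaluate numerically; a sketch using SciPy follows. The text writes $Q_0$; the first-order Marcum function, which corresponds to the non-central chi-square distribution of a complex Gaussian decision variable, is used below as an assumption.
\begin{verbatim}
import numpy as np
from scipy.stats import ncx2

def marcum_q(m, a, b):
    """Generalized Marcum Q_m(a, b) via the non-central chi-square survival fn."""
    return ncx2.sf(b ** 2, df=2 * m, nc=a ** 2)

def p_d_awgn(mu, gamma, N):
    # order m = 1 assumed (two real degrees of freedom of a complex Gaussian)
    return marcum_q(1, np.sqrt(2 * mu), np.sqrt(2 * gamma * (N + mu)))

def p_m_awgn(mu, conf=2.33):
    """Probability that the synchro cell provides the maximum (98 % confidence)."""
    return marcum_q(1, np.sqrt(2 * mu), np.sqrt(2 * conf ** 2))

N, gamma = 64, -np.log(1e-2) / 64
for snr_db in (6, 10, 14):
    mu = 10 ** (snr_db / 10)
    print(snr_db, p_d_awgn(mu, gamma, N), p_m_awgn(mu))
\end{verbatim}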
The above results are derived for the additive white Gaussian noise (AWGN) case. In fading channels the situation is different. In Rayleigh fading channels, at the synchro position the variable $r(n)$ follows a zero mean complex Gaussian distribution with variance $\sigma_s^2+\sigma^2$, where $\mu=E\{\abs{a_k}^2\}/\sigma^2=\sigma_s^2/\sigma^2$ is the average SNR, i.e.,
\begin{equation}
P(\abs{r(n)}^2)\equiv P(y)=1-e^{-y/(\sigma_s^2+\sigma^2)}.
\end{equation}
If the analysis tool in \cite{SAARNISAARI04_ISSSTA,SAARNISAARI06-DS-FH} is adopted, it follows that
\begin{equation}
P_{\text{D}}=e^{-\gamma N/(\mu+1)},
\end{equation}
whereas the paper's approach yields
\begin{equation}
P_{\text{m}}=e^{-(2.33)^2/(\mu+1)}.
\end{equation}
In Rician fading channels, the decision variable of interest follows a complex Gaussian distribution with mean $a_k$ and variance $\sigma_s^2+\sigma^2$. Let the ratio of the powers of the constant and random elements be $\abs{a_k}^2/\sigma_s^2=\kappa$ and let $\mu=\abs{a_k}^2 / \sigma^2$ be the SNR of the constant element. Then, $\sigma_s^2+\sigma^2=\sigma^2(\mu/ \kappa+1)$. As a consequence,
\begin{equation}
P_{\text{D}}=Q_0\Big(\sqrt{\frac{2\mu}{\tfrac{\mu}{\kappa}+1}},\sqrt{\frac{2\gamma(N+\mu)}{\tfrac{\mu}{\kappa}+1}}\Big),
\end{equation}
which reduces to \eqref{eq:PDawgn} in the AWGN channel (as it should) if the random element is weak, since when $\sigma_s^2=0$, then $\kappa=\infty$. Correspondingly,
\begin{equation}
P_{\text{m}}=Q_0\Big(\sqrt{\frac{2\mu}{\tfrac{\mu}{\kappa}+1}},\sqrt{\frac{2(2.33)^2}{\tfrac{\mu}{\kappa}+1}}\Big).
\end{equation}
\subsection{More Exact Analysis}
This section provides a more exact analysis of $P_{\text{m}}$ in AWGN, Rayleigh, and Rician channels.
\subsubsection{AWGN Channel}
Obviously, in an AWGN channel $P_{\text{m}}$ can be calculated as
\begin{equation}\label{eq:pm_awgn}
P_{\text{m}}^{AWGN}=Q_0\big(\sqrt{2\mu},\alpha_{N-1}\big),
\end{equation}
where $\alpha_{N}$ is defined as
\begin{equation}
\alpha_{N}=\frac{E\{\max \abs{r(n)}^2, n=0,\ldots,N-1\} }{E\{\abs{r(0)}^2\} },
\end{equation}
where $E\{.\}$ is the expectation operator and $r(0)$ is the decision variable at the actual delay. The value of $\alpha_{N}$ closely follows a logarithmic function of $N$. In random channels, the expression \eqref{eq:pm_awgn} has to be averaged over the channel variations, i.e., the integral
\begin{equation}
P_{\text{m}}=\int_{0}^{\infty} Q_0\big(\sqrt{2 \mu},\alpha_{N-1}\big) P(\mu)\, d\mu
\end{equation}
has to be solved, where $P(\mu)$ is the distribution of the SNR in the channel.
\subsubsection{Rayleigh Channel}
In a Rayleigh channel
\begin{equation}\label{eq:pm_rayl1}
P(\mu)=\frac{\mu}{\bar{\mu}} \exp(-\frac{\mu^{2}}{\bar{\mu}})
\end{equation}
where $\bar{\mu}$ is the average SNR. To solve the above integral, the generalized Marcum Q-function is replaced by its integral form. After some manipulations we obtain
\begin{equation}\label{eq:pm_rayl2}
P_{\text{m}}^{Rayl}=(\frac{K \bar{{\mu}}}{K \bar{{\mu}}+1})^{1-K} \exp(-\frac{K \alpha_{N-1}}{K \bar{{\mu}}+1}),
\end{equation}
where $K$ is the number of PN sequences used for synchronization.
\subsubsection{Rice Channel}
In a Rician channel
\begin{equation}\label{eq:pm_rice1}
P(\mu)=\frac{\mu}{{\tilde{\mu}}} \exp(-\frac{\mu^{2}+\mu_{0}}{{\tilde{\mu}}}) I_{0}\big(\frac{\sqrt{\alpha_{N-1}}\mu}{\tilde{\mu}}\big),
\end{equation}
where $\tilde{\mu}$ is the average of the variable part of the SNR, $\mu_{0}$ is the fixed part of the SNR such that $\bar{\mu}=\mu_{0}+\tilde{\mu}$, and $I_{0}(.)$ is the zero order modified Bessel function. To solve the needed integral, we have to replace the generalized Marcum Q-function with its equivalent integral form, whereas the Bessel function is replaced by its Taylor series expansion.
This series of integrals results in
\begin{equation}
\begin{split}
P_{\text{m}}^{Rice}=&\sum_{n=0}^{\infty}\frac{2^{K-1}(K^2{\tilde{\mu}})^{n}}{(K {\tilde{\mu}}+1)^{n+1}}F\Big(n+1,1,\frac{\mu_{0}}{2{\tilde{\mu}}(K {\tilde{\mu}}+1)}\Big)\\
&\cdot e_{n+K}(K\alpha_{N-1}),\label{eq:pm_rice2}
\end{split}
\end{equation}
where $F(.,.,.)$ is the hypergeometric function and $e_{n+K}(K\alpha_{N-1})$ is the incomplete exponential function defined as
\begin{equation}
e_{n+K}(K\alpha_{N-1})=\sum_{m=0}^{n+K-1}(K\alpha_{N-1})^{m}.
\end{equation}
The solution \eqref{eq:pm_rice2} converges slowly. Convergence can be sped up by manipulating \eqref{eq:pm_rice2} into the form
\begin{equation}
\begin{split}
P_{\text{m}}^{Rice}=&1-\sum_{n=0}^{\infty}\frac{2^{K-1}(K^2{\tilde{\mu}})^{n}}{(K {\tilde{\mu}}+1)^{n+1}}\\
&F\Big(n+1,1,\frac{\mu_{0}}{2{\tilde{\mu}}(K {\tilde{\mu}}+1)}\Big)\\
&\cdot \Big(\exp(K\alpha_{N-1}) -e_{n+K}(K\alpha_{N-1})\Big).\label{eq:pm_rice3}
\end{split}
\end{equation}
\section{Numerical Results}\label{simulations}
In this section the proposed windowed frequency-domain acquisition technique is simulated and then compared to the analytical results, which provide bounds on the acquisition performance. As a reference, it is reminded that conventional non-windowed, non-overlapped approaches achieve the theoretical bounds in the AWGN channel. Herein, we rely on the premise that if the analysis holds in Rayleigh fading channels, it holds also in AWGN channels. Therefore, only Rayleigh fading channels are used in the simulations. In all the simulations a 64-chip preamble sequence is used. It is a 63-chip Gold code extended by one chip. The signal is sampled at one sample per chip. Simulation results are averaged over 1000 independent trials. SNR is expressed per preamble sequence. The desired false alarm rate was quite high, $10^{-2}$. Figure \ref{Fig_Flat_rayl} shows the results in a flat Rayleigh fading channel when $M=R=32$, i.e., overlapping is not used, and the window is rectangular. The results show that the simulated and theoretical results coincide, i.e., the approximative analysis is a proper one.
\begin{figure} \centering \includegraphics[width=1\linewidth]{rayleigh.eps} \caption{Simulation and analysis results for a flat Rayleigh fading channel.}\label{Fig_Flat_rayl} \end{figure}
Figure \ref{Fig_freqsel} shows interesting results concerning a frequency selective Rayleigh fading channel, which has two equal power multipath components with one chip separation. SNR is defined per path. In practice, the receiver does not know whether the detected signal sample is from the first or the second path. Therefore, the probability that either the first or second path exceeds the threshold ($P_{D2}$) and the probability that either the first or second path provides the maximum ($P_{m2}$) are also reported. It can be concluded from the results that diversity in the multipath channels is very beneficial for the synchronization. Of course, this benefit is lost if the second path is weak and the situation becomes close to that in a single path channel. It can be seen that multipath propagation causes SNR losses to $P_D$. This is due to non-zero autocorrelation sidelobes, which are inversely proportional to the preamble length. Another observation is that $P_m$ becomes close to one half. This is easily understood since half of the time the second path is stronger than the first path if the paths have equal power.
It appears, although not shown in the figure for clarity reasons, that a good explanation for $P_{i2}$, where $i$ is either $D$ or $m$, is
\begin{equation} \label{eq:diversity}
P_{i2}=1-\prod_k \big(1-P_{i}(\text{SNR}_k)\big),
\end{equation}
where the probabilities are expressed as a function of SNR and $\text{SNR}_k$ is the SNR of the $k$th path. This result follows from the reasoning that the probability that at least one path exceeds the threshold is the complement of the probability that all paths remain below it.
\begin{figure} \centering \includegraphics[width=1\linewidth]{freqsel.eps} \caption{Simulation results for a frequency selective Rayleigh fading channel. Theoretical values are for the flat fading channel.}\label{Fig_freqsel} \end{figure}
The last set of simulations concerns the effects of windowing and overlapping. The analysis window used is the Kaiser window with parameter 8, which has very low tail values. The window for the reference is rectangular. The overlapping is either none, 50 \%, or 75 \%, i.e., $R=$ 64, 32 or 16, while $M=N=64$. The channel is a flat fading Rayleigh channel. The simulated false alarm rates with the original threshold setting are 0.01 (the desired value, as it should be), 0.16, and 0.54, respectively, without the window, and $4.7\times 10^{-6}$, $3.4\times 10^{-4}$, and 0.05 with the window (640000 samples). This shows that threshold tuning is needed if a desired false alarm rate is required with windows and overlapping. The trend seems to be that a non-rectangular window decreases the false alarm rate, whereas overlapping increases it. As a consequence, simulations with the original threshold setting would not be fair with respect to the false alarm rate. Therefore, a proper threshold (a multiplier of the original) was determined by simulations for the overlapped and windowed cases. The results with equal false alarm rates are shown in Fig. \ref{fi:wind_overlap}. The results show that overlapping does not affect the performance significantly, but windowing does. Overlapping and windowing together are even worse (by 2 dB) than just windowing the conventional non-blocked MF ($M=N=R=64$). The windowing loss with the conventional MF is 2--3 dB with this window.
\begin{figure} \centering \includegraphics[width=1\linewidth]{windoverlap.eps} \caption{Probability of detection $P_{D}$ simulation results for a flat Rayleigh fading channel when a window and overlapping are used.}\label{fi:wind_overlap} \end{figure}
The last result is contrary to the expectation that overlapping reduces windowing losses. Therefore, the last simulations also use a window for the reference to see if that affects the situation. Fig. \ref{fi:wind_ovlap_rwin} demonstrates that adding the reference window reduces the performance (decreases sensitivity), but now the overlapping does not decrease it further. The total loss compared to the theory is 5 dB. The results indicate that if the same sensitivity is required, then the windowed cases have to have a higher false alarm rate. Perhaps the mentioned expectation results from the fact that overlapping increases sensitivity if the threshold is kept constant. Therefore, the paper's results might not be in contradiction with the earlier ones.
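For reference, the flat-Rayleigh experiment of this section can be sketched in a few lines of NumPy. A random $\pm 1$ preamble stands in for the extended Gold code, and the rectangular window with no overlap corresponds to the first experiment above; all names and numbers are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
N, p_fa, trials, delay = 64, 1e-2, 1000, 13
gamma = -np.log(p_fa) / N
s = rng.choice([-1.0, 1.0], N) / np.sqrt(N)      # unit-norm stand-in preamble

for snr_db in (6, 10, 14):
    mu = 10 ** (snr_db / 10)
    det = hit = 0
    for _ in range(trials):
        a = np.sqrt(mu / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
        n = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
        y = a * np.roll(s, delay) + n
        # all N circular lags of the matched filter in the frequency domain
        r = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(s)))
        thr = gamma * np.sum(np.abs(y) ** 2)     # ||s|| = 1
        det += np.abs(r[delay]) ** 2 > thr       # P_D estimate
        hit += np.argmax(np.abs(r)) == delay     # P_m estimate
    print(snr_db, det / trials, hit / trials,
          np.exp(-gamma * N / (mu + 1)),         # analytic P_D, flat Rayleigh
          np.exp(-2.33 ** 2 / (mu + 1)))         # analytic P_m approximation
\end{verbatim}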
\begin{figure} \centering \includegraphics[width=1\linewidth]{windoverlaprwin.eps} \caption{Probability of detection $P_{D}$ simulation results for a flat Rayleigh fading channel when analysis and reference windows and overlapping are used.}\label{fi:wind_ovlap_rwin} \end{figure}
\section{Conclusions}\label{conclusions}
The paper has provided insight into the windowed, overlapped, frequency-domain block filtering approach by explaining it and then showing (some of) its possible applications in radio communications. It was shown that this filtering approach may be used as a universal baseband receiver in communication systems, i.e., a single baseband architecture was shown to be able to receive all kinds of signals. This is especially helpful in multipurpose platforms, which can (hereafter) be based on a single architecture, simplifying the design. Further investigations will be needed to see if this would also reduce other aspects of the receivers, such as power consumption or silicon area. In particular, the proposed approach was applied to signal acquisition with some novel analysis of the acquisition probabilities in fading channels. This application and the provided analysis and simulation results verify the usefulness of the architecture for a wide range of channel conditions. In addition, the simulations showed that windowing reduces sensitivity if a desired false alarm rate is the receiver design goal. Therefore, one has to use windows with care, e.g., only in environments where they are really needed. One future research topic with the proposed filter is whether a proper synthesis window could be used to reduce the sensitivity losses the windows produce. Such a finding would improve the usefulness of the filter. A way to find an answer might be the dual window. Another open question is the automatic detection threshold determination based on a given false alarm rate with overlapping blocks and windows. \biboptions{numbers,sort&compress} \bibliographystyle{elsarticle-num}
\section{} \begin{acknowledgments} We acknowledge funding from the European Union Seventh Framework Programme under Grant Agreement No. 604391 Graphene Flagship and the Deutsche Forschungsgemeinschaft (BE 2441/9-1) and support by the Helmholtz Nano Facility (HNF)\cite{HNF} at the Forschungszentrum J\"ulich. Growth of hexagonal boron nitride crystals was supported by the Elemental Strategy Initiative conducted by the MEXT, Japan and JSPS KAKENHI Grant Numbers JP26248061, JP15K21722 and JP25106006. \end{acknowledgments}
\section{Introduction} \label{sec::Introduction}
For the last decade, decays involving $b\to s\mu^+\mu^-$ transitions have been a focus of the flavour physics community due to the substantial number of so-called ``$b$ anomalies''. These anomalies are a pattern of deviations between theoretical expectations, within the Standard Model of particle physics (SM), and experimental measurements, chiefly by the LHCb experiment~\cite{LHCb:2017avl,LHCb:2020lmf,LHCb:2021vsc,LHCb:2021zwz,LHCb:2021xxq,LHCb:2021trn}. Compatible experimental results, for many of these measurements, have since been obtained by the ATLAS~\cite{ATLAS:2018cur,ATLAS:2018gqc}, CMS~\cite{CMS:2019bbr,CMS:2017rzx,CMS:2020oqb}, and Belle~\cite{Belle:2016fev} experiments. There is substantial interest in corroborating the $b$ anomalies through decay channels that feature complementary sources of theoretical systematic uncertainties \emph{and} complementary sensitivity to effects beyond the SM. The decay $\Lambda_b\to\Lambda(\to p \pi^-)\mu^+\mu^-$ is a prime candidate for this task~\cite{Boer:2014kda}. In contrast to $B \to K^*(\to K\pi) \mu^+\mu^-$ decays, the local form factors for $\Lambda_b \to \Lambda\mu^+\mu^-$ decays correspond to transition matrix elements between stable single-hadron states in QCD. This allows precise lattice QCD calculations using standard methods, and results for the $\Lambda_b\to\Lambda$ form factors have been available for some time \cite{Detmold:2016pkz}. Measurements of $\Lambda_b\to\Lambda(\to p \pi^-)\mu^+\mu^-$ observables \cite{LHCb:2015tgy,LHCb:2019wwi} have been included in global fits of the $b\to s\mu^+\mu^-$ couplings~\cite{Blake:2019guk,Altmannshofer:2021qrr,Hurth:2020rzx,Bhom:2020lmk}, and dedicated analyses for effects beyond the SM, even accounting for production polarization of the $\Lambda_b$, have been performed in recent years~\cite{Meinel:2016grj,Blake:2019guk}. Lepton-flavor universality violation in baryonic $b \to s \ell^+ \ell^-$ decay modes has also been studied theoretically; in Ref.~\cite{Bordone:2021usz} the angular distribution of $\Lambda_b \to \Lambda \ell^+ \ell^-$ has been computed for the full basis of New Physics operators (partial results are available in Refs.~\cite{Sahoo:2016nvx,Das:2019omf}). Measurements by LHCb are also available for the branching fraction of the $\Lambda_b \to \Lambda\gamma$ decay~\cite{LHCb:2019wwi}. In this work, we investigate one of the two main sources of theoretical uncertainties that arise in the predictions of $\Lambda_b\to \Lambda\ell^+\ell^-$ and $\Lambda_b \to \Lambda\gamma$ transitions: the hadronic form factors of local $\bar{s}\Gamma\,b$ currents of mass dimension three. The complete set of scalar-valued hadronic form factors describing these currents comprises ten independent functions of the dilepton invariant mass squared, $q^2$. A convenient Lorentz decomposition of the hadronic matrix elements is achieved in terms of helicity amplitudes~\cite{Feldmann:2011xf}. Here, we set out to improve the description of the form factors as functions of $q^2$ across the whole kinematic phase space available to the $\Lambda_b\to\Lambda\ell^+\ell^-$ decay. To that end, we derive dispersive bounds for the form factors in the six $\bar{s} \Gamma b$ currents: the (pseudo)scalar, the (axial)vector, and the two tensor currents.
We demonstrate that previous analyses of dispersive bounds for baryon-to-baryon form factors~\cite{% Boyd:1995tg,Hill:2010yb,Bhattacharya:2011ah,Cohen:2019zev% } overestimate the saturation of the bounds (see also the discussion in Ref.~\cite{Gambino:2020jvv}). Our formulation of the bounds uses polynomials that are orthonormal on an arc of the unit circle in the variable $z$ (see Sec.~\ref{sec:th:parametrization} for the definition). As a consequence, benefits inherent to meson-to-meson form-factor parametrizations with dispersive bounds now also apply to our approach. We illustrate the usefulness of our formulation of the dispersive bounds for the form factor parameters for $\Lambda_b\to\Lambda$, but note that it applies similarly to other ground-state baryon to ground-state baryon form factors ({\it e.g.} $\Lambda_b \to \Lambda_c$ transitions). As inputs, we use lattice QCD determinations of the form factors at up to three different points in $q^2$. Our analysis also paves the way for the application of the bounds directly, through a modified $z$-expansion, within future lattice QCD studies. This is likely to increase the precision of future form-factor predictions, especially at large hadronic recoil where $q^2\simeq 0$. In \refsec{th}, we briefly recap the theory of the local form factors for baryon-to-baryon transitions and their dispersive bounds. We then propose a new parametrization for the full set of form factors in $\Lambda_b \to \Lambda$ transitions, which diagonalizes the dispersive bound. In Sec.~\ref{sec::Numerical-Analysis}, we illustrate the power of our parametrization based on lattice QCD constraints for the $\Lambda_b\to \Lambda$ form factors. We highlight how the form-factor uncertainties in the low momentum transfer region are affected by our parametrization and the different types of bounds we apply. We conclude in \refsec{conc}. \section{Derivation of the dispersive Bounds} \label{sec:th} We begin with a review of the Lorentz decomposition of the hadronic matrix elements in \refsec{th:lorentz}. We then introduce the two-point correlation functions responsible for the dispersive bound and their theoretical predictions within an operator product expansion in \refsec{th:db}. The hadronic representation of the correlation functions is discussed in \refsec{th:had-repr}. Our proposed parametrization is introduced in \refsec{th:parametrization}. \subsection{Lorentz decomposition in terms of helicity form factors} \label{sec:th:lorentz} A convenient definition of the form factors is achieved when each helicity amplitude corresponds to a single form factor: \begin{equation} \bra{\Lambda(k)} \bar{s} \Gamma^\mu b \ket{\Lambda_b(p)} \, \varepsilon^*_\mu(\lambda) \propto f^{\Gamma}_\lambda(q^2)\,, \end{equation} where $q^2 = (p - k)^2$, and $\varepsilon$ is the polarization vector of a fictitious vector mediator with polarization $\lambda$. 
For $1/2^+\to 1/2^+$ transitions, this definition is achieved by the Lorentz decomposition~\cite{Feldmann:2011xf}:
\begin{align}
\label{eq:th:lorentz-decomposition:V}
\bra{ \Lambda(k,s_\Lambda) } \overline{s} \,\gamma^\mu\, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } & = \overline{u}_\Lambda(k,s_{\Lambda}) \bigg[ f_t^V(q^2)\: (m_{\Lambda_b}-m_\Lambda)\frac{q^\mu}{q^2} \\
\nonumber & \phantom{\overline{u}_\Lambda \bigg[}+ f_0^V(q^2) \frac{m_{\Lambda_b}+m_\Lambda}{s_+} \left( p^\mu + k^{ \mu} - (m_{\Lambda_b}^2-m_\Lambda^2)\frac{q^\mu}{q^2} \right) \\
\nonumber & \phantom{\overline{u}_\Lambda \bigg[}+ f_\perp^V(q^2) \left(\gamma^\mu - \frac{2m_\Lambda}{s_+} p^\mu - \frac{2 m_{\Lambda_b}}{s_+} k^{ \mu} \right) \bigg] u_{\Lambda_b}(p,s_{\Lambda_{b}}) \, , \\
\label{eq:th:lorentz-decomposition:A}
\bra{ \Lambda(k,s_{\Lambda}) } \overline{s} \,\gamma^\mu\gamma_5\, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } & = -\overline{u}_\Lambda(k,s_{\Lambda}) \:\gamma_5 \bigg[ f_t^A(q^2)\: (m_{\Lambda_b}+m_\Lambda)\frac{q^\mu}{q^2} \\
\nonumber & \phantom{\overline{u}_\Lambda \bigg[}+ f_0^A(q^2)\frac{m_{\Lambda_b}-m_\Lambda}{s_-} \left( p^\mu + k^{ \mu} - (m_{\Lambda_b}^2-m_\Lambda^2)\frac{q^\mu}{q^2} \right) \\
\nonumber & \phantom{\overline{u}_\Lambda \bigg[}+ f_\perp^A(q^2) \left(\gamma^\mu + \frac{2m_\Lambda}{s_-} p^\mu - \frac{2 m_{\Lambda_b}}{s_-} k^{ \mu} \right) \bigg] u_{\Lambda_b}(p,s_{\Lambda_{b}}), \\
\bra{ \Lambda(k,s_{\Lambda}) } \overline{s} \,i\sigma^{\mu\nu} q_\nu \, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } &= - \overline{u}_\Lambda(k,s_{\Lambda}) \bigg[ f_0^T(q^2) \frac{q^2}{s_+} \left( p^\mu + k^{\mu} - (m_{\Lambda_b}^2-m_{\Lambda}^2)\frac{q^\mu}{q^2} \right) \\
\nonumber & \phantom{\overline{u}_\Lambda \bigg[} + f_\perp^T(q^2)\, (m_{\Lambda_b}+m_\Lambda) \left( \gamma^\mu - \frac{2 m_\Lambda}{s_+} \, p^\mu - \frac{2m_{\Lambda_b}}{s_+} \, k^{ \mu} \right) \bigg] u_{\Lambda_b}(p,s_{\Lambda_{b}}) \, , \\
\label{eq:th:lorentz-decomposition:T}
\bra{ \Lambda(k,s_\Lambda) } \overline{s} \, i\sigma^{\mu\nu}q_\nu \gamma_5 \, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } & = -\overline{u}_{\Lambda}(k,s_{\Lambda}) \, \gamma_5 \bigg[ f_0^{T5}(q^2) \, \frac{q^2}{s_-} \left( p^\mu + k^{\mu} - (m_{\Lambda_b}^2-m_{\Lambda}^2) \frac{q^\mu}{q^2} \right) \\
\nonumber & \phantom{\overline{u}_\Lambda \bigg[} + f_\perp^{T5}(q^2)\, (m_{\Lambda_b}-m_\Lambda) \left( \gamma^\mu + \frac{2 m_\Lambda}{s_-} \, p^\mu - \frac{2 m_{\Lambda_b}}{s_-} \, k^{ \mu} \right) \bigg] u_{\Lambda_b}(p,s_{\Lambda_{b}})\,,
\end{align}
where we abbreviate $\sigma^{\mu \nu} = \frac{i}{2} [\gamma^\mu, \gamma^\nu]$ and $s_{\pm} = (m_{\Lambda_b} \pm m_{\Lambda})^2 - q^2$. The labelling of the ten form factors follows the conventions of Ref.~\cite{Boer:2014kda}. Each form factor, $f_\lambda^\Gamma$, arises in the current $\bar{s} \Gamma b$ in a helicity amplitude with polarization $\lambda = t, 0, \perp$. We refer to Ref.~\cite{Boer:2014kda} for details and the relations between the form factors and the helicity amplitudes.
Note that the matrix elements of the scalar and pseudo-scalar currents can be related to the timelike-polarized form factors $f_t^V$ and $f_t^A$ of the vector and axial-vector currents via the equations of motion:
\begin{align}
\bra{ \Lambda(k,s_\Lambda) } \overline{s} \, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } \nonumber &= \frac{q^\mu}{m_b -m_s} \bra{ \Lambda(k,s_\Lambda) } \overline{s} \,\gamma_\mu \, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } \\
&= f_t^V(q^2) \frac{m_{\Lambda_b} - m_{\Lambda}}{m_b -m_s} \overline{u}_\Lambda(k,s_\Lambda) \, u_{\Lambda_b}(p,s_{\Lambda_{b}}) \, ,\\
\bra{ \Lambda(k,s_\Lambda) } \overline{s} \, \gamma_5 \, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } \nonumber &= -\frac{q^\mu}{m_b + m_s} \bra{ \Lambda(k,s_\Lambda) } \overline{s} \,\gamma_\mu \gamma_5 \, b \ket{ \Lambda_b(p,s_{\Lambda_{b}}) } \\
&= f_t^A(q^2) \frac{m_{\Lambda_b} + m_{\Lambda}}{m_b + m_s} \overline{u}_\Lambda(k,s_\Lambda) \, \gamma_5 \, u_{\Lambda_b}(p,s_{\Lambda_{b}}) \, .
\end{align}
Although the ten functions, $f_\lambda^\Gamma(q^2)$, are a priori independent, some relations exist at specific points in $q^2$. These so-called endpoint relations arise due to two different mechanisms. First, the hadronic matrix elements on the left-hand sides of \refeq{th:lorentz-decomposition:V} to \refeq{th:lorentz-decomposition:T} must be free of kinematic singularities. Two such singularities can arise, as spurious poles at $q^2 = 0$ and $q^2 = q^2_\text{max} \equiv (m_{\Lambda_b} - m_{\Lambda})^2$. They are removed by the following identities:
\begin{align}
\label{eq:ep1}
f_t^V(0) & = f_0^V(0) \, , & f_t^A(0) & = f_0^A(0)\,,\\
\label{eq:ep2}
f_\perp^A(q^2_{\text{max}}) & = f_0^A(q^2_{\text{max}}) \, ,& f_\perp^{T5}(q^2_{\text{max}}) & = f_0^{T5}(q^2_{\text{max}})\,.
\end{align}
In addition to the above, an algebraic relation between $\sigma^{\mu\nu}$ and $\sigma^{\mu\nu}\gamma_5$ ensures that
\begin{align}
\label{eq:ep3}
f_\perp^{T5}(0) & = f_\perp^T(0)\,.
\end{align}
See also Ref.~\cite{Hiller:2021zth} for additional discussion of endpoint relations for baryon transition form factors.
\subsection{Two-point correlation functions and OPE representation}
\label{sec:th:db}
Dispersive bounds for local form factors have a successful history. They were first used for the kaon form factor \mbox{\cite{Okubo:1971jf,Okubo:1971my,Okubo:1971wup}} and have also successfully been applied to exclusive $B\to \pi$~\cite{Becher:2005bg,Bourrely:2008za} and $B\to D^{(\ast)}$~\cite{Boyd:1994tt,Boyd:1995tg,Boyd:1997qw} form factors\footnote{See also applications \cite{Lellouch:1995yv,DiCarlo:2021dzg,Martinelli:2021frl,Martinelli:2021onb,Martinelli:2022vvh} of the dispersive matrix method \cite{Caprini:2019osi}.}. In the latter case, the heavy-quark expansion renders the bounds phenomenologically more useful due to relations between all form factors of transitions between doublets under heavy-quark spin symmetry~\cite{Caprini:1997mu}; see Refs.~\cite{Bigi:2017jbd,Bordone:2019guc,Bordone:2019vic} for recent phenomenological updates and analyses up to order $1/m^2$ in the heavy-quark expansion, respectively. The application of the bound to form factors arising in baryon-to-baryon transitions is more complicated~\cite{Hill:2010yb,Gambino:2020jvv}, chiefly due to the fact that for any form factor, $F$, its first branch point, $t_+^F$, does not coincide with the threshold for baryon/antibaryon pair production, $t_{\rm th}^F$.
Instead, the branch points lie to the left of the pair production points, at the pair production threshold for the corresponding ground-state meson/antimeson pair. We show a sketch of this structure in the left-hand side of \reffig{th:sketch}.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{plots/z_expansion.pdf}
\caption{Sketch of the analytic structure of the baryon-to-baryon form factors in the variable $q^2$ (left) and the variable $z$ (right). The $q^2$ range of semileptonic decays is marked ``SL''. The baryon/antibaryon pair production is marked ``pair prod.''. The form factors develop a branch cut below the baryon/antibaryon pair production threshold due to rescattering of virtual baryon/antibaryon pairs into, {\it e.g.}, $\bar{B}K^{(*)}$ pairs.}
\label{fig:th:sketch}
\end{figure}
The dispersive bounds connect a theoretical computation of a suitably-chosen two-point function with weighted integrals of the squared hadronic form factors. For concreteness and brevity we derive the dispersive bound for the vector current $J_V^\mu$ and its hadronic form factors. The generalization to the currents
\begin{align}
J^\mu_{V} & = \bar{s} \gamma^\mu b \, ,& J^\mu_{A} & = \bar{s} \gamma^\mu \gamma_5 b \, ,\\
J^{\mu}_{T} & = \bar{s} \sigma^{\mu \nu} q_\nu b \, ,& J^{\mu}_{T5} & = \bar{s} \sigma^{\mu \nu} q_\nu \gamma_5 \,b
\end{align}
is straightforward, following the same prescription as for $J_V^{\mu}$. As we will see below, the results for the scalar and pseudo-scalar currents can be obtained from the vector and axial currents, respectively. We define $\Pi^{\mu\nu}_V$ to be the vacuum matrix element of the two-point function with two insertions of $J_V$:
\begin{align}
\label{eq:th:db:Pi_mu_nu}
\Pi^{\mu \nu}_{V}(Q) &= i \int \text{d}^4 x \, \, e^{i Q\cdot x} \bra{0} \mathcal{T} \{ J_V^{\mu}(x), J_V^{\nu \dagger}(0) \} \ket{0}\,,
\end{align}
where $Q^\mu$ is the four-momentum flowing through the two-point function. This tensor-valued function can be expressed in terms of two scalar-valued functions:
\begin{align}
\label{eq:th:db:Pi}
\Pi_{V}^{\mu\nu}(Q) &= P^{\mu \nu}_{J=0}(Q) \Pi_{V}^{J=0}(Q^2) + P^{\mu\nu}_{J=1}(Q) \Pi_V^{J=1}(Q^2)\,,
\end{align}
using the two projectors
\begin{align}
P^{\mu \nu}_{J=0}(p) & = \frac{p^\mu p^\nu}{p^2} \, ,& P^{\mu\nu}_{J=1}(p) & = \frac{1}{3} \left(\frac{p^\mu p^\nu}{p^2} - g^{\mu \nu} \right) \, .
\end{align}
Note that the two tensor currents do not feature a $J=0$ component, {\it i.e.}, the coefficients of the projectors $P_{J=0}$ vanish for these currents. The functions $\Pi_V^{J=0}(Q^2)$ and $\Pi_V^{J=1}(Q^2)$ feature singularities along the real $Q^2$ axis, which will be discussed below. These singularities are captured by the discontinuities of $\Pi_V^{J=0}$ and $\Pi_V^{J=1}$. It is now convenient to define a new function, $\chi^J_V$, which is completely described in terms of the discontinuity of the function $\Pi^{J=1}_V$:
\begin{equation}
\label{eq:th:db:def-chi}
\chi^{J=1}_V(Q^2) = \frac{1}{n!} \left(\frac{d}{dQ^2}\right)^n \Pi_V^{J=1}(Q^2) = \frac{1}{2 \pi i} \int_{0}^\infty \text{d}t \, \frac{\text{Disc} \, \Pi^{J=1}_V(t)}{(t - Q^2)^{n + 1}}\,.
\end{equation}
Here, the number of derivatives $n$ (also known as the number of ``subtractions'') is chosen to be the smallest number that yields a convergent integral.
Note that in general the functions $\chi$ for the scalar and pseudo-scalar currents require a different value of $n$ than the functions for the vector and axial currents, respectively, despite the fact that they can be extracted from the vector and axial two-point correlators.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{C{1cm} C{1cm} C{3cm} C{4cm} C{1cm}}
\toprule
$\Gamma$ & $J$ & form factors & $\chi_{\Gamma}^{J}|_\text{OPE}$ [$10^{-2}$] & $n$ \\
\midrule
$V$ & $0$ & $f_t^V$ & $1.42$ & $1$ \\
$V$ & $1$ & $f_0^V$, $f_\perp^V$ & $1.20 \, / \, m_b^2$ & $2$ \\
$A$ & $0$ & $f_t^A$ & $1.57$ & $1$ \\
$A$ & $1$ & $f_0^A$, $f_\perp^A$ & $1.13 \, / \, m_b^2$ & $2$ \\
$T$ & $1$ & $f_0^T$, $f_\perp^T$ & $0.803 \, / \, m_b^2$ & $3$ \\
$T5$ & $1$ & $f_0^{T5}$, $f_\perp^{T5}$ & $0.748 \, / \, m_b^2$ & $3$ \\
\bottomrule
\end{tabular}
\renewcommand{\arraystretch}{1.0}
\end{center}
\caption{\label{tab:th:db:chiOPE-and-n} The values of $\chi_{\Gamma}^{J}(Q^2 = 0)|_\text{OPE}$ as taken from Ref.~\cite{Bharucha:2010im}, which include terms at next-to-leading order in $\alpha_s$ and subleading power corrections. The number of derivatives for each current $\Gamma = V,A,S,P,T,T5$ is provided as $n$. Note that the results for $\chi$ in the rows for $\Gamma = T,T5$ differ from those given in Ref.~\cite{Bharucha:2010im} by a factor of $\tfrac{1}{4}$, which is due to differences in convention for the tensor current. The value of the $b$-quark mass is taken as $m_b = 4.2$ GeV.}
\end{table}
The dispersive bound is constructed by equating two different representations of $\chi_V$ with each other, based on the assumption of global \emph{quark-hadron duality}:
\begin{equation}
\label{eq:th:db:QHD}
\chi_{V}^{J}\bigg|_\text{OPE} =\chi_{V}^{J}\bigg|_\text{hadr}\,.
\end{equation}
The left-hand side representation is obtained from an operator product expansion (OPE) of the time-ordered product that gives rise to $\Pi_V^{\mu\nu}(Q)$. For $\bar{s}\Gamma b$ currents, the most recent analysis of these OPE results, including subleading contributions, has been presented in Ref.~\cite{Bharucha:2010im} for all the dimension-three currents considered in this work. We summarize the results of that analysis for $Q^2 = 0$ in \reftab{th:db:chiOPE-and-n}, where we also list the values for $n$ on a per-current basis.\\
The right-hand side representation is obtained from the hadronic matrix elements of on-shell intermediate states. We will discuss this representation and its individual terms in the next section.
\subsection{Hadronic representation of the bound}
\label{sec:th:had-repr}
We continue to discuss the bounds for the case of the vector current, and concretely, the scalar-valued two-point function $\Pi_V^{J=1}$,
\begin{equation}
\Pi_V^{J=1} = \left[P_{J=1}\right]_{\mu\nu} \Pi^{\mu\nu}_V\,.
\end{equation}
Its discontinuity due to a hadronic intermediate state, $H_{\bar{s}b}$, with flavour quantum numbers $B = -S = 1$ can be obtained using
\begin{align}
\label{eq:th:had-repr:def-disc:Disc-oneparticle}
\text{Disc} \,\Pi_{\Gamma}^{J} & = i \sum_\textrm{spin} \int \text{d} \rho \, (2\pi)^4 \delta^{(4)}\left(q- \sum_i^n p_i\right) P_{J}^{\mu \nu}(q) \bra{0} J^{\mu}_{\Gamma} \ket{H_{b\bar{s}}(p_1, \dots, p_n)} \bra{H_{b\bar{s}}(p_1, \dots, p_n)} J^{\nu \dagger}_{\Gamma} \ket{0}\,,
\end{align}
where $\text{d}\rho$ is the phase-space element of the $n$-particle intermediate state.
Below we consider the cases of one- and two-particle intermediate states, with:
\begin{align}
\label{eq::phasespace-integral}
\int \text{d}\rho &= \begin{cases} \displaystyle\int \dfrac{\text{d}^3 p}{(2\pi)^3 2 E_{\vec{p}}} & \text{for one-particle states}, \vspace{0.25cm}\\ \displaystyle\int \dfrac{\text{d}^3 p_1}{(2\pi)^3 2 E_{\vec{p}_1}} \displaystyle\int \dfrac{\text{d}^3 p_2}{(2\pi)^3 2 E_{\vec{p}_2}} & \text{for two-particle states}. \end{cases}
\end{align}
\subsubsection{One-particle contributions}
\label{sec:one-particle}
Here, we discuss contributions due to a single asymptotic on-shell state $H_{b\bar{s}}$ with flavour quantum numbers $B = -S = 1$, which excludes states that decay strongly, such as radially excited states. We continue to use the case $\Gamma=V$ as an example, with $J=1$. In that case, the discontinuity receives a single contribution:
\begin{align}
\text{Disc} \, \Pi_{V}^{J=1}(q^2)\bigg|_{\text{1pt}} & = i \int \text{d}\rho \, (2\pi)^4 \delta^{(4)}(q-p) \sum_\lambda \left[P_{J=1} \right]_{\mu \nu} \bra{0} J^{\mu}_{V} \ket{\bar{B}^{*}_{s}(p, \lambda)} \bra{\bar{B}^{*}_{s}(p, \lambda)} J^{\nu \dagger}_{V} \ket{0}\\
& = i \int \text{d}\rho \, (2\pi)^4 \delta^{(4)}(q-p) m_{B_s^*}^2 f_{B_s^*}^2\\
& = 2\pi \delta(q^2 - m_{B_s^*}^2) \theta(q^0) m_{B_s^*}^2 f_{B_s^*}^2\,,
\label{eq:Disc-V-1pt}
\end{align}
where $\lambda$ is the polarization of the $\bar{B}_s^*$ meson and $m_{B_s^*}$ its mass. States other than the $B_s^*$ do not contribute, since either their matrix elements with the $\Gamma = V$ current vanish, their projection onto the $J=1$ state vanishes, or they decay strongly. The generalization to the other currents and to $J=0$ is straightforward:
\begin{align}
\text{Disc} \, \Pi_{V}^{J=0}(q^2)\bigg|_{\text{1pt}} & = 2\pi \delta(q^2 - m_{B^*_{s,0}}^2) \theta(q^0) m_{B^*_{s,0}}^2 f_{B^*_{s,0}}^2 \,, \\
\text{Disc} \, \Pi_{A}^{J=0}(q^2)\bigg|_{\text{1pt}} & = 2\pi \delta(q^2 - m_{B_s}^2) \theta(q^0) m_{B_s}^2 f_{B_s}^2\,, \\
\text{Disc} \, \Pi_{A}^{J=1}(q^2)\bigg|_{\text{1pt}} & = 2\pi \delta(q^2 - m_{B_{s,1}}^2) \theta(q^0) m_{B_{s,1}}^2 f_{B_{s,1}}^2 \,.
\end{align}
Here $B_s$ is the ground-state pseudoscalar meson with a very well-known decay constant $f_{B_s} = 230.7 \pm 1.3\,\, \text{MeV}$~\cite{Bazavov:2017lyh}, $B_{s,1}$ is the axial vector meson, and $B_{s,0}^*$ is the scalar meson. In brief, the (pseudo)scalar current receives a contribution from a (pseudo)scalar on-shell state, and the axialvector current receives a contribution from an axialvector on-shell state. Although sub-$BK$-threshold $B_{s,1}$ or $B_{s,0}^*$ states have not yet been seen in experiment, there are indications from lattice QCD analyses that such sub-threshold states exist \cite{Lang:2015hza}. However, the values of their respective decay constants are presently not very well known; estimates have been obtained, via QCD sum rules at next-to-leading order, in Refs.~\cite{Gelhausen:2013wia,Pullin:2021ebn}. Nevertheless, these states produce a pole both in the two-point functions $\Pi_{\Gamma}^J$ and in their associated form factors, which is necessary information for the formulation of the dispersive bounds and the form factor parametrization. From this point forward, we assume the presence of a single pole due to a $J^P=\lbrace 0^+, 1^-,0^-,1^+\rbrace$ state contributing to form factors with $(\Gamma,J) = \lbrace (V,0), (V,1), (A,0), (A,1)\rbrace$, respectively. The cases for currents with $\Gamma = T$ and $\Gamma = T5$ benefit from further explanation.
For these currents one might assume that tensor, {\it i.e.}, $J^P=2^{\pm}$, states play a leading role. However, these states do not contribute at all, since their matrix elements vanish:
\begin{equation}
\bra{0} \bar{s} \sigma^{\mu\nu} (\gamma_5) b \ket{B_s(J^P=2^\pm)} = 0\,.
\end{equation}
This can readily be understood, since the above matrix elements are antisymmetric in the indices $\mu$ and $\nu$, while the polarization tensors of $J^P=2^\pm$ mesons are symmetric quantities. Nevertheless, the currents $\Gamma = T$ and $\Gamma = T5$ do feature poles due to one-particle contributions, which arise from states with $J^P=1^\pm$. We obtain:
\begin{align}
\text{Disc} \, \Pi_{T}^{J=1}(q^2)\bigg|_{\text{1pt}} & = 2\pi \delta(q^2 - m_{B_s^*}^2) \theta(q^0) m_{B_s^*}^4 (f_{B_s^*}^T)^2 \,, \\
\text{Disc} \, \Pi_{T5}^{J=1}(q^2)\bigg|_{\text{1pt}} & = 2\pi \delta(q^2 - m_{B_{s,1}}^2) \theta(q^0) m_{B_{s,1}}^4 (f_{B_{s,1}}^T)^2 \,,
\end{align}
where $f^T_{B_s^*}$ and $f^T_{B_{s,1}}$ are the decay constants of the respective states for a tensor current:
\begin{align}
\bra{0} J_{T}^\mu \ket{\bar{B}^*_{s}(p)} &= i m_{B_s^*}^2 f^T_{B_s^*} \epsilon^\mu \, & \bra{0} J_{T5}^\mu \ket{\bar{B}_{s,1}(p)} &= - i m_{B_{s,1}}^2 f^T_{B_{s,1}} \epsilon^\mu \,.
\end{align}
Plugging the results for the discontinuities into \refeq{th:db:def-chi}, we obtain:
\begin{align}
\chi_{V}^{J=1}(Q^2)\bigg|_\text{1pt} & = \frac{m_{B_s^*}^2 f_{B_s^*}^2}{(m_{B_s^*}^2 - Q^2)^{n+1}} \,, & \chi_{V}^{J=0}(Q^2)\bigg|_\text{1pt} & = \frac{m_{B^*_{s,0}}^2 f_{B^*_{s,0}}^2}{(m_{B^*_{s,0}}^2 - Q^2)^{n+1}} \,, \\
\chi_{A}^{J=1}(Q^2)\bigg|_\text{1pt} & = \frac{m_{B_{s,1}}^2 f_{B_{s,1}}^2 }{(m_{B_{s,1}}^2 -Q^2)^{n+1}} \,, & \chi_{A}^{J=0}(Q^2)\bigg|_\text{1pt} & = \frac{m_{B_s}^2 f_{B_s}^2 }{(m_{B_s}^2 -Q^2)^{n+1}} \,, \\
\chi_{T}^{J=1}(Q^2)\bigg|_\text{1pt} & = \frac{m_{B^*_s}^4 (f^T_{B_s^*})^2 }{(m_{B^*_s}^2 -Q^2)^{n+1}} \,, & \chi_{T5}^{J=1}(Q^2)\bigg|_\text{1pt} & = \frac{m_{B_{s,1}}^4 (f^T_{B_{s,1}})^2 }{(m_{B_{s,1}}^2 -Q^2)^{n+1}} \,.
\end{align}
The one-particle contributions each amount to about $10\%$ of the respective OPE result. \\
\subsubsection{Two-particle contributions}
\label{sec:two-particle}
Here, we focus on the contributions to $\chi$ due to an intermediate $\Lambda_b\bar{\Lambda}$ state. By means of unitarity we can express the discontinuity of the two-point correlator $\Pi^{J}_{\Gamma}(t)$ as a sum of intermediate $H_{b \bar{s}}$ states with flavour quantum numbers $B = -S= 1$:
\begin{align}
\text{Disc} \, \Pi^{J}_{\Gamma} &= i \sum_\textrm{spins} \int \text{d}\rho \, (2\pi)^4 \delta^{(4)}(q - \left( p_1 + p_2 \right)) [P_{J}]_{\mu \nu} \bra{0} J^{\mu}_{\Gamma} \ket{\Lambda_b(p_1, s_{\Lambda_{b}}) \bar{\Lambda}(-p_2,s_\Lambda)}\nonumber \\
& \times \bra{\bar{\Lambda}(-p_2,s_\Lambda) \Lambda_b(p_1, s_{\Lambda_{b}})} J^{\nu \dagger}_{\Gamma} \ket{0} + \text{further positive terms} \, .
\end{align}
Note that further two-particle contributions for which dispersive bounds have been applied include $\bar{B} K, \bar{B} K^*$ and $\bar{B}_s \phi$~\cite{Bharucha:2010im}. The effect of each of those two-particle contributions would decrease the upper bound only by \mbox{1--4\%}~\cite{Bharucha:2010im}, {\it i.e.}, by a smaller amount than the one-particle contributions.
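As a quick numerical cross-check of the relative sizes quoted above, the one-particle saturation can be reproduced for the pseudoscalar channel from the values given in the text and in \reftab{th:db:chiOPE-and-n}; the snippet below is purely illustrative.
\begin{verbatim}
# Pseudoscalar channel (Gamma = A, J = 0) at Q^2 = 0 with n = 1:
# chi_1pt = m^2 f^2 / (m^2)^(n+1) = f^2 / m^2
m_Bs, f_Bs = 5.367, 0.2307     # GeV; pole mass and decay constant from the text
chi_1pt = f_Bs ** 2 / m_Bs ** 2
chi_ope = 1.57e-2              # OPE value from the table above
print(chi_1pt / chi_ope)       # ~0.12, i.e. roughly 10% of the bound
\end{verbatim}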
We can evaluate the phase-space integration in the rest frame of the two-particle system as \begin{align} \int \text{d}\rho \, (2\pi)^4 \delta^{(4)}(q - \left( p_1 + p_2 \right)) &= \frac{1}{8 \pi} \frac{\sqrt{\lambda(m_{\Lambda_b}^2, m_{\Lambda}^2, q^2)}}{q^2} \theta(q^2 - s_{\Lambda_{b} \Lambda}), \end{align} with $s_{\Lambda_{b} \Lambda} = (m_{\Lambda_b} + m_{\Lambda})^2$. From this we obtain \begin{align} \text{Disc} \, \Pi^{J}_{\Gamma} &= \frac{i}{8\pi} \frac{\sqrt{ \lambda(m_{\Lambda_b}^2, m_{\Lambda}^2, q^2) }}{q^2} \theta(q^2-s_{\Lambda_b \Lambda}) [P_{J}]_{\mu \nu} \bra{0} J^{\mu}_{\Gamma} \ket{\Lambda_b \bar{\Lambda}} \bra{\bar{\Lambda} \Lambda_b} J^{\nu \dagger}_{\Gamma} \ket{0} \end{align} where in the last line we dropped all further positive terms. In the following we summarize the contraction between helicity operators and matrix elements that can be expressed via local form factors. \begin{align} [P_{J}]_{\mu \nu} \bra{0} J^{\mu}_{V} \ket{\bar{\Lambda} \Lambda_b} \bra{\bar{\Lambda} \Lambda_b} J^{\nu \dagger}_{V} \ket{0} &= \begin{cases} \dfrac{2(m_{\Lambda_b}-m_{\Lambda})^2}{q^2} s_+(q^2) |f_{t}^V|^2 & \text{for } J = 0, \vspace{0.25cm}\\ \dfrac{2 s_{-}(q^2)}{3 q^2} \left( (m_{\Lambda_b}+ m_{\Lambda})^2 |f_{0}^V|^2 + 2 q^2 \, |f_{\perp}^V|^2 \right) & \text{for } J = 1, \end{cases} \label{eq::vector-FF} \\ [P_{J}]_{\mu \nu}\bra{0} J^{\mu}_{A} \ket{\bar{\Lambda} \Lambda_b} \bra{\bar{\Lambda} \Lambda_b} J^{\nu \dagger}_{A} \ket{0} &= \begin{cases} \dfrac{2 s_{-}(q^2)}{q^2} (m_{\Lambda_b}+m_{\Lambda})^2 |f_{t}^A|^2 & \text{for } J = 0, \vspace{0.25cm}\\ \dfrac{2 s_+(q^2)}{3 q^2} \left( (m_{\Lambda_b}-m_{\Lambda})^2 |f_{0}^A|^2 + 2 q^2 \, |f_{\perp}^A|^2 \right) & \text{for } J = 1, \end{cases}\\ [P_{J}]_{\mu \nu} \bra{0} J^{\mu}_{T} \ket{\bar{\Lambda} \Lambda_b} \bra{\bar{\Lambda} \Lambda_b} J^{\nu \dagger}_{T} \ket{0} &= \begin{cases} 0 & \text{for } J = 0, \vspace{0.25cm}\\ \dfrac{2 s_{-}(q^2)}{3} \left( 2 (m_{\Lambda_b}+m_{\Lambda})^2 |f_{\perp}^T|^2 + q^2 \, |f_{0}^T|^2 \right) & \text{for } J = 1, \end{cases} \label{eq::axialvector-FF} \\ [P_{J}]_{\mu \nu} \bra{0} J^{\mu}_{T5} \ket{\bar{\Lambda} \Lambda_b} \bra{\bar{\Lambda} \Lambda_b} J^{\nu \dagger}_{T5} \ket{0} &= \begin{cases} 0 & \text{for } J = 0, \vspace{0.25cm}\\ \dfrac{2 s_{+}(q^2)}{3} \left( 2 (m_{\Lambda_b}-m_{\Lambda})^2 |f_{\perp}^{T5}|^2 + q^2 \, |f_{0}^{T5}|^2 \right) & \text{for } J = 1, \end{cases} \label{eq::tensor-FF} \end{align} where the sum over the baryon spins is implied. \subsection{Parametrization} \label{sec:th:parametrization} We relate the OPE representation to the hadronic representation of the functions $\chi_\Gamma^J$ through \refeq{th:db:QHD}. Using $\Gamma = V$ and $J=1$ again as an example, the dispersive bound takes the form \begin{align} \label{eq:th:parametrization:bound-V1} \chi^{J=1}_{V}(Q^2)\bigg|_\text{OPE} & \geq \chi^{J=1}_{V}(Q^2)\bigg|_\text{1pt} + \int_{s_{\Lambda_b \Lambda} }^{\infty} \text{d}t \, \frac{1}{24 \pi^2} \frac{\sqrt{\lambda(m_{\Lambda_b}^2,m_{\Lambda}^2,t) }}{t^2 (t-Q^2)^{n+1}} s_{-}(t) \\ \nonumber & \hspace{3cm} \times \left( (m_{\Lambda_b} +m_{\Lambda})^2 |f_{0}^V(t)|^2 +2 t |f_{\perp}^V(t)|^2 \right) \,, \end{align} where the last term is the two-particle contribution due to the ground-state baryons. Our intent is now to parametrize the $\Lambda_b\to\Lambda$ form factors (here: $f_0^V, f_\perp^V$) in such a way that their parameters enter the two-particle contributions to $\chi_\Gamma$ in a simple form. 
Concretely, we envisage a contribution that enters as the 2-norm of the vector of parameters.\\
In general, the bounds are best represented by transforming the variable $t$ to the new variable $z$, defined as
\begin{align}
z(t; t_0, t_+) & = \frac{\sqrt{t_+ - t} -\sqrt{t_+ - t_0}}{\sqrt{t_+ - t} +\sqrt{t_+ - t_0}}\,.
\end{align}
In the above, $t_0$ corresponds to the zero of $z(t)$ and is a free parameter that can be chosen, and $t_+$ corresponds to the lowest branch point of the form factors. The mapping from $t=q^2$ to $z$ is illustrated in \reffig{th:sketch}. The integral comprising the two-particle contribution starts at the pair-production threshold $t_\text{th}$. \\
When discussing the dispersive bounds for e.g.~$B\to D$ or $B\to\pi$ form factors, one has $t_\text{th}=t_+$. The integral of the discontinuity along the real $t$ axis in the mesonic analogue of~\refeq{th:parametrization:bound-V1} then becomes a contour integral along the unit circle $|z|=1$. For an arbitrary function $g$,
\begin{equation}
\int_{t_\text{th} = t_+}^\infty \text{d}t \,\text{Disc} \, g(t) = \frac{1}{2} \oint_{|z| = 1} \text{d}z \left|\frac{\text{d}t(z)}{\text{d}z}\right| \,\text{Disc} \,g(t(z)) = \frac{i}{2} \int_{-\pi}^{+\pi} \text{d}\alpha \, \left| \frac{\text{d}t(z)}{\text{d}z}\right| \, e^{i\alpha} \, \text{Disc} \,g(t(e^{i\alpha}))\,.
\end{equation}
The contribution to the integrand from a form factor $F$ is then written as $|\phi_F|^2 |F|^2$, where the \emph{outer function} $\phi_F$ is constructed such that the product $\phi_F F$ is free of kinematic singularities on the unit disk $|z|<1$ \cite{Boyd:1994tt,Boyd:1997qw,Caprini:1997mu,Becher:2005bg,Arnesen:2005ez}. The product of outer function and form factor is then commonly expressed as a power series in $z$, which is bounded in the semileptonic region. Powers of $z$ are orthonormal with respect to the scalar product
\begin{equation}
\braket{z^n|z^m} \equiv \oint_{|z| = 1} \frac{\text{d}z}{iz} \,z^{n,*} z^m = \int_{-\pi}^{+\pi} \text{d}\alpha \, z^{n,*} z^m\big|_{z=e^{i\alpha}} = 2\pi \delta_{nm}\,,
\end{equation}
that is, when integrated over the entire unit circle. As a consequence, for an analytic function on the $z$ unit disk that is square-integrable on the $z$ unit circle, the Fourier coefficients exist only for non-negative index $n$ and coincide with the Taylor coefficients for an expansion around $z=0$. The contribution to the dispersive bound can then be expressed as the 2-norm of the Taylor coefficients. For more details of the derivation, we refer the reader to Ref.~\cite{Caprini:2019osi}. \\
For $b\to s$ transitions, $\bar{B}_s\pi$ intermediate states produce the lowest-lying branch cut. However, production of a $\bar{B}_s\pi$ state from the vacuum through a $\bar{s}b$ current violates isospin symmetry and is therefore strongly suppressed. For the purpose of this analysis we set $t_+$ to the first branch point that contributes in the isospin symmetry limit:
\begin{equation}
t_+ \equiv (m_{B} + m_{K})^2 \, .
\end{equation}
The integral contribution for $\bar{B}K$ intermediate states can then be mapped onto the entire unit circle in $z$ as discussed above, and their contributions to the dispersive bound can be expressed as the 2-norm of their Taylor coefficients. However, intermediate states with larger pair-production thresholds cover only successively smaller \emph{arcs of the unit circle}, and the correspondence between the 2-norm of the Taylor coefficients and their contributions to the dispersive bound no longer holds.
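To make the kinematics concrete, the following short sketch evaluates the conformal map and the opening half-angle of the arc covered by the baryonic intermediate states. The masses are illustrative PDG-like values, and the function names are our own.
\begin{verbatim}
import numpy as np

def z_map(t, t0, tp):
    """Conformal map z(t; t0, t+); above the branch point t+ it gives |z| = 1."""
    a = np.sqrt(tp - t + 0j)
    b = np.sqrt(tp - t0 + 0j)
    return (a - b) / (a + b)

# illustrative PDG-like masses in GeV
m_Lb, m_L, m_B, m_K = 5.6196, 1.1157, 5.2793, 0.4937
tp = (m_B + m_K) ** 2                  # first branch point (BK threshold)
t0 = (m_Lb - m_L) ** 2                 # q^2_max, the choice advocated below
t_th = (m_Lb + m_L) ** 2               # baryon/antibaryon pair production
alpha = np.angle(z_map(t_th, t0, tp))  # opening half-angle of the arc
print(np.degrees(alpha))               # ~92 degrees: well short of a full circle
\end{verbatim}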
The branch point at $t_+$ arises from scattering into on-shell $\bar{B}K$ intermediate states. In the following, we discuss the application of the series expansion to baryon-to-baryon form factors in the presence of a dispersive bound. The main difference between our approach and other parametrizations is that we do not assume the lowest branch point $t_+$ to coincide with the baryon/antibaryon threshold $t_\text{th}$; instead, $t_\text{th} > t_+$. As a consequence, the contour integral representing the form factor's contribution to its bound is supported only on the arc of the unit circle with opening angle $2 \alpha_{\Lambda_b\Lambda}$, where
\begin{equation}
\alpha_{\Lambda_b \Lambda} = \arg z((m_{\Lambda} + m_{\Lambda_b})^2)\,.
\label{eq:LbL-angle}
\end{equation}
Specifically, Eq.~(\ref{eq:th:parametrization:bound-V1}) becomes
\begin{align}
\nonumber
1 &\geq \frac{1}{48\pi^2 \chi_V^{J=1}(Q^2)\big|_\text{OPE}} \int_{-\alpha_{\Lambda_b \Lambda}}^{+\alpha_{\Lambda_b \Lambda}} \text{d}\alpha \left |\frac{\text{d}z(\alpha)}{\text{d}\alpha} \frac{\text{d}t(z)}{\text{d}z} \right| \frac{\sqrt{\lambda(m_{\Lambda_b}^2,m_{\Lambda}^2,t) }}{t^2 (t-Q^2)^{n+1}} s_{-}(t) \left( (m_{\Lambda_b} +m_{\Lambda})^2 |f_{0}^V(t)|^2 +2 t |f_{\perp}^V(t)|^2 \right) \\
\label{eq:th:parametrization:bound-analytic}
& \equiv \int_{-\alpha_{\Lambda_b \Lambda}}^{+\alpha_{\Lambda_b \Lambda}} \text{d}\alpha \, \left(|\phi_{f_0^V}(z)|^2 |f_{0}^V(z)|^2 +|\phi_{f_\perp^V}(z)|^2 |f_{\perp}^V(z)|^2 \right)_{z=e^{i\alpha}}\, ,
\end{align}
where $t = t(z(\alpha))$, and we dropped the one-particle contributions for legibility. Here, $\phi_{f_0^V}(z),\phi_{f_\perp^V}(z)$ are the outer functions for the form factors $f_0^V$ and $f_\perp^V$. The full list of expressions for the outer functions of all baryon-to-baryon form factors is compiled in Appendix~\ref{app:outerfunction}. A form factor's contribution to the bound is expressed in terms of an integral with a positive definite integrand. Hence, we immediately find that a parametrization that assumes integration over the full unit circle rather than the relevant pair production arc $|\alpha| < \alpha_{\Lambda_b\Lambda}$ \emph{overestimates} the saturation of the dispersive bound due to that form factor. To express the level of saturation due to each term in \refeq{th:parametrization:bound-analytic} as a 2-norm of some coefficient sequence, we expand the form factors in a basis of polynomials $p_n(z)$. These polynomials must be orthonormal with respect to the scalar product
\begin{equation}
\braket{p_n|p_m} \equiv \oint_{\substack{|z| = 1\\ |\arg z| \leq \alpha_{\Lambda_b \Lambda}}} \frac{\text{d}z}{iz} \, p^*_n(z)\, p_m(z) = \int_{-\alpha_{\Lambda_b \Lambda}}^{+\alpha_{\Lambda_b \Lambda}} \text{d}\alpha \, p^*_n(z)\, p_m(z)\big|_{z=e^{i\alpha}} = \delta_{nm} \,.
\label{eq:szego-ortho}
\end{equation}
The polynomials $p_n(z)$ are the Szeg\H{o} polynomials~\cite{Simon2004OrthogonalPO}, which can be derived via the Gram-Schmidt procedure; see details in Appendix~\ref{app:gram-schmidt}. A computationally efficient and numerically stable evaluation of the polynomials can be achieved using the Szeg\H{o} recurrence relation~\cite{Simon2004OrthogonalPO}, which we use in the reference implementation of our parametrization as part of the \texttt{EOS}\xspace software. The first five so-called Verblunsky coefficients that uniquely generate the polynomials are listed in Appendix~\ref{app:gram-schmidt}.
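For illustration, the Gram-Schmidt construction can also be carried out numerically, starting from the closed-form Gram matrix of the monomials on the arc, $\braket{z^n|z^m} = 2\sin\big((m-n)\alpha\big)/(m-n)$. This sketch is an alternative to the Szeg\H{o} recurrence used in our reference implementation, and the parameter values are illustrative.
\begin{verbatim}
import numpy as np

def arc_orthonormal_polys(alpha, N):
    """Polynomials orthonormal on the arc |z| = 1, |arg z| <= alpha.

    Returns a lower-triangular matrix C with p_n(z) = sum_k C[n, k] z^k,
    built from the closed-form Gram matrix of the monomials on the arc.
    """
    k = np.arange(N + 1)
    d = k[:, None] - k[None, :]
    G = np.where(d == 0, 2.0 * alpha,
                 2.0 * np.sin(alpha * d) / np.where(d == 0, 1, d))
    L = np.linalg.cholesky(G)      # G is real, symmetric, positive definite
    return np.linalg.inv(L)        # rows hold the polynomial coefficients

# numerical check of orthonormality on the arc
alpha, N = 1.6, 4
C = arc_orthonormal_polys(alpha, N)
theta = np.linspace(-alpha, alpha, 200001)
P = np.stack([np.polyval(c[::-1], np.exp(1j * theta)) for c in C])
gram = (P.conj() @ P.T) * (theta[1] - theta[0])
print(np.allclose(gram, np.eye(N + 1), atol=1e-3))   # -> True
\end{verbatim}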
\\
Our series expansion for the parametrization of the local form factors now takes the form
\begin{align}
\label{eq:para}
f_\lambda^\Gamma(q^2) & = \frac{1}{\mathcal{P}(q^2) \, \phi_{f_\lambda^\Gamma}(z)} \sum_{i=0}^{\infty} a^{i}_{f_\lambda^\Gamma} \, \, p_{i}(z),
\end{align}
where $\mathcal{P}(q^2) = z(q^2; t_0 = m^2_{\text{pole}}, t_+)$ is the Blaschke factor, $\phi_{f_\lambda^\Gamma}(z)$ is the outer function and $p_{i}(z)$ are the orthonormal polynomials. The Blaschke factor takes into account bound-state poles below the lowest branch point $t_+$ without changing the contribution to the dispersive bound~\cite{Caprini:2019osi}. Here, we assume each form factor to have a single bound-state pole, with the masses given in Table~\ref{tab::mpoles}. For our parametrization, we choose $t_0 = q^2_{\text{max}} = (m_{\Lambda_b} - m_{\Lambda})^2$. The rationale is as follows: at negative values of $z$, the Szeg\H{o} polynomials oscillate as functions of their index $n$. Our choice of $t_0$ means that the entire semileptonic phase space is mapped onto the \emph{positive} real $z$ axis. Given that the lattice data do not show any oscillatory pattern, this choice appears to be the most appropriate. While our parametrization appears to feature all of the benefits inherent to the BGL parametrization for meson-to-meson form factors~\cite{Boyd:1994tt}, this is not the case. The BGL parametrization uses the $z^n$ monomials, which are bounded on the open unit disk. As a consequence, the form-factor parametrization for processes such as $\bar{B}\to D$ is an \emph{absolutely convergent} series~\cite{Caprini:2019osi}. This benefit does not translate to the baryon-to-baryon form factors.\footnote{It also does not transfer to form factors for processes such as $\bar{B}_s\to D_s$ or $\bar{B}_s\to \bar{K}$, which suffer from the same problem: branch cuts below their respective pair-production thresholds. Our approach can be adjusted for these form factors.} The polynomials $p_n$ are not bounded on the open unit disk. In fact, the Szeg\H{o} recurrence relation combined with the Szeg\H{o} condition implies that $|p_n(z = 0)|$ increases exponentially with $n$ for large $n$. Nevertheless, our proposed parametrization proves useful in limiting the truncation error, as demonstrated in Sec.~\ref{sec::Numerical-Analysis}. Based on Eq.~(\ref{eq::vector-FF})--(\ref{eq::tensor-FF}), we arrive at \emph{strong unitarity bounds} on the form-factor coefficients:
\begin{align}
\label{eq:sUB-1}
\sum_{i=0}^\infty |a_{f_t^V}^i|^2 &\leq 1 - \frac{\chi_V^{J=0}\big|_\text{1pt}}{\chi_V^{J=0}\big|_\text{OPE}}\, , & \sum_{i=0}^\infty |a_{f_t^A}^i|^2 \leq 1 - \frac{\chi_A^{J=0}\big|_\text{1pt}}{\chi_A^{J=0}\big|_\text{OPE}} \, , \\
\label{eq:sUB-2}
\sum_{i=0}^{\infty} \left \{|a_{f_0^V}^i|^2 +|a_{f_\perp^V}^i|^2 \right \} &\leq 1 - \frac{\chi_V^{J=1}\big|_\text{1pt}}{\chi_V^{J=1}\big|_\text{OPE}} \, , & \sum_{i=0}^{\infty} \left\{ |a_{f_0^A}^i|^2 +|a_{f_\perp^A}^i|^2 \right\} \leq 1 - \frac{\chi_A^{J=1}\big|_\text{1pt}}{\chi_A^{J=1}\big|_\text{OPE}} \, , \\
\label{eq:sUB-3}
\sum_{i=0}^{\infty} \left \{|a_{f_0^T}^i|^2 +|a_{f_\perp^T}^i|^2 \right \} &\leq 1 - \frac{\chi_T^{J=1}\big|_\text{1pt}}{\chi_T^{J=1}\big|_\text{OPE}} \, , & \sum_{i=0}^{\infty} \left\{ |a_{f_0^{T5}}^i|^2 +|a_{f_\perp^{T5}}^i|^2 \right\} \leq 1 - \frac{\chi_{T5}^{J=1}\big|_\text{1pt}}{\chi_{T5}^{J=1}\big|_\text{OPE}}\, .
\end{align} Note that here we also subtracted the one-particle contributions, which are discussed in Sec.~\ref{sec:one-particle}. However, this subtraction decreases the bound by only $\sim 10 \%$. In our statistical analysis of only $\Lambda_b\to \Lambda$ form factors, we find that this subtraction is not yet numerically significant. Nevertheless, we advocate to include the one-particle contributions in global fits of the known local $b\to s$ form factors, where their impact will likely be numerically relevant. \begin{table}[t] \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{C{4cm} C{3cm} C{3cm}} \toprule Form factor & Pole spin-parity $J^P$ & $m_{\text{pole}}$ in GeV \\ \midrule \\[-1em] $f_0^V, f_\perp^V, f_0^T, f_\perp^T$ & $1^-$ & 5.416 \\ $f_t^V$ & $0^+$ & 5.711 \\ $f_0^A, f_\perp^A, f_0^{T5}, f_\perp^{T5}$ & $1^+$ & 5.750 \\ $f_t^A$ & $0^-$ & 5.367 \\ \bottomrule \end{tabular} \renewcommand{\arraystretch}{1.0} \end{center} \caption{List of $B_s$ meson pole masses appearing in the different form factors. The values are taken from Refs.~\cite{Lang:2015hza,PDG2020}.} \label{tab::mpoles} \end{table} At this point, we have not yet employed the endpoint relations given in Eq. (\ref{eq:ep1}) - (\ref{eq:ep3}). By using the endpoint relations, we can express the zeroth coefficient of $f_t^V, f_t^A, f_\perp^A, f_\perp^T, f_0^{T5}$ in terms of coefficients of other form factors. Our proposed parametrization has two tangible benefits. First, each form factor parameter $a_k$ is bounded in magnitude, $|a_k| \leq 1$. The $N$ dimensional parameter space is therefore restricted to the hypercube $[-1, +1]^N$. We refer to this type of parameter bound as the \emph{weak bound}\footnote{% Our definitions of \emph{weak} and \emph{strong} bounds differ from the definitions proposed in Ref.~\cite{Bigi:2017jbd}. There, what we call the \emph{weak bound} is not considered in isolation, and what we call the \emph{strong bound} is labelled a ``weak bound'', in contrast to a ``strong bound'' that affects more than one decay process. }. It facilitates fits to theoretical or phenomenological inputs on the form factors, since the choice of a prior is not subjective. Second, the form factor parameters are restricted by the \emph{strong bounds} \refeq{sUB-1} to \refeq{sUB-3}. In the absence of the small number of exact relations between the form factors that we discussed earlier, this strong bound is in fact an upper bound on the sum of the squares of the form-factor parameters. As a consequence, the parameter space is further restricted to the combination of four hyperspheres, one per bound.\footnote{% The form factor relations mix the parameters of form factors that belong to different strong bounds, thereby making a geometric interpretation less intuitive. } The strong bounds imply that the sequence of form factor parameters asymptotically falls off faster than $1/\sqrt{k}$. This behaviour does not prove absolute convergence of the series expansion of the form factors, which would require a fall off that compensates the exponential growth of the polynomials. Nevertheless, we will assume sufficient convergence of the form factors from this point on. Below, we check empirically if the strong bound suffices to provide bounded uncertainties for the form factors in truncated expansions. 
\section{Statistical Analysis} \label{sec::Numerical-Analysis} \subsection{Data Sets} To illustrate the power of our proposed parametrization, we carry out a number of Bayesian analyses of the lattice QCD results for the full set of $\Lambda_b\to \Lambda$ form factors as provided in Ref.~\cite{Detmold:2016pkz}. These analyses are all carried out using the \texttt{EOS}\xspace software~\cite{vanDyk:2021sup}, which has been modified for this purpose. Our proposed parametrization for the $\Lambda_b\to\Lambda$ form factors is implemented as of \texttt{EOS}\xspace version 1.0.2~\cite{EOS:v1.0.2}. The form factors are constrained by a multivariate Gaussian likelihood that jointly describes synthetic data points of the form factors, up to three per form factor. Each data point is generated for one of three possible values of the momentum transfer $q^2$: $q^2_i \in \{13, 16, 19 \}\,{\rm GeV}^2$. The overall $q^2$ range is chosen based on the availability of lattice QCD data points in Ref.~\cite{Detmold:2016pkz}. The synthetic data points are illustrated by black crosses in Figs.~\ref{fig:formfactor-nominal}--\ref{fig:formfactor-truncation}. Reference~\cite{Detmold:2016pkz} provides two sets of parametrizations of the form factors in the continuum limit and for physical quark masses, obtained from one ``nominal'' and one ``higher-order'' fit to the lattice data. The nominal fit uses first-order $z$ expansions, which are modified with correction terms that describe the dependence on the lattice spacing and quark masses. The higher-order fit uses second-order $z$ expansions and also includes higher-order lattice-spacing and quark-mass corrections. The parameters that only appear in the higher-order fit are additionally constrained with Gaussian priors. In the case of the lattice-spacing and quark-mass corrections, these priors are well motivated by effective field theory considerations~\cite{Detmold:2016pkz}. In the higher-order fit, the coefficients $a^2_{f_\lambda^\Gamma}$ of the $z$ expansion are also constrained with Gaussian priors, centered around zero and with widths equal to twice the magnitude of the corresponding coefficients $a^1_{f_\lambda^\Gamma}$ obtained within the nominal fit. This choice of prior is less well motivated but has little effect in the high-$q^2$ region. Ref.~\cite{Detmold:2016pkz} recommends the following procedure for evaluating the form factors in phenomenological applications: the nominal-fit results should be used to evaluate the central values and statistical uncertainties, while a combination of the higher-order-fit and nominal-fit results should be used to estimate systematic uncertainties, as explained in Eqs.~(50)--(56) of Ref.~\cite{Detmold:2016pkz}. To generate the synthetic data points for the present work, we first updated both the nominal and the higher-order fits of Ref.~\cite{Detmold:2016pkz} with minor modifications: we now enforce the endpoint relations among the form factors at $q^2=0$ \emph{exactly}, rather than approximately as done in Ref.~\cite{Detmold:2016pkz}, and we include one additional endpoint relation, $f_\perp^{T5}(0)=f_\perp^T(0)$, which is not used in Ref.~\cite{Detmold:2016pkz}. The synthetic data points for $f_0^V$, $f_0^A$ and $f_\perp^T$ at $q^2 = 13\,{\rm GeV}^2$, and for $f_0^A$ and $f_0^{T5}$ at $q^2 = 19\,{\rm GeV}^2$, are strongly correlated with other data points. This can be understood, since five exact relations hold between pairs of form factors, either at $q^2 = 0$ or at $q^2 = (m_{\Lambda_b} - m_{\Lambda})^2$.
We remove the synthetic data points listed above, which renders the covariance matrix regular and positive definite. We arrive at a 25-dimensional multivariate Gaussian likelihood. The likelihood is accessible under the name \begin{center} \texttt{Lambda\_b->Lambda::f\_time+long+perp\^{}V+A+T+T5[nominal,no-prior]\@DM:2016A} \end{center} as part of the constraints available within the \texttt{EOS}\xspace software. \subsection{Models} In this analysis, we consider a variety of statistical models. First, we truncate the series shown in Eq.~(\ref{eq:para}) at $N=2$, $3$ or $4$. The number of form factor parameters is $10 (N + 1)$, due to a total of ten form factors under consideration. Since we implement the five form factor relations \emph{exactly}, the number of fit parameters is smaller than the number of form factor parameters by five. Hence, we arrive at between $P=25$ and $P=45$ fit parameters. We use three different types of priors in our analyses. An analysis labelled ``w/o bound'' uses a uniform prior, which is chosen to contain at least $99\%$ of the integrated posterior probability. An analysis labelled ``w/ weak bound'' uses a uniform prior on the hypercube $[-1, +1]^P$, thereby applying the weak bound for all fit parameters. An analysis labelled ``w/ strong bound'' uses the same prior as the weak bound. In addition, we modify the posterior to include the following element, which can be interpreted either as an informative non-linear prior or as a factor of the likelihood. For each of the six bounds $B(\lbrace a_n\rbrace)$, we add the penalty term~\cite{Bordone:2019vic} \begin{equation} \begin{cases} 0 & \rho_B < 1,\\ 100 (\rho_B - 1)^2 & \text{otherwise} \end{cases}\, \end{equation} to $-2 \ln \text{Posterior}$. Here, $\rho_B = \sum_n |a_n|^2$, and the sum includes only the parameters affected by the given bound $B$. The additional terms penalize parameter points that violate any of the bounds with a one-sided $\chi^2$-like term. The factor of $100$ corresponds to the inverse square of the relative theory uncertainty on the bound, which we assume to be $10\%$. This uncertainty is compatible with the results obtained in Ref.~\cite{Bharucha:2010im}. In the above, we use unity as the largest allowed saturation of each bound. As discussed in Sec.~\ref{sec:th:had-repr}, one-body and mesonic two-body contributions to the bounds are known. They could be subtracted from the upper bounds. However, we suggest instead to include these contributions on the left-hand side of the bounds in a global analysis of the available $b\to s$ form factor data. Such a global analysis clearly benefits from this treatment, which induces non-trivial theory correlations among the form factor parameters across different processes; it also goes beyond the scope of the present work. For $N=2$, the number of parameters is equal to the number of data points, and we arrive at zero degrees of freedom. For $N > 2$, the number of parameters exceeds the number of data points. Hence, a frequentist statistical interpretation is not possible in these cases. Within our analyses, we instead explore whether the weak or strong bounds suffice to limit the a-posteriori uncertainty on the form factors, despite having zero or negative degrees of freedom.
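To make the penalty term concrete, the following minimal Python sketch shows how it can be added to $-2 \ln \text{Posterior}$. This is an illustrative stand-in rather than the actual \texttt{EOS}\xspace implementation, and the function and variable names are ours.
\begin{verbatim}
import numpy as np

def bound_penalty(rho_B, weight=100.0):
    """One-sided chi^2-like penalty for one dispersive bound B.

    weight = 1 / (relative theory uncertainty)^2; the value 100
    corresponds to the assumed 10% uncertainty on the bound.
    """
    return 0.0 if rho_B < 1.0 else weight * (rho_B - 1.0)**2

def neg2_log_posterior(params, neg2_log_likelihood, bounds):
    """bounds maps each bound B to the indices of the parameters it covers."""
    total = neg2_log_likelihood(params)
    for indices in bounds.values():
        rho_B = np.sum(np.square(params[indices]))  # saturation of bound B
        total += bound_penalty(rho_B)
    return total
\end{verbatim}

\subsection{Results}

\begin{figure}[p!]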
\begin{tabular}{cc} \includegraphics[width=.4\textwidth]{plots/plot-time-V-nominal.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-long-V-nominal.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-perp-V-nominal.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-time-A-nominal.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-long-A-nominal.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-perp-A-nominal.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-long-T-nominal.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-perp-T-nominal.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-long-T5-nominal.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-perp-T5-nominal.pdf} \end{tabular} \caption{% Uncertainty bands for the a-posteriori form-factor predictions of the ten form factors. The bands comprise the central $68\%$ probability interval at every point in $q^2$. We show the form factor results at $N =2$ in the absence of any bounds, using weak bounds $|a_{V,\lambda}^i| <1$, and using the strong bounds (see text), respectively. The markers indicate the synthetic lattice data points. } \label{fig:formfactor-nominal} \vspace{-100pt} \end{figure} \begin{figure}[p] \begin{tabular}{cc} \includegraphics[width=.4\textwidth]{plots/plot-time-V-strong-only.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-long-V-strong-only.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-perp-V-strong-only.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-time-A-strong-only.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-long-A-strong-only.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-perp-A-strong-only.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-long-T-strong-only.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-perp-T-strong-only.pdf} \\ \includegraphics[width=.4\textwidth]{plots/plot-long-T5-strong-only.pdf} & \includegraphics[width=.4\textwidth]{plots/plot-perp-T5-strong-only.pdf} \end{tabular} \caption{% Uncertainty bands for the a-posteriori form-factor predictions of the ten form factors. The bands comprise the central $68\%$ probability interval at every point in $q^2$. We show the form factor results at $N \in \{2, 3, 4\}$ when using the strong bound. Note that for $N > 2$ we have more parameters than data points. Finite uncertainty envelopes are enforced by the bound. The markers indicate the synthetic lattice data points. } \label{fig:formfactor-truncation} \vspace{-100pt} \end{figure} We begin with three analyses at truncation $N=2$, using each of the three types of priors defined above. In all three analyses, we arrive at the same best-fit point. This indicates clearly that the best-fit point not only fulfills the weak bound, but also the strong bound. We explicitly confirm this by predicting the saturation of the individual bounds at the best-fit point. These range between $12\%$ (for the $1^-$ bound) and $33\%$ (for the $1^+$ bound), which renders the point \emph{well within} the region allowed by the strong bound. Accounting for the known one-particle contributions does not change this conclusion. At the maximum-likelihood point, the $\chi^2$ value arising from the likelihood is compatible with zero at a precision of $10^{-5}$ or better. For each of the three analyses, we obtain a unimodal posterior and sample from the posterior using multiple Markov chains and the Metropolis-Hastings algorithm~\cite{Metropolis:1953am,Hastings:1970aa}. 
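For orientation, a random-walk Metropolis-Hastings step of the kind used here can be sketched in a few lines. This is a generic illustration, not the \texttt{EOS}\xspace sampler, and the step size and chain length are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def metropolis_hastings(neg2_log_post, x0, n_steps=50000, step=0.02):
    """Random-walk Metropolis-Hastings on -2 ln(posterior)."""
    x = np.asarray(x0, dtype=float)
    f_x = neg2_log_post(x)
    chain = []
    for _ in range(n_steps):
        y = x + rng.normal(scale=step, size=x.size)    # symmetric proposal
        f_y = neg2_log_post(y)
        # accept with probability min(1, exp(-(f_y - f_x) / 2))
        if rng.uniform() < np.exp(0.5 * (f_x - f_y)):
            x, f_x = y, f_y
        chain.append(x.copy())
    return np.array(chain)
\end{verbatim}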
We use these samples to produce posterior-predictive distributions for each of the form factors, which are shown in Figures~\ref{fig:formfactor-nominal}--\ref{fig:formfactor-truncation}. We observe that the strong bound has some impact on the form factor uncertainties, chiefly far away from the region where synthetic data points are available. For $N=2$, we do not find a significant reduction of the uncertainties due to the application of the strong bound. Rather, it influences the shape of the form factors and suppresses the appearance of local minima in the form factors close to $q^2 = 0$, which become visible when extrapolating to negative $q^2$. The modified shape aligns better with the naive expectation that the form factors rise monotonically with increasing $q^2$ below the first subthreshold pole. It also provides confidence that, with more precise lattice QCD results, analyses of the nonlocal form factors at negative $q^2$ can be undertaken. This opens the door toward analyses in the spirit of what has been proposed in Refs.~\cite{Bobeth:2017vxj,Gubernari:2020eft}. We continue with three analyses using the strong bound, for $N=2$, $N=3$, and $N=4$. Due to the nature of the orthonormal polynomials, the best-fit point for $N=2$ is not expected to be nested within the $N=3$ and $N=4$ solutions. Similarly, the $N=3$ best-fit point is not nested within the $N=4$ solution. In all three cases, we find a single point that maximizes the posterior. For all three points we find that the bounds are fulfilled, and consequently we obtain $\chi^2$ values consistent with zero. The form-factor shapes are compatible between the $N=2$, $3$ and $4$ solutions. We show the a-posteriori form factor envelopes at $68\%$ probability together with the median values in \reffig{formfactor-truncation}. A clear advantage of our proposed parametrization is that the uncertainties in the large-recoil region, {\it i.e.} away from the synthetic data points, do not increase dramatically when $N$ increases. This is in stark contrast with a scenario without any bounds on the coefficients $a_n$, where the a-posteriori uncertainty for the form factors would be divergent for negative degrees of freedom. This indicates that the bounds are able to constrain the parametrization even in an underconstrained analysis and gives confidence that the series can be reliably truncated in practical applications of this method. Figure~\ref{fig:saturation} shows the saturation of the strong bound for the different form factors with $N=2$, $3$ and $4$. For $N = 2$, the bounds are saturated at the level of $10$--$30\%$. This is as large as or even larger than the one-particle contributions, which saturate the bounds by $\sim 10\%$, and much larger than the two-particle mesonic contributions, which saturate the bounds by only $1$--$4\%$~\cite{Bharucha:2010im}. As $N$ increases, the average saturation of the bounds increases. This is expected, as additional parameters have to be included in the bound. The observed behaviour of the bound saturation provides further motivation for a global analysis of all $b\to s$ form factor data. Based on the updated analysis of the lattice data of Ref.~\cite{Detmold:2016pkz}, we produce a-posteriori predictions for the tensor form factor $f_\perp^T$ at $q^2 = 0$ from our analyses. We use this form factor as an example due to its phenomenological relevance in predictions of $\Lambda_b\to \Lambda\gamma$ observables.
Moreover, its location at $q^2 = 0$ provides the maximal distance between a phenomenologically relevant quantity and the synthetic lattice QCD data points, thereby maximizing the parametrization's systematic uncertainty. Applying the strong bound, we obtain \begin{equation} \begin{aligned} f_\perp^T(q^2 = 0)\big|_{N=2} & = 0.190 \pm 0.043\,, \\ f_\perp^T(q^2 = 0)\big|_{N=3} & = 0.173 \pm 0.053\,, \\ f_\perp^T(q^2 = 0)\big|_{N=4} & = 0.166 \pm 0.049\,. \\ \end{aligned} \end{equation} We observe a small downward trend in the central value and stable parametric uncertainties. The individual bands are compatible with each other within their uncertainties. We remind the reader that our results are obtained for zero or negative degrees of freedom and should therefore not be compared with the behaviour of a regular fit. Our results should be compared with \begin{equation} f_\perp^T(q^2 = 0)\big|_{\text{\cite{Detmold:2016pkz}}} = 0.166 \pm 0.072\,. \\ \end{equation} This value and its uncertainty are obtained from the data and method described in Ref.~\cite{Detmold:2016pkz}; however, they include the exact form factor relation \refeq{ep3}, which has not been used previously. Our parametrization exhibits a considerably smaller parametric uncertainty. \begin{figure}[t] \begin{tabular}{cc} \includegraphics[width=.49\textwidth]{plots/plot-saturation-0m.pdf} & \includegraphics[width=.49\textwidth]{plots/plot-saturation-0p.pdf} \\ \includegraphics[width=.49\textwidth]{plots/plot-saturation-1m.pdf} & \includegraphics[width=.49\textwidth]{plots/plot-saturation-1p.pdf} \\ \includegraphics[width=.49\textwidth]{plots/plot-saturation-T.pdf} & \includegraphics[width=.49\textwidth]{plots/plot-saturation-T5.pdf} \end{tabular} \caption{% Relative saturation of the dispersive bounds for the different spin-parity channels $J^P$, obtained from posterior samples. The saturations are shown for different truncations $N$, with the coefficients constrained through the strong unitarity bound. The vertical bands comprise the central $68\%$ probability interval. } \label{fig:saturation} \end{figure} \section{Conclusion} \label{sec:conc} In this work we have introduced a new parametrization for the ten independent local $\Lambda_b \to \Lambda$ form factors. Our parametrization has the advantage that the parameters are bounded, due to the use of orthonormal polynomials that diagonalize the form factors' contributions within their respective dispersive bounds. Using a Bayesian analysis of the available lattice QCD results for the $\Lambda_b \to \Lambda$ form factors, we illustrate that our parametrization provides excellent control of systematic uncertainties when extrapolating from low to large hadronic recoil. To that end, we investigate our parametrization for different truncations and observe that the extrapolation uncertainty does not increase significantly within the kinematic phase space of $\Lambda_b\to\Lambda \ell^+\ell^-$ decays. We point out that the dispersive bounds are able to constrain the form factor uncertainties to such an extent that even massively underconstrained analyses still exhibit stable uncertainty estimates. This is a clear benefit compared to other parametrizations. For future improvements of the proposed parametrization, one can insert the framework of dispersive bounds directly into the lattice-QCD analysis.
Moreover, by including the one-particle contributions, as discussed in Sec.~\ref{sec:one-particle}, and other two-particle contributions, as discussed in Sec.~\ref{sec:two-particle}, in a global analysis of the available $b \to s$ form factor data, we would expect even more precise results for the form factors, since the upper bounds would be more strongly saturated. \subsubsection*{Acknowledgements} We would like to thank Marzia Bordone, Nico Gubernari, Martin Jung, and M\'eril Reboud for helpful discussions. The work of TB is supported by the Royal Society (UK). The work of SM is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0009913. The work of MR is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257. The work of DvD is supported by the DFG within the Emmy Noether Programme under grant DY-130/1-1 and the Sino-German Collaborative Research Center TRR110 ``Symmetries and the Emergence of Structure in QCD'' (DFG Project-ID 196253076, NSFC Grant No. 12070131001, TRR 110). DvD was supported in the final phase of this work by the Munich Institute for Astro- and Particle Physics (MIAPP), which is funded by the DFG under Germany's Excellence Strategy -- EXC-2094 -- 390783311. \clearpage
\section{Introduction} The statistics of `recurrence times', defined as the random durations of the time intervals between two consecutive events, is widely used to characterize systems punctuated by short-duration occurrences interspersed between quiet phases. For instance, the statistics of recurrence times between earthquakes is the basis for hazard assessment in seismology. The statistics of recurrence times has recently been the focus of researchers interested in the properties of different natural \cite{Zaslavsky91,Corral2003,Corral2004,Davisenetal07} and social systems \cite{Barabasi_Nature05,Vasquez_et_al_06}. The study of recurrence times between earthquakes is perhaps the most advanced quantitatively, due to the availability of data and the high stakes involved. The statistics of earthquake recurrence times in large geographic domains have been reported to be characterized by universal intermediate power law asymptotics, both for single homogeneous regions \cite{Bak2002,Corral2003} and when averaged over multiple regions \cite{Bak2002,Corral2004}. These intermediate power laws, as well as the scaling properties of the distribution of recurrence times, were theoretically explained by the present authors \cite{SaiSor2006,SaiSor2007}, using the parsimonious ETAS model of earthquake triggering \cite{Ogata88}, which is presently the benchmark model in statistical seismology. We recall that the acronym ETAS stands for Epidemic-Type Aftershock Sequence and that the ETAS model is an incarnation of the Hawkes self-excited conditional Poisson process \cite{Hawkes1,Hawkes2,Hawkes3,Hawkes4}. Our previous works \cite{SaiSor2006,SaiSor2007} have shown that one does not need to invoke new universal laws or fancy scaling in order to explain quantitatively, with high accuracy, the previously reported scaling laws \cite{Bak2002,Corral2003,Corral2004}. In other words, the findings reported in \cite{Bak2002,Corral2003,Corral2004} do not contain evidence for any new physics/geophysics laws but constitute just a reformulation of the following laws: \begin{itemize} \item Earthquakes tend to trigger other earthquakes according to the same triggering mechanism, independently of their magnitudes. \item The Omori-Utsu law for aftershocks, generalized into the phenomenon of earthquake triggering where all earthquakes are treated on the same footing, states that the rate of events that are triggered by a preceding event that occurred at time $0$ decays as \begin{equation}\label{omorilawexpr} f_1(t) = \frac{\theta t_0^\theta}{(t_0+t)^{1+\theta}} , \qquad 0<\theta \ll 1 , \qquad t_0>0 , \qquad t> 0~. \end{equation} The function $f_1(t)$ can also be interpreted as the probability density function (pdf) of the durations of the waiting time intervals between the reference ``mother'' event and the triggered events of first generation, i.e., those directly triggered by the mother event. The constant $t_0$ describes a characteristic microscopic time scale of the generalized Omori law that ensures regularization at small times and normalization. \end{itemize} Our previous analytical derivations \cite{SaiSor2006,SaiSor2007}, found in excellent agreement with empirical data \cite{Bak2002,Corral2003,Corral2004}, were essentially based on the long memory of the Omori law (\ref{omorilawexpr}), $f_1(t) \sim t^{-1-\theta}$ for large $t$ with $0 \leq \theta <1$. However, they did not take into account the impact of heterogeneous fertilities, which come in wildly varying values.
Indeed, the number of daughters triggered by an earthquake of a given magnitude grows exponentially with its magnitude. For instance, a magnitude-8 earthquake may have tens of thousands of aftershocks of magnitude larger than 2, while a magnitude-2 earthquake may generate no more than 0.1 earthquake on average of magnitude larger than 2 \cite{Helmstteterfertility}. Given the fact that the distribution of magnitudes is itself an exponentially decaying function of magnitudes (called the Gutenberg-Richter law), this translates into a heavy-tailed distribution of fertilities \cite{Saihelmsor05}, i.e. the distribution of the number of first generation events triggered by a given event has the following power law asymptotic: \begin{equation}\label{poneralasymp} p_1(r) \sim r^{-\alpha-1} , \qquad r\to \infty , \qquad \alpha \in (1,2) ~. \end{equation} Precisely, $p_1(r)$ is the probability that the random number $R_1$ of first generation aftershocks triggered independently by a given mother event is equal to a given integer $r$. In fact, the main approximation in our previous work \cite{SaiSor2006,SaiSor2007} was to consider that, for the estimation of the distribution of recurrence times, it is sufficient to assume that each mother event triggers at most one event, so that the power law \eqref{poneralasymp} is completely irrelevant. This surprising approximation was justified by the focus on the tail of the distribution of recurrence times, to which typically only one event, among the set of events triggered by a given earthquake, contributes. The goal of the present paper is to reexamine this approximation and present an exact analysis of the impact of the power law form \eqref{poneralasymp} of the distribution of fertilities on the distribution of the recurrence times. To make the analysis feasible and exact, we consider the case where the Omori law is no longer heavy-tailed but has a shorter memory in the form of an exponential distribution, expressed by a suitable choice of time units in the form \begin{equation}\label{expdisdef} f_1(t) = e^{-t} . \end{equation} In addition to allowing exact analytical expressions, our study of the case of an exponential memory kernel (\ref{expdisdef}) is motivated by the fact that, for many applications, this is the default assumption \cite{Chavezetal05,BauwensHautsch09,Eymanetal10,Azizetal10,Aitsahaliaetal10,SalmonTham08,Filisor12}. Therefore, this parameterization has a genuine interest and intrinsic value. This exponential memory function should not be confused with the Poisson model, which has no memory. In contrast, the Hawkes process takes into account the full set of interactions between all past events and the future events, mediated by the influence function given by $f_1(t)$. The main result of the present paper is the exact expression for the full distribution $f(t)$ of recurrence times that results from all the possible cascades of triggering of events over all generations. We make explicit the substantial dependence of the power law exponents characterizing the distribution $f(t)$ on the exponent $\alpha$ of the power law tail \eqref{poneralasymp} of the distribution of fertilities and on the branching ratio $n$, defined as the mean number of events of first generation triggered per event: \begin{equation} n= \text{E}\left[R_1\right] ~. \label{wryjujiuk} \end{equation} The paper is organized as follows.
Section 2 presents the self-excited conditional Hawkes Poisson process and the main exact equations obtained using generating probability functions. Section 3 studies the probability of quiescence within a given fixed time interval and derives different statistical properties of earthquake clusters, such as the mean number of earthquakes in clusters and their mean duration. Section 4 presents the main results concerning the probability density functions (pdf) of the waiting times between successive earthquakes. Section 5 summarizes our main results and concludes. The Appendix specifies the analytical model used in our derivations and describes useful statistical properties of first generation aftershocks. While we use the language of seismology and events are named `earthquakes', our results obviously apply to the many natural and social-economic-financial systems in which self-excitation occurs. \section{Branching model of earthquake triggering} Before discussing the statistical properties of recurrence times, let us develop the statistical description of the random number of earthquakes occurring within the time window $(t,t+\tau)$. The Hawkes model that we consider assumes that there are exogenous events (called ``immigrants'' in the literature on branching processes, or ``noise events''), occurring spontaneously according to a Poissonian stationary flow statistics. Thus, the successive instants $\dots<t_{-1}<t_0 < t_1 < t_2 < \dots$ of the noise earthquakes belong to a stationary Poisson point process with mean rate $\nu=\text{const}$. Then, the Hawkes model assumes that any given noise earthquake, occurring at $t_k$, triggers a total of $R_1^k$ first generation earthquakes. We assume that the total numbers $\{R_1^k\}$ of first generation aftershocks triggered by the noise earthquakes are iid random integers, with the same generating probability function (GPF) $G_1(z)$. Furthermore, the distribution of waiting times $t$ between the time $t_k$ of a given noise earthquake and the occurrences of its aftershocks is assumed to be $f_1(t)$ defined by expression (\ref{expdisdef}). In turn, each of the aftershocks of first generation triggered by a given noise earthquake also triggers independently its own first generation aftershocks, with the same statistical properties (same GPF) as the first generation aftershocks. Specifically, the Hawkes self-excited conditional Poisson process is defined by the following form of the intensity function \begin{equation} \lambda(t | H_t, {\cal P}) = \lambda_{\rm noise}(t) + \sum_{i | t_{i} < t} R_1^i~ f_1(t-t_i)~, \label{hyjuetg2tgj} \end{equation} where the history $H_t = \{ t_i \}_{1 \leq i \leq i_t,~ t_{i_t} \leq t < t_{i_t+1} }$ includes all events that occurred before the present time $t$ and the sum in expression (\ref{hyjuetg2tgj}) runs over all past events. The set of parameters is denoted by the symbol ${\cal P}$. The term $\lambda_{\rm noise}(t)$ means that there are some external noise (or immigrant, exogenous) sources occurring according to a Poisson process with intensity $\lambda_{\rm noise}(t)$, which may be a function of time, but all other events can both be triggered by previous events and themselves trigger their offsprings. This gives rise to the existence of many generations of events. In the sequel, we will consider only the case where $\lambda_{\rm noise}(t) = \lambda_{\rm noise}$ is constant.
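To make the cluster construction concrete, the following minimal Python sketch simulates this branching process generation by generation. The fertility sampler is a stand-in with the correct mean $n$ and tail exponent, not the exact law fixed by the GPF $G_1(z)$ of the Appendix, and all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import zeta

rng = np.random.default_rng(0)

nu, n, alpha, T = 0.01, 0.9, 1.5, 100000.0   # illustrative parameter values

def draw_fertility(size):
    # integer fertility with mean n and tail p1(r) ~ r^(-alpha-1);
    # a zero / discretized-Pareto mixture as a stand-in for the exact
    # law fixed by G1(z) = 1 - n(1-z) + kappa (1-z)^alpha
    q = n / zeta(alpha)                  # E[floor(U^(-1/alpha))] = zeta(alpha)
    r = np.floor(rng.uniform(size=size) ** (-1.0 / alpha)).astype(int)
    return np.where(rng.uniform(size=size) < q, r, 0)

# generation 0: exogenous "noise" earthquakes, Poisson with mean rate nu
events = list(rng.uniform(0.0, T, size=rng.poisson(nu * T)))
parents = np.array(events)
while parents.size:                      # cascade over the generations
    offspring = []
    for t_p, r in zip(parents, draw_fertility(parents.size)):
        kids = t_p + rng.exponential(1.0, size=r)   # f1(t) = exp(-t) delays
        offspring.extend(kids[kids < T])
    parents = np.array(offspring)
    events.extend(offspring)

waits = np.diff(np.sort(events))
print(len(events), waits.mean())         # mean recurrence time ~ (1-n)/nu = 10
\end{verbatim}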
Under the above definitions and assumptions, one can prove that the GPF of the total number of all earthquakes (including noise and triggered events of all generations) occurring within the time window $(t,t+\tau)$ can be expressed as a product of two terms: \begin{equation}\label{thztauexpr} \Theta(z;\tau) =\Theta_\text{out}(z;\tau) \cdot \Theta_\text{in}(z;\tau) ~. \end{equation} The first term $\Theta_\text{out}(z;\tau)$ is the GPF of the number of all earthquakes in $(t,t+\tau)$ triggered by noise and triggered earthquakes that occurred up to time $t$. The second term $\Theta_\text{in}(z;\tau)$ is the GPF of the number of all noise earthquakes that occurred within $(t,t+\tau)$ and of all earthquakes triggered in that window by events also in $(t,t+\tau)$. The factorization of $\Theta(z;\tau)$ given by expression (\ref{thztauexpr}) simply expresses the independence between the branching processes starting outside and within the time window $(t,t+\tau)$. We have previously shown \cite{SaiSor2007,SaiSor2006a} that $\Theta_\text{out}(z;\tau)$ and $\Theta_\text{in}(z;\tau)$ are given respectively by \begin{equation}\label{mathadef} \Theta_\text{out}(z;\tau) = \exp\left(\nu \int_0^\infty \left[ \mathcal{G}(z;t,\tau) -1 \right] dt \right)~, \end{equation} and \begin{equation}\label{mathbdef} \Theta_\text{in}(z;\tau) = \exp\left(\nu \int_0^\tau \left[ z\, G(z;t)-1\right] dt \right)~ , \end{equation} where the functions $\mathcal{G}(z;t,\tau)$ and $G(z;\tau)$ satisfy the following nonlinear integral equations: \begin{equation}\label{gztallaft} \begin{array}{c} \displaystyle G(z;\tau) = Q\left[H(z;\tau)\right] ~, \\[1mm]\displaystyle H(z;\tau) = \rho(\tau) - z f_1(\tau) \otimes G(z;\tau)~ , \\[1mm]\displaystyle G(z;0) = 1 , \qquad H(z;0) = 0 ~. \end{array} \end{equation} and \begin{equation}\label{gztallaftseq} \begin{array}{c} \displaystyle \mathcal{G}(z;t,\tau) = Q[\mathcal{H}(z;t,\tau)] ~, \\[1mm]\displaystyle \mathcal{H}(z;t,\tau) = \rho(t+\tau) - \mathcal{G}(z;t,\tau) \otimes f_1(t) - z G(z;\tau) \otimes f_1(t+\tau)~ , \\[1mm]\displaystyle \mathcal{G}(z;0,\tau) = G(z;\tau) , \qquad \mathcal{H}(z;0,\tau) = H(z;\tau)~ . \end{array} \end{equation} The symbol $\otimes$ represents the convolution operator with respect to the repeating time arguments $t$ or $\tau$. We have introduced the auxiliary function \begin{equation}\label{qytrugoneomz} Q(y) := G_1(1-y)~ , \end{equation} and the cumulative distribution function (cdf) of the first generation aftershocks instants \begin{equation} \rho(t) = \int_0^t f_1(t') dt'~ . \label{syjiukoiloyi} \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig1.eps}\\ \end{center} \caption{Schematic illustration of the geometric sense of function $\mathcal{G}(z;t,\tau)$. It is the GPF of the random number $R(t,\tau)$ of all generations aftershocks within the window $(t,t+\tau)$ triggered by a noise earthquake that occurred at the origin of time $t=0$ (large solid arrow). The aftershocks within $(t,t+\tau)$ are depicted by the small solid arrows. The aftershocks triggered outside the window $(t,t+\tau)$ are depicted by the small dotted arrows. In the picture, $R(t,\tau)=5$.} \label{gztgeom} \end{figure} The functions $G(z;\tau)$ and $\mathcal{G}(z;t,\tau)$ have an intuitive geometric sense, illustrated by figure~\ref{gztgeom}. $G(z;\tau)$ is the GPF of the random number of aftershocks of all generations triggered till the current time $t=\tau$ by some noise earthquake (or some aftershock) that occurred at the origin of time $t=0$.
In turn, $\mathcal{G}(z;t,\tau)$ is the GPF of the random number of aftershocks of all generations triggered within the window $(t,t+\tau)$ (for $t>0$) by some noise earthquake that occurred at the origin of time. The GPFs $G(z;\tau)$ and $\mathcal{G}(z;t,\tau)$ are related by their definitions as follows: \begin{equation} \mathcal{G}(z;0,\tau) = G(z;\tau) . \end{equation} A key input of the theory within the GPF formalism is thus $G_1(z)$ or, equivalently, $Q(y)$ (via its definition (\ref{qytrugoneomz})). Given expression (\ref{poneralasymp}), we show in the Appendix that a convenient form is \begin{equation}\label{gonepowgamdef} G_1(z) = 1 - n (1-z) + \kappa (1-z)^\alpha , \qquad \alpha\in(1,2] ~, \end{equation} so that the corresponding auxiliary function $Q(y)$ \eqref{qytrugoneomz} takes the form \begin{equation}\label{qyalphadef} Q(y) = 1 - n y + \kappa y^\alpha~ . \end{equation} \section{Probability of quiescence, mean number of earthquakes and mean duration of clusters} \subsection{Probability of quiescence} For $z=0$, relation \eqref{thztauexpr} reduces to \begin{equation}\label{probsztauexpr} \text{P}(\tau) =\text{P}_\text{out}(\tau) \cdot \text{P}_\text{in}(\tau) , \end{equation} where \begin{equation} \text{P}(\tau) = \Theta(z=0;\tau) \end{equation} is the probability that there are no earthquakes (including the noise earthquakes and their aftershocks of all generations) within the window $(t,t+\tau)$. $\text{P}(\tau)$ can be decomposed as the product of the probability $\text{P}_\text{in}(\tau)$ that no noise earthquakes occur within $(t,t+\tau)$ and the probability $\text{P}_\text{out}(\tau)$ that no aftershocks occur within $(t,t+\tau)$ that could have been triggered by noise earthquakes and their aftershocks that occurred before and until time $t$. In the general case, $\text{P}_\text{in}(\tau)$ is given by \begin{equation}\label{prtauin} \text{P}_\text{in}(\tau) = e^{-\nu \tau} ~. \end{equation} In turn, $\text{P}_\text{out}(\tau) = \Theta_\text{out}(z=0;\tau)$ by definition. Using relation \eqref{mathadef} and equations \eqref{gztallaft}, \eqref{gztallaftseq}, we obtain \begin{equation}\label{protauout} \text{P}_\text{out}(\tau) = \exp\left(\nu \int_0^\infty \left[ \mathcal{G}(t,\tau) -1 \right] dt \right) ~, \end{equation} where \begin{equation} \mathcal{G}(t,\tau) = \mathcal{G}(z=0;t,\tau) \end{equation} is defined by \begin{equation}\label{gtauoutexpr} \mathcal{G}(t,\tau) = Q[\mathcal{H}(t,\tau)]~ . \end{equation} The auxiliary function $\mathcal{H}(t,\tau)$ is solution of the nonlinear integral equation \begin{equation}\label{gtouteqs} \begin{array}{c} \displaystyle \mathcal{H}(t,\tau) = \rho(t+\tau) - Q[\mathcal{H}(t,\tau)] \otimes f_1(t) ~, \\[1mm]\displaystyle \mathcal{H}(0,\tau) = \rho(\tau)~ . \end{array} \end{equation} \subsection{Solution of equation \eqref{gtouteqs} and determination of $\text{P}(\tau)$ for the exponential pdf $f_1(t)$} Using the exponential form of the Omori law (\ref{expdisdef}), it is possible to obtain an exact analytical solution of equation \eqref{gtouteqs}, which allows us to explore in detail the probabilistic properties of recurrence times. Using the form (\ref{expdisdef}), it is easy to show that equation \eqref{gtouteqs} reduces to the initial value problem \begin{equation} \frac{d \mathcal{H}}{dt} + \mathcal{H} + Q\left[\mathcal{H}\right] = 1 , \qquad \mathcal{H}(0,\tau) = \rho(\tau) ~.
\end{equation} Using expression \eqref{qyalphadef} for $Q(y)$ leads to \begin{equation}\label{matheqalp} \frac{d \mathcal{H}}{dt} + (1-n) \mathcal{H} +\kappa \mathcal{H}^\alpha = 0 , \qquad \mathcal{H}(0,\tau) = \rho(\tau) ~, \end{equation} whose solution is given by \begin{equation}\label{mathxtexpr} \mathcal{H}(t,\tau) = \left[ (1 - e^{-\tau})^{1-\alpha} ~e^{(\alpha-1)(1-n)t} + \gamma \left(e^{(\alpha-1)(1-n)t} - 1\right) \right]^{1/(1-\alpha)}~, \end{equation} where \begin{equation}\label{gamtilaldef} \gamma = \frac{\kappa}{1-n} ~. \end{equation} We have used the fact that \begin{equation}\label{atauexp} \rho(\tau) = 1 - e^{-\tau}~, \end{equation} as derived from the exponential pdf $f_1(t)$ given by \eqref{expdisdef} and definition (\ref{syjiukoiloyi}). We can now rewrite the probability $\text{P}_\text{out}(\tau)$ \eqref{protauout} in the form \begin{equation}\label{mathafoverline} \text{P}_\text{out}(\tau) = e^{-\nu \overline{F}(\tau)}~ , \end{equation} where \begin{equation}\label{overfdef} \overline{F}(\tau) = \int_0^\infty \left[1- \mathcal{G}(t,\tau) \right] dt~. \end{equation} Taking into account expression \eqref{mathxtexpr} and the equality \begin{equation} \mathcal{G}(t,\tau) = Q[\mathcal{H}(t,\tau)] = 1 - n \mathcal{H} +\kappa \mathcal{H}^\alpha , \end{equation} the explicit calculation of the integral \eqref{overfdef} yields \begin{equation}\label{overfexpr} \begin{array}{c} \displaystyle \overline{F}(\tau) = \overline{F}(n,\kappa,\alpha,\rho) = \\[2mm] \displaystyle \frac{\gamma^{1/(1-\alpha)}}{(\alpha-1) (1-n)} \bigg[ n B\left(\frac{\gamma \rho^{\alpha}}{\rho+\gamma \rho^{\alpha}}, \frac{1}{\alpha-1}, \frac{\alpha-2}{\alpha-1}\right) - \\[4mm] \displaystyle (1-n) B\left(\frac{\gamma \rho^{\alpha}}{\rho+\gamma \rho^{\alpha}}, \frac{\alpha}{\alpha-1}, \frac{1}{1-\alpha}\right) \bigg] ~, \end{array} \end{equation} where $\rho=\rho(\tau)$ and $B(x;a,b)$ is the incomplete beta function \begin{equation} B(x;a,b) = \int_0^x s^{a-1} (1-s)^{b-1} ds~ . \end{equation} In view of the key role played by the function $\overline{F}(n,\kappa,\alpha,\rho)$ in the following, it is useful to describe some of its properties. A first result of interest is its limit behavior as the branching ratio $n$ tends to $1$. Recall that this limit corresponds to the critical regime of the Hawkes process, separating the subcritical phase $n<1$ and the supercritical phase $n>1$. For $n<1$, each noise earthquake has only a finite number of aftershocks. For $n>1$, there is a non-zero probability that a single noise earthquake generates an infinite number of aftershocks, in an infinitely long-lived sequence. The relevant physical regime is thus $n \leq 1$ and the boundary value $n=1$ plays a special role, especially when one remembers that $n$ can also be interpreted as the ratio of the total number of triggered events to the total number of events \cite{HelmstteterSornette03}. Hence, when $n \to 1$, most of the observed activity is endogenous, i.e., triggered by past activity. Thus, the limit of $\overline{F}(n,\kappa,\alpha,\rho)$ as $n \to 1$ reads \begin{equation}\label{overefeneqone} \overline{F}(\kappa,\alpha,\rho) = \lim_{n\to 1} \overline{F}(n,\kappa,\alpha,\rho) = \frac{\rho^{2-\alpha}}{\kappa (2-\alpha)} - \rho , \quad 1<\alpha < 2 . \end{equation}
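As a quick numerical sanity check of the closed-form solution \eqref{mathxtexpr} (a sketch with illustrative parameter values, not part of the derivation), one can integrate the initial value problem \eqref{matheqalp} directly and compare:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n, kappa, alpha, tau = 0.9, 0.25, 1.5, 2.0   # illustrative values
gamma = kappa / (1.0 - n)
rho = 1.0 - np.exp(-tau)                      # rho(tau) = 1 - e^(-tau)

def H_exact(t):
    # closed-form solution of dH/dt + (1-n) H + kappa H^alpha = 0, H(0) = rho
    e = np.exp((alpha - 1.0) * (1.0 - n) * t)
    return (rho ** (1.0 - alpha) * e + gamma * (e - 1.0)) ** (1.0 / (1.0 - alpha))

sol = solve_ivp(lambda t, H: -(1.0 - n) * H - kappa * H ** alpha,
                (0.0, 10.0), [rho], dense_output=True, rtol=1e-10, atol=1e-12)

t_grid = np.linspace(0.0, 10.0, 5)
print(np.max(np.abs(sol.sol(t_grid)[0] - H_exact(t_grid))))  # tiny residual
\end{verbatim}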
For $n <1$, it is convenient to choose values of $\alpha$ that take the form \begin{equation} \alpha = 1 +\frac{1}{m} ~, \end{equation} so that it is possible to express the function \begin{equation} \mathcal{F}_m(n,\kappa,\rho) = \overline{F}\left(n,\kappa,1+\frac{1}{m},\rho\right) \end{equation} under the form \begin{equation}\label{mathefexpr} \begin{array}{c} \displaystyle \mathcal{F}_m(n,\kappa,\rho) = \frac{n \rho}{1-n} + \\[4mm] \displaystyle \frac{m}{\kappa^m} (n-1)^{m-1} \left[ \ln\left(1+\frac{\kappa}{1-n} \rho^{1/m} \right) - \ln_m\left(1+\frac{\kappa}{1-n} \rho^{1/m} \right) \right]~ . \end{array} \end{equation} The auxiliary function $\ln_m(1+x)$ is defined as the sum of the first $m$ terms of the Taylor series expansion of the logarithm function $\ln(1+x)$ with respect to $x$: \begin{equation} \ln_m(1+x) = - \sum_{k=1}^m \frac{(-x)^k}{k}~ . \end{equation} For $m=1$ ($\alpha=2$), we have \begin{equation}\label{overfaltone} \mathcal{F}_1(n,\kappa,\rho) = \frac{1}{\kappa} \ln\left[1 + \frac{\kappa \rho}{1-n} \right ] - \rho ~. \end{equation} For $m=2$ ($\alpha=3/2$), we have \begin{equation}\label{overfaltwo} \begin{array}{c} \displaystyle \mathcal{F}_2(n,\kappa,\rho) = \frac{2}{\kappa} \sqrt{\rho} - \rho - \frac{2}{\kappa^2} (1-n) \ln\left(1+ \frac{\kappa \sqrt{\rho} }{1-n} \right)~. \end{array} \end{equation} For $m=3$ ($\alpha=4/3$), we have \begin{equation}\label{overf43} \begin{array}{c} \displaystyle \mathcal{F}_3(n,\kappa,\rho)= -\frac{3}{\kappa^2}(1-n) \rho^{1/3} + \frac{3}{2\kappa}~ \rho^{2/3}- \rho + \\[4mm] \displaystyle \frac{3}{\kappa^3} (1-n)^2 \ln\left(1+ \frac{\kappa \rho^{1/3}}{1-n} ~ \right)~. \end{array} \end{equation} \subsection{Mean duration of seismic clusters} \subsubsection{Seismic clusters of all types} Let us consider the random duration $\Delta_k$ of the aftershock cluster triggered by the $k$th noise earthquake that occurred at time $t_k$. By definition, \begin{equation} \Delta_k =t_\text{last}^k- t_k ~, \end{equation} where $t_\text{last}^k$ is the occurrence time of the last of its triggered aftershocks over all generations. The mean value of $\Delta_k$ is by definition $\langle \Delta \rangle = \int_0^\infty \varrho \cdot w(\varrho) d\varrho$, where $w(\varrho)$ is the pdf of the random cluster durations $\{\Delta_k\}$. By hypothesis, the instants $\{t_k\}$ of the noise earthquakes form a Poissonian point process with mean rate $\nu$. Moreover, within the Hawkes branching process model, the cluster durations $\{\Delta_k\}$ are iid random variables. One can easily show that the probability $\text{P}_\text{out}(\tau)$ given by \eqref{mathafoverline}, i.e., the probability that no aftershocks occur within $(t,t+\tau)$ that could have been triggered by noise earthquakes and their aftershocks that occurred before and until time $t$, takes the following value in the limit $\tau \to +\infty$: \begin{equation}\label{pinfdel} \text{P}_\text{out}(\infty) = e^{-\nu \langle \Delta \rangle}~. \end{equation} Thus, $\text{P}_\text{out}(\infty)$ is the probability that all noise earthquakes that occurred up to time $t$ do not trigger any aftershock after $t$. It follows from relations \eqref{mathafoverline} and (\ref{pinfdel}) that \begin{equation}\label{nglDelgen} \langle \Delta \rangle = \overline{F}_\infty(n,\kappa,\alpha, \rho=1)~ .
\end{equation} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig2.eps}\\ \end{center} \caption{Dependence on the exponent $\alpha$ of the probability $\text{P}_\text{out}(\infty)$ \eqref{pinfdel} that no aftershocks occur after time $t$ that could have been triggered by the noise earthquakes and their aftershocks that occurred up to time $t$. The plot corresponds to the critical case $n=1$ with $\kappa=0.25$. Bottom to top: $\nu=0.1; 0.05; 0.01; 0.001$.}\label{pinfcrit} \end{figure} Now, $\overline{F}_\infty(n,\kappa,\alpha, \rho=1)$ can be obtained as the value of the function $\overline{F}(\tau)$ given by \eqref{overfexpr} at $\tau \to +\infty$: \begin{equation}\label{finfdef} \overline{F}_\infty(n,\kappa,\alpha, \rho) = \overline{F}(\infty) ~. \end{equation} We obtain \begin{equation} \begin{array}{c} \displaystyle \overline{F}(n,\kappa,\alpha,\rho=1) = \\[4mm] \displaystyle \frac{\gamma^{1/(1-\alpha)}}{(\alpha-1) (1-n)} \bigg[ n B\left(\frac{\kappa}{\kappa+1-n}, \frac{1}{\alpha-1}, \frac{\alpha-2}{\alpha-1}\right) - \\[4mm] \displaystyle (1-n) B\left(\frac{\kappa}{\kappa+1-n}, \frac{\alpha}{\alpha-1}, \frac{1}{1-\alpha}\right) \bigg]~ . \end{array} \label{hyjruiu5jyne} \end{equation} In particular, in the critical case $n=1$, we have \begin{equation} \overline{F}(1,\kappa,\alpha,1) = \frac{1}{\kappa(2-\alpha)} - 1~ , \end{equation} and thus \begin{equation}\label{angledenone} \langle \Delta \rangle = \frac{1}{\kappa (2-\alpha)} - 1 , \qquad n= 1 , \qquad \kappa < \alpha^{-1}~ . \end{equation} Figure~\ref{pinfcrit} shows the probability $\text{P}_\text{out}(\infty)$ \eqref{pinfdel} as a function of the exponent $\alpha$, in the critical case $n=1$. \subsubsection{Seismic clusters with at least $m \geq 1$ aftershocks} The mean duration $\langle \Delta \rangle$ of clusters given by expression \eqref{nglDelgen} with (\ref{hyjruiu5jyne}) includes the contribution of the empty clusters, for which the noise earthquake does not trigger any aftershock. It is thus interesting to evaluate another derived quantity $\langle\Delta^1\rangle$, defined as the mean duration of clusters that contain at least one aftershock. To get $\langle\Delta^1\rangle$, we divide $\langle \Delta \rangle$ by the probability that the number of aftershocks is strictly positive, \begin{equation} \langle\Delta^1\rangle = \langle\Delta\rangle \big/ \text{Pr}\{R>0\} ~, \end{equation} where $\text{Pr}\{R>0\}$ is the probability that the number $R$ of aftershocks is positive. Accordingly, one introduces the mean rate $\nu^1$ of the non-empty clusters, equal to \begin{equation} \nu^1 = \nu \cdot \text{Pr}\{R>0\} ~, \end{equation} where $\nu$ is the mean rate of noise earthquakes. Obviously, $\text{Pr}\{R>0\}$ is given by \begin{equation} \text{Pr}\{R>0\} = 1 - p_1(0)~ , \end{equation} where $p_1(0)$ is the probability that a noise earthquake does not trigger any first generation aftershock at all. Using the parameterization defined in the Appendix for the Hawkes model, $p_1(0)$ is given by expression \eqref{ponezone}, leading to \begin{equation} \nu^1 = \nu \cdot(n-\kappa) \qquad \text{and} \qquad \langle\Delta^1\rangle =\frac{\overline{F}_\infty(n,\kappa,\alpha,1)}{n-\kappa}~ . \end{equation} In particular, in the critical case $n=1$, the mean duration of the non-empty clusters is given by \begin{equation}\label{deltaonexpr} \langle\Delta^1\rangle = \frac{1-\kappa (2-\alpha)}{(2-\alpha) \kappa (1-\kappa)}~ . \end{equation} As an example, taking $\kappa = 0.25$ and $\alpha = 1.5$ yields a mean duration of the non-empty clusters in the critical case equal to $\langle\Delta^1\rangle\simeq 9.33$. Recall that the unit time is the characteristic decay time of the Omori law $f_1(t)$ (\ref{expdisdef}).
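These closed-form durations are easy to cross-check by direct simulation in the subcritical regime. For $\alpha=2$, the GPF \eqref{gonepowgamdef} is a quadratic polynomial in $z$, so the first-generation fertility reduces to the three-point law $p_1(0) = 1-n+\kappa$, $p_1(1) = n-2\kappa$, $p_1(2) = \kappa$, and $\langle \Delta \rangle = \mathcal{F}_1(n,\kappa,\rho=1)$ from \eqref{overfaltone} is elementary. A minimal Monte Carlo sketch (illustrative parameter values) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

n, kappa = 0.9, 0.25                         # illustrative subcritical values
p1 = [1 - n + kappa, n - 2 * kappa, kappa]   # p1(0), p1(1), p1(2) for alpha = 2

def cluster_duration():
    # duration of one cluster: time of the last aftershock over all
    # generations triggered by a noise earthquake at t = 0 (0 if none)
    last, parents = 0.0, np.array([0.0])
    while parents.size:
        kids = []
        for t_p in parents:
            r = rng.choice(3, p=p1)                          # fertility draw
            kids.extend(t_p + rng.exponential(1.0, size=r))  # Exp(1) delays
        parents = np.array(kids)
        if parents.size:
            last = max(last, parents.max())
    return last

mc = np.mean([cluster_duration() for _ in range(20000)])
exact = np.log(1.0 + kappa / (1.0 - n)) / kappa - 1.0   # F_1(n, kappa, rho=1)
print(mc, exact)                                        # both close to 4.01
\end{verbatim}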
The mean rates and cluster durations introduced above allow us to define regimes of low seismicity as characterized by the following inequalities \begin{equation}\label{lowsesmineq} \nu \cdot \langle\Delta\rangle \ll 1 \qquad \Leftrightarrow \qquad \nu^1 \cdot \langle\Delta^1\rangle \ll 1~ , \end{equation} which means that clusters are well individualized, being separated by comparatively long quiet time intervals. It is useful to generalize the mean duration $\langle\Delta^1\rangle$ of clusters that contain at least one aftershock to the mean durations $\langle\Delta^m\rangle$ of the clusters that contain at least $m$ aftershocks. We now derive the general equation allowing one to calculate these $\langle\Delta^m\rangle$'s. Using the total probability formula, one can represent the pdf $w(\varrho)$ of cluster durations in the form \begin{equation} w(\varrho) = p(0)\delta(\varrho)+\sum_{j=1}^\infty p(j) w(\varrho|j) ~, \end{equation} where $p(j)$ is the probability that a given noise earthquake triggers $j$ aftershocks of all generations, and $w(\varrho|j)$ is the conditional pdf of cluster durations under the condition that the number of aftershocks is equal to $j$. Accordingly, the pdf $w(\varrho|j\geqslant m)$ of the durations of the clusters that have $m$ or more aftershocks is equal to \begin{equation} w(\varrho|j\geqslant m) = \frac{\displaystyle w(\varrho)- \sum_{j=1}^{m-1} p(j)w(\varrho|j)}{\displaystyle 1- \sum_{j=0}^{m-1} p(j)} ~. \end{equation} The corresponding conditional expectation $\langle \Delta^m\rangle$ is equal to \begin{equation} \langle \Delta^m\rangle = \int_0^\infty \varrho ~w(\varrho|j\geqslant m) d\varrho = \frac{\displaystyle \langle \Delta\rangle- \sum_{j=1}^{m-1} p(j)\langle \Delta|j\rangle}{\displaystyle 1- \sum_{j=0}^{m-1} p(j)}~ . \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig3.eps}\\ \end{center} \caption{Dependences of the cluster durations $\langle \Delta\rangle$ \eqref{angledenone} (bottom curve), $\langle \Delta^1\rangle$ \eqref{deltaonexpr} (middle curve) and $\langle \Delta^2\rangle$ \eqref{delta2expr} (top curve) as functions of the exponent $\alpha$. The plot corresponds to the critical case $n=1$ with $\kappa=0.25$. Recall that $\langle \Delta^m \rangle$ is defined as the mean duration of the clusters that contain at least $m$ aftershocks. In our notations, $\langle \Delta\rangle$ corresponds formally to $\langle \Delta^0 \rangle$, i.e., it also takes into account the empty clusters for which the noise earthquake does not trigger any aftershock. }\label{deltas} \end{figure} In particular, taking into account that \begin{equation}\label{pwjexpr} \begin{array}{c} p(0) = p_1(0) = 1-n+\kappa , \qquad p(1) = p_1(1) = n - \alpha \kappa , \\[1mm] \displaystyle w(\varrho|1) = f_1(\varrho) \qquad \Rightarrow \qquad \langle \Delta|1\rangle = 1~, \end{array} \end{equation} we obtain \begin{equation}\label{delta2exprbis} \langle \Delta^2\rangle = \frac{\langle \Delta\rangle -p(1)}{1-p(0)-p(1)} .
\end{equation} Using \eqref{pwjexpr} for $n=1$ and the expression \eqref{angledenone} for $\langle\Delta\rangle$, we obtain the mean duration of clusters containing at least two aftershocks: \begin{equation}\label{delta2expr} \langle \Delta^2\rangle = \frac{1-\kappa(2-\alpha) (2-\alpha \kappa)}{\kappa^2 (\alpha-1) (2-\alpha)} ~. \end{equation} The dependences of the cluster durations $\langle \Delta\rangle$ \eqref{angledenone}, $\langle \Delta^1\rangle$ \eqref{deltaonexpr} and $\langle \Delta^2\rangle$ \eqref{delta2expr} as functions of the exponent $\alpha$ are shown in figure~\ref{deltas}. Note the large jump in the mean duration of clusters containing at least two aftershocks compared with clusters containing at least one aftershock. \section{Pdf of recurrence intervals} \subsection{General relations} The knowledge of the exact probability $\text{P}(\tau)$ \eqref{probsztauexpr} allows one to calculate exactly the pdf $f(\tau)$ of the random waiting times $\{T_k\}$ between subsequent earthquakes. Indeed, a general result of the theory of point processes states that \begin{equation}\label{pdfpsidder} f(\tau) = \langle\tau\rangle \frac{d^2 \text{P}(\tau)}{d\tau^2} ~, \end{equation} where $\langle\tau\rangle = \text{E}\left[T_k\right]$ denotes the mean waiting time between subsequent earthquakes. Therefore, the complementary cumulative distribution function (ccdf) of the random waiting times is equal to \begin{equation}\label{lpsifirder} \Psi(\tau) = \text{Pr}\left\{T>\tau\right\} = - \langle\tau\rangle \frac{d P(\tau)}{d\tau} ~. \end{equation} By normalization, $\Psi(0)\equiv 1$, so that \begin{equation}\label{meantrhudpdt} \frac{1}{\langle\tau\rangle} = - \frac{dP(\tau)}{d\tau}\bigg|_{\tau=0}~ . \end{equation} Using expressions \eqref{probsztauexpr}, \eqref{prtauin} and \eqref{mathafoverline}, we have \begin{equation} \text{P}(\tau) = e^{-\nu \overline{F}(\tau) - \nu \tau}~. \end{equation} Making explicit $\overline{F}(\tau)$ with expression \eqref{overfexpr} yields \begin{equation} \frac{d P(\tau)}{d\tau} = -\nu \frac{1-n \rho(\tau)+\kappa \rho^{\alpha}(\tau) }{1-n +\kappa \rho^{\alpha-1}(\tau)} e^{-\nu \overline{F}(\tau) - \nu \tau}~ . \end{equation} Using \eqref{meantrhudpdt}, we have \begin{equation}\label{upstaumeanexpr} \langle\tau\rangle = \frac{1-n}{\nu} \qquad \Rightarrow \qquad \Psi(\tau) = -\frac{1-n}{\nu} ~ \frac{d P(\tau)}{d\tau}~, \end{equation} and finally obtain \begin{equation}\label{ccdfinterevent} \Psi(\tau) = (1-n) \frac{1-n \rho(\tau)+\kappa \rho^{\alpha}(\tau) }{1-n +\kappa \rho^{\alpha-1}(\tau)} e^{-\nu \overline{F}(\tau) - \nu \tau} ~.
\end{equation} Differentiating this last expression (\ref{ccdfinterevent}) with respect to $\tau$ yields the pdf $f(\tau)$ of waiting times between successive earthquakes: \begin{equation}\label{pdftaugenexpr} f(\tau) = \Phi(n,\kappa,\alpha,\rho(\tau),\nu)~, \end{equation} where \begin{equation}\label{Phirenormexpr} \begin{array}{c} \displaystyle \Phi(n,\kappa,\alpha,\rho,\nu) = \\[4mm] \displaystyle \left[\mathcal{A}(n,\kappa,\alpha,\rho) + \nu \cdot \mathcal{B}(n,\kappa,\alpha,\rho) \right] ~ e^{-\nu \left(\overline{F}(n,\kappa,\alpha,\rho)+\tau\right)} , \end{array} \end{equation} and \begin{equation}\label{matkexpr} \begin{array}{c} \displaystyle \mathcal{A}(n,\kappa,\alpha,\rho) = (1-n) (1-\rho) \times \\[5mm] \displaystyle \frac{n (1-n)+ \kappa (\alpha-1 +(2 n-\alpha) \rho) \rho^{\alpha-2} -\kappa^2 \rho^{2(\alpha-1)}} {(1-n+\kappa \rho^{\alpha-1})^2} , \\[5mm] \displaystyle \mathcal{B}(n,\kappa,\alpha,\rho) = (1-n) \left(\frac{1-n \rho+\kappa \rho^\alpha}{1-n+ \kappa \rho^{\alpha-1}} \right)^2~ . \end{array} \end{equation} Recall that $\rho$ represents $\rho(\tau)$, which is defined by expression (\ref{syjiukoiloyi}). In the following subsections, we analyze in detail expressions \eqref{Phirenormexpr} and \eqref{matkexpr} in order to derive the properties of the pdf $f(\tau)$ \eqref{pdftaugenexpr}. Expression \eqref{Phirenormexpr} suggests that it is natural to decompose the analysis of $f(\tau)$ into two discussions, one centered on the term that survives in the limit $\nu \to 0$, and the other on the remaining $\nu$-dependent contribution. The next two subsections analyze these two terms in turn. \subsection{Case $\nu \to 0$} Taking the limit $\nu \to 0$ amounts to neglecting the occurrence of any noise earthquake within the window $(t,t+\tau)$ of analysis. As we show below, this first case already reveals interesting properties of the pdf $f(\tau)$, which remain valid in the general case $\nu> 0$. Putting $\nu=0$ in expression \eqref{Phirenormexpr} and using \eqref{pdftaugenexpr}, we have \begin{equation}\label{ftauthrua} f(\tau) = \mathcal{A}\left[n,\kappa,\alpha,\rho(\tau)\right] \qquad (\nu = 0)~ , \end{equation} where the function $\mathcal{A}(n,\kappa,\alpha,\rho)$ is given by expression \eqref{matkexpr}. The main asymptotics of the function $\mathcal{A}(n,\kappa,\alpha,\rho)$ are respectively \begin{itemize} \item At $\rho\ll 1$: \begin{equation}\label{arholessone} \mathcal{A}(n,\kappa,\alpha,\rho) \simeq \kappa (1-n) (\alpha-1) ~ \frac{\rho^{\alpha-2}}{(1-n+\kappa \rho^{\alpha-1})^2} ~, \qquad \rho\ll 1~. \end{equation} This regime $\rho\ll 1$ corresponds to $\tau \ll 1$ and thus $\rho \simeq \tau$. Relation \eqref{arholessone} thus leads to \begin{equation}\label{ftaulessonetwo} f(\tau) \simeq \kappa (1-n) (\alpha-1) ~ \frac{\tau^{\alpha-2}}{(1-n+\kappa \tau^{\alpha-1})^2} , \qquad \tau \ll 1~ . \end{equation} \item At $\rho\to 1$: \begin{equation}\label{asymprtone} \mathcal{A}(n,\kappa,\alpha,\rho) \simeq \mathcal{C} (1-\rho) ~, \qquad 1-\rho \ll 1 , \qquad \mathcal{C} = \frac{(1-n) (n-\kappa)}{1-n+\kappa} ~. \end{equation} This second asymptotic $\rho\to 1$ corresponds to $1-\rho\ll 1$, which is equivalent to the condition $\tau\gg 1$. Using expression \eqref{atauexp}, relation \eqref{asymprtone} leads to \begin{equation}\label{ftauexpasymp} f(\tau) \simeq \mathcal{C} ~ e^{-\tau} , \qquad \tau \gg 1~ . \end{equation} \end{itemize}
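The following sketch (with illustrative near-critical parameter values) evaluates $f(\tau) = \mathcal{A}(n,\kappa,\alpha,\rho(\tau))$ numerically. It verifies the short-time behaviour \eqref{ftaulessonetwo} and the fact, which one can check from \eqref{ccdfinterevent}, that for $\nu = 0$ the pdf integrates to $\Psi(0) - \Psi(\infty) = n$; the missing mass $1-n$ corresponds to noise earthquakes whose triggering cascade dies out, so that the waiting time to the next event is infinite in this limit.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

n, kappa, alpha = 0.999, 0.25, 1.5     # illustrative near-critical values

def f_pdf(tau):
    # f(tau) at nu = 0, i.e. A(n, kappa, alpha, rho(tau)) of eq. (matkexpr)
    rho = 1.0 - np.exp(-tau)
    num = (n * (1.0 - n)
           + kappa * (alpha - 1.0 + (2.0 * n - alpha) * rho) * rho ** (alpha - 2.0)
           - kappa ** 2 * rho ** (2.0 * (alpha - 1.0)))
    return (1.0 - n) * (1.0 - rho) * num / (1.0 - n + kappa * rho ** (alpha - 1.0)) ** 2

# total mass: Psi(0) - Psi(infinity) = 1 - (1 - n) = n when nu = 0
mass = quad(f_pdf, 0.0, 1.0, limit=200)[0] + quad(f_pdf, 1.0, np.inf, limit=200)[0]
print(mass)                            # ~ 0.999

# short-time power-law behaviour, eq. (ftaulessonetwo)
for tau in (1e-6, 1e-4, 1e-2):
    approx = (kappa * (1.0 - n) * (alpha - 1.0) * tau ** (alpha - 2.0)
              / (1.0 - n + kappa * tau ** (alpha - 1.0)) ** 2)
    print(tau, f_pdf(tau) / approx)    # ratio -> 1 as tau -> 0
\end{verbatim}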
The denominator of expression (\ref{ftaulessonetwo}) determines a new characteristic time scale \begin{equation}\label{taustardef} \tau_c = \left(\frac{1-n}{\kappa}\right)^{\frac{1}{\alpha-1}} ~, \end{equation} from which one can define a critical value for the branching ratio (\ref{wryjujiuk}) \begin{equation} n_c = 1-\kappa~, \label{ehyhyt} \end{equation} such that $\tau_c > 1$ for $n < n_c$ and $\tau_c < 1$ for $n > n_c$. We shall refer to the case $n<n_c$ as the \emph{subcritical} regime, while $n_c<n<1$ is called the \emph{near-critical} regime. For $n < n_c$ (subcritical regime), $\tau_c\gtrsim 1$, and one may replace the asymptotics \eqref{ftaulessonetwo} by the pure power law: \begin{equation}\label{onepowerasym} f(\tau) \simeq \kappa ~ \frac{\alpha-1}{1-n} ~ \tau^{-(2-\alpha)}~, \qquad n \lesssim n_c~. \end{equation} In contrast, in the near-critical case $n_c<n<1$, expression \eqref{ftaulessonetwo} leads to two power law asymptotics: \begin{equation}\label{twopowerasym} f(\tau) \simeq \begin{cases} \displaystyle \kappa ~ \frac{\alpha-1}{1-n} ~ \tau^{-(2-\alpha)}~ , & \tau \ll \tau_c ~, \\[4mm] \displaystyle (1-n) \frac{\alpha-1}{\kappa}~ \tau^{-\alpha}~ , & \tau_c \ll \tau \ll 1~ , \end{cases} \qquad n_c < n <1~ . \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig4.eps}\\ \end{center} \caption{Solid line: Plot of the pdf $f(\tau)$ \eqref{ftauthrua} of waiting times between successive earthquakes for $\nu=0$, $\kappa=0.25$, $\alpha=1.5$ and $n=0.999$. The dotted lines show the power law asymptotics \eqref{twopowerasym} and the exponential asymptotic behavior \eqref{ftauexpasymp}.}\label{nuzero999} \end{figure} Figure~\ref{nuzero999} shows the pdf $f(\tau)$ \eqref{ftauthrua} of the waiting times between successive earthquakes for $\nu=0$, $\alpha=1.5$, $\kappa=0.25$ ($n_c=0.75$), and in the near-critical case $n=0.999$. The two power law asymptotics \eqref{twopowerasym} and the exponential asymptotics \eqref{ftauexpasymp} for large $\tau$'s are clearly visible. Figure~\ref{nuzero9} is the same as figure~\ref{nuzero999}, except for the value $n=0.9$. Although formally this value also belongs to the near-critical case ($n=0.9 > n_c =0.75$), the intermediate power law asymptotics $\tau^{-\alpha}$ is barely visible, and the subcritical power law asymptotics $\tau^{-(2-\alpha)}$ dominates at short times $\tau\ll 1$. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig5.eps}\\ \end{center} \caption{Same as figure \ref{nuzero999} except for the value of the branching ratio $n=0.9$.}\label{nuzero9} \end{figure} Figure~\ref{nuzeroens} shows log-log plots of the pdf $f(\tau)$ for $\nu=0$, $\kappa=0.25$, $\alpha=1.5$ and different values of the branching ratio $n$ belonging to the subcritical regime. The subcritical power law asymptotics $\tau^{-(2-\alpha)}$ is dominant and the different pdf's are similar. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig6.eps}\\ \end{center} \caption{Solid lines: Log-log plot of the pdf $f(\tau)$ \eqref{ftauthrua} for $\nu=0$, $\kappa=0.25$, $\alpha=1.5$ and for three $n$ values in the subcritical regime. Top to bottom: $n=0.8; 0.6; 0.4$. The dotted straight line is the subcritical power law asymptotics \eqref{onepowerasym}.}\label{nuzeroens} \end{figure} \subsection{General case $\nu \neq 0$} We now take into account the occurrence of noise earthquakes within the window $(t,t+\tau)$ of analysis.
The main qualitative difference between this case and the previous one $\nu =0$ is the replacement of the large $\tau$ asymptotics \eqref{ftauexpasymp} by \begin{equation} f(\tau) \simeq (1-n) \left( \frac{n-\kappa}{1-n+\kappa} e^{-\tau} + \nu\right) e^{-\nu (\langle \Delta \rangle +\tau)} , \qquad \tau \gg 1 ~. \label{ryjrujkiku} \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{Fig7.eps}\\ \end{center} \caption{Solid lines: Log-log plots of the recurrence time pdf $f(\tau)$ \eqref{pdftaugenexpr} in the case $\kappa=0.25$, $\alpha=1.5$, $n=0.999$ and three values of the noise rate: $\nu=10^{-1}; 10^{-3}; 10^{-5}$. Dotted straight lines are the power law asymptotics \eqref{twopowerasym}.}\label{nu999} \end{figure} The interesting regime of well-defined clusters occurs for $\nu\ll 1$, for which the typical waiting time between noise earthquakes is much larger than the characteristic decay time of the Omori law, which is the typical waiting time between a noise earthquake and its aftershocks. In the interval $1\ll \tau \ll 1\big/\nu$, expression (\ref{ryjrujkiku}) first follows the exponential decay (\ref{ftauexpasymp}) and then crosses over to the plateau \begin{equation}\label{plateau} f(\tau) \simeq (1-n) \nu~ , \qquad 1\ll \tau \ll 1\big/\nu~ . \end{equation} For $\tau\gtrsim 1/\nu$, expression (\ref{ryjrujkiku}) simplifies into \begin{equation}\label{nuexpasymp} f(\tau) \simeq (1-n) \nu e^{-\nu \langle \Delta \rangle} ~e^{-\nu \tau} , \qquad \tau \gtrsim 1/\nu ~. \end{equation} These different regimes are illustrated in figure~\ref{nu999}, which depicts the pdf of waiting times between successive earthquakes for $\kappa=0.25$, $\alpha=1.5$, $n=0.999$ and three values of the noise rate $\nu=10^{-1}; 10^{-3}; 10^{-5}$. One can clearly observe the two intermediate power law asymptotics \eqref{twopowerasym}, as well as the plateau \eqref{plateau} joining the exponential asymptotics \eqref{ftauexpasymp} and \eqref{nuexpasymp}. The plateau is especially visible for the smallest values $\nu=10^{-3}$ and $\nu=10^{-5}$.
\end{itemize} In the interesting and relevant regime where earthquake clusters are well defined, namely when the typical waiting time until the first aftershocks (the unit time) is much smaller than the waiting time $1/\nu$ between noise earthquakes, we have found that the pdf of recurrence times exhibits several intermediate power law asymptotics: \begin{enumerate} \item For $\tau \ll \tau_c$, $f(\tau) \sim \tau^{-(2-\alpha)}$. \item For $\tau_c \ll \tau \ll 1$, $f(\tau) \sim \tau^{-\alpha}$. \item For $1 \ll \tau \ll 1/\nu$, $f(\tau) \simeq (1-n) \nu = \mathrm{const}$. \item For $1/\nu \lesssim \tau$, $f(\tau) \simeq (1-n)\, \nu\, e^{-\nu \langle \Delta \rangle} ~e^{-\nu \tau} $. \end{enumerate} In these formulas, $\alpha$ is the exponent of the power law distribution of fertilities $p_1(r) \sim r^{-\alpha-1}$ (\ref{poneralasymp}), which is the pdf of the number of first generation aftershocks triggered by a given event of any type. In turn, $n$ stands for the branching ratio defined by equation (\ref{wryjujiuk}), i.e. the average number of daughters of first generation per mother event, and $\langle \Delta \rangle$ is the mean duration of a cluster that starts with a noise earthquake and ends with its last aftershock over all generations. It is given by expression (\ref{nglDelgen}). Only the first two intermediate asymptotics $f(\tau) \sim \tau^{-(2-\alpha)}$ and $f(\tau) \sim \tau^{-\alpha}$ at short time scales reflect the influence of the power law distribution of fertilities (\ref{poneralasymp}), which is revealed by the remarkable effect of the cascade of triggering over the population of aftershocks of many different generations. Finally, let us stress the differences between the present investigation and our previous work \cite{SaiSor2006,SaiSor2007} on the same problem. In Refs.~\cite{SaiSor2006,SaiSor2007}, we determined the asymptotic behavior at long times of the distribution of recurrence times, under the approximation that it was sufficient to consider at most one aftershock per mother event (`noise earthquake' in the present terminology). In addition, we considered the standard power law Omori law (\ref{omorilawexpr}) and not the exponential law (\ref{expdisdef}). By performing a detailed analysis made possible by the use of the exactly tractable exponential Omori law (\ref{expdisdef}), the present paper has thus demonstrated the existence of additional short-time intermediate asymptotics that reveal the distribution of fertilities. This opens the possibility of estimating the exponent $\alpha$ of the distribution of cluster sizes from purely dynamic measures of activity. \clearpage \section*{Appendix: Statistics of first generation aftershocks} Given the power law \eqref{poneralasymp} for the right tail of the pdf $\{p_1(r)\}$ of the number $R_1$ of first generation aftershocks, we show that the leading relevant terms of the expansion of the GPF $G_1(z)$ in powers of $(1-z)$ take the form \begin{equation}\label{gonepowgamdef2} G_1(z) = 1 - n (1-z) + \kappa (1-z)^\alpha , \qquad \alpha\in(1,2] , \end{equation} so that the corresponding auxiliary function $Q(y)$ \eqref{qytrugoneomz} takes the form \begin{equation}\label{qyalphadef2} Q(y) = 1 - n y + \kappa y^\alpha . \end{equation} These expressions depend on the branching ratio $n$ defined in (\ref{wryjujiuk}) and on the exponent $\alpha$ of the power law distribution \eqref{poneralasymp} of the number of first generation aftershocks.
The additional scale parameter $\kappa$ satisfies the following inequalities \begin{equation}\label{gamkapenineq} \begin{cases} 0 < \alpha \kappa < n , & n\leqslant 1~ , \\ n-1 < \alpha \kappa < n , & n > 1~ , \end{cases} \end{equation} which ensure the necessary constraints \[ 0\leqslant p_1(0) \leqslant 1 , \qquad \text{and} \qquad 0\leqslant p_1(1) \leqslant 1 . \] Rather than deriving the form (\ref{gonepowgamdef2}) from \eqref{poneralasymp}, it is more convenient to show that the tail of the pdf $\{p_1(r)\}$ whose GPF is given by (\ref{gonepowgamdef2}) is the power law \eqref{poneralasymp}. Given (\ref{gonepowgamdef2}) and the definition linking $G_1(z)$ to $p_1(r)$, namely $G_1(z)= \sum_{r=0}^{+\infty} p_1(r) z^r$, we obtain \begin{equation}\label{ponezone} p_1(0) = 1- n + \kappa , \qquad p_1(1) = n -\alpha \kappa ~, \end{equation} and \begin{equation}\label{pkmoretwo} \begin{array}{c} \displaystyle p_1(r) = \kappa (-1)^r \binom{\alpha}{r} = \frac{\kappa (-1)^r \Gamma(\alpha+1)}{\Gamma(r+1) \Gamma(\alpha-r+1)} , \\[4mm]\displaystyle r \geqslant 2 , \qquad \alpha \in (1,2) . \end{array} \end{equation} Using the properties of gamma functions and in particular the well-known equality \begin{equation} \Gamma(z) \Gamma(1-z) = \frac{\pi}{\sin\pi z}~ , \end{equation} we obtain that \begin{equation} (-1)^r \Gamma(\alpha-r+1) = \frac{\pi}{\Gamma(r-\alpha) \sin[\pi(\alpha-1)]} , \qquad \alpha\in(1,2) . \end{equation} Accordingly, expression \eqref{pkmoretwo} takes the form \begin{equation}\label{pkgamevent} p_1(r) = c \cdot\frac{\Gamma(r-\alpha)}{\Gamma(r+1)} , \qquad c := \frac{\kappa}{\Gamma(-\alpha)} . \end{equation} Using the asymptotic relation \[ \frac{\Gamma(r+a)}{\Gamma(r+b)} \simeq r^{a-b} , \qquad r\to\infty , \] we finally recover the power law \eqref{poneralasymp}. The case where the exponent $\alpha=2$ in expression (\ref{gonepowgamdef2}) of $G_1(z)$ requires special mention. Indeed, this form describes the special situation in which each noise earthquake (and any aftershock as well) can trigger at most two first generation aftershocks. Accordingly, there are, in general, only three nonzero probabilities \begin{equation} p_1(0) = 1-n+\kappa , \qquad p_1(1) = n- 2 \kappa , \qquad p_1(2) = \kappa \qquad (\alpha=2)~ . \end{equation} This special situation arises due to the fact that the GPF $G_1(z)$ has been truncated beyond the quadratic order $(1-z)^2$. It will not be considered further in this paper. \clearpage
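As a purely numerical illustration (our own sketch, not part of the derivations above), the following short Python script checks two of the results obtained in this paper: the two intermediate power law exponents of $f(\tau)$ in \eqref{twopowerasym} for $\nu=0$, and the power law tail \eqref{poneralasymp} of the fertility distribution recovered from the GPF \eqref{gonepowgamdef2}. The script assumes $\rho(\tau)=1-e^{-\tau}$, which is consistent with the limits $\rho\simeq\tau$ for $\tau\ll 1$ and $1-\rho=e^{-\tau}$ for $\tau\gg 1$ used in the text; the parameter values are those of the figures.

\begin{verbatim}
import math
import numpy as np

# Part 1: measure the log-log slopes of f(tau) = A(n, kappa, alpha, rho(tau))
# for nu = 0, assuming rho(tau) = 1 - exp(-tau).
def A(n, kappa, alpha, rho):
    num = (n * (1.0 - n)
           + kappa * (alpha - 1.0 + (2.0 * n - alpha) * rho) * rho**(alpha - 2.0)
           - kappa**2 * rho**(2.0 * (alpha - 1.0)))
    return (1.0 - n) * (1.0 - rho) * num / (1.0 - n + kappa * rho**(alpha - 1.0))**2

n, kappa, alpha = 0.999, 0.25, 1.5                  # near-critical case of Fig. 4
f = lambda t: A(n, kappa, alpha, -math.expm1(-t))   # rho(tau) = 1 - exp(-tau)
slope = lambda t1, t2: math.log(f(t2) / f(t1)) / math.log(t2 / t1)
print("tau_c =", ((1.0 - n) / kappa) ** (1.0 / (alpha - 1.0)))   # 1.6e-05
print("slope, tau << tau_c      :", slope(1e-9, 1e-8), " ~", alpha - 2.0)
print("slope, tau_c << tau << 1 :", slope(1e-3, 1e-2), " ~", -alpha)

# Part 2: the p_1(r) recovered from G_1(z) = 1 - n(1-z) + kappa (1-z)^alpha
# should sum to one, have mean n and decay as c r^(-alpha-1),
# with c = kappa / Gamma(-alpha).
def p1(r, n):
    if r == 0:
        return 1.0 - n + kappa
    if r == 1:
        return n - alpha * kappa
    return (kappa / math.gamma(-alpha)
            * math.exp(math.lgamma(r - alpha) - math.lgamma(r + 1)))

n2 = 0.9                                            # satisfies 0 < alpha*kappa < n <= 1
probs = np.array([p1(r, n2) for r in range(200000)])
print("sum p_1(r) =", probs.sum())                           # ~ 1
print("mean r     =", (np.arange(200000) * probs).sum())     # ~ n
c = kappa / math.gamma(-alpha)
print("tail ratio p_1(1000)/(c 1000^(-alpha-1)) =",
      p1(1000, n2) / (c * 1000.0 ** (-alpha - 1.0)))
\end{verbatim}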
\section{Introduction} \label{sec:int} Sample surveys are generally designed to estimate finite population parameters, such as totals, means, variances and quantiles. On the other hand, decision makers of both public and private agencies have become interested in such parameters for smaller subpopulations (small areas) as well, created by cross-classifying geographical and demographical variables, such as age, sex and race. However, direct survey estimators of small area parameters, such as the sample mean, sample variance and sample quantiles, are often unstable and unreliable because the sample size for each area is too small, mainly due to budget constraints. In order to obtain more reliable estimators of small area parameters, the model-based approach, which uses mixed effects models, is becoming popular. The empirical best predictor or empirical Bayes estimator derived from mixed effects models, which is often called the model-based estimator, is more stable than the direct survey estimator because the model-based estimator borrows strength from other areas through a statistical model which connects the areas with auxiliary variables from other data sources such as large-scale sample surveys and population censuses. Alternatively, the hierarchical Bayes approach to the model-based method has also been discussed in the literature. For details about small area estimation (SAE), see \citet{DG12}, \citet{Pfe13}, \citet{RM15} and others. There are two fundamental models for model-based SAE: the Fay--Herriot model for area-level aggregated data, which was first proposed to estimate the per capita income for small areas by \citet{FH79}, and the nested error regression model for unit-level data \citep{BHF88}. While only one population parameter, such as an areal mean, can be estimated at a time by using the Fay--Herriot model, general finite population parameters can be estimated by using the nested error regression model and its extensions, proposed by \citet{MR10}, \citet{GMR18}, \citet{DR18}, \citet{SK19} and others, provided that unit-level data are available. However, the Fay--Herriot model is more widely used in practice, as the accessibility of unit-level data is limited in many cases. Along with area-level aggregated measures of quantities of interest, such as the sample mean, sample surveys frequently report grouped data. Grouped data contain information on frequency distributions based on some predefined groups in each area and thus provide more insight into the areas than a single aggregated areal measure. The need to model and analyze grouped data arises in many fields of statistical analysis, and there exist theoretical developments regarding grouped data analysis; see \cite{Heitjan89} and references therein. Especially in the analysis of income data, individual households are often grouped into predefined income classes \citep{Choti08}. For example, the Housing and Land Survey (HLS) conducted by the Statistics Bureau of Japan in 2013 reports the numbers of households that fall into five and nine income classes over 1265 municipalities. The grouped data literature, mainly from the viewpoint of income data analysis, has predominantly focused on developing more flexible underlying parametric or semiparametric forms for a single nation, region or period. However, when we face grouped data over multiple local areas, as in the HLS data, the existing grouped data methods do not suffice.
Because the reported frequency distributions are based on survey sampling, they are not reliable for areas with small sample sizes and thus call for a correction through an SAE method. It must be noted that none of the existing SAE methods can be used to reduce the uncertainty in grouped data, because grouped data do not contain the unit-level information that is required in the nested error regression model, and an appropriate direct estimator that can be used in the Fay--Herriot model is difficult to define for many small area parameters. Therefore, a new SAE method specifically designed for grouped data is required. In this paper, we develop a new model-based SAE method which explicitly takes the frequency distributions observed in grouped data into account and can estimate general finite population parameters, including areal means. Since the frequency distribution in the grouped data counts the number of units that fall into each group, the multinomial likelihood function is adopted. We introduce latent unit-level variables that represent the unit-level quantities of interest and that are supported within the range of each group. Then, in order to connect the frequency distribution to the auxiliary variables within the SAE framework, these latent unit-level variables are assumed to follow a linear mixed model after some transformation. The linear mixed model adopts a random dispersion as well as a random intercept, because the frequency distribution of each area provides information on the scale of the distribution. While \citet{JN12} and \citet{KSGC16} considered heteroskedasticity in SAE, they did not consider the grouped data setting. Given the random effects, the probabilities that a unit belongs to the groups can be derived and are used to construct the multinomial likelihood function for the grouped data. The unknown model parameters (hyperparameters) are estimated by maximizing the marginal likelihood which integrates out the random effects. Since the marginal likelihood cannot be evaluated analytically, we develop an EM algorithm \citep{DLR77}, where the E-step is carried out by Monte Carlo integration based on the sampling importance resampling (SIR) using an efficient importance sampling technique. After obtaining the estimates of the hyperparameters, the empirical Bayes (EB) estimates, or equivalently the empirical best predictors, of small area parameters, such as areal means and Gini coefficients, are easily calculated using the output from a simple Gibbs sampler, where the unobserved unit-level quantities are augmented as latent variables to simulate the finite population. The rest of the paper is organized as follows. Section~\ref{sec:method} describes the proposed model and the methods for hyperparameter estimation and calculation of the EB estimates. Section~\ref{sec:income} presents the application of the proposed method to the Japanese income dataset from HLS. The patchy maps of the areal mean income and Gini coefficient are completed using our method. In Section \ref{sec:sim}, the performance of the proposed model is examined through model-based and design-based simulation studies. Finally, Section \ref{sec:concl} concludes the paper with some discussion. \section{Proposed method} \label{sec:method} \subsection{Model description} \label{subsec:model} In each of $m$ areas, we observe grouped data that provide the frequency distribution over the mutually exclusive $G$ groups divided by the known thresholds $0 = c_0 < c_1 < \dots < c_{G-1} < c_G = +\infty$.
Let us denote the observed frequencies and sample size in the $i$th area by ${\text{\boldmath $y$}}_i = (y_{i1},\dots,y_{iG})^\top$ for $i=1,\dots,m$ and $n_i=\sum_{g=1}^G y_{ig}$, respectively, so that $y_{ig}$ counts the number of units that fall into the $g$th group in the $i$th area. Therefore, ${\text{\boldmath $y$}}_i$ can be regarded as a draw from a multinomial distribution. In order to model the group probabilities of the multinomial distribution, to link the grouped data with the auxiliary variables, and to facilitate small area parameter estimation (see Section~\ref{subsec:Gibbs}), we introduce the positive latent variables $z_{ij}>0$ for the $j$th unit in the $i$th area ($i=1,\dots,m; \ j=1,\dots,N_i$); these variables constitute the population of the $i$th area, from which the units are sampled to construct the grouped data. We also let ${\text{\boldmath $z$}}_i = (z_{i1},\dots,z_{iN_i})^\top$. Note that $N_i$ is not the sample size but the population size, and thus a finite population setting is considered. Without loss of generality, it is assumed that the first $n_i$ values of the $z_{ij}$'s are sampled. Then $y_{ig}$ can be expressed as \begin{equation} \label{eqn:model_y} y_{ig} = \sum_{j=1}^{n_i} I( c_{g-1} \leq z_{ij} < c_g ), \quad (g=1,\dots,G), \end{equation} where $I(\cdot)$ is the indicator function. We take into account the variability of the frequency distribution by incorporating the sample size into our model. In order to devise small area estimation for the grouped data, we assume that the latent $z_{ij}$ after some transformation follows the linear mixed model: \begin{equation} \label{eqn:lmm} \begin{split} &h_{\kappa}(z_{ij}) = {\text{\boldmath $x$}}_i^\top{\text{\boldmath $\beta$}} + b_i + {\varepsilon}_{ij}, \quad b_i \sim \mathrm{N}(0,\tau^2), \\ &{\varepsilon}_{ij} \mid {\sigma}_i^2 \sim \mathrm{N}(0,{\sigma}_i^2), \quad {\sigma}_i^2 \sim \mathrm{IG} \left( {{\lambda} \over 2} + 1, {{\lambda}\varphi_i \over 2} \right), \quad \varphi_i = \exp({\text{\boldmath $x$}}_i^\top{\text{\boldmath $\gamma$}}), \end{split} \end{equation} or equivalently the following Bayesian model: \begin{equation} \label{eqn:BM} \begin{split} h_{\kappa}(z_{ij}) \mid \mu_i, {\sigma}_i^2 &\sim \mathrm{N}(\mu_i,{\sigma}_i^2) \\ \mu_i &\sim \mathrm{N}({\text{\boldmath $x$}}_i^\top{\text{\boldmath $\beta$}}, \tau^2) \\ {\sigma}_i^2 &\sim \mathrm{IG}\left( { {\lambda} \over 2 }+ 1, {{\lambda}\varphi_i \over 2} \right), \quad \varphi_i = \exp({\text{\boldmath $x$}}_i^\top{\text{\boldmath $\gamma$}}), \end{split} \end{equation} where $h_{\kappa}(\cdot)$ is an arbitrary parametric transformation with the parameter $\kappa$, ${\text{\boldmath $x$}}_i$ is the area specific $p$-dimensional auxiliary variable vector, ${\text{\boldmath $\beta$}}$ is the unknown parameter vector of regression coefficients, $b_i$ is the random area effect with the unknown variance parameter $\tau^2$ and ${\varepsilon}_{ij}$ is the error term with the area specific random variance $\sigma^2_i$. It is further assumed that the $b_i$'s and ${\sigma}_i^2$'s are mutually independent, or equivalently that the $\mu_i$'s and ${\sigma}_i^2$'s are mutually independent, and that the $z_{ij}$'s are conditionally independent given ${\text{\boldmath $b$}} = (b_1,\dots,b_m)^\top$ and ${\text{\boldmath $\sigma$}} = ({\sigma}_1^2,\dots,{\sigma}_m^2)^\top$. The mean of $\sigma_i^2$ is $\varphi_i$, which is further modeled as $\varphi_i=\exp({\text{\boldmath $x$}}_i^\top{\text{\boldmath $\gamma$}})$ using the auxiliary variables.
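To make the data generating mechanism concrete, the following short Python sketch (ours, with made-up parameter values) simulates the frequencies ${\text{\boldmath $y$}}_i$ of a single area according to \eqref{eqn:model_y} and \eqref{eqn:lmm}, taking $h_{\kappa}=\log$ as a simple instance of the transformation:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only: thresholds c_0 < ... < c_G, linear predictor
# x_i'beta, variance parameters (tau^2, lambda, phi_i) and sample size n_i.
c = np.array([0.0, 3.0, 5.0, 7.0, 10.0, np.inf])   # G = 5 groups
xb, tau2, lam, phi = 1.2, 0.04, 8.0, 0.25
n_i = 100

b_i = rng.normal(0.0, np.sqrt(tau2))               # b_i ~ N(0, tau^2)
# sigma_i^2 ~ IG(lambda/2 + 1, lambda*phi_i/2), drawn as scale / Gamma(shape)
sigma2_i = (lam * phi / 2.0) / rng.gamma(lam / 2.0 + 1.0)

# latent unit-level variables: log(z_ij) = x_i'beta + b_i + eps_ij
z = np.exp(xb + b_i + rng.normal(0.0, np.sqrt(sigma2_i), size=n_i))

# y_ig counts the sampled z_ij falling into [c_{g-1}, c_g)
y_i = np.histogram(z, bins=c)[0]
print("y_i =", y_i, " sum =", y_i.sum())           # sum equals n_i
\end{verbatim}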
While the model looks like a version of the unit-level nested error regression model proposed in the small area estimation literature \citep{BHF88}, there is a crucial difference: in the present setting we do not observe the unit-level ${\text{\boldmath $z$}}_{i}$'s but only the ${\text{\boldmath $y$}}_i$'s. Also, the auxiliary variables ${\text{\boldmath $x$}}_i$ are available only at the area level. Based on the statistical model \eqref{eqn:lmm} or \eqref{eqn:BM}, the conditional probability that $z_{ij}$ falls in the $g$th group given $b_i$ (or $\mu_i$) and ${\sigma}_i^2$ is given by \begin{equation} \label{eqn:prob_g} \Pr(c_{g-1} \leq z_{ij} < c_g \mid b_i,\sigma^2_i) = \Phi\left\{\frac{h_{\kappa}(c_g) - \mu_i}{\sigma_i}\right\}-\Phi\left\{\frac{h_{\kappa}(c_{g-1})-\mu_i}{\sigma_i}\right\}, \end{equation} where $\mu_i = {\text{\boldmath $x$}}_i^\top {\text{\boldmath $\beta$}} + b_i$ and $\Phi(\cdot)$ denotes the cumulative distribution function of the standard normal distribution. Note that we model the unit-level variable $z_{ij}$, not an area-level variable as in the Fay--Herriot model. However, the auxiliary variables are available only at the area level. Hence, if the log transformation is used, the superpopulation of $z_{ij}$ is a log-normal distribution with the same mean and variance within the same small area $i$, which is too restrictive. In this paper, a more flexible parametric transformation $h_{\kappa}(\cdot)$ is adopted to relax this restriction. Specifically, we use the Box--Cox transformation given by \begin{equation*} h_{\kappa}(z)=\left\{ \begin{split} \frac{z^\kappa-1}{\kappa},\quad \kappa\neq 0,\\ \log(z),\quad \kappa=0, \end{split} \right. \quad z>0, \end{equation*} and $-1/\kappa < h_\kappa(z) < +\infty$ if $\kappa > 0$ and $-\infty < h_\kappa(z) < -1/\kappa$ if $\kappa < 0$. Our goal is to estimate (predict) some characteristics of each area, such as the areal mean ${\overline z}_i = N_i^{-1}\sum_{j=1}^{N_i}z_{ij}$ and the Gini coefficient defined as \begin{equation} \label{eqn:Gini} \mathrm{GINI}({\text{\boldmath $z$}}_i) = {1 \over N_i} \left\{ N_i + 1 - {2\sum_{j=1}^{N_i}(N_i + 1 - j)z_{i(j)} \over N_i{\overline z}_i} \right\}, \end{equation} where $\{ z_{i(1)},\dots, z_{i(N_i)} \}$ are the values of $\{ z_{i1},\dots, z_{i,N_i} \}$ sorted in non-decreasing order. To this end, we develop the empirical Bayes (EB) estimators of ${\overline z}_i$ and $\mathrm{GINI}({\text{\boldmath $z$}}_i)$. \subsection{Hyperparameter estimation} \label{subsec:EM} The unknown model parameter vector is denoted by ${\text{\boldmath $\psi$}} = ({\text{\boldmath $\beta$}}^\top, {\tau}^2, {\lambda}, {\kappa}, {\text{\boldmath $\gamma$}}^\top)^\top$. When our model is viewed as the Bayesian model \eqref{eqn:BM}, ${\text{\boldmath $\psi$}}$ corresponds to the hyperparameters; hereafter, ${\text{\boldmath $\psi$}}$ is referred to as the vector of hyperparameters for clarity of terminology.
The hyperparameter ${\text{\boldmath $\psi$}}$ is estimated by maximizing the marginal likelihood: \begin{equation} \label{eqn:ML} L( {\text{\boldmath $\psi$}}; {\text{\boldmath $y$}} ) = \prod_{i=1}^m \int f( {\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i) \pi({\text{\boldmath $u$}}_i) {\rm d} {\text{\boldmath $u$}}_i, \end{equation} where $\pi({\text{\boldmath $u$}}_i)$ is the pdf of ${\text{\boldmath $u$}}_i = (b_i, {\sigma}_i^2)^\top \sim \mathrm{N}(0,\tau^2) \times \mathrm{IG}(\lambda/2+1, \lambda\varphi_i/2)$, and $f({\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i)$ is the conditional probability mass function (pmf) of ${\text{\boldmath $y$}}_i$ given ${\text{\boldmath $u$}}_i$, which is the pmf of the multinomial distribution with $n_i$ trials and the probabilities given by \eqref{eqn:prob_g}: \begin{equation} \label{eqn:pmf_yi} f( {\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i ) = { n_i! \over y_{i1}!y_{i2}!\cdots y_{iG}! } \times \prod_{g=1}^G \left[ \Phi\left\{ \frac{ h_{\kappa}(c_g) - \mu_i }{ {\sigma}_i} \right\} - \Phi\left\{ \frac{ h_{\kappa}(c_{g-1}) - \mu_i }{ {\sigma}_i} \right\} \right]^{y_{ig}}, \end{equation} for $i = 1,\dots,m$. It is difficult to evaluate the marginal likelihood \eqref{eqn:ML} analytically because of the integration with respect to ${\text{\boldmath $u$}}_i$. Thus we introduce the EM algorithm \citep{DLR77}, where the vector of random effects ${\text{\boldmath $u$}} = ({\text{\boldmath $u$}}_1^\top,\dots,{\text{\boldmath $u$}}_m^\top)^\top$ is regarded as missing data. The complete log-likelihood is given by \begin{equation*} \log \{ L^c( {\text{\boldmath $\psi$}} ; {\text{\boldmath $y$}}, {\text{\boldmath $u$}} ) \} = \sum_{i=1}^m \left[ \log \{ f( {\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i ) \} + \log \{ \pi({\text{\boldmath $u$}}_i) \} \right]. \end{equation*} In the $k$th iteration of the algorithm, the E-step calculates \begin{equation*} Q({\text{\boldmath $\psi$}} \mid {\text{\boldmath $\psi$}}^{(k-1)}) = E[ \log\{ L^c( {\text{\boldmath $\psi$}}; {\text{\boldmath $y$}}, {\text{\boldmath $u$}} ) \} \mid {\text{\boldmath $y$}}, {\text{\boldmath $\psi$}}^{(k-1)} ], \end{equation*} where the expectation is taken with respect to the conditional distribution of ${\text{\boldmath $u$}}$ given ${\text{\boldmath $y$}}$ with the parameter value ${\text{\boldmath $\psi$}}^{(k-1)}$ from the $(k-1)$th iteration. The M-step maximizes $Q({\text{\boldmath $\psi$}} \mid {\text{\boldmath $\psi$}}^{(k-1)})$ with respect to ${\text{\boldmath $\psi$}}$. The maximizer, denoted by ${\text{\boldmath $\psi$}}^{(k)} = ( ( {\text{\boldmath $\beta$}}^{(k)} )^\top, \tau^{2(k)}, {\lambda}^{(k)}, \kappa^{(k)},({\text{\boldmath $\gamma$}}^{(k)})^\top )^\top$, is obtained as \begin{align*} \tau^{2(k)} =& \ {1 \over m}E[ {\text{\boldmath $b$}}^\top{\text{\boldmath $b$}} \mid {\text{\boldmath $y$}}, {\text{\boldmath $\psi$}}^{(k-1)} ], \\ ( ( {\text{\boldmath $\beta$}}^{(k)} )^\top, \kappa^{(k)} )^\top =& \ \mathop{\rm argmax}\limits_{ ( {\text{\boldmath $\beta$}}^\top, {\kappa} )^\top } E\left[ \sum_{i=1}^m \log\{ f( {\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i ) \} \bigm| {\text{\boldmath $y$}}, {\text{\boldmath $\psi$}}^{(k-1)} \right], \\ ( ({\text{\boldmath $\gamma$}}^{(k)})^\top, {\lambda}^{(k)} )^\top =& \ \mathop{\rm argmax}\limits_{ ( {\text{\boldmath $\gamma$}}^\top, {\lambda} )^\top } E\left[ \sum_{i=1}^m \log\{ \pi({\sigma}_i^2) \} \bigm| {\text{\boldmath $y$}}, {\text{\boldmath $\psi$}}^{(k-1)} \right].
\end{align*} Since it is difficult to evaluate the conditional expectations analytically in the E-step, we use Monte Carlo integration based on sampling importance resampling (SIR). Note that the conditional pdf of ${\text{\boldmath $u$}}$ given ${\text{\boldmath $y$}}$ factorizes into the product of the conditional pdfs of the ${\text{\boldmath $u$}}_i$'s given the ${\text{\boldmath $y$}}_i$'s: \begin{equation*} \pi( {\text{\boldmath $u$}} \mid {\text{\boldmath $y$}} ) = \prod_{i=1}^m \pi( {\text{\boldmath $u$}}_i \mid {\text{\boldmath $y$}}_i )\propto\prod_{i=1}^m f({\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i) \pi({\text{\boldmath $u$}}_i). \end{equation*} Therefore, we apply the following SIR method independently for $i=1,\dots,m$. Let $q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$ denote the proposal density for ${\text{\boldmath $u$}}_i$, where ${\text{\boldmath $a$}}_i\in\mathbb{R}^q$ is the parameter vector of the proposal distribution. In the SIR method, first a set of random numbers $\{ {\tilde \u}_i^{(1)},\dots,{\tilde \u}_i^{(S_1)} \}$ from $q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$ is generated. Then for each ${\tilde \u}_i^{(s)}$, the weight \begin{equation*} {\tilde w}_{is}=\frac{f({\text{\boldmath $y$}}_i \mid {\tilde \u}_i^{(s)}) \pi({\tilde \u}_i^{(s)}) }{ q( {\tilde \u}_i^{(s)} \mid {\text{\boldmath $a$}}_i) },\quad s=1,\dots,S_1, \end{equation*} is calculated. Finally, a set of samples of size $S_2$, $\{ {\text{\boldmath $u$}}_i^{(1)},\dots,{\text{\boldmath $u$}}_i^{(S_2)} \}$, is drawn with replacement from $\{ {\tilde \u}_i^{(1)},\dots,{\tilde \u}_i^{(S_1)} \}$ based on the probabilities \begin{equation*} \Pr( {\text{\boldmath $u$}}_i^{(r)} = {\tilde \u}_i^{(s)} ) = \frac{{\tilde w}_{is}}{\sum_{s'=1}^{S_1} {\tilde w}_{is'}},\quad s=1,\dots,S_1,\quad r=1,\dots,S_2. \end{equation*} For large $S_1/S_2$, $\{ {\text{\boldmath $u$}}_i^{(1)},\dots,{\text{\boldmath $u$}}_i^{(S_2)} \}$ is approximately a set of independent random samples from $\pi( {\text{\boldmath $u$}}_i \mid {\text{\boldmath $y$}}_i )$. The expectations in the M-step are replaced with the Monte Carlo estimates based on the SIR samples. The performance of the SIR depends on the choice of the proposal distribution. It is ideal to employ a proposal distribution that well approximates the target distribution, and we aim to achieve this by updating the value of ${\text{\boldmath $a$}}_i$ through an iterative procedure proposed by \citet{RZ07}. Their efficient importance sampling (EIS) method determines the value $\hat{{\text{\boldmath $a$}}}_i$ that minimizes the Monte Carlo sampling variance of the importance weights with respect to the proposal distribution.
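Before turning to the EIS updating of ${\text{\boldmath $a$}}_i$, we note that the SIR step just described takes the following generic form; in this sketch (ours) the target kernel $f({\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i)\pi({\text{\boldmath $u$}}_i)$ and the proposal $q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$ are replaced by Gaussian placeholders, not by the actual model quantities:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def log_target_kernel(u):    # placeholder for log f(y_i|u) + log pi(u)
    return -0.5 * np.sum((u - 1.0) ** 2, axis=-1)

def sample_proposal(size):   # placeholder for q(u | a_i): N(0, 2^2) components
    return rng.normal(0.0, 2.0, size=(size, 2))

def log_proposal(u):
    return (-0.5 * np.sum((u / 2.0) ** 2, axis=-1)
            - u.shape[-1] * np.log(2.0 * np.sqrt(2.0 * np.pi)))

S1, S2 = 10000, 500
u_tilde = sample_proposal(S1)                         # S1 draws from the proposal
logw = log_target_kernel(u_tilde) - log_proposal(u_tilde)
w = np.exp(logw - logw.max())                         # stabilized weights
prob = w / w.sum()
idx = rng.choice(S1, size=S2, replace=True, p=prob)   # resample S2 with replacement
u = u_tilde[idx]                  # approximately distributed as pi(u_i | y_i)
print("ESS / S1 =", 1.0 / np.sum(prob ** 2) / S1)     # effective sample size
\end{verbatim}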
In the current context, as shown by \citet{RZ07}, $\hat{{\text{\boldmath $a$}}}_i$ is determined through the following minimization problem \begin{equation}\label{eqn:eis_q} (\hat{c}_i, \hat{{\text{\boldmath $a$}}}_i^\top)^\top = \mathop{\rm argmin}\limits_{( c_i,{\text{\boldmath $a$}}_i^\top)^\top}\int \left\{\log f({\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i) + \log\pi({\text{\boldmath $u$}}_i) - c_i - \log g({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)\right\}^2 w_i({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i) q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i) {\rm d}{\text{\boldmath $u$}}_i, \end{equation} where $g({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$ is the kernel of the proposal density $q( {\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$ such that $q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)=g({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)/\int g({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i) {\rm d} {\text{\boldmath $u$}}_i$, $c_i$ is a scalar that adjusts for the normalizing constants, and $w_i({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)=f({\text{\boldmath $y$}}_i \mid {\text{\boldmath $u$}}_i)\pi({\text{\boldmath $u$}}_i)/q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$. The EIS method replaces \eqref{eqn:eis_q} with a Monte Carlo approximation and proceeds by iteratively solving \begin{equation}\label{eqn:eis_qmc} (\hat{c}_i^{(t)},\hat{{\text{\boldmath $a$}}}_i^{(t)\top})^\top = \mathop{\rm argmin}\limits_{( c_i,{\text{\boldmath $a$}}_i^\top)^\top}\frac{1}{S_0}\sum_{s=1}^{S_0} \left\{\log f({\text{\boldmath $y$}}_i \mid \check{{\text{\boldmath $u$}}}_i^{(s)})+\log\pi(\check{{\text{\boldmath $u$}}}_i^{(s)}) - c_i - \log g(\check{{\text{\boldmath $u$}}}_i^{(s)} \mid {\text{\boldmath $a$}}_i)\right\}^2 w_i(\check{{\text{\boldmath $u$}}}_i^{(s)} \mid {\text{\boldmath $a$}}_i^{(t-1)}), \end{equation} where $(\hat{c}_i^{(t)}, \hat{{\text{\boldmath $a$}}}_i^{(t)\top})^\top$ denotes the value of $(\hat{c}_i, \hat{{\text{\boldmath $a$}}}_i^\top)^\top$ at the $t$th iteration of the EIS minimization and $\{ \check{{\text{\boldmath $u$}}}_i^{(1)},\dots,\check{{\text{\boldmath $u$}}}_i^{(S_0)} \}$ is the set of samples generated from $q({\text{\boldmath $u$}}_i \mid \hat{{\text{\boldmath $a$}}}_i^{(t-1)})$, with $\check{{\text{\boldmath $u$}}}_i^{(s)} = ( \check{b}_i^{(s)}, \check{{\sigma}}_i^{2(s)} )^\top$. \citet{RZ07} noted that $S_0$ does not have to be very large. In this paper, we employ $\mathrm{N}({\theta}_{i1}({\text{\boldmath $a$}}_i),{\theta}_{i2}({\text{\boldmath $a$}}_i)) \times \mathrm{IG}( {\theta}_{i3}({\text{\boldmath $a$}}_i), {\theta}_{i4}({\text{\boldmath $a$}}_i) )$ for $q({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i)$, where ${\text{\boldmath $a$}}_i = (a_{i1}, a_{i2}, a_{i3}, a_{i4})^\top$ is the vector of natural parameters.
Because the proposal distribution belongs to the exponential family, with kernel $$ \log g({\text{\boldmath $u$}}_i \mid {\text{\boldmath $a$}}_i) = a_{i1}b_i + a_{i2}b_i^2 + a_{i3} \log({\sigma}_i^2) + a_{i4}{1 \over {\sigma}_i^2}, $$ for $a_{i1} = {\theta}_{i1} / {\theta}_{i2}$, $a_{i2} = -1 / (2{\theta}_{i2})$, $a_{i3} = -({\theta}_{i3} + 1)$ and $a_{i4} = -{\theta}_{i4}$, the solution of the EIS minimization \eqref{eqn:eis_qmc} is given by the following generalized least squares (GLS) estimator \begin{equation}\label{eqn:eis_gls} (\hat{c}_i^{(t)}, \hat{{\text{\boldmath $a$}}}_{i}^{(t)\top})^\top=({\text{\boldmath $Z$}}_i^\top{\text{\boldmath $D$}}_i{\text{\boldmath $Z$}}_i)^{-1}{\text{\boldmath $Z$}}_i^\top{\text{\boldmath $D$}}_i{\text{\boldmath $f$}}_i \end{equation} where ${\text{\boldmath $Z$}}_i=({\bf\text{\boldmath $1$}}_{S_0},\check{{\text{\boldmath $b$}}}_i,\check{{\text{\boldmath $b$}}}_i^{2}, \mathbf{log}\check{{\text{\boldmath $\sigma$}}}_i^2, \check{{\text{\boldmath $\sigma$}}}_i^{-2} )$; $\check{{\text{\boldmath $b$}}}_i$, $\check{{\text{\boldmath $b$}}}_i^2$, $\mathbf{log}\check{{\text{\boldmath $\sigma$}}}_i^2$, $\check{{\text{\boldmath $\sigma$}}}_i^{-2}$ and ${\text{\boldmath $f$}}_i$ are $S_0\times 1$ vectors with the $s$th elements given by $\check{b}_i^{(s)}$, $(\check{b}_i^{(s)})^2$, $\log(\check{{\sigma}}_i^{2(s)})$, $1 / \check{{\sigma}}_i^{2(s)}$ and $\log f({\text{\boldmath $y$}}_i \mid \check{{\text{\boldmath $u$}}}_i^{(s)})+\log\pi(\check{{\text{\boldmath $u$}}}_i^{(s)})$, respectively; and ${\text{\boldmath $D$}}_i$ is the $S_0$-dimensional diagonal matrix with $w_i(\check{{\text{\boldmath $u$}}}_i^{(s)} \mid \hat{{\text{\boldmath $a$}}}_i^{(t-1)})$ on the $s$th diagonal position. In this paper, the EIS iteration is terminated when the relative change in $({\theta}_{i1}({\text{\boldmath $a$}}_i^{(t)}),{\theta}_{i2}({\text{\boldmath $a$}}_i^{(t)}), {\theta}_{i3}({\text{\boldmath $a$}}_i^{(t)}), {\theta}_{i4}({\text{\boldmath $a$}}_i^{(t)}) )^\top$ falls below $10^{-3}$. After the termination of the EIS iterations, the optimal parameters for the proposal distribution are obtained through ${\hat \th}_{i1}=-\hat{a}_{i1}/(2\hat{a}_{i2})$, ${\hat \th}_{i2}=-1/(2\hat{a}_{i2})$, ${\hat \th}_{i3} = -{\hat a}_{i3} - 1$ and ${\hat \th}_{i4} = -{\hat a}_{i4}$. See \citet{RZ07} for more details on the implementation of the EIS method. The initial values for the MCEM algorithm are determined as follows. Let us define $V_i = n_i^{-1}\sum_{g=1}^G\log({\overline c}_g) \times y_{ig}$, where ${\overline c}_g = ( c_{g-1} + c_g ) / 2$ for $g=1,\dots,G-1$ and ${\overline c}_G = c_{G-1} + ( c_{G-1} - c_{G-2} ) / 2$, ${\text{\boldmath $V$}} = (V_1,\dots,V_m)^\top$ and ${\text{\boldmath $X$}} = ({\text{\boldmath $x$}}_1,\dots,{\text{\boldmath $x$}}_m)^\top$. Then, the initial values of ${\text{\boldmath $\beta$}}$ and ${\tau}^2$ are determined as $$ {\text{\boldmath $\beta$}}^{(0)} = ({\text{\boldmath $X$}}^\top{\text{\boldmath $X$}})^{-1}{\text{\boldmath $X$}}^\top{\text{\boldmath $V$}}, \quad \tau^{2(0)} = m^{-1}\Vert {\text{\boldmath $V$}} - {\text{\boldmath $X$}}{\text{\boldmath $\beta$}}^{(0)} \Vert^2.
$$ The initial values of ${\lambda}$, $\kappa$ and ${\text{\boldmath $\gamma$}}$ are determined by using the estimates based on the local model which modifies the model \eqref{eqn:lmm} as follows: \begin{equation} \label{eqn:local} h_{\kappa_i}(z_{ij}) = \beta_i + {\varepsilon}_{ij}, \quad {\varepsilon}_{ij} \sim \mathrm{N}(0,{\sigma}_i^2), \end{equation} where ${\beta}_i$, $\kappa_i$ and ${\sigma}_i^2$ are the unknown parameters. Let ${\widehat \be}_i$, ${\hat \ka}_i$ and ${\hat \si}_i^2$ denote the maximum likelihood estimates which maximize the likelihood function independently for $i=1,\dots,m$: $$ ({\widehat \be}_i, {\hat \ka}_i, {\hat \si}_i^2)^\top = \mathop{\rm argmax}\limits_{({\beta}_i,{\kappa}_i,{\sigma}_i^2)^\top}{ n_i! \over y_{i1}!y_{i2}!\cdots y_{iG}! } \times \prod_{g=1}^G \left[ \Phi\left\{ \frac{ h_{{\kappa}_i}(c_g) - {\beta}_i }{ {\sigma}_i} \right\} - \Phi\left\{ \frac{ h_{{\kappa}_i}(c_{g-1}) - {\beta}_i }{ {\sigma}_i} \right\} \right]^{y_{ig}}. $$ Then, the initial values of ${\lambda}$ and $\kappa$ are determined as $$ {\lambda}^{(0)} = 2 \times \{ (\overline{{\hat \si}^2})^2 / {\widehat V}({\hat \si}^2) +1 \}, \quad \kappa^{(0)} = \overline{{\hat \ka}}, $$ where $\overline{{\hat \si}^2}$ and ${\widehat V}({\hat \si}^2)$ are the sample mean and variance of the ${\hat \si}_i^2$'s over the areas and $\overline{{\hat \ka}}$ is the sample mean of the ${\hat \ka}_i$'s. Furthermore, noting the log link $\varphi_i = \exp({\text{\boldmath $x$}}_i^\top{\text{\boldmath $\gamma$}})$, the initial value of ${\text{\boldmath $\gamma$}}$ is $$ {\text{\boldmath $\gamma$}}^{(0)} = ({\text{\boldmath $X$}}^\top{\text{\boldmath $X$}})^{-1}{\text{\boldmath $X$}}^\top {\text{\boldmath $\sigma$}}, $$ where ${\text{\boldmath $\sigma$}} = (\log{\hat \si}^2_1,\dots,\log{\hat \si}^2_m)^\top$. This method generally provides reasonable initial values for the MCEM algorithm, leading to fast convergence. Other initial values were also tried and gave similar results, with longer computing times. To monitor the convergence of the MCEM algorithm, the criterion considered by \cite{SC02} is used. In order to prevent premature termination of the algorithm due to differences in the scale of the parameter values, the quantities $e_{k,({\text{\boldmath $\beta$}})}$, $e_{k,(\tau^2)}$, $e_{k,(\kappa)}$, $e_{k,(\lambda)}$ and $e_{k,({\text{\boldmath $\gamma$}})}$ are evaluated for ${\text{\boldmath $\beta$}}$, $\tau^2$, $\kappa$, $\lambda$ and ${\text{\boldmath $\gamma$}}$, respectively. In the case of ${\text{\boldmath $\beta$}}$, for example, \begin{equation}\label{eqn:em_conv} e_{k,({\text{\boldmath $\beta$}})} = { \| {\widetilde \bbe}_1^{(k)} - {\widetilde \bbe}_2^{(k)} \| \over \| {\widetilde \bbe}_2^{(k)} \| + {\delta} }, \end{equation} where ${\widetilde \bbe}_1^{(k)} = H^{-1} \sum_{h=0}^{H-1} {\text{\boldmath $\beta$}}^{(k-h)}$, ${\widetilde \bbe}_2^{(k)} = H^{-1} \sum_{h=0}^{H-1} {\text{\boldmath $\beta$}}^{(k-h-d)}$, and ${\delta}$, $H$ and $d$ are specified by the user. The EM algorithm is then terminated at the $k$th iteration if $$ \max\{ e_{k,({\text{\boldmath $\beta$}})}, e_{k,(\tau^2)}, e_{k,(\kappa)}, e_{k,(\lambda)}, e_{k,({\text{\boldmath $\gamma$}})}\} < \epsilon, $$ for some small value $\epsilon>0$, and the averaged value ${\tilde \bpsi}^{(k)} = ( ({\widetilde \bbe}_1^{(k)})^\top, {\tilde \tau}^{2(k)}, {\tilde \la}^{(k)}, \tilde{{\kappa}}^{(k)}, \tilde{{\text{\boldmath $\gamma$}}}^{(k)\top} )^\top$, whose components are defined analogously to ${\widetilde \bbe}_1^{(k)}$, is used as the estimate of ${\text{\boldmath $\psi$}}$, which is denoted by ${\widehat \bpsi} = ({\widehat \bbe}^\top, {\hat \tau}^2, {\hat \la}, {\hat \ka}, {\widehat \bga}^\top)^\top$ hereafter.
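A minimal sketch of this stopping rule for a single parameter block (ours; the iterates are assumed to be stored row-wise in an array with at least $H+d$ entries) is:

\begin{verbatim}
import numpy as np

# Convergence measure e_k of the text for one block, e.g. beta.
# path[j] holds the j-th MCEM iterate; H, d and delta are user-specified.
def e_k(path, H=30, d=5, delta=1e-3):
    k = len(path) - 1
    avg1 = np.mean(path[k - H + 1 : k + 1], axis=0)          # mean of last H iterates
    avg2 = np.mean(path[k - d - H + 1 : k - d + 1], axis=0)  # same window, lagged by d
    return np.linalg.norm(avg1 - avg2) / (np.linalg.norm(avg2) + delta)

# The MCEM loop stops once the maximum of e_k over all blocks falls below epsilon.
\end{verbatim}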
\subsection{Calculation of empirical Bayes estimates} \label{subsec:Gibbs} Here we propose a method to calculate the EB estimate of a generic function of ${\text{\boldmath $z$}}_i$, denoted by $\zeta_i({\text{\boldmath $z$}}_i)$. Examples of $\zeta_i({\text{\boldmath $z$}}_i)$ include the areal mean ${\overline z}_i$ and the Gini coefficient $\mathrm{GINI}({\text{\boldmath $z$}}_i)$ in \eqref{eqn:Gini}. Under the quadratic loss, the Bayes estimator of $\zeta_i({\text{\boldmath $z$}}_i)$ is its conditional expectation given the data, $E[ \zeta_i({\text{\boldmath $z$}}_i) \mid {\text{\boldmath $y$}} ]$. Because of the independence over the areas, $E[ \zeta_i({\text{\boldmath $z$}}_i) \mid {\text{\boldmath $y$}} ]$ reduces to $E[ \zeta_i({\text{\boldmath $z$}}_i) \mid {\text{\boldmath $y$}}_i ]$, which is denoted by $$ \xi_i({\text{\boldmath $\psi$}};{\text{\boldmath $y$}}_i) = E[ \zeta_i({\text{\boldmath $z$}}_i) \mid {\text{\boldmath $y$}}_i ]. $$ Because $\xi_i({\text{\boldmath $\psi$}};{\text{\boldmath $y$}}_i)$ is a function of the unknown parameter ${\text{\boldmath $\psi$}}$, we obtain the empirical Bayes (EB) estimator $\xi_i({\widehat \bpsi};{\text{\boldmath $y$}}_i)$ by substituting ${\widehat \bpsi}$ for ${\text{\boldmath $\psi$}}$ in the Bayes estimator. However, since it is impossible to evaluate the conditional expectation of $\zeta_i({\text{\boldmath $z$}}_i)$ analytically, we calculate the EB estimates from the output of the following Gibbs sampler. Let the random vector ${\tilde \v}_i = (v_{i1},\dots,v_{in_i})^\top$ denote the values of $\{ h_{\hat \ka}(z_{i1}),\dots,h_{\hat \ka}(z_{in_i}) \}$ sorted in non-decreasing order, with $y_{i1},\dots,y_{iG}$ of them falling in the respective groups; then the following relationship holds: \begin{equation*} v_{ij} \leq v_{ik}, \quad {\rm for \ all} \ j,k \ {\rm such \ that} \ j \leq {\tilde y}_{ig} < k, \ {\rm for \ all} \ g=1,\dots,G, \end{equation*} where ${\tilde y}_{ig} = \sum_{g'=1}^g y_{ig'}$ for $g=1,\dots,G$ and $n_i = {\tilde y}_{iG}$. For the out-of-sample units, let $\check{{\text{\boldmath $v$}}}_i = (v_{i,n_i+1},\dots,v_{iN_i})^\top = ( h_{\hat \ka}(z_{i,n_i+1}),\dots,h_{\hat \ka}( z_{iN_i} ) )^\top$. Let ${\text{\boldmath $v$}}_i = ( {\tilde \v}_i^\top, \check{{\text{\boldmath $v$}}}_i^\top )^\top = (v_{i1},\dots,v_{iN_i})^\top$.
To evaluate the conditional expectation of ${\text{\boldmath $v$}}_i$ given ${\text{\boldmath $y$}}_i$, a sample from the joint conditional distribution of $\{ {\tilde \v}_i, \check{{\text{\boldmath $v$}}}_i, \mu_i, {\sigma}_i^2 \}$ given ${\text{\boldmath $y$}}_i$ is obtained by using the Gibbs sampling algorithm with the following full conditional distributions: \begin{equation} \label{eqn:full} \begin{split} \mu_i \mid {\tilde \v}_i, \check{{\text{\boldmath $v$}}}_i, {\sigma}_i^2, {\text{\boldmath $y$}}_i &\sim \mathrm{N} \left( { {\sigma}_i^2{\text{\boldmath $x$}}_i^\top{\widehat \bbe} + N_i{\hat \tau}^2 {\overline v}_i \over {\sigma}_i^2 + N_i{\hat \tau}^2 }, {{\hat \tau}^2{\sigma}_i^2 \over {\sigma}_i^2 + N_i{\hat \tau}^2} \right), \\ v_{ij} \mid \mu_i, \check{{\text{\boldmath $v$}}}_i, {\sigma}_i^2, {\text{\boldmath $y$}}_i & \overset{\rm indep}{\sim} \begin{cases} {\rm TN}_{[ h_{\hat \ka}(c_0), h_{\hat \ka}(c_1) )} ( \mu_i, {\sigma}_i^2 ), & j=1,\dots,\tilde{y}_{i1}, \\ {\rm TN}_{[ h_{\hat \ka}(c_1), h_{\hat \ka}(c_2) )} ( \mu_i, {\sigma}_i^2 ), & j=\tilde{y}_{i1}+1,\dots,\tilde{y}_{i2} \\ \vdots \\ {\rm TN}_{[ h_{\hat \ka}(c_{G-1}), h_{\hat \ka}(c_G) )} ( \mu_i, {\sigma}_i^2 ), & j=\tilde{y}_{i,G-1}+1,\dots,n_i, \end{cases}\\ \check{{\text{\boldmath $v$}}}_i \mid \mu_i, {\tilde \v}_i, {\sigma}_i^2, {\text{\boldmath $y$}}_i &\sim \mathrm{N}_{N_i-n_i}( \mu_i{\bf\text{\boldmath $1$}}_{N_i - n_i}, {\sigma}_i^2{\text{\boldmath $I$}}_{N_i - n_i} ), \\ {\sigma}_i^2 \mid \mu_i, {\tilde \v}_i, \check{{\text{\boldmath $v$}}}_i, {\text{\boldmath $y$}}_i &\sim \mathrm{IG}\bigg( {N_i+{\hat \la} \over 2} + 1, \ {1\over 2} \Big\{ {\hat \la}\hat{\varphi}_i + \sum_{j=1}^{N_i} (v_{ij} - \mu_i)^2 \Big\} \bigg), \end{split} \end{equation} where ${\overline v}_i = N_i^{-1}\sum_{j=1}^{N_i}v_{ij}$, ${\text{\boldmath $I$}}_{N_i - n_i}$ denotes the identity matrix of dimension $N_i - n_i$, and $\mathrm{TN}_{[a,b)}(\mu,{\sigma}^2)$ denotes the truncated normal distribution with mean $\mu$ and variance ${\sigma}^2$ truncated to the interval $[a,b)$. The derivation of the full conditional distributions is given in Appendix~\ref{sec:app1}. Let ${\text{\boldmath $v$}}_i^{(s)} = (v_{i1}^{(s)},\dots,v_{iN_i}^{(s)})^\top$ be the $s$th output of ${\text{\boldmath $v$}}_i$ from the Gibbs sampler $(s=1,\dots,S_3)$. Then the EB estimate $\xi_i({\widehat \bpsi};{\text{\boldmath $y$}}_i)$ can be calculated as $$ \widehat{\xi_i({\widehat \bpsi};{\text{\boldmath $y$}}_i)} = {1 \over S_3}\sum_{s=1}^{S_3} \zeta_i( h_{\hat \ka}^{-1}({\text{\boldmath $v$}}_i^{(s)}) ), $$ where $h_{\hat \ka}^{-1}(\cdot)$ is the inverse Box--Cox transformation with parameter value ${\hat \ka}$, applied elementwise. If the auxiliary variables ${\text{\boldmath $x$}}_i$'s are available for out-of-sample areas, $\zeta_i({\text{\boldmath $z$}}_i)$ can also be predicted for an out-of-sample area $i=m+1$ by $\xi_{m+1}({\widehat \bpsi})$, where $\xi_{m+1}({\text{\boldmath $\psi$}}) = E[ \zeta_{m+1}({\text{\boldmath $z$}}_{m+1}) ]$, since ${\text{\boldmath $y$}}$ and ${\text{\boldmath $z$}}_{m+1}$ are mutually independent. This expectation can be calculated by Monte Carlo integration that generates random numbers from the model \eqref{eqn:lmm} with the hyperparameters fixed to their estimates. \section{Application to grouped income data of Japan} \label{sec:income} The proposed method is demonstrated by using the grouped income data obtained from the Housing and Land Survey (HLS) of Japan in 2013.
The data contain the number of households that fall in $G=5$ and $9$ income classes.\footnote{ In the HLS data, only the numbers of households in each income class adjusted for the population sizes are accessible; the original sample sizes for the sampled municipalities of HLS are not published. How they are estimated for this analysis is described in Appendix~\ref{sec:app2}. } The income classes are defined in million Japanese Yen (M~JPY) and the thresholds are given by $(c_1,c_2,c_3,c_4)=(3,5,7,10)$ for $G=5$ and $(c_1,c_2,c_3,c_4,c_5,c_6,c_7,c_8)=(1,2,3,4,5,7,10,15)$ for $G=9$. In this survey in 2013, 1265 out of 1899 municipalities in Japan were sampled. As a summary of the data, Figure~\ref{fig:real1} presents the proportions of the households in the in-sample municipalities for each income class in the case of $G=9$. The maps look incomplete because of the presence of the out-of-sample municipalities. Using the proposed method, the EB estimates of the areal mean incomes and Gini coefficients are obtained. For the auxiliary variables, we use the total population denoted by $\mathrm{P}_i$ and the working-age population denoted by $\mathrm{WA}_i$ obtained from the Population Census (PC) of Japan in 2010, and set ${\text{\boldmath $x$}}_i=(1,\log \mathrm{P}_i, \log \mathrm{WA}_i)^\top$ for the $i$th municipality. Since these auxiliary variables are also available for the out-of-sample municipalities of HLS, the model can be further utilised to complete the maps of the mean incomes and Gini coefficients. \begin{figure}[H] \center \begin{tabular}{ccc} \includegraphics[scale=0.14]{fig1_1.png} & \includegraphics[scale=0.14]{fig1_2.png} & \includegraphics[scale=0.14]{fig1_3.png} \\ \includegraphics[scale=0.14]{fig1_4.png} & \includegraphics[scale=0.14]{fig1_5.png} & \includegraphics[scale=0.14]{fig1_6.png} \\ \includegraphics[scale=0.14]{fig1_7.png} & \includegraphics[scale=0.14]{fig1_8.png} & \includegraphics[scale=0.14]{fig1_9.png} \end{tabular} \caption{Proportions of households in in-sample municipalities ($G=9$)} \label{fig:real1} \end{figure} To estimate the hyperparameters, we set $S_0=100$, $S_1=10000$, $S_2=500$, $H=30$, $d=5$, and $\delta=\epsilon=0.001$ for the MCEM algorithm. The initial values are determined using the method described in Section~\ref{subsec:EM}, with which the convergence of the MCEM algorithm occurs relatively fast. We also tried other initial values and obtained similar results, although the method in Section~\ref{subsec:EM} led to much shorter computing times. Figure~\ref{fig:real2} presents the $0.1$, $0.5$ and $0.9$ quantiles of the effective sample size (ESS) divided by $S_1$ for the 1265 municipalities at each step of the MCEM algorithm. It is seen that the ESS is fairly high and stable over the EM iterations, especially for $G=9$. \begin{figure}[H] \center \includegraphics[width=15cm]{fig_real_ess.pdf} \caption{Quantiles of effective sample size (ESS)} \label{fig:real2} \end{figure} The Bayes estimator of ${\overline z}_i$ is denoted by $\xi_{1i}({\text{\boldmath $\psi$}};{\text{\boldmath $y$}}_i) = E( {\overline z}_i \mid {\text{\boldmath $y$}}_i )$ and that of $\mathrm{GINI}({\text{\boldmath $z$}}_i)$ is denoted by $\xi_{2i}({\text{\boldmath $\psi$}};{\text{\boldmath $y$}}_i) = E[ \mathrm{GINI}({\text{\boldmath $z$}}_i) \mid {\text{\boldmath $y$}}_i ]$.
The EB estimates of ${\overline z}_i$ and $\mathrm{GINI}({\text{\boldmath $z$}}_i)$ are calculated from the output of the Gibbs sampler \eqref{eqn:full} as $$ \widehat{ \xi_{1i}({\widehat \bpsi};{\text{\boldmath $y$}}_i) } = {1 \over S_3}\sum_{s=1}^{S_3} \left\{ {1 \over N_i} \sum_{j=1}^{N_i}h_{\hat \ka}^{-1}(v_{ij}^{(s)}) \right\}, $$ and $$ \widehat{ \xi_{2i}({\widehat \bpsi};{\text{\boldmath $y$}}_i) } = {1 \over S_3}\sum_{s=1}^{S_3} {1 \over N_i} \left\{ N_i + 1 - {2\sum_{j=1}^{N_i}(N_i + 1 - j)h_{\hat \ka}^{-1}(v_{i(j)}^{(s)}) \over \sum_{j=1}^{N_i}h_{\hat \ka}^{-1}(v_{ij}^{(s)}) } \right\}, $$ where $\{ v_{i(1)}^{(s)},\dots,v_{i(N_i)}^{(s)} \}$ are the values of $\{ v_{i1}^{(s)},\dots,v_{iN_i}^{(s)} \}$ sorted in non-decreasing order. In this analysis, we run the Gibbs sampler for $S_3=500$ iterations with an initial burn-in period of $50$ iterations. While it is generally difficult to define a reasonable direct estimator for these small area parameters from grouped data, for comparison purposes we may also consider the following ``naive'' estimator of the areal mean ${\overline z}_i$ that uses the class midpoints, given by \begin{equation} \label{eqn:naive} \widehat{{\overline z}}_i^{\mathrm{naive}} = {1 \over n_i} \sum_{g=1}^G {\overline c}_g \times y_{ig} \end{equation} where ${\overline c}_g = ( c_{g-1} + c_g ) / 2$ for $g=1,\dots,G-1$ and ${\overline c}_G = c_{G-1} + ( c_{G-1} - c_{G-2} ) / 2$. This estimator is naive particularly because the upper end ${\overline c}_G$ has to be set and its choice is completely arbitrary, even though it can have a huge impact on the performance of the estimator. Note that the proposed approach involves no such arbitrariness, as $c_G=\infty$ and \eqref{eqn:prob_g} is well defined. Figure~\ref{fig:real_mean} presents the estimates of the areal means based on the proposed method and the naive method \eqref{eqn:naive}. By borrowing strength from the other municipalities through the statistical model \eqref{eqn:lmm}, the proposed method can predict the income for the out-of-sample municipalities and provide the complete maps of the mean incomes and Gini coefficients. The boxplots of Figure~\ref{fig:real_Box} compare the EB and naive estimates of the areal means for the sampled areas. The figure indicates that the naive estimates can vary between $G=5$ and $9$, resulting in lower mean incomes for some areas for $G=5$ than for $G=9$. This would be because the naive estimates cannot capture the behavior of the upper tail of the income distribution, which has an impact on the estimation of the mean income. In fact, we also considered different values of $\bar{c}_G$ for the naive estimates to demonstrate this impact. Figure~\ref{fig:real_Box_d} presents the boxplots of the naive estimates under different values of $\bar{c}_G$ for $G=5$ and $9$. The figure shows that the naive estimates exhibit severe sensitivity with respect to the setting of $\bar{c}_G$ in the case of $G=5$. While the sensitivity decreases for $G=9$, the areal mean estimates for the high income areas still appear to increase with $\bar{c}_G$. In order to assess the uncertainty of the estimators, we estimated the root mean squared error (RMSE) of the estimators for the sampled municipalities by using a parametric bootstrap method.
Let $z_{ij}^{*(b)} \ (i=1,\dots,m; \ j=1,\dots,N_i)$ and $\{ {\text{\boldmath $y$}}_1^{*(b)}, \dots, {\text{\boldmath $y$}}_m^{*(b)} \}$ denote the $b$th bootstrap sample $(b=1,\dots, B)$ generated from the models (\ref{eqn:model_y}) and (\ref{eqn:lmm}) with the hyperparameters fixed to the maximum likelihood estimate ${\widehat \bpsi}$. Then, the RMSE of the EB estimator of the areal mean is estimated as \begin{equation*} \widehat{\rm RMSE}_i^{\rm EB} = \sqrt{ {1 \over B}\sum_{b=1}^B \left\{ \widehat{ \xi_{1i}({\widehat \bpsi}; {\text{\boldmath $y$}}_i^{*(b)}) } - {\overline z}_i^{*(b)} \right\}^2 }, \end{equation*} for a large $B$, where ${\overline z}_i^{*(b)} = N_i^{-1}\sum_{j=1}^{N_i}z_{ij}^{*(b)}$. For each $b$, we simply run the Gibbs sampler described in Section~\ref{subsec:Gibbs} to calculate the EB estimates given the estimate ${\widehat \bpsi}$ obtained from the original data; the hyperparameters are not re-estimated on the bootstrap samples. In the same way, the RMSE of the naive estimator is estimated as $$ \widehat{\rm RMSE}_i^{\rm naive} = \sqrt{ {1 \over B}\sum_{b=1}^B \left\{ \widehat{{\overline z}}_i^{\mathrm{naive}*(b)} - {\overline z}_i^{*(b)} \right\}^2 }, $$ where $\widehat{{\overline z}}_i^{\mathrm{naive}*(b)} = n_i^{-1}\sum_{g=1}^G {\overline c}_g \times y_{ig}^{*(b)}$. Figure~\ref{fig:real_RMSE} presents the estimates of the RMSE of the EB estimators and naive estimators for the sampled areas. The naive estimators resulted in large RMSEs, indicated by the darker shades of red, in the case of $G=5$. While the RMSE of the naive estimators improves as the number of income classes increases, the EB estimators resulted in smaller RMSEs in both cases. The figure also shows that the overall improvement in the RMSE of the EB estimators in the case of $G=9$ over $G=5$ is marginal compared to that of the naive estimators. Finally, Figure~\ref{fig:real_Gini} presents the EB estimates of the Gini coefficients for all municipalities and the associated estimates of the RMSE for the sampled municipalities. As in the case of the mean incomes, the proposed method can also predict the Gini coefficients for the out-of-sample municipalities to complete the map. The RMSE of the estimator of the Gini coefficient is estimated in the same way as that of the mean income by using the parametric bootstrap. The map for the case of $G=9$ exhibits darker shades of blue than the map for $G=5$, implying that the estimated degree of inequality is greater across the country. This could be because the data with $G=9$ contain more information on the income distribution, especially on the upper tail of the distribution, which can have an impact on the estimates. The figure also shows that the uncertainty regarding the Gini coefficient estimation decreases as the number of income classes in the data increases.
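In outline, the parametric bootstrap described above proceeds as in the following runnable Python sketch (ours, with illustrative parameter values for a single area); for brevity it evaluates the naive estimator \eqref{eqn:naive}, which is fully specified, while the EB version simply replaces \texttt{estimate} by the Gibbs-based EB computation of Section~\ref{subsec:Gibbs}:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

c = np.array([0.0, 3.0, 5.0, 7.0, 10.0, np.inf])         # thresholds, G = 5
mid = np.append((c[:-1] + c[1:])[:-1] / 2.0,             # class midpoints and
                c[-2] + (c[-2] - c[-3]) / 2.0)           # the arbitrary upper end
xb, tau2, lam, phi = 1.2, 0.04, 8.0, 0.25                # illustrative values
N_i, n_i, B = 1000, 100, 200

def estimate(y):                                         # naive estimator
    return np.sum(mid * y) / np.sum(y)

sq_err = np.empty(B)
for b in range(B):
    b_i = rng.normal(0.0, np.sqrt(tau2))                 # bootstrap population
    s2 = (lam * phi / 2.0) / rng.gamma(lam / 2.0 + 1.0)  # from the fitted model
    z = np.exp(xb + b_i + rng.normal(0.0, np.sqrt(s2), size=N_i))
    y = np.histogram(z[:n_i], bins=c)[0]                 # first n_i units sampled
    sq_err[b] = (estimate(y) - z.mean()) ** 2            # vs true bootstrap mean
print("bootstrap RMSE (naive):", np.sqrt(sq_err.mean()))
\end{verbatim}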
\begin{figure}[H] \center \begin{tabular}{cc} \includegraphics[width=7.5cm]{fig2_1.png} & \includegraphics[width=7.5cm]{fig2_2.png} \\ \includegraphics[width=7.5cm]{fig2_3.png} & \includegraphics[width=7.5cm]{fig2_4.png} \\ \end{tabular} \caption{EB and naive estimates of areal means} \label{fig:real_mean} \end{figure} \begin{figure}[H] \center \includegraphics[scale=0.65]{fig2_a.pdf} \caption{Boxplots of EB and naive estimates of areal means for the sampled areas} \label{fig:real_Box} \end{figure} \begin{figure}[H] \center \includegraphics[scale=0.65]{fig9_a.pdf}\\ \includegraphics[scale=0.65]{fig9_b.pdf} \caption{Boxplots of naive estimates of areal means under different values of $\bar{c}_G$} \label{fig:real_Box_d} \end{figure} \begin{figure}[H] \center \begin{tabular}{cc} \includegraphics[width=7.5cm]{fig3_1a.png}& \includegraphics[width=7.5cm]{fig3_1b.png}\\ \includegraphics[width=7.5cm]{fig3_1c.png}& \includegraphics[width=7.5cm]{fig3_1d.png}\\ \end{tabular} \caption{Estimates of RMSE of the naive estimators and EB estimators for areal means} \label{fig:real_RMSE} \end{figure} \begin{figure}[H] \center \begin{tabular}{cc} \includegraphics[width=7.5cm]{fig2_5.png}& \includegraphics[width=7.5cm]{fig2_6.png}\\ \includegraphics[width=7.5cm]{fig3_5.png}& \includegraphics[width=7.5cm]{fig3_6.png} \end{tabular} \caption{EB estimates and estimates of RMSE (multiplied by $1000$) for Gini coefficients} \label{fig:real_Gini} \end{figure} \section{Simulation Studies} \label{sec:sim} \subsection{Model-based simulation} In this section, the proposed approach is illustrated using simulated data. The first simulation is a model-based simulation where \eqref{eqn:lmm} is the data generating process. The true parameter values are set to the estimates obtained in the real application in Section \ref{sec:income}, and we use the same values of the auxiliary variables ${\text{\boldmath $x$}}_i$'s as in the real data for $m=100$ areas randomly chosen out of the 1265 in-sample areas of HLS. Based on this setting, we generate $R=100$ replications of the $z_{ij}$'s with $N_i = 1000$ for all $i$ and calculate the true mean ${\overline z}_i$ and Gini coefficient $\mathrm{GINI}({\text{\boldmath $z$}}_i)$. For each replication, we obtain a frequency distribution for each area from the simulated data $\{ z_{i1},\dots,z_{i,n_i} \}$. The two cases of the number of groups, $G=5$ and $9$, with the same thresholds as HLS are considered. The sample sizes are set as $n_i = 10 \ (i=1,\dots,20), \ n_i = 50 \ (i=21,\dots,40), \ n_i = 100 \ (i=41,\dots,60), \ n_i = 150 \ ( i=61,\dots,80 )$, and $n_i = 200 \ (i=81,\dots,100)$. The true parameter values and the auxiliary variables ${\text{\boldmath $x$}}_i$'s for $i=1,\dots, m$ are fixed for all replications. The settings for the MCEM algorithm and the Gibbs sampler are the same as in the real data analysis in Section \ref{sec:income}. In order to demonstrate the advantage of the present approach, the naive estimator $\widehat{{\overline z}}_i^\mathrm{naive}$ in \eqref{eqn:naive} is considered again. The performance of the methods is compared by the simulated relative root MSE (RRMSE) over $R=100$ replications of the data.
The simulated RRMSE is calculated as \begin{equation*} {\rm RRMSE}(\widehat{{\overline z}}_i) = \sqrt{{1 \over R}\sum_{r=1}^R \left( { \widehat{{\overline z}}_i^{(r)} - {\overline z}_i^{(r)} \over {\overline z}_i^{(r)} } \right)^2}, \end{equation*} where $\widehat{{\overline z}}_i^{(r)}$ is the EB or naive estimate and ${\overline z}_i^{(r)}$ is the true mean in the $r$th replication. Figure~\ref{fig:sim1} shows the result of the simulation. Noting that the horizontal axis represents the area index, the figure shows that the RRMSE decreases as the sample size increases, both for the EB estimator and for the naive estimator. In terms of RRMSE, the EB estimator improves on the naive estimator for all the areas. It is interesting to see that the improvement in the RRMSE is much larger for the areas with small sample sizes, especially for the areas with $n_i=10$ and $50$. This is because the EB estimator borrows strength from the other areas even when the area sample size is small, while the naive estimator only uses the information of the target area. It is also observed that the EB estimator for $G=9$ resulted in better performance than for $G=5$ for most of the areas. This is a natural result, because the frequency distributions based on $G=9$ contain more information on the distribution of the latent $z_{ij}$'s. \begin{figure}[H] \center \includegraphics[width=9cm]{fig_sim1.pdf} \caption{RRMSE of EB estimator and naive estimator for the model-based simulation} \label{fig:sim1} \end{figure} \subsection{Design-based simulation} The second simulation is a design-based simulation in which \eqref{eqn:lmm} is not assumed to be the data generating process. For this simulation, we use the Spanish income dataset included in the R package \texttt{sae} developed by \citet{MM18}. This dataset contains synthetic data on the income and some related information of 17199 households, including the province where the household is located and the gender of the head of the household. There are 52 provinces in Spain, and for each province the dataset is divided based on the gender of the head of the household. Therefore, this dataset consists of $m=104$ small domains. We generate the datasets for this design-based simulation study following the technique used by \citet{CSCT12}. First, a synthetic population is created for each domain by resampling with replacement from the original dataset, and the `true' population mean is calculated for each domain. Then 100 independent samples are obtained from the fixed synthetic populations by simple random sampling without replacement, and a frequency distribution is formed for each domain. As the auxiliary variables, we use ${\text{\boldmath $x$}}_i = (1, \mathrm{NAT}_i, \mathrm{WA}_i, \mathrm{LABOR}_i)^\top$, where $\mathrm{NAT}_i$ is the proportion of the people holding Spanish nationality in the $i$th domain, $\mathrm{WA}_i$ is the proportion of the people who are of working age in the $i$th domain, and $\mathrm{LABOR}_i$ is the proportion of the people who are employed in the $i$th domain. For the transformation in \eqref{eqn:lmm}, since negative income observations are present for some households in this dataset, the following modified Box--Cox transformation is used: $$ h_\kappa(z) = { (z-C)^\kappa - 1 \over \kappa }, $$ where $C$ is set to the minimum income of the synthetic population minus $0.1$, so that $z-C$ is always positive. The same settings for the MCEM algorithm and the Gibbs sampler as in the previous sections are used.
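For concreteness, here is a minimal Python sketch of this shifted Box--Cox transform and its inverse (the function names are ours for illustration, not taken from the \texttt{sae} package):

\begin{verbatim}
import numpy as np

def shifted_boxcox(z, kappa, C):
    # h_kappa(z) = ((z - C)**kappa - 1) / kappa; C below the minimum
    # income ensures z - C > 0 even for negative incomes; kappa -> 0
    # recovers log(z - C).
    z = np.asarray(z, dtype=float)
    if abs(kappa) < 1e-12:
        return np.log(z - C)
    return ((z - C) ** kappa - 1.0) / kappa

def shifted_boxcox_inv(h, kappa, C):
    # inverse map, back from the latent (normal) scale to incomes
    h = np.asarray(h, dtype=float)
    if abs(kappa) < 1e-12:
        return np.exp(h) + C
    return (kappa * h + 1.0) ** (1.0 / kappa) + C

incomes = np.array([-250.0, 1200.0, 5300.0])
C = incomes.min() - 0.1            # as in the text
h = shifted_boxcox(incomes, kappa=0.3, C=C)
assert np.allclose(shifted_boxcox_inv(h, 0.3, C), incomes)
\end{verbatim}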
As in the previous sections, the performance of the proposed EB estimator and the naive estimator is compared. Figure~\ref{fig:sim2} shows the RRMSE for the EB and naive estimators. The figure shows that the EB estimator resulted in better performance than the naive estimator in terms of RRMSE for most domains. In addition, the degree of improvement is larger in the case of $G=5$, where the frequency distributions contain less information. Since this simulation setting does not assume a statistical model, an important implication is that the proposed EB estimator performs well even when the statistical model is misspecified. This design-based simulation can thus be seen as empirical evidence of the usefulness of our proposed method. \begin{figure}[H] \center \begin{tabular}{cc} \includegraphics[width = 7.5cm]{fig_design5.pdf}& \includegraphics[width = 7.5cm]{fig_design9.pdf} \end{tabular} \caption{RRMSE of EB estimator and naive estimator based on the design-based simulation} \label{fig:sim2} \end{figure} \section{Conclusion} \label{sec:concl} We have proposed a new model-based small area estimation method for grouped data, where only frequency distributions of the quantity of interest are observed at the area level. In the proposed model, the observed frequencies are linked with the area-level auxiliary variables through unit-level latent variables, which are modeled in a fashion similar to the nested error regression model. The model parameters are easily estimated using the Monte Carlo EM algorithm based on efficient importance sampling, and the EB estimates of the small area parameters are calculated from the output of the Gibbs sampler. Through the application to real income data from Japan and through simulation studies, we have shown that the proposed EB estimator performs better than the naive estimator. Because our proposed model is formulated in a general form, it can be applied to a wide variety of datasets. However, when focusing on income data, and especially on the Gini coefficient or other poverty indicators, the probability distribution assumed by the small area model should provide a good fit to the income distribution and allow a straightforward interpretation. The present model, which assumes the normal distribution after a transformation, may be limited in this sense. An extension of our model to parametric income distributions is left for future studies. \paragraph{ Acknowledgments.} This work is partially supported by JSPS KAKENHI (\#19K13667, \#18K12754). The computational results were obtained by using Ox version 6.21 \citep{D07}.
\section{Introduction} \label{sec:intro} \setcounter{equation}{0} The muon $g-2$ measurements at the BNL and Fermilab experiments have had a great impact on the study of particle physics. The combined value of the muon anomalous magnetic moment from these two experiments is \cite{Bennett:2002jb,Bennett:2004pv,Bennett:2006fi,Abi:2021gix} \begin{align} a_\mu^{\rm (exp)} = (11\,659\,206.1 \pm 4.1 ) \times 10^{-10}. \label{amu(exp)} \end{align} In contrast, the standard model (SM) prediction is \cite{Aoyama:2020ynm}\footnote {For more details about the estimation of the SM prediction, see Refs.\ \cite{Aoyama:2012wk, Aoyama:2019ryr, Czarnecki:2002nt, Gnendiger:2013pva, Davier:2017zfy, Keshavarzi:2018mgv, Colangelo:2018mtw, Hoferichter:2019mqg, Davier:2019can, Keshavarzi:2019abf, Kurz:2014wya, Melnikov:2003xd, Masjuan:2017tvw, Colangelo:2017fiz, Hoferichter:2018kwz, Gerardin:2019vio, Bijnens:2019ghy, Colangelo:2019uex, Blum:2019ugy, Colangelo:2014qya}.} \begin{align} a_\mu^{\rm (SM)} = ( 11\,659\,181.0 \pm 4.3 ) \times 10^{-10}. \label{amu(SM)} \end{align} These values give \begin{align} \Delta a_{\mu} \equiv a_\mu^{\rm (exp)} - a_\mu^{\rm (SM)} = ( 25.1 \pm 5.9) \times 10^{-10}, \label{damu} \end{align} which corresponds to a $4.2\sigma$ discrepancy between the experimentally measured value of $a_\mu$ and the SM prediction (the so-called muon $g-2$ anomaly). The discrepancy strongly suggests the existence of physics beyond the SM (BSM) as the origin of the muon $g-2$ anomaly. One attractive BSM candidate that can solve the muon $g-2$ anomaly is supersymmetry (SUSY). In particular, in the minimal SUSY SM (MSSM), the smuon-neutralino and sneutrino-chargino diagrams may contribute significantly to the muon anomalous magnetic moment \cite{Lopez:1993vi,Chattopadhyay:1995ae,Moroi:1995yh}; the size of the SUSY contribution can be as large as $\Delta a_{\mu}$, solving the muon $g-2$ anomaly (for recent studies of the MSSM contribution to the muon $g-2$, see, for example, \cite{Endo:2021zal, Chakraborti:2021dli, Han:2021ify, VanBeekveld:2021tgn, Ahmed:2021htr, Cox:2021nbo, Wang:2021bcx, Baum:2021qzx, Yin:2021mls, Iwamoto:2021aaf, Athron:2021iuf, Shafi:2021jcg, Aboubrahim:2021xfi, Chakraborti:2021bmv, Baer:2021aax, Aboubrahim:2021phn, Li:2021pnt, Jeong:2021qey, Ellis:2021zmg, Nakai:2021mha, Forster:2021vyz, Ellis:2021vpp, Chakraborti:2021mbr, Gomez:2022qrb, Chakraborti:2022vds, Agashe:2022uih}). Because the superparticles are in the loops, the SUSY contribution to the muon $g-2$ is suppressed as the superparticles become heavy. Thus, in order to explain the muon $g-2$ anomaly, the masses of (some of) the superparticles are bounded from above. A detailed understanding of this upper bound is important in order to verify the SUSY interpretation of the muon $g-2$ anomaly with ongoing and future collider experiments \cite{Endo:2013lva, Endo:2013xka, Endo:2022qnm}. The muon $g-2$ anomaly can be explained in various parameter regions of the MSSM. If the masses of all the superparticles are comparable, they are required to be of $O(100)\ {\rm GeV}$. Then, the muon $g-2$ anomaly indicates that superparticles (in particular, sleptons, charginos, and neutralinos) are important targets of ongoing and future collider experiments. The SUSY contribution to the muon $g-2$ can, however, be sizable even if the superparticles are much heavier.
This happens when the Higgsino mass parameter ({\it i.e.}, the so-called $\mu$ parameter) is very large, so that the enhanced smuon-smuon-Higgs trilinear scalar coupling amplifies the contribution to the muon $g-2$. Such a trilinear coupling is, however, dangerous because it may make the electroweak (EW) vacuum unstable \cite{Frere:1983ag, Gunion:1987qv, Casas:1995pd, Kusenko:1996jn}. In this letter, we study the stability of the EW vacuum, paying attention to the parameter region of the MSSM where the muon $g-2$ anomaly is solved (or alleviated) by the SUSY contribution. Requiring that the SUSY contribution to the muon anomalous magnetic moment, denoted as $a_{\mu}^{\rm (SUSY)}$, be large enough to solve the muon $g-2$ anomaly, the smuon masses are bounded from above by the observed longevity of the EW vacuum. Refs.\ \cite{Endo:2013lva, Endo:2021zal} studied the vacuum stability bound using a tree-level analysis of the decay rate for the case where the SUSY breaking mass parameters of the left- and right-handed sleptons are degenerate. The tree-level analysis, however, has several inaccuracies. In particular, it cannot fix the dimensionful prefactor of the decay rate and suffers from renormalization scale uncertainty. These difficulties cannot be avoided without performing the calculation at the one-loop level. We study the stability of the EW vacuum using the state-of-the-art method to calculate the decay rate of the false vacuum \cite{Endo:2017gal, Endo:2017tsz, Chigusa:2020jbn}, with which a full one-loop calculation of the decay rate is performed. We also consider a wide range of the slepton mass parameters. Then, based on the accurate estimation of the decay rate, we derive an upper bound on the lightest smuon mass required to explain the muon $g-2$ anomaly. This letter is organized as follows. In Section \ref{sec:mssm}, we briefly overview the SUSY contribution to the muon anomalous magnetic moment and discuss the importance of the stability of the EW vacuum. In Section \ref{sec:eft}, we introduce the effective field theory (EFT) used in our analysis. In Section \ref{sec:vacuumdecay}, we explain our procedure to calculate the decay rate of the EW vacuum. Our main results are given in Section \ref{sec:results}. Section \ref{sec:conclusions} is devoted to conclusions and discussion. \section{MSSM and muon $g-2$} \label{sec:mssm} \setcounter{equation}{0} We first overview the model we consider, which is the low energy effective theory obtained from the MSSM. (For a review of the MSSM, see, for example, Ref.\ \cite{Martin:1997ns}.) We also explain why the stability of the EW vacuum is important in the study of the SUSY contribution to the muon $g-2$. Since $a_{\mu}^{\rm (SUSY)}$ is enhanced in the parameter region in which $\tan\beta$ is large \cite{Moroi:1995yh}, we concentrate on the large $\tan\beta$ case to obtain a conservative bound on the mass scale of the smuons. Importantly, $\tan\beta$ cannot be arbitrarily large if we require perturbativity. In particular, in the grand unified theory (GUT), which is one of the strong motivations to consider the MSSM, the coupling constants (in particular, the bottom Yukawa coupling constant) should remain perturbative up to the GUT scale, which requires $\tan\beta \lesssim 50$.
In order to study the behavior of $a_{\mu}^{\rm (SUSY)}$ in the large $\tan\beta$ case, it is instructive to use the so-called mass insertion approximation, in which $a_{\mu}^{\rm (SUSY)}$ is estimated in the gauge-eigenstate basis and the interactions proportional to the Higgs vacuum expectation values (VEVs) are treated as perturbations. (In our following numerical calculation, $a_{\mu}^{\rm (SUSY)}$ is estimated more precisely by using the basis in which the sleptons, charginos, and neutralinos are mass eigenstates, as we will explain.) In Fig.\ \ref{fig:feyndiags}, we show the one-loop diagrams which may dominate the SUSY contribution to the muon $g-2$ in the large $\tan\beta$ limit. Because the superparticles are in the loop, $a_{\mu}^{\rm (SUSY)}$ is suppressed as the superparticles become heavier. For the case where the masses of all the superparticles are comparable, for example, the SUSY contribution to the muon anomalous magnetic moment is approximately given by $|a_\mu^{\rm (SUSY)}|\simeq \frac{5g_2^2}{192\pi^2} \frac{m_\mu^2}{m_{\rm SUSY}^2}\tan\beta$, where $g_2$ is the gauge coupling constant of $SU(2)_L$, $m_\mu$ is the muon mass, and $m_{\rm SUSY}$ is the mass scale of the superparticles. (Here, the contributions of the diagrams that contain the Bino are neglected because they are subdominant.) Taking $\tan\beta\sim 50$, which is the approximate maximum possible value of $\tan\beta$ for perturbativity up to the GUT scale, the superparticles should be lighter than $\sim 700\ {\rm GeV}$ in order to make the total muon anomalous magnetic moment consistent with the observed value at the $2\sigma$ level. \begin{figure}[t] \centering \includegraphics[width=0.65\linewidth]{FeynmanDiags.pdf} \caption{One-loop Feynman diagrams, enhanced in the large $\tan\beta$ limit, giving rise to the SUSY contribution to the muon $g-2$. Here, the mass insertion approximation is adopted. The black and white blobs are two-point interactions induced by the VEVs of the Higgs bosons.} \label{fig:feyndiags} \end{figure} Such an upper bound is significantly altered by the Bino-smuon diagram (Fig.\ \ref{fig:feyndiags}~(a)). The other diagrams ({\it i.e.}, Fig.\ \ref{fig:feyndiags}~(b) $-$ (e)) have slepton, gaugino, and Higgsino propagators in the loop, and hence their contributions are suppressed when any of these particles is heavy. On the contrary, the Bino-smuon diagram has only the smuon and Bino propagators in the loop, and its contribution is approximately proportional to the Higgsino mass parameter $\mu$. Thus, with a very large $\mu$ parameter, the contribution of the Bino-smuon diagram can be large enough to cure the muon $g-2$ anomaly even if the smuon and/or Bino are much heavier than the upper bound estimated above. In the following, we study the upper bound on the masses of superparticles in the light of the muon $g-2$ anomaly, paying particular attention to the contribution of the Bino-smuon diagram. In the parameter region where the Bino-smuon diagram gives the dominant contribution, a large $\mu$ parameter enhances the smuon-smuon-Higgs trilinear coupling. Such a large trilinear scalar coupling is dangerous because it may destabilize the EW vacuum. Consequently, the lifetime of the EW vacuum may become shorter than the present cosmic age \cite{ParticleDataGroup:2020ssz}: \begin{align} t_{\rm now} \simeq 13.8\ {\rm Gyr}. \label{t_now} \end{align} A parameter region predicting too short a lifetime of the EW vacuum is excluded.
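As a quick check of the numbers quoted above, the following Python sketch reproduces both the $4.2\sigma$ significance of Eq.~\eqref{damu} and the $\sim 700\ {\rm GeV}$ estimate; the value $g_2 \simeq 0.65$ at the EW scale is an assumed input, not a number quoted in this letter.

\begin{verbatim}
import numpy as np

# Significance of the discrepancy, Eqs. (amu(exp))-(damu):
a_exp, s_exp = 11659206.1e-10, 4.1e-10
a_sm,  s_sm  = 11659181.0e-10, 4.3e-10
d_amu = a_exp - a_sm                 # 25.1e-10
sigma = np.hypot(s_exp, s_sm)        # ~5.9e-10
print(d_amu / sigma)                 # ~4.2 (sigma)

# Upper bound on a common superparticle mass scale m_SUSY from
#   |a_SUSY| ~ 5 g2^2/(192 pi^2) * m_mu^2/m_SUSY^2 * tan(beta),
# requiring a_SUSY > d_amu - 2*sigma (2-sigma consistency):
g2, m_mu, tb = 0.65, 0.10566, 50.0   # g2 ~ 0.65 assumed; masses in GeV
target = d_amu - 2.0 * sigma         # ~13.3e-10
m_susy = np.sqrt(5 * g2**2 * m_mu**2 * tb / (192 * np.pi**2 * target))
print(m_susy)                        # ~690 GeV, i.e. the ~700 GeV above
\end{verbatim}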
The purpose of this letter is to derive an upper bound on the smuon mass under the requirement that the muon $g-2$ anomaly be solved (or relaxed) by the SUSY contribution. We are interested in the case where the $\mu$ parameter is large so that the Bino-smuon diagram dominates $a_\mu^{\rm (SUSY)}$; hereafter, we consider the case where $\mu$ is much larger than the Bino and smuon masses. A large value of $\mu$ implies heavy Higgsinos. In addition, the stops are expected to be relatively heavy to push up the lightest Higgs mass to the observed value, {\it i.e.}, about $125\ {\rm GeV}$, through radiative corrections \cite{Okada:1990vk, Okada:1990gg, Ellis:1990nz, Haber:1990aw}. By contrast, in order to enhance $a_\mu^{\rm (SUSY)}$, the slepton and Bino masses should be close to the EW scale. Based on these considerations, in this letter, we consider the case where the Bino $\tilde{B}$ and the smuons are relatively light among the MSSM particles. These particles are assumed to have EW-scale masses comparable to the top pole mass $M_t$. The other MSSM constituents are assumed to have heavier masses, characterized by a single scale $M_S$. (For simplicity, the masses of gauginos other than $\tilde{B}$ are assumed to be of $O(M_S)$.) We assume that there exists a significant hierarchy between $M_t$ and $M_S$, so that the superparticles with masses of $\sim M_S$ do not affect the physics of our interest. A comment on the case where some of the other SUSY particles are as light as the smuons will be given at the end of this letter. A large value of $\mu$ suggests relatively large values of the soft SUSY breaking Higgs mass parameters for a viable EW symmetry breaking; the light Higgs mass ({\it i.e.}, the mass of the SM-like Higgs boson) is realized by the cancellation between the contributions of the $\mu$ and soft SUSY breaking parameters. The heavier Higgs doublet is expected to have a mass of $O(M_S)$, comparable to $\mu$. In such a case, the SM-like Higgs doublet, denoted as $H$, and the heavier doublet, $H'$, are given by linear combinations of the up- and down-type Higgs bosons, denoted as $H_u$ and $H_d$, respectively, as \begin{align} \left( \begin{array}{c} H \\ H' \end{array} \right) = \left( \begin{array}{cc} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{array} \right) \left( \begin{array}{c} H_d\\ H_u \end{array} \right), \end{align} where $\tan\beta$ is the ratio of the VEVs of the up- and down-type Higgs bosons. In the case of our interest, the mass spectrum around the EW scale includes the second-generation sleptons and the Bino $\tilde{B}$, as well as the SM particles. Hereafter, the second-generation sleptons in the gauge-eigenstate basis are denoted as $\sle{L}$ and $\smuon{R}$; $\sle{L}$ is an $SU(2)_L$ doublet with hypercharge $\frac{1}{2}$, which is decomposed as \begin{align} \sle{L} = \begin{pmatrix} \tilde{\nu}_{L} \\ \smuon{L} \end{pmatrix}, \end{align} while $\smuon{R}$ is an $SU(2)_L$ singlet with hypercharge $-1$. \section{Effective field theory analysis} \label{sec:eft} \setcounter{equation}{0} We are interested in the case where there exists a hierarchy in the mass spectrum of the MSSM particles. To deal with the hierarchy, we resort to the EFT approach and solve the renormalization group (RG) equations with proper boundary conditions to evaluate the EFT coupling constants. Hereafter, we assume that the effects of possible CP-violating phases are negligible. We adopt $M_t$ and $M_S$ as matching scales.
For renormalization scales $Q < M_t$, we use the QCD+QED effective theory, which contains the SM gauge couplings and fermion masses as parameters. For $M_t < Q < M_S$, we consider an EFT with the Bino and smuons, as described below. At $Q=M_S$, the EFT is matched to the full MSSM, which imposes relations among the EFT couplings. We choose $M_S$ to be close to the Higgsino mass. The Lagrangian of the EFT, which is relevant for the calculation of the decay rate of the EW vacuum and the muon $g-2$, is given by \begin{align} \mathcal{L} = \mathcal{L}_{\mathrm{SM}} + \Delta \mathcal{L}_{\mathrm{kin}} + \Delta \mathcal{L}_{\mathrm{mass}} + \Delta \mathcal{L}_{\mathrm{Yukawa}} - V, \end{align} where $\mathcal{L}_{\mathrm{SM}}$ is the SM Lagrangian without the Higgs potential, and the additional kinetic terms, mass terms, and Yukawa couplings are described by \begin{align} \Delta \mathcal{L}_{\mathrm{kin}} =& \, | D_\mu \sle{L} |^2 + | D_\mu \smuon{R}|^2 - i \tilde{B} \sigma^\mu \partial_\mu \tilde{B}^\dagger, \\ \Delta \mathcal{L}_{\mathrm{mass}} =& \, - \frac{1}{2} M_1 \tilde{B}\tilde{B} + \mathrm{h.c.}, \\ \Delta \mathcal{L}_{\mathrm{Yukawa}} =& \, Y_{L} \sle{L}^\dagger \ell_{L} \tilde{B} + Y_{R} \smuon{R}^\dagger \mu_R \tilde{B}^\dagger + \mathrm{h.c.}, \end{align} where $\ell_L$ and $\mu_R$ are the second-generation left-handed lepton doublet and right-handed lepton, respectively. We use the two-component Weyl notation for fermions. The scalar potential $V$ is given by \begin{align} V = &\, V_2 + V_3 + V_4, \label{Vtot} \end{align} with \begin{align} V_2 = &\, m_H^2 |H|^2 + m_{L}^2\, | \sle{L} |^2 + m_{R}^2\, | \smuon{R} |^2, \label{eq:V2} \\ V_3 = &\, - T H^\dagger \sle{L} \smuon{R}^\dagger + \text{h.c.},\\ V_4 = &\, \lambda_H |H|^4 + \lam{HL} |H|^2 | \sle{L} |^2 + \lam{HR} |H|^2 | \smuon{R} |^2 + \kappa ( H^\dagger \sle{L} ) ( \sle{L}^\dagger H ) \nonumber \\ &\, + \lam{L} | \sle{L} |^4 + \lam{R} | \smuon{R} |^4 + \lam{LR} | \sle{L}| ^2 | \smuon{R} |^2, \label{eq:V4} \end{align} where $T$ is the trilinear scalar coupling constant. Next, we describe the matching conditions of the coupling constants at the threshold scales. All the SM parameters, including the Higgs quartic coupling ${\lambda}_H^{\rm (SM)}$ and the mass squared parameter $m_H^{2{\rm (SM)}}$, are determined at $Q=M_t$. Importantly, the top Yukawa coupling, the gauge couplings, the Higgs quartic coupling, and the Higgs mass parameter are subject to possibly large weak-scale threshold corrections. We use the results of \cite{Buttazzo:2013uya} to fix these parameters, using the physical input parameters $\alpha_3(M_Z)=0.1179$, $M_t=172.76\,\mathrm{GeV}$, $M_W=80.379\,\mathrm{GeV}$, and $M_h=125.25\,\mathrm{GeV}$ \cite{ParticleDataGroup:2020ssz}. As for the light fermion couplings, we calculate the running of their masses with the one-loop QED and three-loop QCD beta functions \cite{Gorishnii:1990zu, Tarasov:1980au, Gorishnii:1983zi} to determine the corresponding Yukawa couplings at $Q=M_t$. For the other parameters, we mostly adopt the tree-level matching between the SM and the EFT at $Q=M_t$, but take into account some of the one-loop corrections which can be sizable.
The corrections to the Higgs quartic coupling and the mass term are given by \begin{align} \lambda_H &= \lambda_H^{\rm (SM)} + \Delta \lambda_H, \label{eq:dellH} \\ m_H^2 &= m_H^{2{\rm (SM)}} + \Delta m_H^2, \label{eq:delmHSq} \end{align} with \begin{align} (16\pi^2) \Delta \lambda_H =& \left( \lam{HL}^2 + \lam{HL} \kappa + \frac{1}{2} \kappa^2 \right) B_0(m_{L}^2, m_{L}^2) + \frac{1}{2} \lam{HR}^2 B_0(m_{R}^2, m_{R}^2) \notag \\ &+ (\lam{HL}+\kappa) T^2 C_0 (m_{L}^2, m_{L}^2, m_{R}^2) + \lam{HR} T^2 C_0 (m_{R}^2, m_{R}^2, m_{L}^2) \notag \\ &+ \frac{1}{2} T^4 D_0(m_{L}^2, m_{R}^2, m_{L}^2, m_{R}^2),\\ (16\pi^2) \Delta m_H^2 =& \left( 2\lam{HL} + \kappa \right) A_0(m_{L}^2) + \lam{HR} A_0(m_{R}^2) + T^2 B_0 (m_{L}^2, m_{R}^2), \end{align} where $A_0$, $B_0$, $C_0$, and $D_0$ are the Passarino-Veltman one-, two-, three-, and four-point functions without momentum inflow, respectively \cite{Passarino:1978jh}. In determining the muon Yukawa coupling in the EFT, we also take into account the one-loop correction \cite{Marchetti:2008hw, Girrbach:2009uy} because it may significantly affect the vacuum decay rate and $a_{\mu}^{\rm (SUSY)}$. The correction $\Delta y_\mu$ is given by \begin{align} (16\pi^2) \Delta y_{\mu} = Y_{L} Y_{R} T M_1 J(M_1^2, m_{R}^2, m_{L}^2), \label{dymu} \end{align} with \begin{align} J(a,b,c) \equiv -\frac {ab\ln(a/b) + bc\ln(b/c) + ca\ln(c/a)} {(a-b)(b-c)(c-a)}. \label{eq:I} \end{align} The muon Yukawa coupling constant in the EFT, $y_\mu$, and that in the SM, $y_\mu^{\rm (SM)}$, are related as $y_\mu=y_\mu^{\rm (SM)}+\Delta y_{\mu}$. In the present case, the sign of $\Delta y_{\mu}$ is correlated with that of $a_\mu^{\rm (SUSY)}$ and is negative. These corrections can be sizable due to the hierarchy of the scales $M_1,\,m_L,\,m_R \ll M_S$. Concerning the trilinear coupling $T$, its value is determined so that a given input value of $a_\mu^{\rm (SUSY)}$ is realized. In the MSSM, the quartic part of the scalar potential is completely determined by the gauge and Yukawa couplings. Accordingly, we impose the matching conditions on the EFT couplings at the matching scale $Q=M_S$. At the tree level, these conditions are given by \begin{align} Y_L = \frac{1}{\sqrt{2}} g_Y = \sqrt{\frac{3}{10}} g_1, ~~~ Y_R = -\sqrt{2} g_Y = -\sqrt{\frac{6}{5}} g_1, \end{align} where $g_Y$ and $g_1$ are the $U(1)_Y$ gauge coupling constant and its $SU(5)$-normalized value, respectively, and \begin{align} \lambda_R &= \frac{3}{10} g_1^2,\\ \lambda_{L} &= \frac{1}{8} g_2^2 + \frac{3}{40} g_1^2,\\ \lambda_{LR} &= \frac{y_\mu^2}{\cos^2 \beta} - \frac{3}{10} g_1^2, \label{eq:lLR} \\ \lambda_{HR} &= y_\mu^2 - \frac{3}{10} g_1^2 \cos 2\beta, \label{eq:lHRm} \\ \lambda_{HL} &= \left( \frac{1}{4} g_2^2 + \frac{3}{20} g_1^2 \right) \cos 2\beta,\\ \kappa &= y_\mu^2 - \frac{1}{2} g_2^2 \cos 2\beta. \label{eq:lkappam} \end{align} The $T$-parameter is related to the MSSM parameters as \begin{align} T &= y_\mu \mu \tan\beta + A_\mu \cos\beta, \label{T-param} \end{align} with $A_\mu$ being the soft SUSY breaking trilinear scalar coupling of the smuon. For simplicity, we assume that the SUSY breaking contribution to the scalar trilinear coupling, $T$, is negligible. We expect that this assumption is valid when $\mu\tan\beta$ is much larger than the typical smuon masses, which is the case in our following discussion.
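A practical note on the loop function $J$ of Eq.~\eqref{eq:I}: its closed form is a $0/0$ expression when two or three arguments coincide, with the smooth limit $J(a,a,a)=1/(2a)$. The Python sketch below (our own illustration, not taken from any public code) regulates the degenerate cases by slightly splitting the arguments:

\begin{verbatim}
import numpy as np

def J(a, b, c):
    # J(a,b,c) = -[ab ln(a/b) + bc ln(b/c) + ca ln(c/a)]
    #            / [(a-b)(b-c)(c-a)]
    # Split (nearly) degenerate arguments by tiny distinct offsets;
    # J is smooth, so the induced error is negligible for a sketch.
    b = b * (1.0 + 1.0e-4)
    c = c * (1.0 + 2.3e-4)
    num = a * b * np.log(a / b) + b * c * np.log(b / c) \
        + c * a * np.log(c / a)
    return -num / ((a - b) * (b - c) * (c - a))

print(J(1.0, 1.0, 1.0))   # ~0.5, the degenerate limit 1/(2a)
print(J(1.0, 1.1, 1.2))   # ~0.455
\end{verbatim}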
Notice that, as we see below, the SUSY contribution to the muon $g-2$ and the decay rate of the EW vacuum both depend on the MSSM parameters through the $T$-parameter. Thus, the upper bound on the smuon mass, which will be derived in the following sections, will remain almost unchanged even if the effect of $A_\mu$ on $T$ is sizable. Notice that we use Eq.\ \eqref{T-param} only to evaluate $\mu$. Although we determine $\lambda_H$ at $Q=M_t$, there is also a SUSY relation between $\lambda_H$ and the other couplings. Considering only the stop contribution to the threshold correction at $Q=M_S$, we obtain the one-loop matching condition \cite{Bagnaschi:2014rsa}, \begin{align} \lambda_H = \left( \frac{1}{8} g_2^2 + \frac{3}{40} g_1^2 \right) \cos^2 2\beta + \delta \lambda_H, \label{eq:lambdaH_matching} \end{align} with \begin{align} (16\pi^2) \delta \lambda_H \simeq &\, \frac{3}{2} y_t^2 \left[ y_t^2 + \left( \frac{1}{2} g_2^2 - \frac{1}{10} g_1^2 \right) \cos2 \beta \right] \ln \frac{m_{Q3}^2}{Q^2} \notag \\ &\, + \frac{3}{2} y_t^2 \left( y_t^2 + \frac{2}{5} g_1^2 \cos2 \beta \right) \ln \frac{m_{U3}^2}{Q^2} \notag \\ &\, + \frac{\cos^2 2\beta}{200} \left[ (25g_2^4 + g_1^4) \ln \frac{m_{Q3}^2}{Q^2} + 8 g_1^4 \ln \frac{m_{U3}^2}{Q^2} + 2 g_1^4 \ln \frac{m_{D3}^2}{Q^2} \right], \end{align} where $y_t$ is the top-quark Yukawa coupling constant, while $m_{Q3}$, $m_{U3}$, and $m_{D3}$ are the mass parameters of the third generation left-handed squark, right-handed up-type squark, and right-handed down-type squark, respectively. For simplicity, we neglect the threshold corrections to the parameters of the superpotential, {\it i.e.}, the top Yukawa coupling and the gauge couplings, and use their values in the EFT at $Q=M_S$ to evaluate the size of $\delta \lambda_H$. Once the value of $\lambda_H$ at the matching scale $M_S$ is obtained, we can solve \eqref{eq:lambdaH_matching} for the stop mass $m_{\tilde{t}}$, assuming universal masses $m_{\tilde{t}} \equiv m_{Q3} = m_{U3} = m_{D3}$. Requiring that the observed Higgs mass be realized, we have checked that the difference between $|\mu|$ and $m_{\tilde{t}}$ is within one or two orders of magnitude in the region where the decay rate of the EW vacuum is small enough.\footnote {In some cases, $m_{\tilde{t}}$ becomes one or two orders of magnitude smaller than $|\mu|$, which may induce a color-breaking minimum in which the stops acquire VEVs. We do not discuss the instability due to such a color-breaking minimum because it depends on various fields and parameters that are not included in our EFT.} For $M_t < Q < M_S$, we solve the RG equations of the EFT. We use the two-loop RG equations \cite{Luo:2002ey} augmented by some important three-loop contributions calculated in \cite{Buttazzo:2013uya} for the SM-like couplings. On the other hand, the Bino and smuon contributions to the beta functions of the SM-like couplings, as well as the beta functions of the couplings specific to the EFT, are calculated at the one-loop level. Since all the SM parameters are fixed at $Q=M_t$ and below, while the other couplings are determined at $Q=M_S$, we iteratively solve the RG evolution in $M_t < Q < M_S$ to obtain consistent solutions. Next, we explain how we calculate the SUSY contribution to the muon $g-2$. Because we are interested in the case where the Bino and the smuons are much lighter than the Higgsino (and the other superparticles), the EFT parameters introduced above are used.
The mass matrix of the smuons is given by \begin{align} {\bf M}^2_{\tilde{\mu}} = \left( \begin{array}{cc} m_L^2 + (\lambda_{HL}+\kappa) v^2 & - T v \\ - T v & m_R^2 + \lambda_{HR} v^2 \end{array} \right), \end{align} where $v\simeq 174\ {\rm GeV}$ is the VEV of the SM-like Higgs. The mass matrix can be diagonalized by a $2\times 2$ unitary matrix $U$ as \begin{align} \mbox{diag} (m^2_{\tilde{\mu}_1}, m^2_{\tilde{\mu}_2}) = U^\dagger {\bf M}^2_{\tilde{\mu}} U, \end{align} and the gauge eigenstates are related to the mass eigenstates, denoted as $\tilde{\mu}_A$ ($A=1$, $2$), as \begin{align} \left( \begin{array}{c} \tilde{\mu}_L \\ \tilde{\mu}_R \end{array} \right) = U \left( \begin{array}{c} \tilde{\mu}_1 \\ \tilde{\mu}_2 \end{array} \right) \equiv \left( \begin{array}{cc} U_{L,1} & U_{L,2} \\ U_{R,1} & U_{R,2} \end{array} \right) \left( \begin{array}{c} \tilde{\mu}_1 \\ \tilde{\mu}_2 \end{array} \right). \end{align} At the one-loop level, the Bino-smuon loop contribution to the muon anomalous magnetic moment is given by \cite{Moroi:1995yh} \begin{align} a_\mu^{({\rm SUSY},\, 1\mathchar"2D{\rm loop})} = \frac{m_\mu^2}{16\pi^2} \sum_{A=1}^2 \frac{1}{m_{\tilde{\mu}_A}^2} \left[ - \frac{1}{12} \mathcal{A}_A f_1 (x_A) - \frac{1}{3} \mathcal{B}_A f_2 (x_A) \right], \end{align} where $x_A\equiv M_1^2/m_{\tilde{\mu}_A}^2$, \begin{align} \mathcal{A}_A \equiv Y_L^2 U_{L,A}^2 + Y_R^2 U_{R,A}^2,~~~ \mathcal{B}_A \equiv \frac{M_1 Y_L Y_R U_{L,A} U_{R,A}}{m_\mu}, \end{align} and the loop functions are given by \begin{align} f_1 (x) \equiv &\, \frac{2}{(1-x)^4} (1 - 6x + 3x^2 + 2x^3 - 6x^2 \ln x), \\ f_2 (x) \equiv &\, \frac{3}{(1-x)^3} (1 - x^2 + 2 x \ln x). \end{align} In the MSSM, some of the two-loop contributions to the muon anomalous magnetic moment may become sizable. One important contribution is the non-holomorphic correction to the muon Yukawa coupling constant \cite{Marchetti:2008hw, Girrbach:2009uy}. In the limit of large $\tan\beta$ (or large $T$), such an effect can be significant. In the present setup, this non-holomorphic correction to the muon Yukawa coupling constant is taken into account when the EFT parameters (in particular, $y_\mu$) are matched to the MSSM parameters at the SUSY scale. Another important contribution is the photonic two-loop correction \cite{Degrassi:1998es, vonWeitershausen:2010zr}. Such a contribution includes large QED logarithms and can affect the SUSY contribution to the muon $g-2$ by $\sim 10\ \%$ or more. The full photonic two-loop correction relevant for our analysis is given by \cite{vonWeitershausen:2010zr} \begin{align} a_\mu^{({\rm SUSY,\, photonic})} = & \, \frac{m_\mu^2}{16\pi^2} \frac{\alpha}{4\pi} \sum_{A=1}^2 \frac{1}{m_{\tilde{\mu}_A}^2} \Bigg[ 16 \left\{ - \frac{1}{12} \mathcal{A}_A f_1 (x_A) - \frac{1}{3} \mathcal{B}_A f_2 (x_A) \right\} \ln \frac{m_\mu}{m_{\tilde{\mu}_A}} \nonumber \\ & \, - \left\{ - \frac{35}{75} \mathcal{A}_A f_3 (x_A) - \frac{16}{9} \mathcal{B}_A f_4 (x_A) \right\} + \frac{1}{4} \mathcal{A}_A f_1 (x_A) \ln \frac{m_{\tilde{\mu}_A}^2}{Q_{\rm DREG}^2} \Bigg], \end{align} where $\alpha$ is the fine structure constant, $Q_{\rm DREG}$ is the dimensional-regularization scale, and \begin{align} f_3 (x) \equiv &\, \frac{4}{105(1-x)^4} [ (1-x) (-97x^2 -529x +2) + 6 x^2 (13x + 81) \ln x \nonumber \\ &\, +108x (7x + 4) \mbox{Li}_2 (1-x) ], \\ f_4 (x) \equiv &\, \frac{-9}{4(1-x)^3} [ (x+3) (x \ln x +x -1) + (6x+2) \mbox{Li}_2 (1-x) ].
\end{align} In our analysis, the SUSY contribution to the muon anomalous magnetic moment is evaluated as \begin{align} a_\mu^{\rm (SUSY)} = a_\mu^{({\rm SUSY},\, 1\mathchar"2D{\rm loop})} + a_\mu^{({\rm SUSY,\, photonic})}, \end{align} using the EFT parameters evaluated at the renormalization scale $Q=M_t$. We note that the above prescription gives a good estimate of the SUSY contribution to the muon anomalous magnetic moment in the parameter region we consider in the following discussion. In particular, for the case of our interest, the effect of the Bino-Higgsino-smuon diagrams ({\it i.e.}, Fig.\ \ref{fig:feyndiags} (b) and (c)) is estimated to be $O(0.1)\ \%$ or smaller relative to $a_\mu^{\rm (SUSY)}$ given above. The Wino-Higgsino-slepton diagrams ({\it i.e.}, Fig.\ \ref{fig:feyndiags} (d) and (e)) become irrelevant in the decoupling limit of the Winos. \section{Decay rate of electroweak vacuum} \label{sec:vacuumdecay} \setcounter{equation}{0} With the method proposed by Callan and Coleman \cite{Coleman:1977py,Callan:1977pt}, the vacuum decay rate can be written in the following form: \begin{equation} \gamma=\mathcal A e^{-\mathcal B}, \end{equation} where $\mathcal B$ is the so-called bounce action and $\mathcal A$ is a prefactor with mass dimension four. Previous tree-level analyses naively estimated the prefactor $\mathcal A$ based on a typical energy scale of the bounce. It has been pointed out that $\mathcal A$ may deviate significantly from the naive estimate, in particular when there are many particles that couple to the bounce \cite{Endo:2015ixx}; hence, the precise calculation of $\mathcal A$ is important for an accurate determination of the allowed parameter space. The prefactor was first evaluated for the SM in \cite{Isidori:2001bm} and has recently been reevaluated with the correct treatment of zero modes in \cite{Andreassen:2017rzq,Chigusa:2017dux,Chigusa:2018uuj} using the prescription proposed in \cite{Endo:2017gal,Endo:2017tsz}. The prescription has been generalized to a multi-field bounce in \cite{Chigusa:2020jbn}, which enabled the calculation of precise decay rates in more complex setups like the one in this letter. All the coupling constants used below should be understood as those in the EFT at the renormalization scale $Q=M_t$. The bounce is a spherically symmetric object in four-dimensional Euclidean space. We parameterize the bounce as \begin{align} H=\frac{1}{\sqrt{2}}\mqty(0\\\rho_h(r)),~~~ \sle{L}=\frac{1}{\sqrt{2}}\mqty(0\\\rho_L(r)),~~~ \smuon{R}=\frac{1}{\sqrt{2}}\rho_R(r), \end{align} where $\rho_I$ ($I=h$, $L$, $R$) are real fields and $r$ is the radius in the four-dimensional Euclidean space. Notice that the upper component of $H$ can be taken to be $0$ without loss of generality thanks to the $SU(2)_L\times U(1)_Y$ symmetry. The directions of the other fields are chosen such that the trilinear interaction, $\smuon{R}^\dagger H^\dagger \sle{L}$, becomes non-vanishing. Then, the bounce configuration is a solution of the Euclidean equations of motion: \begin{equation} \partial_r^2\rho_I+\frac{3}{r}\partial_r\rho_I= \pdv{V}{\rho_I}, \label{EoM} \end{equation} satisfying the following boundary conditions: \begin{align} \rho_h(\infty)&=\sqrt{2}v_{\rm EFT},~~~\rho_L(\infty)=\rho_R(\infty)=0,~~~\partial_r \rho_I(0)=0, \end{align} where $v_{\rm EFT}$ is the Higgs VEV at the false vacuum in the EFT.
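To illustrate the structure of this boundary-value problem, the following Python sketch solves a single-field analogue of Eq.~\eqref{EoM} with a toy quartic potential by the standard overshoot/undershoot (shooting) method. This is purely illustrative: all couplings are made-up numbers, the false vacuum is placed at the origin, and it is not the multi-field gradient-flow calculation actually used in this letter (described next).

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-field analogue of Eq. (EoM):
#   rho'' + (3/r) rho' = V'(rho),  rho'(0) = 0,  rho(inf) -> 0,
# with V = (LAM/8) rho^2 (rho-2)^2 - (DELTA/8) rho^4, so the false
# vacuum sits at rho = 0 and a deeper vacuum near rho ~ 2.9.
LAM, DELTA = 0.5, 0.1   # illustrative couplings only

def dV(rho):
    return LAM / 8.0 * (4 * rho**3 - 12 * rho**2 + 8 * rho) \
        - DELTA / 2.0 * rho**3

def shoot(rho0, r_max=80.0):
    def rhs(r, y):
        return [y[1], dV(y[0]) - 3.0 * y[1] / r]
    def crossing(r, y):      # event: field crosses the false vacuum
        return y[0]
    crossing.terminal = True
    return solve_ivp(rhs, (1e-6, r_max), [rho0, 0.0], events=crossing,
                     rtol=1e-10, atol=1e-12)

def bounce_center(lo=1.0, hi=2.87, iters=60):
    # Bisection: 'overshoot' (crosses zero) vs 'undershoot' (rolls
    # back); hi sits just below the true vacuum, so it overshoots.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if shoot(mid).t_events[0].size > 0:
            hi = mid         # overshoot: released too high
        else:
            lo = mid         # undershoot
    return 0.5 * (lo + hi)

print(bounce_center())       # field value at the bubble center
\end{verbatim}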
We obtain the bounce solution by numerically solving Eq.\ \eqref{EoM} using a modified version of the gradient flow method \cite{Chigusa:2019wxb, Sato:2019axv,ChiMorSho:Future}. Next, we explain how we obtain the prefactor, $\mathcal{A}$, which accounts for the one-loop effects on the decay rate. The prefactor is obtained from the functional determinant of the fluctuation operator, which is given by the second functional derivative of the total action (containing the total scalar potential given in Eq.\ \eqref{Vtot}). The prefactor can be expressed as \begin{equation} \mathcal A=2\pi\mathcal J_{\rm EM}\frac{\mathcal{B}}{4\pi^2} \mathcal A^{(A,\varphi,c\bar c)} \mathcal A^{(\psi)}, \end{equation} where $\mathcal A^{(A,\varphi,c\bar c)}$ ($\mathcal{A}^{(\psi)}$) encodes the effect of the gauge bosons, scalar bosons, and Faddeev-Popov ghosts (fermions), and $\mathcal J_{\rm EM}$ is the Jacobian associated with the zero mode due to the breaking of the electromagnetic symmetry. In calculating $\mathcal{A}$, we take into account the effects of the smuons and the Bino as well as the $SU(2)_L$ and $U(1)_Y$ gauge bosons, the Higgs boson, the muons, and the top quark. $\mathcal A^{(A,\varphi,c\bar c)}$ and $\mathcal A^{(\psi)}$ are given by the ratios of functional determinants for the partial waves: \begin{align} \mathcal A^{(A,\varphi,c\bar c)}=&\, \frac{\det\mathcal M_0^{(c\bar c)}}{\det\mathcal {\widehat M}_0^{(c\bar c)}}\qty(\frac{\det'\mathcal M_0^{(S\varphi)}}{\det\mathcal {\widehat M}_0^{(S\varphi)}})^{-1/2}\qty(\frac{\det'\mathcal M_1^{(SL\varphi)}}{\det\mathcal {\widehat M}_1^{(SL\varphi)}})^{-2}\prod_{\ell=2}^\infty\qty(\frac{\det\mathcal M_\ell^{(SL\varphi)}}{\det\mathcal {\widehat M}_\ell^{(SL\varphi)}})^{-\frac{(\ell+1)^2}{2}}, \\ \mathcal A^{(\psi)}=&\, \prod_{\ell=0}^\infty\qty(\frac{\det\mathcal M_\ell^{(\psi)}}{\det\mathcal {\widehat M}_\ell^{(\psi)}})^{\frac{(\ell+1)(\ell+2)}{2}}, \end{align} where the prime indicates the subtraction of zero modes, the $\mathcal M_\ell$'s indicate fluctuation matrices around the bounce, and the $\widehat{\mathcal M}_\ell$'s indicate those around the false vacuum. A general procedure to calculate the decay rate of the false vacuum, including the prescription for the zero-mode subtraction and the renormalization, is given in Refs.\ \cite{Endo:2017gal,Endo:2017tsz,Chigusa:2020jbn}. We follow the procedure given in these articles to calculate the decay rate of the EW vacuum in the model of our interest. A more detailed explanation of the calculation of the decay rate of the EW vacuum in the present model will be given elsewhere \cite{ChiMorSho:Future}. \section{Numerical results} \label{sec:results} \setcounter{equation}{0} Now we are in a position to show the constraints from the stability of the EW vacuum. In order to investigate how large the slepton mass can be, we do not take into account constraints from other considerations, like the collider and dark matter constraints. These constraints depend on the details of the model; for example, if $R$-parity is violated, they are relaxed considerably. \begin{figure}[t] \centering \includegraphics[width=0.65\linewidth]{Tparam.pdf} \caption{Contours of constant $T$ for the case of $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ and $m_R=m_L$. The $\tan\beta$ parameter is taken to be $10$ (solid) and $50$ (dashed).
The blue, green, orange, and magenta lines are for $T=0.5$, $1$, $2$, and $5\ {\rm TeV}$, respectively.} \label{fig:tparam} \end{figure} We first calculate the value of $T$ required to realize a given value of $a_\mu^{\rm (SUSY)}$ for given values of $m_L$, $m_R$, and $M_1$ (as well as the other MSSM parameters).\footnote {When the Bino mass is relatively large, $|\Delta y_{\mu}|$ may become larger than the SM muon Yukawa coupling constant $y_\mu^{\rm (SM)}$. In such a case, the EFT muon Yukawa coupling constant $y_\mu$ is negative. (Notice that $\Delta y_{\mu}<0$.) We have checked that our main result, Fig.\ \ref{fig:bound}, is unchanged even if we consider only the parameter region with $y_\mu>0$.} In Fig.\ \ref{fig:tparam}, we show the contours of constant $T$ on the $m_{\tilde{\mu}_1}$ vs.\ $M_1$ plane, assuming $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$. Here we take $m_R/m_L=1$, and $\tan\beta=10$ and $50$. We can see that the value of $T$ required to realize $a_\mu^{\rm (SUSY)}\sim \Delta a_\mu$ is insensitive to the value of $\tan\beta$. We can also see that the $T$ parameter is required to be significantly larger than the smuon masses for the case of heavy sleptons. Such a choice of $T$, required to solve the muon $g-2$ anomaly, gives rise to a deeper minimum of the potential in addition to the EW vacuum. In this minimum, which we call a charge-breaking minimum, the smuons acquire VEVs. The longevity of the EW vacuum is not guaranteed in the presence of the charge-breaking minimum. We calculate the decay rate of the electroweak vacuum with the procedure explained in the previous Section. We parameterize the decay rate per unit volume as \begin{align} S_{\rm eff} \equiv - \ln \left( \frac{\gamma}{1\ {\rm GeV}^4} \right). \end{align} Then, requiring that the bubble nucleation rate within the Hubble volume, $\frac{4}{3}\pi H_0^{-3}$, be smaller than $t_{\rm now}^{-1}$, we obtain \begin{align} S_{\rm eff} > 387. \label{seffbound} \end{align} \begin{figure} \centering \includegraphics[width=0.65\linewidth]{Seff.pdf} \caption{Contours of constant $S_{\rm eff}$, taking $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ and $m_R=m_L$. The red, blue, orange, green, and magenta lines are for $S_{\rm eff}=300$, $400$, $500$, $700$, and $1000$, respectively. The solid and dashed lines are for $\tan\beta=10$ and $50$, respectively.} \label{fig:seff} \vspace{5mm} \centering \includegraphics[width=0.65\linewidth]{Stree.pdf} \caption{Contours of constant $S_{\rm eff}$ (solid) and $S_{\rm eff}^{\rm (tree)}$ (dashdotted), taking $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$, $m_R=m_L$ and $\tan\beta=10$. The red, blue, orange, green, and magenta lines show the contours on which $S_{\rm eff}$ or $S_{\rm eff}^{\rm (tree)}$ is equal to $300$, $400$, $500$, $700$, and $1000$, respectively.} \label{fig:stree} \end{figure} In Fig.\ \ref{fig:seff}, we show the contours of constant $S_{\rm eff}$ on the lightest smuon mass vs.\ Bino mass plane, fixing the $T$ parameter by requiring $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$; here, we take $m_R/m_L=1$. As the lightest smuon becomes heavier, $S_{\rm eff}$ becomes smaller, and the constraint given in \eqref{seffbound} may not be satisfied. Thus, the stability of the EW vacuum gives an upper bound on the smuon mass, assuming that the SUSY contribution is responsible for the muon $g-2$ anomaly.
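The numerical threshold in Eq.~\eqref{seffbound} can be cross-checked with a few lines of Python; this is a sketch, and the value of $H_0$ is our own assumption, since this letter quotes only $t_{\rm now}$:

\begin{verbatim}
import numpy as np

# Cross-check of the vacuum-stability threshold, Eq. (seffbound).
# Assumption (ours): H0 = 67.4 km/s/Mpc.
hbar  = 6.582e-25                  # GeV * s
H0    = 67.4 / 3.0857e19 * hbar    # Hubble rate in GeV
t_now = 13.8e9 * 3.156e7 / hbar    # cosmic age in 1/GeV
V_hub = 4.0 / 3.0 * np.pi / H0**3  # Hubble volume in 1/GeV^3
gamma_max = 1.0 / (t_now * V_hub)  # max. decay rate per volume, GeV^4
print(-np.log(gamma_max))          # ~386.8, i.e. S_eff > 387
\end{verbatim}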
In order to see the impact of the one-loop calculation of the prefactor $\mathcal{A}$, we compare our result with a tree-level one. For this purpose, because the typical energy scale of the bounce for the decay of the EW vacuum is often taken to be around the EW scale, we define \begin{align} S_{\rm eff}^{\rm (tree)} \equiv \mathcal{B} - \ln \left( \frac{v^4}{1\, {\rm GeV^4}} \right). \end{align} In Fig.\ \ref{fig:stree}, we show the contours of constant $S_{\rm eff}$ and $S_{\rm eff}^{\rm (tree)}$, taking $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$, $m_L/m_R=1$ and $\tan\beta=10$. The contours of constant $S_{\rm eff}$ and $S_{\rm eff}^{\rm (tree)}$ deviate significantly from each other. We find that $S_{\rm eff}$ and $S_{\rm eff}^{\rm (tree)}$ differ by $\sim 100$, which results in an $O(10)\ {\rm GeV}$ difference in the estimate of the upper bound on the smuon masses. \begin{figure} \centering \includegraphics[width=0.65\linewidth]{Seff387_r04.pdf} \caption{Contours of $S_{\rm eff}=387$ for $m_R/m_L =0.5$. The magenta, green, and blue lines are for $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ ($0\sigma$), $19.2\times 10^{-10}$ ($1\sigma$), and $13.3\times 10^{-10}$ ($2\sigma$), respectively. The solid (dashed) lines are for $\tan\beta=10$ ($50$).} \label{fig:Seff387_r04} \vspace{7mm} \centering \includegraphics[width=0.65\linewidth]{Seff387_r10.pdf} \caption{Same as Fig.\ \ref{fig:Seff387_r04}, except $m_R/m_L =1$.} \label{fig:Seff387_r10} \end{figure} \begin{figure} \centering \includegraphics[width=0.65\linewidth]{Seff387_r16.pdf} \caption{Same as Fig.\ \ref{fig:Seff387_r04}, except $m_R/m_L =2$.} \label{fig:Seff387_r16} \vspace{7mm} \centering \includegraphics[width=0.65\linewidth]{Bound.pdf} \caption{Upper bound on the lightest smuon mass as a function of $m_R/m_L$. The magenta, green, and blue lines are for $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ ($0\sigma$), $19.2\times 10^{-10}$ ($1\sigma$), and $13.3\times 10^{-10}$ ($2\sigma$), respectively. The solid (dashed) lines are for $\tan\beta=10$ (50).} \label{fig:bound} \end{figure} Now, we discuss the constraint on the lightest smuon mass. In Figs.\ \ref{fig:Seff387_r04}, \ref{fig:Seff387_r10}, and \ref{fig:Seff387_r16}, we show the contours of $S_{\rm eff}=387$ for $m_R/m_L =0.5$, $1$, and $2$, for $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ ($0\sigma$), $19.2\times 10^{-10}$ ($1\sigma$), and $13.3\times 10^{-10}$ ($2\sigma$). Requiring that $a_\mu^{\rm (SUSY)}$ be comparable to $\Delta a_\mu$, we can see that the lightest smuon mass becomes maximally large when the Bino mass is $\sim 0.5-1\ {\rm TeV}$. In addition, as expected, the upper bound on the smuon mass becomes larger as $a_\mu^{\rm (SUSY)}$ becomes smaller. Notice that our smuon mass bound for the case of $m_R/m_L =1$ is close to the one given in Ref.\ \cite{Endo:2021zal}, which is based on the tree-level estimate of the decay rate. Varying the Bino mass, we determined the maximal possible value of the lightest smuon mass for fixed values of $\tan\beta$ and $a_\mu^{\rm (SUSY)}$. The result is shown in Fig.\ \ref{fig:bound}, in which the upper bound on the lightest smuon mass is given as a function of the ratio $m_R/m_L$. We can see that the upper bound becomes the largest when $m_R\simeq m_L$. Requiring $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ ($0\sigma$), $19.2\times 10^{-10}$ ($1\sigma$), and $13.3\times 10^{-10}$ ($2\sigma$) with $\tan\beta=10$ ($50$) and $m_R= m_L$, the lightest smuon mass is required to be smaller than $1.20$, $1.38$ and $1.68\ {\rm TeV}$ ($1.18$, $1.37$ and $1.66\ {\rm TeV}$), respectively. The bound is insensitive to the choice of $\tan\beta$.
The muon $g-2$ anomaly can hardly be explained by the MSSM contribution if the lightest smuon is heavier than this bound. \section{Conclusions and discussion} \label{sec:conclusions} \setcounter{equation}{0} In this letter, we have studied the stability of the EW vacuum in the MSSM, paying particular attention to the parameter region where the muon $g-2$ anomaly can be explained by the SUSY contribution. We consider the case where the Higgsino mass parameter $\mu$ is very large; in such a case, the SUSY contribution to the muon $g-2$ is enhanced, so that the muon $g-2$ anomaly can be explained with relatively large values of the smuon masses. With $\mu$ being large, however, the smuon-smuon-Higgs trilinear coupling is enhanced, and a charge-breaking minimum of the potential may show up, rendering the EW vacuum metastable. With the size of the SUSY contribution to the muon $g-2$ fixed to alleviate the anomaly, the trilinear coupling is more enhanced for a larger value of the smuon mass. Thus, if the smuon mass is too large, the muon $g-2$ anomaly cannot be solved in the MSSM even for a very large value of $\mu$, because the longevity of the EW vacuum cannot be realized. We have performed a detailed calculation of the decay rate of the EW vacuum, assuming that the SUSY contribution to the muon anomalous magnetic moment is large enough to alleviate the muon $g-2$ anomaly. Our calculation is based on the state-of-the-art method to calculate the decay rate of the false vacuum, which includes the one-loop effects due to the fields coupled to the bounce. The most important advantage of including the one-loop effects is that the mass scale of the prefactor $\mathcal{A}$, which has mass dimension four, is determined. Another advantage is that the scale dependence of the bounce action $\mathcal{B}$ is canceled by that of $\mathcal{A}$ at the leading-log level. Requiring $a_\mu^{\rm (SUSY)}=25.1\times 10^{-10}$ ($0\sigma$), $19.2\times 10^{-10}$ ($1\sigma$), and $13.3\times 10^{-10}$ ($2\sigma$), we found that the lightest smuon should be lighter than $1.20$, $1.38$ and $1.68\ {\rm TeV}$ ($1.18$, $1.37$ and $1.66\ {\rm TeV}$) for $\tan\beta=10$ ($50$), respectively. It is challenging to find such a heavy smuon with collider experiments. A very high energy collider, like muon colliders \cite{Delahaye:2019omf}, the FCC \cite{Mangano:2016jyj, Contino:2016spe, Golling:2016gvc}, or the CLIC \cite{CLICdp:2018cto}, may be able to perform a conclusive test of the SUSY interpretation of the muon $g-2$ anomaly. In this letter, we assumed that the superparticles other than the smuons and the Bino are so heavy that they are irrelevant for the muon $g-2$ as well as for the stability of the EW vacuum. If some of the superparticles are as light as the smuons and the Bino, the upper bound on the smuon mass we obtained may become more stringent. For example, if the stau is relatively light, the decay rate of the EW vacuum may become larger, because the large $\mu$ also enhances the stau-stau-Higgs trilinear coupling, which is larger than the smuon-smuon-Higgs coupling by roughly the ratio of the tau and muon Yukawa couplings, $y_\tau/y_\mu \sim 17$. In such a case, the upper bound on the slepton mass becomes more stringent compared to the case with only the smuons. A more detailed discussion of such a case will be given elsewhere \cite{ChiMorSho:Future}. \vspace{2mm} \noindent{\it Acknowledgments:} S.C.
is supported by JSPS KAKENHI Grant No.\ 20J00046, and also by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No.\ DE-AC02-05CH11231. T.M. is supported by JSPS KAKENHI Grant Nos.\ 16H06490 and 18K03608. Y.S. is supported by the I-CORE Program of the Israel Planning and Budgeting Committee (grant No.\ 1937/12). The authors gratefully acknowledge the computational and data resources provided by the Fritz Haber Center for Molecular Dynamics. \bibliographystyle{jhep}
\section{\label{sec:intro} Introduction} Circular dichroism (CD) spectroscopy, based on the different absorption of left {\it vs} right circularly polarized light by chiral systems, is a useful technique for characterizing chiral molecules: it can be used to distinguish enantiomers, and it can also yield information on molecular conformation ({\it e.g.} \cite{Molteni_Symm_2021,Molteni_JPCB_2015} and refs. therein). Chirality plays an important role in molecular recognition; it is therefore of interest in several fields, from drug discovery to catalysis to biomolecular function. Computational investigations of the CD spectra of molecules can help in several ways: they allow one to predict which spectral regions are more sensitive to the absolute configuration (enantiomer) of a given molecule, and which ones to its conformation; moreover, if a chiral drug molecule yields a clear CD ``fingerprint'' (i.e. well recognizable features in the CD spectrum), this may be used {\it e.g.} for assessing its accumulation in cells. We report here on our implementation of CD calculations within the Density Functional Theory (DFT) framework in the Yambo code~\cite{Sangalli2019}, and on its application to computing (absorption and) CD spectra of three cyclo-dipeptides, cyclo(Glycine-Phenylalanine), cyclo(Tryptophan-Tyrosine) and cyclo(Tryptophan-Tryptophan), whose occupied electronic states some of us had previously characterized\cite{Molteni_PCCPdipep2021}. Cyclo-dipeptides (CDPs) or 2,5-diketopiperazines (DKPs) are interesting both for their biological and pharmacological activities (such as antibacterial, antiviral, antitumoral, and antioxidant activity)\cite{Mishra_Molecules_2017} and as possible building blocks for nanodevices\cite{Zhao_PepSci_2020,Jeziorna_CrystGrowthDes_2015}, thanks to their multiple hydrogen bonding sites, which can potentially play a role in self-assembly\cite{Mattioli_SciRep_2020,Zhao_PepSci_2020}. As chiral molecules, CDPs can catalyze enantioselective reactions\cite{Ying_SciRep_2018}; they have been detected e.g. in meteorites\cite{Danger_ChemSocRev_2012} and can be considered as precursors of longer peptides\cite{Barreiro-Lage_JPCL2021,Danger_ChemSocRev_2012}: they may therefore also have had a role in the ``homochirality'' of life, {\it i.e.} the prevalence of the L enantiomer of amino acids in proteins\cite{Danger_ChemSocRev_2012}. Moreover, the role of molecular chirality in the self-organization of cyclo-dipeptides has also attracted interest\cite{Jeziorna_CrystGrowthDes_2015}. \section{\label{sec:meth} Methods} Methyloxirane is first used as a reference molecule to validate the approach; we then proceed to compute the optical properties of the three cyclo-dipeptides. Optical absorption and circular dichroism (CD) spectra are computed within the Independent Particle (IP) approximation with the Yambo code~\cite{Sangalli2019}. Yambo is a plane wave code interfaced with QuantumESPRESSO (QE) \cite{QE_2017,QE_JPhysCondMat2009} that allows calculation of the optical response starting from the previously generated Kohn-Sham (KS) wave functions and energies in a plane-wave basis set. In the QE calculations the molecule (either methyloxirane or a cyclo-dipeptide) was put in a face-centered cubic (FCC) cell with lattice parameter $a=56.087$ a.u. ({\it i.e.} $\approx$ 29.68 \AA). Total energies have been calculated using norm-conserving Troullier-Martins atomic pseudopotentials\cite{TM_PsP}.
The LDA approximation for the exchange-correlation potential is used for methyloxirane, while the hybrid B3LYP~\cite{B3LYP1,B3LYP2} functional is used for the dipeptides. The D3 pairwise dispersion correction for van der Waals (vdW) interactions~\cite{Grimme_D3} was also included when relaxing the geometry of the three cyclo-dipeptides, and the Makov-Payne correction to the total energy was used to compute the vacuum level and to properly align the electronic energy levels~\cite{MakovPayne}. In the Yambo calculations we used 200 bands for methyloxirane and 500 bands for the three investigated cyclo-dipeptides. In all cases, a plane-wave cutoff of 45 Ha for the wavefunctions gives good convergence of the energy levels, total energies and atomic forces in the QE runs. In our approach, both absorption and CD are constructed starting from the matrix elements of the position operator which, for isolated molecules, we compute in real space: \begin{equation} \label{eq:r_dipoles} \mathbf{r}_{nm}=\langle\psi_{n}|\mathbf{r}|\psi_{m}\rangle \end{equation} where $\mathbf{r}$ is a vector of components $r_j$, with $j=x,y,z$. We define $\omega_{nm}= (\epsilon_{n}-\epsilon_{m})/\hbar$, the excitation frequency for an electronic transition between the states $\psi_n$ and $\psi_m$. From the latter we define the velocity dipoles $\mathbf{v}_{nm}=\mathbf{r}_{nm}\omega_{nm}$, and the magnetic dipoles \begin{equation} \label{eq:m_dipoles} \mathbf{m}_{mn}=\sum_{l=l_{min}}^{l_{max}} \mathbf{r}_{ml}\times\mathbf{v}_{ln}. \end{equation} The expression for the magnetic dipoles results from inserting the resolution of the identity $\sum_l |l\rangle\langle l|$ between the position and velocity operators. Hence the $n$ and $m$ indexes belong to a given transition (from an occupied to an unoccupied orbital in the resonant case), while the $l$ index should run over all orbitals, i.e. $l_{min}=1$ and $l_{max}=\infty$. In practice one should verify the convergence of the calculated CD spectra, in a given energy range, both with respect to the transitions included and with respect to the range of the $l$ index. For Independent Particle (IP) absorption spectra we calculate the polarizability $\alpha$ as a sum over direct transitions between Kohn-Sham eigenstates, within the Fermi Golden Rule: \begin{equation} \label{eq_IPabs} \alpha_{ij}(\omega)=-4\pi \sum_{nm} \left( \frac{r^{i}_{nm}r^{j}_{mn}}{\omega-\omega_{nm}-i\gamma} + \frac{r^{j}_{nm}r^{i}_{mn}}{\omega+\omega_{nm}+i\gamma} \right). \end{equation} The IP circular dichroism signal, within linear response, is instead proportional to the $G$-tensor\cite{Molteni_JPCB_2015,Condon1937,Barron2004}: \begin{equation} \label{eq:G_CD_IP} G_{ij}(\omega) = \frac{q_e^2}{2m\hbar} \sum_{nm} \left( \frac{r^i_{nm}m^j_{mn}}{\omega_{nm} - \omega -i\gamma} + \frac{m^j_{mn}r^i_{nm}}{\omega_{nm}+\omega+i\gamma} \right). \end{equation} For randomly oriented chiral molecules, absorption and CD are expressed as the trace of the $\alpha(\omega)$ and $G(\omega)$ tensors, respectively. \section{\label{sec:res} Results} \subsection{Numerical tests on R-methyloxirane and convergence in c-GlyPhe} \label{subsect:Rmeth} \begin{figure}[h] \includegraphics[width=\textwidth]{Rmethylox_geom_and_spectra-IP-exp.pdf} \caption{Left panel: geometry of R-methyloxirane. Panels (a) and (b): calculated IP (dashed red line) absorption (panel a) and CD (panel b) spectra of R-methyloxirane, obtained within DFT LDA, compared to the corresponding experimental spectra (solid black line).
Calculated spectra have been shifted by +1.4 eV, and a broadening of 0.1 eV has been used.} \label{fig:Rmeth_absCD} \end{figure}

Methyloxirane has often been used in the literature as a benchmark molecule for CD calculations against experimental data, due to its rigidity. For flexible molecules, instead, one has to take into account the fact that the different possible conformers will in general yield different CD spectra, which makes a comparison to experimental spectra non-trivial. In Figure~\ref{fig:Rmeth_absCD} we report the geometry (left panel) of R-methyloxirane ({\it i.e.} the ``R'' enantiomer of methyloxirane) and its calculated absorption (panel a) and CD (panel b) spectra, compared to the vacuum UV experimentally measured absorption and CD spectra of the same molecule\cite{Carnell1991}. Our absorption and CD spectra of R-methyloxirane, calculated at the IP level, reproduce well the first two features of the corresponding experimental spectra\cite{Carnell1991}, provided a rigid shift of 1.4 eV is applied to the calculated ones. The agreement is of the same quality as that reported in previous computational works\cite{Molteni_JPCB_2015,Varsano_PCCP_2009}. Discrepancies between theory and experiment in the high-energy part of the spectra have also been reported in the literature, and they may be due to the IP approximation\cite{Molteni_JPCB_2015}.

One important aspect of CD implementations is that they involve the definition of the orbital magnetic dipoles, which are ill defined in periodic boundary conditions. In our approach to the orbital magnetic dipoles, this problem appears in the presence of the terms $\mathbf{r}_{nn}$ in eq.~\eqref{eq:m_dipoles}. In isolated systems this is not an issue, since $\mathbf{r}_{nn}$ can be directly evaluated in real space. On the other hand, in extended systems the position operator is ill defined and only components between non-degenerate states can be computed, via the evaluation in reciprocal space of $\mathbf{v}_{nm}$, and later using $\mathbf{r}_{nm}=\mathbf{v}_{nm}/\omega_{nm}$. Instead, $\mathbf{r}_{nm}=0$ must be imposed if $\omega_{nm}<E_{thresh}$. Here we have verified, by comparing CD spectra of R-methyloxirane obtained by computing dipoles either in real space or in reciprocal space (data not shown), that the $\mathbf{r}_{nn}$ dipoles have a negligible effect on computed CD spectra. This suggests that our approach, here implemented for molecules computing dipoles in real space, may be successfully extended to the case of solids, where the $G$-space approach is generally used.

\begin{figure}[h] \includegraphics[width=\textwidth]{GP1_CD_IP_DipBands_upd-jan22.pdf} \caption{Independent Particle CD spectra of the c-GlyPhe dipeptide, obtained with different values of the Yambo \texttt{DipBands} keyword, used to converge the identity resolution entering the definition of the magnetic dipoles $\mathbf{m}$. Convergence is verified varying the value of $l_{min}$ (left panel) and $l_{max}$ (right panel) independently.
The molecule has 39 occupied states.} \label{fig:CD_dipbands} \end{figure}

After validating our computational scheme for (absorption and) circular dichroism spectra on R-methyloxirane, we first consider the lowest energy conformer (see also the discussion in the next section) of cyclo(Glycine-Phenylalanine) to verify the convergence of calculated CD spectra with the number of states used to resolve the identity in the definition of the magnetic dipole matrix elements (see the discussion in the Methods section, $l$ index in eq.~\eqref{eq:m_dipoles}). Convergence results are presented in Fig.~\ref{fig:CD_dipbands}. The CD spectrum of c-GlyPhe is reported at a fixed number of transitions, using in Eq.~\eqref{eq:G_CD_IP} 10 occupied states (index $m$ ranging from 30 to 39) and 11 empty states (index $n$ ranging from 40 to 50), while varying the number of states included in the sum over the $l$ index. In the left panel we consider the convergence with respect to the value of $l_{min}$ (index in the occupied states), while in the right panel we consider the convergence with respect to the value of $l_{max}$. The two parameters, i.e. the number of transitions and the number of states included in the resolution of the identity, are controlled independently in the Yambo input file {\it via} the two variables \texttt{BSEbands} and \texttt{DipBands}, respectively. Magnetic dipoles, and hence CD spectra, are weakly sensitive to the range of empty states used in the expression of $\mathbf{m}_{mn}$, while the convergence on occupied states is less trivial, requiring the inclusion of occupied states down to state 10 (the molecule has 39 occupied states) for a reasonable spectrum. Here the CD spectra are computed via the real space procedure for the dipoles, since the reciprocal space dipoles would require a direct evaluation of the commutator with the non-local part of the B3LYP exchange and correlation potential.

\subsection{CD spectra of Cyclo-dipeptides} \label{subsect:3dipep}

We now report results on three cyclo-dipeptides with aromatic sidechains, namely cyclo(Glycine-Phenylalanine), cyclo(Tryptophan-Tyrosine) and cyclo(Tryptophan-Tryptophan). In contrast to the above-discussed methyloxirane, the three chosen cyclo-dipeptides (c-GlyPhe, c-TrpTyr, c-TrpTrp) display some flexibility; therefore, also in view of possible comparisons to experimentally measured absorption and/or CD spectra, one should look for the most stable conformers, which are expected to be the most abundant ones in experiments, apart from possible effects of the experimental conditions (solvent vs. gas phase, temperature, etc.). For each of the three investigated cyclo-dipeptides, therefore, we have considered the lowest energy gas phase conformers as obtained by some of us in a previous work through a tight-binding conformational search, followed by geometry optimization within B3LYP DFT\cite{Molteni_PCCPdipep2021}. These conformers are shown in the left panels of Figs.~\ref{fig:GPspectra}, ~\ref{fig:TrpTyr_spectra} and ~\ref{fig:TrpTrp_spectra} for c-GlyPhe, c-TrpTyr and c-TrpTrp respectively, alongside the IP absorption and CD spectra. In the IP approximation excited-state correlations, which would lower the optical gap with respect to the electronic gap $E_{LUMO} - E_{HOMO}$, are neglected. On the other hand, the electronic gap $E_{LUMO} - E_{HOMO}$ is underestimated at the B3LYP level.
The two mentioned effects are of opposite sign; therefore, the IP-B3LYP-calculated energy position of the absorption onset can be either over- or underestimated with respect to the experimentally observed one, depending on which of the two effects is larger for the specific molecule under study. In the analysis of the spectra of R-methyloxirane, the sum of the two errors results in an underestimation of the optical gap, and a simple rigid shift of $+1.4$~eV applied to the calculated IP spectrum was enough to give a very good description of CD. For the cyclo-dipeptides, instead, we fix the HOMO - LUMO gap with a shift of +2.5 eV, based on a previous study~\cite{Molteni_PCCPdipep2021} where measured photoemission spectra (PES) were carefully compared with {\it ab initio} simulations. We are thus left with an overall overestimation of the position of the optical gap. A direct comparison of IP results to experimentally measured spectra is not the main goal of the present work; this is why, for the three investigated cyclo-dipeptides, we did not look for the additional shift needed to match the position of their optical spectra. In the case of the widely studied Trp amino acid\cite{Catalan_PCCP_2016,Hazra_JMCC_2014}, and also of the c-TrpTrp dipeptide\cite{Tao_NatComm_2018}, experimentally measured absorption spectra display a strong absorption peak in the 4 to 5 eV energy region, whose maximum lies at lower energy with respect to our IP calculated spectra, even before applying the above-mentioned +2.5 eV shift to them. An investigation of absorption and CD spectra of these molecules beyond the IP level may be the subject of further works. On the other hand, the calculated IP spectra reported here yield information on the contribution of individual state-to-state transitions to the intensity of absorption and CD peaks {\it via} the dipole matrix element between the two involved states, whose intensity can be strongly affected by the localization of these states (including the possible absence of specific peaks corresponding to dipole-forbidden transitions). This information is interesting when comparing spectra of different molecular conformers, and it adds to our knowledge of the electronic properties of these cyclo-dipeptides with respect to the simple picture obtained from DFT electronic densities of states\cite{Molteni_PCCPdipep2021}, which only depend on the energy distribution of electronic states. Having aligned the transition energies with the position of the energy levels measured in photoemission helps in mapping a peak to the associated occupied and empty states.

\begin{figure}[h] \includegraphics[width=\textwidth]{Fig_GP_geom_and_spectra.pdf} \caption{Left panel: geometry of the chosen low-energy conformers of c-GlyPhe. Panel (a): Independent Particle (IP) B3LYP absorption spectra of c-GlyPhe conformer 1 (magenta) and conformer 2 (gray). Panel (b): IP B3LYP circular dichroism (CD) spectra of the same two conformers (same color codes). Vertical black lines indicate E(vacuum level) - E(HOMO) for the two conformations. A shift of +2.5 eV has been applied to absorption and CD spectra, and a broadening of 0.05 eV was used.} \label{fig:GPspectra} \end{figure}

In Fig.~\ref{fig:GPspectra}, panels (a) and (b), we compare Independent Particle (IP) B3LYP optical absorption and electronic circular dichroism (CD) spectra of conformers 1 and 2 (left panel of the same figure) of the c-GlyPhe peptide, obtained with the Yambo code from QE KS wavefunctions.
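To make the construction concrete, the trace of the $G$-tensor in eq.~\eqref{eq:G_CD_IP} can be assembled from the real-space dipoles of eqs.~\eqref{eq:r_dipoles} and \eqref{eq:m_dipoles} as in the following minimal Python sketch; array names are illustrative, real KS wavefunctions are assumed, and the constant prefactor is dropped, so this is only a schematic illustration and not the Yambo implementation.
\begin{verbatim}
import numpy as np

def cd_signal(eps, r, occ, emp, l_range, omega, gamma=0.05):
    """Im Tr G(omega), dropping the q_e^2/(2 m hbar) prefactor.

    eps: KS energies (atomic units); r[n, m, :]: position dipoles;
    occ/emp: occupied/empty states defining the transitions;
    l_range: states resolving the identity in the magnetic dipoles.
    """
    G = np.zeros_like(omega, dtype=complex)
    for m in occ:
        for n in emp:
            w_nm = eps[n] - eps[m]        # transition frequency
            # m_mn = sum_l r_ml x v_ln, with v_ln = r_ln * (eps_l - eps_n)
            m_mn = sum(np.cross(r[m, l], r[l, n] * (eps[l] - eps[n]))
                       for l in l_range)
            t = np.dot(r[n, m], m_mn)     # trace: sum_j r^j_nm m^j_mn
            G += t / (w_nm - omega - 1j * gamma) \
               + t / (w_nm + omega + 1j * gamma)
    return G.imag
\end{verbatim}
Restricting \texttt{l\_range} mirrors the role of the \texttt{DipBands} variable discussed above, while \texttt{occ} and \texttt{emp} play the role of \texttt{BSEbands}.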
In spite of the quite similar energy distribution of electronic levels~\cite{Molteni_PCCPdipep2021} for the two considered conformers of c-GlyPhe, the absorption spectra (panel (a)) display some degree of conformational sensitivity: the main features lie in both cases around 8.7 eV and around 9 eV, but their relative intensities and detailed shapes differ between the two conformers: in conformer 2, in contrast to conformer 1, each of these two features is split into two peaks. Conformational sensitivity is even more pronounced in the CD spectra (panel (b)): here corresponding spectral features for the two conformers have in several cases opposite signs, thus yielding very different dichroism spectra. This strong conformational dependence of CD spectra has already been reported in the literature for single amino acids (see \cite{Molteni_JPCB_2015} and refs therein). In c-GlyPhe conformation 1 (magenta curve in panel (a) of Fig. \ref{fig:GPspectra}) six out of the first (in energy order) seven transitions between Kohn-Sham states (short vertical black ticks below the absorption spectrum), namely all of them except the sixth one, {\it i.e.} the HOMO - (LUMO+2) one, give non-negligible contributions (vertical magenta ticks) to the Independent Particle absorption spectrum. In particular, the first and second transitions contribute to the absorption peak at $\approx$ 8.7 eV, while the third, fourth, fifth and seventh transitions contribute to the absorption peak at $\approx$ 9 eV. The fact that most low-energy transitions give non-negligible contributions to the IP absorption spectrum agrees with the observation that most of the highest occupied and lowest unoccupied electronic states in this system, as discussed by some of us in a previous work\cite{Molteni_PCCPdipep2021}, are not localized in a specific part of the molecule (in that case, low intensity contributions would be expected for transitions between pairs of states localized on separated and relatively ``far'' parts of the molecule). Also in c-GlyPhe conformation 2 (gray curve in panel (a) of Fig. \ref{fig:GPspectra}) six out of the first seven transitions (short vertical black ticks) are bright (vertical gray ticks). Again only the HOMO - (LUMO+2) transition, which in this case is the fifth one in energy order, is dark. For both conformers all the mentioned bright low energy transitions have either the LUMO or the LUMO+1 as ``final'' conduction state. Only a subset of the optically bright transitions gives non-negligible contributions to the corresponding CD spectra (panel (b) of Fig. \ref{fig:GPspectra}). In particular, in c-GlyPhe conformation 1 only the first (HOMO - LUMO) and the fourth ((HOMO-1) - (LUMO+1)) transitions give non-negligible contributions - with the same sign - to CD. In c-GlyPhe conformation 2, instead, four transitions give non-negligible contributions to CD in this energy region, namely the first (HOMO - LUMO) one with positive sign, and the third ((HOMO-1) - LUMO), sixth ((HOMO-1) - (LUMO+1)) and seventh ((HOMO-2) - (LUMO+1)) ones with negative sign.

\begin{figure}[h] \includegraphics[width=\textwidth]{Fig_TrpTyr_geom_and_spectra.pdf} \caption{Left panel: geometry of the chosen low-energy conformers of c-TrpTyr. Panel (a): Independent Particle (IP) B3LYP absorption spectra of c-TrpTyr conformer 1 (magenta), conformer 5 (dark red) and conformer 4 (green). Panel (b): IP circular dichroism (CD) spectra of the same three conformers (same color codes).
Vertical black lines indicate E(vacuum level) - E(HOMO) for the three conformations. A shift of +2.5 eV has been applied to absorption and CD spectra, and a broadening of 0.05 eV was used.} \label{fig:TrpTyr_spectra} \end{figure}

In Fig.~\ref{fig:TrpTyr_spectra}, panels (a) and (b), we report the Independent Particle B3LYP absorption (a) and circular dichroism (b) spectra of the three lowest energy conformers (left panel of the same figure) of c-TrpTyr. Although the absorption spectra of the three c-TrpTyr conformers share several common features, they can be distinguished from each other. For c-TrpTyr conformation 1 (magenta lines in Fig. \ref{fig:TrpTyr_spectra}) the only contribution to the first IP absorption peak stems from the HOMO - LUMO transition (7.46 eV); the main contribution to the second absorption peak comes from the (HOMO-1) - LUMO transition (7.98 eV), with a less intense contribution from the (HOMO-2) - LUMO one (8 eV); the (HOMO-2) - (LUMO+1) transition (at 8.13 eV), of comparable intensity to the (HOMO-2) - LUMO one, gives rise to the shoulder of the second absorption peak. The mentioned transitions are the only ones giving a non-negligible contribution to IP absorption out of the 11 transitions up to 8.20 eV. Regarding the spatial localization of the electronic states, as obtained by QE DFT B3LYP calculations in our previous work\cite{Molteni_PCCPdipep2021}, for c-TrpTyr conformation 1 most of the states near the energy gap, {\it i.e.} those involved in low energy IP transitions, are localized on a specific part of the cyclic dipeptide rather than spread all over the molecule, in contrast to the case of c-GlyPhe conformation 1. In particular, the two transitions giving the most intense contributions to absorption up to 8.20 eV are the HOMO - LUMO and the (HOMO-1) - LUMO ones, both involving pairs of states which are localized on the same part of the dipeptide, namely on the indole ring of tryptophan. On the other hand, transitions between pairs of states with different spatial localization yield contributions with lower intensity, such as the above mentioned (HOMO-2) - LUMO and (HOMO-2) - (LUMO+1), or even negligible intensity, such as the transitions from the HOMO state, localized on the Trp indole ring, to states ranging from (LUMO+1) to (LUMO+4), with negligible electronic density on that part of the molecule. As for the CD spectra, their mutual differences are so pronounced that no recognizable peaks common to the three conformers are present, except possibly for the features in the range from $\approx$ 9 eV to $\approx$ 10 eV. The stronger conformational sensitivity of CD spectra with respect to absorption ones is thus confirmed for the c-TrpTyr dipeptide as well. The first feature in the CD spectrum of c-TrpTyr conformation 1 (magenta curve, right panel in Fig.\ref{fig:TrpTyr_spectra}) is made of two very weak positive peaks, due to the HOMO - LUMO and to the HOMO - (LUMO+1) transitions, respectively. The (HOMO-1) - LUMO transition yields the most intense contribution to the second (in energy order) CD feature, also of positive sign and more intense than the previous one, lying at the same energies as the second absorption feature; this second CD feature, however, also has contributions from two transitions which give negligible contributions to absorption, namely the HOMO - (LUMO+4) and the HOMO - (LUMO+5) ones.
\begin{figure}[h] \includegraphics[width=\textwidth]{Fig_TrpTrp_geom_and_spectra.pdf} \caption{Left panel: geometry of the chosen low-energy conformers of c-TrpTrp. Panel (a): Independent Particle (IP) B3LYP absorption spectra of c-TrpTrp conformer 1 (red), conformer 2 (blue) and conformer 3 (green). Panel (b): IP circular dichroism (CD) spectra of the same three conformers (same color codes). Vertical black lines indicate E(vacuum level) - E(HOMO) for the three conformations. A shift of +2.5 eV has been applied to absorption and CD spectra, and a broadening of 0.05 eV was used.} \label{fig:TrpTrp_spectra} \end{figure}

For c-TrpTrp the differences among the absorption spectra of the three B3LYP lowest energy conformers (see Fig.~\ref{fig:TrpTrp_spectra}, panel (a)) are larger than those observed for the c-TrpTyr dipeptide. The conformational variability among low energy geometries of c-TrpTrp (left panel of Fig. \ref{fig:TrpTrp_spectra}), larger than that observed in c-TrpTyr (left panel of Fig.~\ref{fig:TrpTyr_spectra}), appears to be sufficient to yield significant differences in the absorption spectra. In particular, the absorption spectrum of conformer 3 (the B3LYP lowest energy conformer, green curve in panel (a) of Fig.\ref{fig:TrpTrp_spectra}) is quite different from the spectra of the other two low energy conformers in the relative intensity of the spectral features up to $\approx$ 8.3 eV. Remarkably, both the feature at $\approx$ 7.6 eV and the one at $\approx$ 8.2 eV consist here of a single intense peak originating from two almost energy-degenerate transitions, with negligible intensity for all the other transitions in that energy range; in the other two c-TrpTrp conformers, by contrast, each of these absorption features originates from several transitions of non-negligible intensity. Interestingly, conformer 3 is rather different from the other two low-energy conformers also in its geometry, suggesting a CH-$\pi$ interaction for conformer 3, rather than a $\pi$-$\pi$ one as for conformers 1 and 2, as discussed by some of us in our previous work on these cyclo-dipeptides\cite{Molteni_PCCPdipep2021}. If we analyze the first absorption feature of c-TrpTrp conformer 3 (green curve in Fig. \ref{fig:TrpTrp_spectra} panel (a)) in terms of transitions between Kohn-Sham states (short vertical black lines below the spectrum), we find that only the second and the third transition (in order of energy) yield a non-negligible intensity in the energy range up to $\approx$ 7.8 eV: they are the HOMO - (LUMO+1) and the (HOMO-1) - LUMO transitions (the two intense green vertical lines at $\approx$ 7.564 eV and at $\approx$ 7.567 eV, respectively, appearing as a single line in the Figure). Interestingly, the computed B3LYP wavefunctions\cite{Molteni_PCCPdipep2021} of the HOMO and LUMO+1 states are both localized on the same part of the c-TrpTrp dipeptide, {\it i.e.} on the indole ring of one (the same in both cases) of the two Trp amino acids. The wavefunctions of the HOMO-1 and LUMO states, on the other hand, are both localized on the indole ring of the other Trp. The other transitions in the same energy range, such as the first one in order of energy, HOMO - LUMO, the fourth one, (HOMO-1) - (LUMO+1), and the fifth one, HOMO - (LUMO+2), yield negligible intensities, and they correspond to pairs of electronic states localized in different regions of the molecule.
The (HOMO-2) - (LUMO+1) and (HOMO-3) - LUMO transitions, both at $\approx$ 8.09 eV, give rise to the second intense absorption peak of c-TrpTrp conformer 3: they are, once again, transitions between pairs of electronic states localized on the same part of the molecule, {\it i.e.} the indole ring of either of the two Trp amino acids. As for circular dichroism (Fig.~\ref{fig:TrpTrp_spectra} panel (b)), the spectra of conformers 1 and 2 of c-TrpTrp display a similar feature around 8 eV; the spectrum of conformer 3 (the lowest energy geometry) is overall less intense than the spectra of the other two investigated conformers. This latter CD spectrum (green curve in panel (b)) displays a first weak positive peak at 7.37 eV, due to the optically dark HOMO - LUMO transition. The following CD peak is more intense, of negative sign, and due to the HOMO - (LUMO+1) and (HOMO-1) - LUMO transitions, {\it i.e.} the ones involved in the first absorption peak. Then, another positive CD peak is due to the (HOMO-1) - (LUMO+1) transition, which is dark in the absorption spectrum. Finally, the (HOMO-2) - (LUMO+1) and (HOMO-3) - LUMO transitions yield a negligible contribution to the CD spectrum, in contrast to the absorption spectrum, where they yield the second (in energy order) intense peak observed.

\section{Conclusion}

In this work we have reported on our implementation of circular dichroism calculations at the Independent Particle level in the Yambo code and on its application to three cyclo-dipeptides, cyclo(Glycine-Phenylalanine), cyclo(Tryptophan-Tyrosine) and cyclo(Tryptophan-Tryptophan), with some considerations on the more pronounced conformational sensitivity of CD with respect to absorption, and on the interpretation of Independent Particle spectral features in terms of both the energy position of occupied and empty frontier orbitals and their spatial localization. The implementation of CD calculations beyond the IP level will be the subject of future works. In particular, an analysis of the Independent Particle absorption spectra of the three investigated cyclo-dipeptides, together with the spatial localization of the electronic states involved in the optical transitions (see the Results section), illustrates how similar densities of electronic energy levels can yield rather different absorption spectra, since transitions between pairs of orbitals with a large spatial overlap generally contribute with higher intensities than transitions between orbitals with a small overlap. Moreover, the shape of circular dichroism spectra is affected not only by the intensity but also by the positive or negative sign of the CD contributions corresponding to individual optical transitions, and not all absorption features will in general have a non-negligible CD counterpart; both facts increase the conformational variability of CD spectra with respect to absorption ones. This renders CD spectroscopy potentially useful for conformational analysis, while at the same time requiring careful consideration when using it for identifying enantiomers of flexible molecules.

\begin{acknowledgments} The present work was performed in the framework of the PRIN 20173B72NB research project ``Predicting and controlling the fate of biomolecules driven by extreme-ultraviolet radiation'', which involves a combined experimental and theoretical study of electron dynamics in biomolecules with attosecond resolution.
We moreover acknowledge financial support from IDEA-AISBL, Bruxelles, Belgium, and from the European Union project MaX ``Materials design at the eXascale'' H2020-EINFRA-2015-1 (Grant Agreement No. 824143). \end{acknowledgments} \nocite{*}
\section{INTRODUCTION} To analyze a complex system, one is interested in finding a model that explains as much as possible about the empirical data, with the fewest forms of interactions involved. Such a model should reproduce the statistics observed in the data, while making the smallest possible number of assumptions on the structure and parameters of the system. In other words, one needs the simplest, most generic model that generates statistics matching the empirical values - this implies \textit{maximising entropy} in the system, with constraints imposed by the empirical statistics \citep{Jaynes82}. In a seminal paper \citep{Schneidman06}, a framework equivalent to the Ising model in statistical physics was used to analyze the collective behavior of neurons. This approach was based on the assumption that pairwise interactions between neurons can account for the collective activity of the neural population. Indeed, it was shown for experimental data from the retina and cerebral cortex that this approach can predict higher order statistics, including the probability distribution of the whole population's spiking activity. Even though the empirical pairwise correlations were very weak, the model performed significantly better than a model reproducing only the firing rates without considering correlations. The Ising model was subsequently demonstrated to reproduce the data better than models with smaller entropy \citep{Ferrari17}, and has been used to analyse neural recordings in a variety of brain regions in different animals, ranging from the salamander retina \citep{Schneidman06,Tkacik14} to the cerebral cortex of mice \citep{Hamilton13}, rats \citep{Tavoni17}, and cats \citep{Marre09}. A complementary approach was recently introduced \citep{Okun15}, aiming at reproducing the correlation between single neuron activity and whole-population dynamics in the mouse and monkey visual cortex. \UF{This approach has then been generalized \citep{Gardella16}} to model the neurons' full profile of dependency on the population activity, and applied to the salamander retina. \UF{Later work \citep{Odonnell17} further investigated the properties of these models with neuron-to-population couplings.} Recent advances in experimental methods have allowed the recording of the spiking activity of up to a hundred neurons throughout hours of wakefulness and sleep, for instance using multi-electrode arrays, also known as Utah arrays. Inspection of neurons' spike waveforms and of their cross-correlograms with other neurons made the discrimination of excitatory (E) and inhibitory (I) neuron types possible \citep{Peyrache12, Dehghani16}. Such data-sets therefore provide a further step in the probing of the system, due to the unprecedented availability of the simultaneously recorded dynamics of E and I neurons. In the present paper, we apply Maximum Entropy \UF{(MaxEnt)} models to analyze human and monkey Utah array recordings. We investigate in what way such models may describe the two recorded (E, I) populations. \UF{As a proof of concept, we demonstrate how this approach can be applied to investigate excitatory and inhibitory neural activity across the brain states of wakefulness and deep sleep.} \begin{figure*}[t!] \begin{center} \includegraphics[clip=true,keepaspectratio,width=1.95\columnwidth]{fig1.pdf} \caption{\textbf{Multi-electrode (Utah) array recordings.} \textbf{A}) Utah array position in human temporal cortex (top) and monkey prefrontal cortex (bottom). Figure adapted from \citep{Telenczuk17}.
\textbf{B}) Raster plots of spikes recorded for the human (top) and monkey (bottom) in wakefulness (left) and SWS (right). \TA{Neurons are ordered to separate excitatory (E) from inhibitory (I) cells.} } \label{fig:raw_data} \end{center} \end{figure*} \begin{figure*}[t!] \includegraphics[clip=true,keepaspectratio,width=1.995\columnwidth]{fig2.pdf} \caption{ \textbf{Pairwise Ising model fails to predict SWS synchronous activity, especially for inhibitory neurons.} {\bf A}) Model schematic diagram. Model parameters are each neuron's bias toward firing, and symmetric functional couplings between each pair of neurons. {\bf B}) \TA{Empirical and predicted probability distributions of the population activity $K = \sum_i \sigma_i$ for the neuronal population. The Ising model more successfully captures the population statistics during wakefulness than SWS, especially for medium and large $K$ values. {\bf C}) Empirical and predicted population activities for E (lower curves, in green/dark grey) and I (upper curves, in red/light grey) neurons. }The model particularly fails at reproducing the statistics of the I population activity. These results are consistent with the presence of transients of high activity and strong synchrony between I neurons during SWS. \UF{Insets show an enlarged view of the region of low population activity, within which the system spends the vast majority of the time (on a linear scale)}. } \label{fig:Ising} \end{figure*} \begin{figure*}[t!] \includegraphics[clip=true,keepaspectratio,width=1.99\columnwidth]{fig3.pdf} \caption{ \textbf{Neural firing is tuned to the neural population's activity, particularly during SWS.} {\bf A}) Tuning curves of ten example neurons (see text and Appendix C) showing that neurons are tuned to \UF{the rest of the population's activity}. {\bf B}) Scatter-plot of the \TA{excitatory (green triangles) and inhibitory (red circles)} neuron sensitivity to the population activity (see Appendix C). Neurons are very consistently more sensitive during SWS ($p$-value $< 0.001$, Wilcoxon signed-rank test). } \label{fig:tuning_all} \end{figure*} \begin{figure*}[t!] \includegraphics[clip=true,keepaspectratio,width=1.99\columnwidth]{fig4.pdf} \caption{\textbf{Single-population model shows better performance during SWS than wakefulness.} {\bf A}) Model schematic diagram. {\bf B}) Pairwise covariances, empirical against predicted, for wakefulness (left) and SWS (right) states. Consistently with Fig.~\ref{fig:tuning_all}B, the success for SWS, most noticeable for I-I pairs \TA{(red circles)}, suggests these neurons are most responsive to whole-population activity. \UF{Inset: enlargement of the small-correlation region.} } \label{fig:one-pop} \end{figure*} \begin{figure*}[t!] \includegraphics[clip=true,keepaspectratio,width=1.99\columnwidth]{fig5.pdf} \caption{\textbf{I neurons are more specifically tuned to the I population during SWS.} {\bf A}) Example tuning curves of ten neurons of each type to each type of population during SWS. {\bf B}) Scatter-plot of neuron sensitivity to the E versus the I population, during SWS. I neurons \TA{(red circles)} are more tuned to the I population than to the E population ($p$-value $<10^{-3}$, Wilcoxon signed-rank test). E neurons \TA{(green triangles)}, instead, are weakly sensitive to both populations. } \label{fig:tuning_SWS_EI} \end{figure*} \begin{figure*}[ht!]
\includegraphics[clip=true,keepaspectratio,width=1.99\columnwidth]{fig6.pdf} \caption{ \textbf{Two-population model shows significant improvement in prediction for all types of neurons.} {\bf A}) Schematic diagram of the two-population model. Parameters $h^E_{iK^E}, h^I_{iK^I}$ are the couplings between each neuron $i$ and the E population activity $K^E = \sum_{i \in E} \sigma_i$ and the I population activity $K^I = \sum_{i \in I} \sigma_i$. {\bf B}) Pairwise covariances, empirical against predicted, for the two-population model, during SWS. The improvement compared to the whole-population model is confirmed by the Pearson correlations. {\bf C}) Deterioration of the prediction upon shuffling neuron types for the human and monkey data-sets. This effect demonstrates that knowledge of neuron types significantly contributes to improving the model prediction, as confirmed by the Mann-Whitney \textit{U} test p-values. \UF{Inset: enlargement of the small-correlation region.} } \label{fig:two-pop} \end{figure*} \section{RESULTS} We study 96-electrode recordings (Utah array) of spiking activity in the temporal cortex of a human patient and in the premotor cortex of a macaque monkey (see Appendix A), in wakefulness and slow-wave sleep (SWS), as shown in Fig. \ref{fig:raw_data}. Spike times of single neurons were discriminated and binned into time-bins of 50 ms (human data) and 25 ms (monkey data) to produce the population's spiking patterns (see Appendix A). From these patterns, we computed the empirical covariances between neurons, then used for fitting the models. \subsection{Pairwise Ising model} Pairwise correlations between I neurons have been found to exhibit invariance with distance \citep{Peyrache09}, even across brain regions \citep{LeVanQuyen16}. Here, we study what this intriguing observation implies for functional interactions between neurons, and what information pairwise correlations convey about such interactions. Therefore, we investigate whether pairwise covariances are sufficient to capture the main features of neural activity, for E and I neurons during wakefulness and SWS. To test this, we use a MaxEnt model that reproduces only and exactly the single neurons' spiking probabilities and the pairwise covariances observed in the data. As has been shown \citep{Schneidman06,Cocco11b}, this model takes the form of a disordered Ising model (see Fig.~\ref{fig:Ising}A): \begin{equation} P(\boldsymbol{\sigma}) = \frac{1}{Z}\exp\Big(\sum_i b_i\sigma_i + \sum_{i < j} J_{ij}\sigma_i \sigma_j \Big) \label{eq:psigma_ising} \end{equation} where $\sigma_i$ denotes the activity of neuron $i$ in a given time bin (1: spike, 0: silence), $b_{i}$ the bias (or threshold) of neuron $i$, controlling its firing rate, and $J_{ij}$ the (symmetric) coupling between neurons $i$ and $j$, controlling the pairwise covariance between the neurons. We use the algorithm introduced by \cite{Ferrari16} to infer the model's parameters $b_{i}$ and $J_{ij}$ on data from wakefulness and SWS separately. Then we test how well the model describes neural activity in these states. In particular, synchronous events involving many neurons may not be well accounted for by the pairwise nature of the Ising model interactions. To test this, we quantify the empirical probability of having $K$ neurons active in the same time window \cite{Tkacik14}: $K(\boldsymbol{\sigma}) \equiv \sum_i \sigma_i$. Fig.~\ref{fig:Ising}B compares the empirical probability distributions with the model predictions.
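In practice, once the parameters are inferred, the model prediction for $P(K)$ can be estimated by sampling eq.~\eqref{eq:psigma_ising}, for instance with a simple Metropolis scheme. The following minimal Python sketch is only an illustration of this step, not the inference algorithm of \cite{Ferrari16}; it assumes fitted arrays \texttt{b} and \texttt{J}.
\begin{verbatim}
import numpy as np

def sample_pk(b, J, n_sweeps=20000, burn_in=2000, seed=0):
    """Estimate P(K) of the pairwise Ising model by Metropolis sampling.

    b: biases, shape (N,); J: symmetric couplings, shape (N, N),
    zero diagonal, J[i, j] = J_ij of eq. (1).
    """
    rng = np.random.default_rng(seed)
    N = len(b)
    sigma = rng.integers(0, 2, size=N)      # random initial pattern
    counts = np.zeros(N + 1)
    for sweep in range(n_sweeps):
        for i in rng.permutation(N):
            # change in log-probability when flipping sigma_i
            delta = (1 - 2 * sigma[i]) * (b[i] + J[i] @ sigma)
            if np.log(rng.random()) < delta:
                sigma[i] = 1 - sigma[i]     # accept the flip
        if sweep >= burn_in:
            counts[sigma.sum()] += 1
    return counts / counts.sum()
\end{verbatim}
Averaging $\boldsymbol{\sigma}$ and $\sigma_i \sigma_j$ over the retained sweeps likewise yields the predicted firing rates and covariances.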
The Ising model is able to account for the empirical statistics during wakefulness, while it partially fails to capture the statistics during SWS. This is confirmed by the measures of the Kullback-Leibler divergence, $D_\text{KL} \equiv \sum_K P_\text{data}(K) \log [P_\text{data}(K)/P_\text{model}(K)]$, between empirical and model-predicted distributions (Fig.~\ref{fig:Ising}B). This difference can be ascribed to the presence of high activity transients, known to modulate neural activity during SWS \citep{Steriade93} and responsible for the larger covariances, as seen in \cite{Peyrache12}. In order to investigate the Ising model's failure during SWS, in Fig.~\ref{fig:Ising}C we compare the predictions for $P(K)$, separating the \UF{E and I} neuron populations. For periods of wakefulness, the model is able to reproduce both neuron types' behaviors. However, during SWS periods, the model largely fails at predicting the empirical statistics, in particular for the I population. This is confirmed by estimates of the Kullback-Leibler divergences (see Fig.~\ref{fig:Ising}). Fig.~\ref{figSupp:Ising} shows similar results for the analysis of the monkey recording. These results highlight the ability of the pairwise Ising model to reproduce $P(K)$ for all neurons, E and I, during wakefulness. Neural dynamics during wakefulness can therefore be described as predominantly driven by pairwise interactions. However, during SWS the model fails to reproduce $P(K)$ for both populations. Therefore pairwise couplings alone are not sufficient, and higher-order, perhaps even population-wide interactions may be needed to accurately depict neural activity during SWS. This is consistent with the observation that during SWS, neural firing is synchronous even across long distances, most notably for pairs of I neurons \citep{LeVanQuyen16}. So far, our findings from inferring a pairwise Ising model on our datasets have highlighted that pairwise interactions are sufficient to depict neural activity during wakefulness, but higher-order, population-wide interactions may appear during SWS. \subsection{Single-population model} In order to further characterize the neuronal activity during SWS, we consider the interaction between each neuron and the whole population: indeed, such approaches have proven successful in describing cortical neural activity \citep{Okun15}. We investigate whether neuron-to-population interactions exist in our data-set by studying the neurons' tuning curves to the population. Neuron-to-population tuning curves (see Appendix C) indicate how much a neuron's activity is determined by the total activity of the rest of the network \citep{Gardella16}. In Fig.~\ref{fig:tuning_all}A we present tuning curves for ten example E or I neurons during both wakefulness and SWS. These examples provide strong evidence for neuron-to-population tuning. In order to quantify population tuning, we estimate how much a neuron, either E or I, is \textit{sensitive} to the activity of the rest of the population, i.e. how much its activity fluctuates depending on the population activity (see Appendix C). As can be observed in Fig.~\ref{fig:tuning_all}B, and consistently with our previous results, we find that neurons are sensitive to the population especially during SWS. Similar results hold for the monkey recording as well (Fig.~\ref{figSupp:one-pop}A).
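As an illustration, such a tuning curve can be estimated directly from the binarized spike matrix as the conditional spiking probability of a neuron given the activity of the rest of the population. The sketch below is a minimal version of this estimate; the exact definitions used in this work are those of Appendix C.
\begin{verbatim}
import numpy as np

def tuning_curve(S, i):
    """P(sigma_i = 1 | K_rest) from a binary spike matrix S
    of shape (time_bins, N)."""
    k_rest = S.sum(axis=1) - S[:, i]   # activity of all neurons but i
    ks = np.arange(k_rest.max() + 1)
    curve = np.full(len(ks), np.nan)
    for k in ks:
        mask = (k_rest == k)
        if mask.any():                 # leave NaN for unobserved k
            curve[k] = S[mask, i].mean()
    return ks, curve
\end{verbatim}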
Since we have established that neuron-to-population interactions take place during SWS, we wish to determine to what extent they are sufficient to capture the characteristics of neural activity during sleep. To this purpose, we use a model \citep{Gardella16} for the dependencies between neuron firing, $\sigma_i=1$, and population activity, $k$: $P(\sigma_i = 1, k = K(\boldsymbol{\sigma}))$, where $K(\boldsymbol{\sigma})$ denotes the number of neurons spiking in any time bin. In this model (Fig.~\ref{fig:one-pop}A), the probability of neuron firing is described by the strength of its coupling to the population: \begin{equation} P(\boldsymbol{\sigma}) = \frac{1}{Z}\exp\Big(\sum_i \sum_k h_{ik}\delta_{k}^{K(\boldsymbol{\sigma})}\sigma_i\Big), \label{eq:psigma_delta} \end{equation} where $h_{ik}$ is the coupling between neuron $i$ and the whole population when $k$ neurons are active. $\delta_k^K$ is the Kronecker delta, taking value one when the number $K$ of active neurons is equal to a given value $k$ and zero otherwise. For example, a ``chorister'' neuron, that fires most often when many others are firing, would have $h_{ik}$ increasing with $k$. Conversely, a ``soloist'' neuron, that fires more frequently when others are silent, would have $h_{ik}$ decreasing with $k$ \citep{Okun15}. $Z$ is the normalisation constant, which can be computed by summing over all possible activity configurations, $Z = \sum_{\boldsymbol{\sigma}}\exp\Big(\sum_{i = 1}^{N} \sum_k h_{ik}\delta_{k}^{K(\boldsymbol{\sigma})}\sigma_i\Big)$. Importantly, $Z$ and its derivatives allow us to determine the statistics of the model, such as the mean firing rates and the pairwise covariances. As an analytical expression exists for $Z$, the statistics may be derived analytically from the values of the couplings, making this model solvable (see Appendix \UF{D}; a minimal numerical sketch is also given at the end of this subsection). To evaluate to what extent the model describes the data well, and hence captures empirical statistics it was not designed to reproduce, we study the predicted pairwise correlations as compared to the empirical ones. In Fig.~\ref{fig:one-pop}B, we compare the empirical pairwise covariances to their model predictions. Pearson correlations (the covariance between the empirical and predicted variables, normalized by the product of their standard deviations) confirm that the population statistics are better reproduced by the model during SWS than during wakefulness (Fig. \ref{fig:one-pop}). For the monkey recording, the effect is even larger, since the model entirely fails to account for wakefulness pairwise statistics (Fig.~\ref{figSupp:one-pop}B). While the effect may be amplified by the fact that the Pearson correlations are larger during SWS, this is the opposite of what was observed for the pairwise Ising model: a model reproducing only empirical neuron-to-population interactions seems adequate for depicting neural dynamics during SWS but not during wakefulness. In particular, the model best reproduces the empirical statistics during SWS for \UF{I-I} neuron pairs. By contrast, E-E pairwise covariances are the most poorly reproduced during wakefulness. This result implies that during SWS, I activity, and to a lesser extent E activity, is dominated by population-wide interactions rather than local pairwise mechanisms, such that a MaxEnt `population model' is mostly sufficient to capture the key dynamics. Nevertheless, this model still under-estimates the higher I-I pairwise covariances.
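To illustrate the solvability announced above: grouping the activity patterns by their population count $k$ reduces $Z$ to elementary symmetric polynomials of the weights $e^{h_{ik}}$, each of which can be evaluated exactly by a standard recursion. The following Python sketch uses our notation (\texttt{h[i, k]} stores $h_{ik}$); it is a minimal illustration, not the implementation of Appendix D.
\begin{verbatim}
import numpy as np

def elementary_symmetric(x, k):
    """e_k(x_1, ..., x_N): sum over all products of k distinct x_i."""
    e = np.zeros(k + 1)
    e[0] = 1.0
    for xi in x:                       # standard DP recursion
        e[1:] = e[1:] + xi * e[:-1]
    return e[k]

def partition_function(h):
    """Z of the single-population model; h has shape (N, N + 1)."""
    N = h.shape[0]
    # patterns with exactly k active neurons contribute
    # e_k evaluated at the weights exp(h[:, k])
    return sum(elementary_symmetric(np.exp(h[:, k]), k)
               for k in range(N + 1))
\end{verbatim}
The recursion costs $O(Nk)$ per value of $k$, so $Z$ is obtained in at most $O(N^3)$ operations, which remains cheap for populations of $\sim$100 neurons.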
\subsection{Two-population model} Since I neurons are strongly synchronised even across long distances \citep{Peyrache12, Dehghani16}, we hypothesise that they could be tuned to the I population only, rather than to the whole population. Indeed, as shown in Fig.~\ref{fig:tuning_SWS_EI}A, examination of the tuning curves of each neuron to the E and the I populations separately revealed homogeneous and strong tuning of I neurons to the I population, compared to the tuning of I neurons to the E population or to the whole population (Fig.~\ref{fig:tuning_all}). In order to quantify this effect, we estimated the neuron sensitivity to both populations separately (see Appendix C). The comparison in Fig.~\ref{fig:tuning_SWS_EI}B suggests I neurons are significantly more sensitive to the activity of the I population than to that of the E population. The effect is even larger for the monkey recordings (Fig.~\ref{figSupp:two-pop}A). To study tuning to the two populations separately, we now refine the previous model to take into account the couplings between each neuron and the E population and between each neuron and the I population, separately. Because of the results of Fig.~\ref{fig:tuning_SWS_EI}B, we expect this model to perform better at reproducing the main features of the data during SWS. We want the model to only and exactly reproduce the empirical $P(\sigma_i = 1, k^E = K^E(\boldsymbol{\sigma}))$ and $P(\sigma_i = 1, k^I = K^I(\boldsymbol{\sigma}) )$ for all neurons $i$ and all values empirically taken by $K^E$ and $K^I$. The probability of obtaining any firing pattern $\boldsymbol{\sigma}$ is given by (see Fig.~\ref{fig:two-pop}A) \begin{equation} P(\boldsymbol{\sigma}) = \frac{1}{Z}\exp\Big(\sum_i \Big(\sum_{k^E} h_{ik^E}^E\delta_{k^E}^{K^E(\boldsymbol{\sigma})} + \sum_{k^I} h_{ik^I}^I\delta_{k^I}^{K^I(\boldsymbol{\sigma})}\Big) \sigma_i\Big), \label{eq:psigma_EI_delta} \end{equation} where $K^E(\boldsymbol{\sigma})$ is the number of E neurons spiking and $K^I(\boldsymbol{\sigma})$ the number of I neurons spiking in any time bin, $h^E_{ik^E}$ is the coupling between neuron $i$ and the E population when $k^E$ E neurons are active, and similarly $h^I_{ik^I}$ for the I population. $Z$ is the normalisation, and $\delta_{k^E}^{K^E(\boldsymbol{\sigma})}$ and $\delta_{k^I}^{K^I(\boldsymbol{\sigma})}$ are Kronecker deltas as before. It can be shown (see Appendix D), following a reasoning analogous to that employed in \cite{Gardella16}, that this model is also analytically solvable, in that the normalisation function $Z$ may be derived analytically (a sketch is given below). Using the expression for $Z$, as described in Appendix \UF{D}, allows us to analytically predict the model statistics for any given set of couplings. As for the previous models, we want to assess whether this model is sufficient to describe the data, that is, whether it can accurately predict a data statistic it was not specifically designed to reproduce. To this purpose we test pairwise covariances. We also aim to evaluate how the prediction performance compares with that of the single-population model on the whole population (Fig. \ref{fig:one-pop}) described previously. For both the human (Fig.~\ref{fig:two-pop}B) and monkey (Fig.~\ref{figSupp:two-pop}B) recordings, during SWS the two-population model provides better predictions for pairwise covariances than the single-population model. Furthermore, large I-I covariances are no longer systematically under-estimated.
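The bookkeeping used for the single-population $Z$ extends to eq.~\eqref{eq:psigma_EI_delta}: grouping patterns by the pair $(k^E, k^I)$ factorizes $Z$ into products of elementary symmetric polynomials over the E and I sub-populations. The sketch below is again a minimal illustration (Python; it reuses \texttt{elementary\_symmetric} from the previous sketch, and the four arrays store the couplings of E and I neurons to the two population counts).
\begin{verbatim}
import numpy as np
# elementary_symmetric(x, k) as defined in the previous sketch

def partition_function_2pop(hE_E, hI_E, hE_I, hI_I):
    """Z of the two-population model.

    hE_E[i, kE]: coupling of E neuron i to the E count kE;
    hI_E[i, kI]: coupling of E neuron i to the I count kI;
    hE_I, hI_I: the same for I neurons.
    """
    NE, NI = hE_E.shape[0], hE_I.shape[0]
    Z = 0.0
    for kE in range(NE + 1):
        for kI in range(NI + 1):
            wE = np.exp(hE_E[:, kE] + hI_E[:, kI])  # E-neuron weights
            wI = np.exp(hE_I[:, kE] + hI_I[:, kI])  # I-neuron weights
            Z += (elementary_symmetric(wE, kE)
                  * elementary_symmetric(wI, kI))
    return Z
\end{verbatim}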
To verify that the improvement in model performance was not solely due to this model possessing more parameters, we repeat the inference on the same data with the neuron types (E or I) shuffled, and find that the prediction deteriorates significantly, as highlighted in Fig.~\ref{fig:two-pop}C. \UF{A two-fold cross-validation test provided similar results for both data-sets, as the mean square error on the pairwise covariance prediction was smaller for the two-population model in the totality of trials (see Appendix D and Fig. \ref{figSupp:crossval}).} \TA{Additionally, we note that the one-population model, Eq.~\ref{eq:psigma_delta}, inferred separately on the sub-populations of I neurons and E neurons, performs similarly to the two-population model. This further supports that the knowledge of neuronal types is the key feature behind the two-population model's improvement (see Fig.~\ref{figSupp:sub-pop} for more details).} These \UF{analyses} demonstrate that taking into consideration each neuron's couplings with the E population and the I population separately is more relevant than taking into account its couplings with any sub-populations of the same size. \UF{We also note that while the deterioration due to shuffling is significant for both data-sets, it is more pronounced for the monkey premotor cortex. This is consistent with the fact that E neurons, i.e. most neurons, are also \TA{very significantly} preferentially tuned to the I population for the monkey (Fig.\TA{~\ref{figSupp:two-pop}A}) but not for the human (Fig.\TA{~\ref{fig:tuning_SWS_EI}B}). Separating the two populations in the model therefore provides a much larger improvement on the prediction of E cells' behaviour in the monkey data. } Remarkably, with the two-population model, E-I correlations are also reproduced with increased accuracy as compared to the single-population model. This improvement suggests that the two-population model successfully captures some of the cross-type interactions between the E and I populations, a non-trivial result since the two populations are not directly coupled to one another by design of the model. \section{DISCUSSION} In this paper, we tested MaxEnt models on human and monkey multi-electrode array recordings where E and I populations were discriminated, during the states of wakefulness and SWS. In order to investigate the properties of the neuronal dynamics, models were designed to reproduce one empirical feature at a time, and tested against the remaining statistics. The pairwise Ising model's performance highlighted pairwise interactions as dominant in cortical activity during wakefulness, but insufficient to describe neural activity during SWS. We identify I neurons as responsible for breaking pairwise sufficiency during SWS, suggesting instead that I neurons' interactions are long-distance and population-wide, which explains recent empirical observations \citep{Peyrache12,LeVanQuyen16}. We found that models based on neuron-to-population interactions, as introduced by \cite{Okun15}, are only relevant to SWS, failing to replicate the empirical pairwise correlations in the monkey premotor cortex during wakefulness (Fig.~\ref{figSupp:one-pop}). Even for SWS, I neurons' strong pairwise correlations were consistently underestimated. Ultimately, the two-population model provides a good trade-off for modelling neural interactions in SWS, and in particular the strongly correlated behaviour of I neurons.
Discrimination between E and I neuron types greatly improves the capacity of a model to capture empirical neural dynamics. \textbf{Pairwise sufficiency.} Pairwise Ising models (Fig.~\ref{fig:Ising}A) had previously been shown to accurately predict statistical features of neural interaction in many data-sets \citep{Schneidman06,Cocco09,Hamilton13,Tavoni17}. The surprisingly good performance of these models has raised hypotheses on the existence of some unknown mechanisms behind their success \cite{Mastromatteo11}. In order to understand this so-called `pairwise sufficiency', a number of theoretical investigations \cite{Roudi09,Obuchi15a,Obuchi15b,Merchan16} and an empirical benchmark \cite{Ferrari17} have been conducted. Model limitations have also been subject to some characterization. For instance, the breakdown of model performance for very large system sizes has been evidenced on experimental data \cite{Tkacik14} and studied theoretically \cite{Rostami17}. Ising model performance has also been shown to be sensitive to the time bin size, and to its relation to the characteristic time scales of the studied system \cite{Capone15}. Here, we observed that for the same neural system, activity can be well reproduced in one brain state (wakefulness) and not in the other (SWS) (see Fig.~\ref{fig:Ising}B). This result reinforces the idea that pairwise sufficiency depends on the system's actual statistical properties, and is not a more general consequence of the MaxEnt principle. \textbf{\UF{Neuron}-to-population couplings.} Although our study is the first to propose couplings between neurons and a single-type population, an alternative approach had previously been used to highlight the neurons' tuning to the population activity \cite{Okun15}. In that work, neurons were classified as `soloists' or `choristers', depending on whether they spiked more frequently when the rest of the population was silent or active, respectively. \UF{Here}, we have refined this picture by pointing out tuning \UF{to a} single-type population. Specifically, we have shown that I neurons are more sensitive to the I population activity than to the E one (Fig.~\ref{fig:tuning_SWS_EI}B). \UF{This result contributes to a literature that has highlighted important synchrony between I neurons, including during sleep \cite{Peyrache12, LeVanQuyen16}. Our approach provides a complementary, quantitative view of this phenomenon in terms of neural interactions with the population. } \UF{\textbf{Differences between data-sets and generality of results.} One should also note the different characteristics of the two data-sets we analyze. First, as seen in Fig.\ref{fig:raw_data}, neurons are less active in the human data-set than in the monkey one. This difference may be due to recording in a different brain area \cite{Rolls90}, layer \cite{Sakata09}, and species \cite{Wallis12}. Second, neural correlations in the temporal and premotor cortex code for very different functions: long-term memory encoding in the temporal cortex \cite{Quiroga08}, and motion planning in the premotor cortex \cite{Churchland10}. While the differences above may explain some notable differences between the results, namely the E neurons' tuning to the I population in SWS in the monkey data, it is important to highlight that all findings are consistent across both data-sets. This highlights that the framework we introduced is robust and may allow for further investigation of E and I dynamics and their interplay in a variety of empirical recordings.
Furthermore, this suggests the interactions uncovered here are not species- or brain region-specific, but rather generic features of neural activity in the studied brain states. } \UF{ \textbf{Competition between internal network dynamics and common external inputs.} We note that the mechanisms underlying the neuronal interactions we observe can occur at multiple scales. A different network connectivity for I neurons \cite{Hofer11}, such as reinforced structural couplings over long distances, could account for the population-wide interactions winning over pairwise interactions for I cells. Additionally, larger or more synchronous common inputs to the I population, across the scale of brain regions \cite{LeVanQuyen16, Olcese16}, may also be a plausible mechanism behind the observed interactions. In conclusion, MaxEnt models can provide quantitative constraints to biophysical models of excitatory and inhibitory activity. In turn, these biophysical models could serve the exploration of possible mechanisms behind the observed neuron-to-neuron and neuron-to-population interactions.} \section*{Acknowledgments} We thank C. Capone, M. Chalk, M. di Volo, C. Gardella, J.S. Goldman, A. Peyrache, G. Tkacik and N. Tort-Colet for useful discussions. Research funded by the European Community (Human Brain Project, H2020-720270), ANR TRAJECTORY, ANR OPTIMA, the French State program Investissements d'Avenir managed by the Agence Nationale de la Recherche [LIFESENSES: ANR-10-LABX-65], NIH grant U01NS09050 and an AVIESAN-UNADEV grant.
\section{Introduction} \label{sec:intro} This report describes the Brno University of Technology (BUT) team submissions for the acoustic scene classification (ASC) challenge of DCASE 2019. We proposed three different deep neural network topologies for this task. The first one is a VGG-like~\cite{simonyan2014very} two-dimensional CNN network for processing audio segments. The second network is again a two-dimensional CNN, called Light-CNN~\cite{wu2018light}. This network uses several Max-Feature-Map activations for reducing the number of channels after convolutional layers. Light-CNN was successfully used for the spoofing attack detection challenge~\cite{Lavrentyeva2017}. We also used a fusion of this network with a VGG network for the last spoofing challenge~\cite{Zeinali2019spoofing}. The last network topology uses a one-dimensional CNN along the time axis. This topology is mainly used to extract fixed-length embeddings of (possibly variable length) acoustic segments. This architecture has previously been found useful for other speech processing tasks such as speaker recognition~\cite{snyder2018x, zeinali2019improve}, where the extracted embeddings were called x-vectors. In the previous DCASE challenge (i.e. DCASE 2018) we used this network both for classification (i.e. like the other two networks) and for extracting x-vector embeddings~\cite{zeinali2018convolutional}, while in this challenge we only use it for classification. All proposed networks were trained with 256-dimensional log Mel-spectrogram features. In all networks we use a self-attention mechanism~\cite{zhu2018self,okabe2018attentive,chowdhury2017attention} for pooling instead of simple average pooling. Our submissions are based on fusions of different networks trained on the task development data. The current ASC challenge has three sub-tasks: in task1a, participants are allowed to use only the fixed development data for training. Task1b is similar to task1a except that the test files come from different mobile channels. Finally, task1c is an open set classification challenge where a test recording may come from a different environment than the 10 predefined target classes, in which case it should be classified as ``unknown''. We have participated in task1a only. \section{Dataset} In this challenge, an enhanced version of the ASC dataset was used~\cite{mesaros2018multi}. The dataset consists of recordings from 10 scene classes and was collected in 12 large European cities and in different environments in each city. The development set of the dataset for task1a consists of 1440 segments for each acoustic scene, in total 40 hours of audio. This part only contains the recordings from 10 cities. The evaluation set consists of 7200 audio segments and was collected in different locations of these 10 cities as well as in two unseen cities, to test the generalization properties of the systems. Each segment has an exactly 10-second duration; this is achieved by splitting longer audio recordings (between 5-6 minutes) from each environment. The dataset includes a predefined validation fold. Each team can also create its own folds, so we have created a 4-fold cross-validation setup for system development. The audio segments are two-channel stereo files, recorded at a 48\:kHz sampling rate. \section{Data Processing} \subsection{Features} The log Mel-scale spectrogram was used as a feature in this challenge. For extracting the features, first, we converted the audio to a mono channel and removed the amplitude bias by subtracting the audio segment's mean from the signal.
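Gathering this step together with the short-time Fourier transform and Mel filtering steps detailed in the rest of this subsection, the full feature extraction can be sketched with the librosa toolbox as follows; the file path is hypothetical and this is only a minimal approximation of our actual extraction script.
\begin{verbatim}
import numpy as np
import librosa

# load as mono and downsample to 22050 Hz
y, sr = librosa.load("scene.wav", sr=22050, mono=True)
y = y - y.mean()                      # remove amplitude bias

# 2048-sample Hamming-windowed frames, 430-sample frame shift,
# 256 Mel bands, then log of the band energies
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=2048, hop_length=430,
    window="hamming", n_mels=256)
log_mel = np.log(mel + 1e-10)         # shape: (256, ~512 frames)
\end{verbatim}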
Then the short-time Fourier transform is computed on 2048-sample Hamming-windowed frames with a frame shift of 430 samples, on signals downsampled to 22050\,Hz. Next, the power spectrum is transformed into 256 Mel-scale band energies and, finally, the log of these energies is taken. The features are extracted using the librosa toolbox~\cite{mcfee2015librosa}. \subsection{Example generation for network training} The procedure for generating training examples can greatly affect the performance of neural networks in audio processing. Therefore, we experimented with several different strategies to find the best example generation method. We randomly select a subpart of each audio segment as an example at training time. Because all the proposed networks contain an attention pooling layer over the time axis, the input can have a different size during training and test time. Initially, we used four-second segments, but after several experiments we found that networks trained on smaller segments performed better than those trained on large segments, mainly because they overfit less to the training data. The size of the examples used to train the submitted systems is only 128 frames out of the 512 frames extracted from each whole 10-second segment. \section{CNN topologies} We have used three different CNN topologies for this challenge. The first one is a VGG-like two-dimensional CNN. The second topology is an enhanced version of Light-CNN (LCNN) which uses the Max-Feature-Map (MFM) as an additional non-linearity. MFM reduces the number of kernels to half. As a result, the final network has fewer parameters, and this is the main reason why this network is called Light-CNN. The last network is a one-dimensional CNN topology known as the x-vector network, which is the state-of-the-art method for speaker recognition~\cite{snyder2018x}. In all networks, we have used a self-attention mechanism in the pooling layer instead of common average pooling. Self-attention can be considered as a weighted average (and weighted standard deviation). All networks are described in more detail in the following sub-sections. \subsection{VGG-like network} The VGG network comprises several convolutional and pooling layers followed by a statistics pooling and several dense layers which perform classification. Table~\ref{tab:vgg} provides a detailed description of the proposed VGG architecture. There are 6 convolutional blocks in the model, each containing 2 convolutional layers and one max-pooling layer. Each max-pooling layer reduces the size of the frequency axis to half, while only one of them reduces the temporal resolution. After the convolutional layers, there is an attention pooling layer. This layer operates only on the time axis and calculates the weighted mean over time. This layer will be explained in more detail in the following section. After this layer, there is a flatten layer which simply concatenates the 4 remaining frequency dimensions. Finally, there are 3 dense layers which perform the classification task. \begin{table}[!t] \renewcommand{\arraystretch}{1.1} \centering \caption{\label{tab:vgg}The proposed VGG architecture. Conv2D: two-dimensional convolutional layer. AttentionPooling: a layer which calculates the weighted mean over the time axis using the attention mechanism and reduces the shape (removes the time axis). Dense: fully connected dense layer. N in the third column indicates the segment length, which is 128 for the training phase and 512 for the evaluation phase.
The attention layer here only uses the mean statistics.} \vspace{2mm} \setlength\tabcolsep{4pt}
\begin{tabular}{l c c r}
\toprule \toprule
\textbf{Layer name} & \textbf{Filter} & \textbf{Output} & \textbf{\#Params} \\ \midrule
Input & -- & 256 $\times$ N $\times$ 1 & -- \\
Conv2D-1-1 & 3 $\times$ 3 & 256 $\times$ N $\times$ 32 & 608 \\
Conv2D-1-2 & 3 $\times$ 3 & 256 $\times$ N $\times$ 32 & 9.2K \\
MaxPooling-1 & 2 $\times$ 1 & 128 $\times$ N $\times$ 32 & -- \\ \midrule
Conv2D-2-1 & 3 $\times$ 3 & 128 $\times$ N $\times$ 64 & 18.5K \\
Conv2D-2-2 & 3 $\times$ 3 & 128 $\times$ N $\times$ 64 & 37K \\
MaxPooling-2 & 2 $\times$ 1 & 64 $\times$ N $\times$ 64 & -- \\ \midrule
Conv2D-3-1 & 3 $\times$ 3 & 64 $\times$ N $\times$ 128 & 74K \\
Conv2D-3-2 & 3 $\times$ 3 & 64 $\times$ N $\times$ 128 & 148K \\
MaxPooling-3 & 2 $\times$ 1 & 32 $\times$ N $\times$ 128 & -- \\ \midrule
Conv2D-4-1 & 3 $\times$ 3 & 32 $\times$ N $\times$ 256 & 295K \\
Conv2D-4-2 & 3 $\times$ 3 & 32 $\times$ N $\times$ 256 & 590K \\
MaxPooling-4 & 2 $\times$ 1 & 16 $\times$ N $\times$ 256 & -- \\ \midrule
Conv2D-5-1 & 3 $\times$ 3 & 16 $\times$ N $\times$ 256 & 590K \\
Conv2D-5-2 & 3 $\times$ 3 & 16 $\times$ N $\times$ 256 & 590K \\
MaxPooling-5 & 2 $\times$ 1 & 8 $\times$ N $\times$ 256 & -- \\ \midrule
Conv2D-6-1 & 3 $\times$ 3 & 8 $\times$ N $\times$ 256 & 590K \\
Conv2D-6-2 & 3 $\times$ 3 & 8 $\times$ N $\times$ 256 & 590K \\
MaxPooling-6 & 2 $\times$ 1 & 4 $\times$ N $\times$ 256 & -- \\ \midrule
AttentionPooling & -- & 4 $\times$ 256 & 66K \\
Flatten & -- & 1024 & -- \\ \midrule
Dense1 & -- & 256 & 262K \\
Dense2 & -- & 256 & 66K \\
Dense (softmax) & -- & 10 & 2570 \\ \midrule
Total & -- & -- & 3950K \\
\bottomrule \bottomrule
\end{tabular} \vspace{-2mm} \end{table}
\subsection{Light CNN (LCNN)} Table~\ref{tab:lcnn} shows the LCNN topology used for this challenge. This network is a combination of convolutional and max-pooling layers and uses the Max-Feature-Map (MFM) as an additional non-linearity. MFM is a layer which simply reduces the number of output channels to half by taking the maximum of two consecutive channels (or of any two channels, e.g. channels $i$ and $\frac{N}{2} + i$). The rest of this network (the statistics and classification parts) is identical to the proposed VGG network. \begin{table}[!t] \centering \caption{\label{tab:lcnn} The proposed LCNN architecture. MFM: Max-Feature-Map activation. N in the third column indicates the segment length, which is 128 for the training phase and 512 for the evaluation phase.
The attention layer here only uses the mean statistics.} \vspace{2mm} \setlength\tabcolsep{4pt}
\begin{tabular}{l c l r}
\toprule \toprule
\textbf{Layer name} & \textbf{Filter} & \textbf{Output} & \textbf{\#Params} \\ \midrule
Input & -- & 256 $\times$ N $\times$ 1 & -- \\
Conv2D-1-1 & 5 $\times$ 5 & 256 $\times$ N $\times$ 32 & 832 \\
MFM-1-1 & -- & 256 $\times$ N $\times$ 16 & -- \\
MaxPooling-1 & 2 $\times$ 1 & 128 $\times$ N $\times$ 16 & -- \\ \midrule
Conv2D-2-1 & 1 $\times$ 1 & 128 $\times$ N $\times$ 32 & 544 \\
MFM-2-1 & -- & 128 $\times$ N $\times$ 16 & -- \\
BatchNorm-1 & -- & 128 $\times$ N $\times$ 16 & 512 \\
Conv2D-2-2 & 3 $\times$ 3 & 128 $\times$ N $\times$ 64 & 10K \\
MFM-2-2 & -- & 128 $\times$ N $\times$ 32 & -- \\
MaxPooling-2 & 2 $\times$ 1 & 64 $\times$ N $\times$ 32 & -- \\ \midrule
Conv2D-3-1 & 1 $\times$ 1 & 64 $\times$ N $\times$ 64 & 2K \\
MFM-3-1 & -- & 64 $\times$ N $\times$ 32 & -- \\
BatchNorm-2 & -- & 64 $\times$ N $\times$ 32 & 256 \\
Conv2D-3-2 & 3 $\times$ 3 & 64 $\times$ N $\times$ 128 & 28K \\
MFM-3-2 & -- & 64 $\times$ N $\times$ 64 & -- \\
MaxPooling-3 & 2 $\times$ 1 & 32 $\times$ N $\times$ 64 & -- \\ \midrule
Conv2D-4-1 & 1 $\times$ 1 & 32 $\times$ N $\times$ 96 & 5K \\
MFM-4-1 & -- & 32 $\times$ N $\times$ 48 & -- \\
BatchNorm-3 & -- & 32 $\times$ N $\times$ 32 & 128 \\
Conv2D-4-2 & 3 $\times$ 3 & 32 $\times$ N $\times$ 128 & 55K \\
MFM-4-2 & -- & 32 $\times$ N $\times$ 64 & -- \\
MaxPooling-4 & 2 $\times$ 1 & 16 $\times$ N $\times$ 64 & -- \\ \midrule
Conv2D-5-1 & 1 $\times$ 1 & 16 $\times$ N $\times$ 128 & 8K \\
MFM-5-1 & -- & 16 $\times$ N $\times$ 64 & -- \\
BatchNorm-4 & -- & 16 $\times$ N $\times$ 64 & 64 \\
Conv2D-5-2 & 3 $\times$ 3 & 16 $\times$ N $\times$ 160 & 92K \\
MFM-5-2 & -- & 16 $\times$ N $\times$ 80 & -- \\
MaxPooling-5 & 2 $\times$ 1 & 8 $\times$ N $\times$ 80 & -- \\ \midrule
Conv2D-6-1 & 1 $\times$ 1 & 8 $\times$ N $\times$ 192 & 13K \\
MFM-6-1 & -- & 8 $\times$ N $\times$ 96 & -- \\
BatchNorm-5 & -- & 8 $\times$ N $\times$ 64 & 32 \\
Conv2D-6-2 & 3 $\times$ 3 & 8 $\times$ N $\times$ 192 & 138K \\
MFM-6-2 & -- & 8 $\times$ N $\times$ 96 & -- \\
MaxPooling-6 & 2 $\times$ 1 & 4 $\times$ N $\times$ 96 & -- \\ \midrule
AttentionPooling & -- & 4 $\times$ 96 & 9K \\
Flatten & -- & 384 & -- \\ \midrule
Dense1 & -- & 256 & 99K \\
Dense2 & -- & 256 & 66K \\
Dense (softmax) & -- & 10 & 2570 \\ \midrule
Total & -- & -- & 531K \\
\bottomrule \bottomrule
\end{tabular} \vspace{-2mm} \end{table}
\subsection{One-dimensional CNN for x-vector extraction} In contrast to the other two proposed networks, the x-vector topology uses only one-dimensional convolutions along the time axis. Table~\ref{tbl.xvector_topo} shows the network architecture. The network has three parts. The first part operates at the frame-by-frame level and outputs a sequence of activation vectors (one for each frame). The second part compresses the frame-by-frame information into a fixed-length vector of statistics describing the whole acoustic segment. More precisely, the weighted mean and weighted standard deviation of the input activation vectors are calculated over frames using the attention mechanism (note that in the original x-vector paper a simple mean and standard deviation were used~\cite{snyder2018x}). The last part of the network consists of two Dense Leaky-ReLU layers followed by a Dense softmax layer, as in the two previous topologies.
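As an illustration of the attention pooling just described, which is also used (with mean-only statistics) in the other two topologies, the following numpy sketch computes the attention-weighted mean and standard deviation; the single-vector parametrization of the attention weights is a simplifying assumption, and the trained networks may parametrize the weights differently.
\begin{verbatim}
import numpy as np

def attention_pool(H, w, use_std=True):
    # H: (T, D) frame activations; w: (D,) attention parameter vector
    scores = H @ w                         # one scalar score per frame
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                   # softmax over the time axis
    mean = alpha @ H                       # weighted mean, shape (D,)
    if not use_std:
        return mean                        # VGG and LCNN: mean only
    var = alpha @ (H - mean) ** 2          # weighted variance per dim.
    std = np.sqrt(np.maximum(var, 1e-10))  # weighted standard deviation
    return np.concatenate([mean, std])     # x-vector: mean and std
\end{verbatim}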
This differs from our system for the DCASE 2018 challenge~\cite{zeinali2018convolutional}, where we used the x-vector network in two ways: either the softmax output was used directly for classification, or the x-vectors extracted at the output of the first affine transform after the pooling layer were used as input to another classifier. Here, we use this network for classification only, exactly like the other two networks. \begin{table}[!t] \renewcommand{\arraystretch}{1.1} \caption{\label{tbl.xvector_topo} 1-dimensional CNN topology for x-vector extraction. The second column shows the relative indices with respect to the current time step. N in the third column indicates the segment length, which is 128 for the training phase and 512 for the evaluation phase. The attention layer here uses both mean and standard deviation statistics.} \vspace{2mm} \centering{ \setlength\tabcolsep{6pt}
\begin{tabular}{l c c r}
\toprule \toprule
\textbf{Layer name} & \textbf{Filters Index} & \textbf{Output} & \textbf{\#Params} \\ \midrule
Input & -- & N $\times$ 256 & -- \\
Conv1D-1 & (-2,-1,0,1,2) & N $\times$ 256 & 328K \\
BatchNorm-1 & -- & N $\times$ 256 & 1K \\
Dropout-1 & -- & N $\times$ 256 & -- \\ \midrule
Conv1D-2 & (-2,0,2) & N $\times$ 256 & 197K \\
BatchNorm-2 & -- & N $\times$ 256 & 1K \\
Dropout-2 & -- & N $\times$ 256 & -- \\ \midrule
Conv1D-3 & (-3,0,3) & N $\times$ 256 & 197K \\
BatchNorm-3 & -- & N $\times$ 256 & 1K \\
Dropout-3 & -- & N $\times$ 256 & -- \\ \midrule
Conv1D-4 & (-4,0,4) & N $\times$ 256 & 197K \\
BatchNorm-4 & -- & N $\times$ 256 & 1K \\
Dropout-4 & -- & N $\times$ 256 & -- \\ \midrule
Conv1D-5 & (0) & N $\times$ 256 & 66K \\
BatchNorm-5 & -- & N $\times$ 256 & 1K \\
Dropout-5 & -- & N $\times$ 256 & -- \\ \midrule
Conv1D-6 & (0) & N $\times$ 768 & 197K \\
BatchNorm-6 & -- & N $\times$ 768 & 1K \\
Dropout-6 & -- & N $\times$ 768 & -- \\ \midrule
AttentionPooling & -- & 1536 & 590K \\ \midrule
Dense1 & -- & 256 & 394K \\
BatchNorm-7 & -- & 256 & 1K \\
Dropout-7 & -- & 256 & -- \\ \midrule
Dense2 & -- & 256 & 66K \\
BatchNorm-8 & -- & 256 & 1K \\
Dense3 (softmax) & -- & 10 & 2560 \\ \midrule
Total & -- & -- & 2240K \\
\bottomrule \bottomrule
\end{tabular} \vspace{-5mm} } \end{table} \subsection{Attention Mechanism} The conventional mean pooling layer assigns the same weight to each input frame (along the dimensions over which the mean is calculated). Each audio signal in the acoustic scene classification task contains several audio events that occur in only a few frames, in addition to events, such as background noise, that are present in all frames. So, some frames contain more information about the scene of interest (i.e. the class) than others, and we should pay more attention to them. This is not possible with a conventional mean pooling layer. We have proposed to use the attention mechanism, which has already been used successfully in speaker verification~\cite{zhu2018self,okabe2018attentive,chowdhury2017attention, zeinali2019improve}. In this method, the last layer before the pooling layer is used to calculate the frame weights. Then, the weighted mean and weighted standard deviation of the input channels are calculated and used as the output of the layer. For the x-vector topology, using both the mean and the standard deviation performs better in our setup, while for the other two proposed topologies using only the mean is slightly better. \section{Systems and Fusion} In this challenge, we fused the outputs of different networks to obtain the final results.
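Both fusion schemes detailed below, averaging of calibrated scores and majority voting, can be sketched as follows; this is a simplified illustration in which the calibration is assumed to have been done already (in our systems it is trained with the FoCal toolbox).
\begin{verbatim}
import numpy as np

def average_fusion(scores):
    # scores: (n_networks, n_segments, 10) calibrated network outputs
    return np.mean(scores, axis=0)        # averaged class scores

def majority_vote(scores):
    votes = np.argmax(scores, axis=2)     # (n_networks, n_segments)
    avg = average_fusion(scores)
    labels = []
    for seg in range(scores.shape[1]):
        counts = np.bincount(votes[:, seg], minlength=10)
        tied = np.flatnonzero(counts == counts.max())
        # ties are resolved by the higher averaged (trained-fusion) score
        labels.append(tied[np.argmax(avg[seg, tied])])
    return np.array(labels)
\end{verbatim}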
First, we created a 4-fold cross-validation setup using the whole development data, in addition to the officially provided setup (the official validation set is used only for reporting results here; the final systems use only the generated folds). Initial experiments with the provided setup showed that there are some easy and some difficult locations in the development data. In order to be able to draw valid conclusions, it is better to evaluate the networks on the whole development data, so, following the strategy proposed in \cite{Dorfer2018}, we made this 4-fold validation setup. For each fold, we trained the three proposed network topologies using the data from the other three folds and evaluated them on the selected fold. The results on each fold were used to train a fusion system on the outputs of the trained networks (i.e. the outputs of the affine transform before the softmax activation). The FoCal Multiclass toolbox~\cite{brummer2007focal} was used to train a fusion system based on logistic regression. The final output of the system was the average of the fused outputs over the folds. Note that the outputs of FoCal are calibrated scores (i.e. they have very similar score distributions), so their average performs quite well. As an alternative to the trained fusion, we also performed a majority vote fusion. We have 12 trained networks over all folds (three topologies times four folds). In this case, we first classify the test segment with all networks and count the votes for each of the 10 classes. The final segment class is the class with the most votes. If two classes have the same number of votes, the class with the higher score from the first fusion method is used. \section{Experiments and Results} \subsection{Experimental Setups} Similar to the baseline system provided by the organizers, our networks were trained by optimizing the categorical cross-entropy using the Adam optimizer~\cite{kingma2014adam}. The initial learning rate was set to 0.001 and the network training was early-stopped if the validation loss did not decrease for more than 100 epochs. The learning rate was linearly decreased to 1e-6 starting from epoch 50. The maximum number of epochs and the mini-batch size were set to 500 and 128, respectively. \subsection{Results on the Official Fold} In this section, the results of the final system with trained fusion are reported for each scene. Table~\ref{tbl.fold_results} shows the performance of the system for each scene separately, as well as the overall performance, on the official challenge validation fold. \begin{table}[t] \renewcommand{\arraystretch}{1.1} \caption{\label{tbl.fold_results} Per-scene accuracy of the final fused system compared to the baseline.} \vspace{2mm} \centerline { \setlength\tabcolsep{8pt}
\begin{tabular}{ l c c c }
\toprule \midrule
& & Our system & Baseline \\
Scene label & & Accuracy [\%] & Accuracy [\%]\\ \midrule
Airport & & 71.5 & 48.4 \\
Bus & & 92.7 & 62.3 \\
Metro & & 74.3 & 65.1 \\
Metro Station & & 75.2 & 54.5 \\
Park & & 92.9 & 83.1 \\
Public Square & & 58.6 & 40.7 \\
Shopping Mall & & 71.8 & 59.4 \\
Street Pedestrian & & 60.0 & 60.9 \\
Street Traffic & & 90.6 & 86.7 \\
Tram & & 81.9 & 64.0 \\ \midrule
Average & & 77.0 & 62.5 \\
\midrule \bottomrule
\end{tabular} \vspace{-4mm} } \end{table} \section{Conclusions} We have described the systems submitted by the BUT team to the Acoustic Scene Classification (ASC) challenge of DCASE2019.
Different systems were designed for this challenge, and the final systems were fusions of the output scores from the individual systems. A trained fusion as well as a majority vote fusion were used for the final system. The proposed systems are fusions of three different network topologies: VGG-like, Light-CNN, and x-vector. \bibliographystyle{IEEEtran}
\section{Introduction} There is currently intense interest in the possible application of quantum devices to fields such as computing and information processing~\cite{Nie00}. The goal is to construct machinery which operates manifestly at the quantum level. In any successful development of such technology, the role of measurement in quantum systems will be of central, indeed crucial, importance (see for example~\cite{shepelyansky01}). In order to extend our understanding of this problem, we have recently investigated the coupling together of quantum systems that, to a good approximation, appear classical (via the correspondence limit) but whose underlying behaviour is strictly quantum mechanical~\cite{everitt05}. In that work we followed the evolution of two coupled, and identical, quantised Duffing oscillators as our example system. We utilised two unravellings of the master equation to describe this system: quantum state diffusion and quantum jumps, which correspond, respectively, to unit-efficiency heterodyne measurement (or ambi-quadrature homodyne detection) and photon detection~\cite{Wis96}. We demonstrated that the entanglement that exists between the two oscillators depends on the nature of their dynamics. Explicitly, we showed that whilst the dynamics was chaotic-like, the entanglement between the oscillators remained high; conversely, if the two oscillators entrained into a periodic orbit, the degree of entanglement became very small. With this background, we subsequently became interested in acquiring a detailed understanding of experimental readouts of quantum chaotic-like systems. In this paper we have chosen to explore the subject through the quantum jumps unravelling of the master equation~\cite{Ple98,Wis96,Heg93}. Here, the measured output is easily identified, namely a click or no click in the photon detector. However, this measurement process is rather unique in that it possesses no classical analogue. Indeed, this is the case even when the system under consideration may appear to be evolving along a classical trajectory. Interestingly, despite the fact that the photon detector has no classical analogue, it is the very presence of this detector as a source of decoherence that is responsible for recovering classical-like orbits in the $\left( \left\langle q\right\rangle ,\left\langle p\right\rangle \right) $ phase plane (despite the fact that we measure neither $q$ nor $p$). The subject of recovering such chaotic-like dynamics from unravellings of the master equation has been studied in depth in the literature~\cite{Per98,Hab98,brun97,Bru96,Spi94} and a detailed discussion is beyond the scope of this paper. However, we note that recently in~\cite{peano04} resonances have been observed in a model of a non-linear nano-mechanical resonator that are absent in the corresponding classical model. In this work we have chosen to scale the oscillator so that we recover orbits similar to those generated from a classical analysis. \section{Background} In this work we study the output resulting from the measurement of quantum objects where the measurement device generates decoherence effects. In this limit the system exhibits dynamical behaviour, in terms of its expectation values, very much like that observed in its classical counterpart. Here we investigate the region of parameter space in which the classical system exhibits chaotic motion. Of the many models that could be used, we have chosen the Quantum Jumps approach~\cite{Ple98,Wis96,Heg93}.
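Before proceeding, we sketch how a single jump trajectory of this kind can be integrated numerically to first order; the sketch anticipates the It\^{o} increment equation and the Duffing Hamiltonian defined below, and the Fock-space truncation, time step, and number of steps are illustrative choices whose convergence must be checked.
\begin{verbatim}
import numpy as np

N = 60                                   # Fock-space truncation (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), 1) # annihilation operator
q = (a + a.conj().T) / np.sqrt(2.0)
p = 1j * (a.conj().T - a) / np.sqrt(2.0)

beta, g, Gamma = 0.1, 0.3, 0.125
L = np.sqrt(2.0 * Gamma) * a             # Lindblad operator
LdL = L.conj().T @ L
H0 = (p @ p) / 2 + beta**2 / 4 * np.linalg.matrix_power(q, 4) \
     - (q @ q) / 2 + Gamma / 2 * (q @ p + p @ q)

rng = np.random.default_rng(1)
psi = np.zeros(N, dtype=complex); psi[0] = 1.0   # initial state
dt, steps = 1e-3, 200000
qs, jumps = np.empty(steps), np.zeros(steps)

for k in range(steps):
    H = H0 + (g / beta) * np.cos(k * dt) * q     # time-dependent drive
    if rng.random() < np.real(psi.conj() @ (LdL @ psi)) * dt:
        psi = L @ psi                            # jump: a photon is detected
        jumps[k] = 1.0                           # increment of the count record
    else:                                        # deterministic drift
        psi = psi - 1j * dt * (H @ psi) - 0.5 * dt * (LdL @ psi)
    psi /= np.linalg.norm(psi)  # renormalisation restores the <LdL> drift term
    qs[k] = np.real(psi.conj() @ (q @ psi))      # record <q>
\end{verbatim}
The records \texttt{qs} and \texttt{jumps} then play the roles of $\left\langle q\right\rangle$ and of the photon count record discussed in the results below.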
We note that this is only one of several possible unravellings of the master equation that correspond to the continuous measurement of the quantum object considered. Our motivation for using this approach is that the recorded output of the measurement is completely transparent, i.e. the photon counter either registers a photon or it does not. In the quantum jumps unravelling of the master equation, the evolution of the (pure) state vector $\left\vert \psi \right\rangle $ for an open quantum system is given by the stochastic It\^{o} increment equation \begin{widetext} \begin{equation} \label{eq:jumps} \ket{d \psi} = - \ensuremath{\frac{i}{\hbar}} H \ket{\psi} dt - \half \sum_j \left[L_j^\dag L_j - \EX{L_j^\dag L_j} \right] \ket{\psi} dt + \sum_j \left[ \frac{L_j}{\sqrt{\EX{L_j^\dag L_j}}} - 1 \right] \ket{\psi} dN_j \end{equation} \end{widetext} where $H$ is the Hamiltonian, $L_{j}$ are the Lindblad operators that represent coupling to the environmental degrees of freedom, $dt$ is the time increment, and $dN_{j}$ is a Poissonian noise process such that $dN_{j}dN_{k}=\delta _{jk}dN_{j}$, $dN_{j}dt=0$ and $\overline{dN_{j}}=\left\langle L_{j}^{\dag }L_{j}\right\rangle dt$. These latter conditions imply that jumps occur randomly at a rate that is determined by $\left\langle L_{j}^{\dag }L_{j}\right\rangle $. We will find that this is very important when explaining the results presented later in this paper. For an excellent discussion of quantum trajectories interpreted as a realistic model of a system that is being continuously monitored see~\cite{Wis96}. For an interesting and more general discussion on the emergence of classical-like behaviour from quantum systems see~\cite{Giulini96,joos03}. The Hamiltonian for our standard example system, the Duffing oscillator, is given by \begin{equation} H = \frac{1}{2} p^2 + \frac{\beta^{2}}{4}q^4 - \frac{1}{2} q^2 + \frac{g}{\beta}\cos\left( t \right) q+\frac{\Gamma}{2}\left( q p + p q \right) \label{ham} \end{equation} where $q$ and $p$ are the canonically conjugate position and momentum operators for the oscillator. In this example we have only one Lindblad operator, $L=\sqrt{2\Gamma} a$, where $a$ is the oscillator annihilation (lowering) operator, $g$ is the drive amplitude and $\Gamma=0.125$ quantifies the damping. In order to apply the correspondence principle to this system, and recover classical-like dynamics, we have introduced in Eq.~(\ref{ham}) the parameter $\beta $. For this Hamiltonian, $\beta$ has two interpretations that are mathematically equivalent. Firstly, it can be considered to scale $\hbar $ itself; alternatively, we can simply view $\beta $ as scaling the Hamiltonian, leaving $\hbar $ fixed, so that the relative motion of the expectation values of the observables becomes large compared with the minimum area $\left( \hbar /2\right) $ in the phase space. In either case, the system behaves more classically as $\beta $ tends to zero from its maximum value of one. In this work we have chosen to set $\beta =0.1$. \section{Results} \begin{figure}[tbh] \begin{center} \subfigure[Classical Duffing oscillator.\label{fig:powerCX}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure01a.eps}} } \subfigure[Quantum Duffing oscillator.\label{fig:powerQX}] { \resizebox*{0.45\textwidth}{!}{\includegraphics{figure01b.eps}} } \caption{ Power spectrum of the position: $x$ for the classical Duffing oscillator and $\EX{q}$ for the quantum Duffing oscillator with $\beta=0.1$. The frequency is normalised to the drive frequency of the oscillator.
\label{fig:powerCQX}} \end{center} \end{figure} Let us now consider the specific example of a Duffing oscillator with a drive amplitude $g=0.3$. This parameter, together with all those already specified, forms the classic example used to demonstrate that chaotic-like behaviour can be recovered for open quantum systems by using unravellings of the master equation~\cite{Per98,brun97,Bru96,everitt05}. In Fig.~\ref{fig:powerCQX} we compare the power spectra of the classical position coordinate with that of $\left\langle q\right\rangle $. Here, noise has been added to the classical system so as to mimic the level of quantum noise that is present in the stochastic elements of our chosen unravelling of the master equation, and we have solved for a realisation of the Langevin equation. As can be seen, for this value of $\beta $ there is a very good match between these two results. Moreover, both display power spectra that are typical for oscillators in chaotic orbits. However, it is not position that is the measured output in this model, but the quantum jumps recorded in the photon detector as a function of time, $\mathcal{N}(t)$. As stated above, these jumps occur randomly at a rate that is determined by $\left\langle L_{j}^{\dag }L_{j}\right\rangle $ which, for this example, is $2\Gamma \left\langle n\right\rangle $. Hence, the probability of making a jump is proportional to the number of photons in the state of the system at any one time. \begin{figure}[tbh] \begin{center} \subfigure[Power spectrum of $\EX{q}$.\label{fig:powerShoX}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure02a.eps}} } \subfigure[Power spectrum of $\mathcal{N}(t)$.\label{fig:powerShoN}] { \resizebox*{0.45\textwidth}{!}{\includegraphics{figure02b.eps}} } \caption{ Power spectrum of the position $\EX{q}$ and photons counted $\mathcal{N}(t)$ for the quantum simple harmonic oscillator in a steady state. Here the frequency is normalised to the drive frequency of the oscillator. \label{fig:powerSho}} \end{center} \end{figure} We now consider a special case that occurs frequently in the classical limit, namely where $\left\vert \psi \right\rangle $ localises approximately to a coherent (Gaussian) state. It is apparent that for such a state the chance of observing a jump is proportional to the square of the distance in $\left( \left\langle q\right\rangle ,\left\langle p\right\rangle \right) $ of the state from the origin. In order to illustrate the implications of this, let us consider a driven simple harmonic oscillator. The Hamiltonian is $$ H_s = \frac{1}{2} p^2 + \frac{1}{2} q^2 + \frac{g}{\beta}\cos\left( t \right) q $$ and we note that in this special case the only effect of $\beta=0.1$ is to scale the amplitude of the drive (again we set $g=0.3$). We now solve Eq.~(\ref{eq:jumps}) using this Hamiltonian and allow the system to settle into a steady state. Then, as the phase portrait for this system simply describes a circle centred about $(0,0)$, we would expect the power spectrum of the photons counted to be the same as that of white noise. Indeed, this is clearly seen in Fig.~\ref{fig:powerSho}, where we show the power spectra for both (a) the position operator and (b) the measured quantum jumps. \begin{figure}[tbh] \begin{center} \resizebox*{0.45\textwidth}{!}{\includegraphics{figure03.eps}} \caption{ Power spectrum of the measured quantum jumps $\mathcal{N}(t)$ for the Duffing oscillator of Fig.~\ref{fig:powerQX}.
\label{fig:powerQJ}} \end{center} \end{figure} For more complicated orbits, such as those exhibited by the Duffing oscillator, we would expect to see some evidence of the underlying dynamical behaviour. Hence, localisation of $\left\vert \psi \right\rangle $ arising from the measurement of the Duffing oscillator through the photon detector produces a concomitant structure in the power spectrum of the measured output. In Fig.~\ref{fig:powerQJ} we show, for comparison with Fig.~\ref{fig:powerQX}, such a power spectrum. As we can see from Fig.~\ref{fig:powerQJ}, the power spectrum for this chaotic mode of operation reveals some structure. However, it is not clear from this picture alone how we might relate this result to that shown in Fig.~\ref{fig:powerQX}. It is therefore reasonable to ask if this result does indeed tell us anything about the underlying dynamics of the oscillator. We have addressed this point by computing the power spectrum of both $\left\langle q\right\rangle $ and $\mathcal{N}(t)$ for drive amplitudes in the range $0<g\leq 3$, the results of which are presented in Fig.~\ref{fig:power3d}. \begin{figure}[tbh] \begin{center} \resizebox*{0.45\textwidth}{!}{\includegraphics{figure04.eps}} \caption{ (color online) Power spectrum of the \textbf{(a)} $\EX{q}$ and \textbf{(b)} measured quantum jumps as a function of drive amplitude. \label{fig:power3d}} \end{center} \end{figure} Although the functional forms of these power spectra obviously differ, they do clearly exhibit changes in behaviour that occur at coincident drive amplitudes in both figures. These are identified as intervals in $g$ labelled I,~II,~\ldots in Fig.~\ref{fig:power3d}. \begin{figure*}[!p] \begin{center} \subfigure[Power spectrum of $\EX{q}$, $g=0.1$.\label{fig:ps0.1x}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05a.eps}} } \subfigure[Power spectrum of $\mathcal{N}(t)$, $g=0.1$.\label{fig:ps0.1j}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05b.eps}} } \subfigure[Power spectrum of $\EX{q}$, $g=0.3$.\label{fig:ps0.3x}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05c.eps}} } \subfigure[Power spectrum of $\mathcal{N}(t)$, $g=0.3$.\label{fig:ps0.3j}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05d.eps}} } \subfigure[Power spectrum of $\EX{q}$, $g=1.25$.\label{fig:ps1.25x}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05e.eps}} } \subfigure[Power spectrum of $\mathcal{N}(t)$, $g=1.25$.\label{fig:ps1.25j}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05f.eps}} } \subfigure[Power spectrum of $\EX{q}$, $g=2.5$.\label{fig:ps2.5x}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05g.eps}} } \subfigure[Power spectrum of $\mathcal{N}(t)$, $g=2.5$.\label{fig:ps2.5j}]{ \resizebox*{0.45\textwidth}{!}{\includegraphics{figure05h.eps}} } \caption{ Example power spectra for four different drive amplitudes corresponding to the regions I to IV as marked in the power spectrum of Fig.~\ref{fig:power3d}. \label{fig:ps}} \end{center} \end{figure*} \begin{figure*}[!t] \begin{center} \subfigure[Region I - the drive $g=0.1$ (periodic stable orbit).\label{fig:phasePortraitsI}]{ \resizebox*{0.4\textwidth}{!}{\includegraphics{figure06a.eps}} } \subfigure[Region II - the drive $g=0.3$ (chaotic-like trajectory).
\label{fig:phasePortraitsII}]{ \resizebox*{0.4\textwidth}{!}{\includegraphics{figure06b.eps}} } \subfigure[Region III - the drive $g=1.25$ (periodic stable orbit).\label{fig:phasePortraitsIII}]{ \resizebox*{0.4\textwidth}{!}{\includegraphics{figure06c.eps}} } \subfigure[Region IV - the drive $g=2.5$ (quasi-periodic orbit).\label{fig:phasePortraitsIV}]{ \resizebox*{0.4\textwidth}{!}{\includegraphics{figure06d.eps}} } \caption{ Example phase portraits for four different drive amplitudes corresponding to the regions I to IV as marked in the power spectrum of Fig.~\ref{fig:power3d}. \label{fig:phasePortraits}} \end{center} \end{figure*} To help clarify Fig.~\ref{fig:power3d}, we provide in Fig.~\ref{fig:ps} explicit power spectra of both $\EX{q}$ and the quantum jumps $\mathcal{N}(t)$ for regions I-IV. As expected for region I, in Figs.~\ref{fig:ps0.1x} and~\ref{fig:ps0.1j} we see a strong resonance at the frequency of the drive. The broadband behaviour characteristic of the chaotic phenomena associated with region II is evident in Fig.~\ref{fig:ps0.3x}, as is a concomitant, although different, structure in the power spectrum of the detected photons. In region III of Fig.~\ref{fig:power3d} we again return to a periodic orbit. In Fig.~\ref{fig:ps1.25x}, the power spectrum for $\EX{q}$ exhibits a peak at the drive frequency; however, the power spectrum of $\mathcal{N}(t)$ peaks at twice this frequency. The lack of coincidence between these two figures will be explained fully in the following text. Finally, in Figs.~\ref{fig:ps2.5x} and~\ref{fig:ps2.5j} we see the power spectra of the quasi-periodic dynamics of region IV; again, the discrepancy between these two figures is discussed below. The mechanism through which the detection of photons can yield significant information about the underlying dynamics of the system can be understood by looking at the phase portraits of $\EX{q}$ and $\EX{p}$ associated with the regions I--IV of Fig.~\ref{fig:power3d} for those values of drive used in Fig.~\ref{fig:ps}; these are shown in Fig.~\ref{fig:phasePortraits}. For region~I there is a strictly periodic response in both power spectra at the drive frequency of the oscillator. It can be seen from Fig.~\ref{fig:phasePortraitsI} that, because of the distance from the origin, a photon is more likely to be counted at point A than at point B. As this occurs at the same frequency as the oscillations of $\left\langle q\right\rangle $, we have direct agreement in the position of the resonance in each of the different spectra. In region~II, as is clear from Fig.~\ref{fig:phasePortraitsII}, the system follows a chaotic-like trajectory. Although the power spectra differ drastically in their structure, they both exhibit the broadband behaviour that is characteristic of chaotic orbits. As the drive amplitude is increased further, region~III in Fig.~\ref{fig:power3d} is accessed as the behaviour observed in region II ceases. For this range of drive amplitudes the solution is again a stable periodic orbit, as displayed in Fig.~\ref{fig:phasePortraitsIII}. However, this time, whilst the power spectrum of $\left\langle q\right\rangle $ exhibits a resonance at the drive frequency, that of $\mathcal{N}(t)$ appears at double this frequency. The explanation for this is simply that the probability of detecting a photon when the orbit is in a region of phase space near the origin, such as those marked C in Fig.~\ref{fig:phasePortraitsIII}, is less than at those further away, as in the region of D.
This variation in probability occurs twice per period and therefore produces a resonance at double the drive frequency. An immediate corollary is that, by detecting a resonance at either of these different frequencies in the power spectra of $\mathcal{N}(t)$, we can determine whether the oscillator is in region~I or~III of Fig.~\ref{fig:power3d}. From our analysis in~\cite{everitt05}, it may, in some circumstances, be advantageous to place the system in a chaotic orbit. It is possible that this sort of analysis could be used to increase or decrease the drive amplitude as part of a feedback and control element for quantum machinery. Finally, the power spectrum of $\left\langle q\right\rangle $ in region~IV of Fig.~\ref{fig:power3d}~(a) is characteristic of quasi-periodic behaviour. Using a similar argument to the one above, we can transfer these features onto the spectrum of $\mathcal{N}(t)$. If we compare this result with the, albeit noisy, phase portrait of Fig.~\ref{fig:phasePortraitsIV}, there is clear evidence of quasi-periodic behaviour. We have demonstrated, using the Duffing oscillator as our example system, that the different features exhibited in the power spectrum of the photon count can be associated with concomitant features in the power spectrum of the position operator (and vice versa). We note that, for any given experimental system where there is a direct correspondence between the power spectra of $\mathcal{N}$ and $\EX{q}$, the power spectrum of $\mathcal{N}$ provides us with the same amount of information about the underlying dynamics (e.g. chaotic, quasi-periodic, etc.) as the power spectrum of $\EX{q}$. We would like to emphasise that if this direct correspondence did not exist then we would not necessarily be able to make such an assertion. For example, this situation might occur for a system in which there was a high degree of symmetry in the $\EX{q}$--$\EX{p}$ phase portrait. However, such a detailed study is beyond the scope of this paper. \section{Conclusion} In this work we have shown that, via analysis of the power spectra of the photons detected in a quantum jumps model of a Duffing oscillator, we can obtain signatures of the underlying dynamics of the oscillator. Again, we note that it is the decoherence associated with actually measuring these jumps which, through localisation of the state vector, enables these classical-like orbits to become manifest. We have also demonstrated that the power spectra of the counted photons can be used to distinguish between different modes of operation of the oscillator. Hence this, or some form of time-frequency analysis, could be used in the feedback and control of open quantum systems, a topic likely to be of interest in some of the emerging quantum technologies. \begin{acknowledgments} The authors would like to thank T.P.~Spiller and W.~Munro for interesting and informative discussions. MJE would also like to thank P.M.~Birch for his helpful advice. \end{acknowledgments}
\section{Introduction}~\label{sec:intro} Recent work reveals that global mobile data consumption will experience a vast increase over the next few years \cite{Hemadeh18a,Rap13a}. MmWave communication is regarded as a promising technique to support the unprecedented capacity demand because of the availability of ultra-wide bandwidths. Accurate channel modeling for mmWave frequencies has been an important area of study recently, since the mmWave channel, when combined with directional antennas, has vastly different characteristics from omnidirectional microwave channels \cite{Rap15a,Sun18a,Rap15b}. Many statistical and deterministic channel models, such as METIS \cite{METIS15a}, NYUSIM \cite{Sun18a,Samimi16a}, MiWEBA \cite{MiWeba14a}, 3GPP \cite{3GPP.38.901,3GPP-TR.25.966}, 5GCM \cite{A5GCM15}, and mmMAGIC \cite{mmMAGIC17a}, have been proposed over the past few years. Most of the existing statistical channel models are drop-based, where all parameters used in one channel realization are generated and used for a single placement of a particular user. A subsequent simulation run of the drop-based channel model then results in an independent sample function for a different user at a completely different, arbitrary location, even if the same distance between the transmitter (TX) and receiver (RX) is considered \cite{Sun17a,Sun18a,Shafi18a}. Drop-based models are popular because of their simplicity in Monte Carlo simulations \cite{Tranter03a}. The NYUSIM channel model generates static channel impulse responses (CIRs) at a particular distance/location, or across the manifold of a 2-D antenna structure, but cannot generate dynamic CIRs with spatial or temporal correlation based on a user's motion within a local area \cite{Sun17a,Sun18a,Samimi15a}. In other words, the CIRs of two closely spaced locations are generated independently, although one would expect the CIRs to be highly correlated if the users were truly close to one another \cite{Shafi18a}. It stands to reason, and is borne out by measurements, that two close users, or a user moving in a small area, should experience a somewhat consistent scattering environment \cite{Ju18a}. Thus, spatial consistency has become a critical modeling component in 3GPP Release 14 \cite{3GPP.38.901}. Challenges exist for drop-based models to be spatially consistent, since nearly all temporal and spatial parameters would need to vary in a continuous and realistic manner as a function of small changes in the user's location. A lack of measurements poses a challenge to accurate spatially consistent channel modeling, especially at mmWave frequencies. Using field measurements to create and validate the mathematical channel models is one way to ensure accuracy and to gain theoretical insights. The NYUSIM channel model uses realistic large-scale and small-scale parameters for various types of scenarios, environments, and antenna patterns, based on massive datasets from measurements at 28, 38, and 73 GHz in urban, rural, and indoor environments \cite{Rap13a,Sun18a,Shafi18a}. Local area measurements were conducted in a street canyon at 73 GHz over a path length of 75 m, where the receiver moved from a non-line-of-sight (NLOS) environment to a line-of-sight (LOS) environment \cite{Rap17b}. These measurements \cite{Rap17b} provide a basis for the proposed model with spatial consistency.
The NYUSIM channel model simulator operates over a wide range of carrier frequencies from 800 MHz to 100 GHz \cite{Sun17a,Sun18a}, and provides temporal and 3-D spatial parameters for each MPC, generating accurate CIRs, power delay profiles (PDPs), and 3-D angular power spectra. The spatial consistency extension proposed here allows the simulator to use additional parameters, such as the velocity, location, and moving direction of a user, to reproduce realistic CIRs received by the moving user with spatial consistency. This paper presents a modified channel coefficient generation procedure for spatial consistency under the framework of the NYUSIM channel model \cite{Ju18a}, and compares the simulation results with 73 GHz measured data from the street canyon measurements \cite{Rap17b}. The paper is organized as follows. Section \ref{sec:previous} overviews existing models that consider spatial consistency, and provides current approaches for channel tracking. Section \ref{sec:nyusim} describes the impact of spatial consistency on the NYUSIM channel model, and presents the modified generation procedure for spatial consistency. Section \ref{sec:meas} presents the actual channel transitions and resulting CIRs when a user moved in a street canyon, based on the measurements. Conclusions are presented in Section \ref{sec:conclusion}. \section{Early Research on Spatial Consistency}\label{sec:previous} Due to the requirements of mmWave mobile and vehicle-to-vehicle (V2V) communications \cite{Perfecto17a}, modern channel models and simulation techniques must adequately characterize changing environments, and generate continuous channel realizations with statistics that are lifelike and usable for accurate simulation of beamforming and other MAC- and PHY-level designs. Channels can be categorized as stationary or non-stationary based on the rate of change of the propagation scenario. Channel modeling and simulation for non-stationary channels, where the scattering environment changes significantly, is studied in \cite{Parra18a}. This channel model and simulation method emulated the time-variant nature of a real channel, and realized channel variations in a single channel realization. The channel modeling approach in \cite{Parra18a} can also be extended to stationary channels where the channel parameters are renewed over time while still fulfilling the stationary condition \cite{Parra18a}. Spatial consistency represents the smooth channel variations that occur when a user moves, or when multiple users are closely located, in a local area of 5-10 m. At microwave frequencies, early statistical CIR models for correlated multipath component amplitudes over one-meter local areas due to small-scale movement were developed from 1.3 GHz measurements, and the associated channel simulators, SIRCIM/SMRCIM, were built on this model considering spatial and temporal correlation \cite{Rap91b}. Specifically, the simulators considered the motion, the corresponding Doppler spread, and the resulting phase shift on individual multipath components over a local area \cite{Nuckols99,Rap93a}. The SIRCIM/SMRCIM simulators were implementations of spatial consistency before the term was even coined. Generally, the small-scale spatial autocorrelation coefficient of the received signal voltage amplitude decreases rapidly over distance, and the correlation distance of individual MPC amplitudes is only a few to a few tens of wavelengths.
The correlation distance of the received signal voltage amplitude in a wideband (1 GHz) transmission is only 0.67-33.3 wavelengths (0.27-13.6 cm) at 73 GHz, depending on the antenna pointing angle with respect to scattering objects \cite{Sun17a,Rap17a}. Furthermore, the amplitudes of individual MPCs in an 800 MHz bandwidth decorrelate over 2 and 5 wavelengths (2.14 and 5.35 cm) at 28 GHz in LOS and NLOS environments, respectively \cite{Samimi16b,Samimi16c}. Spatial consistency, however, is different from small-scale spatial correlation. Spatial consistency refers to the similar and correlated scattering environments that are characterized by large-scale and small-scale parameters in the channel model \cite{Ju18a,Shafi18a}. The large-scale parameters have a much longer correlation distance of 12-15 m \cite{3GPP.38.901}, since the scattering environment does not change dramatically in a local area. Small-scale fading measurements \cite{Rap17a} support the hypothesis of spatial consistency extending well beyond one meter, since the amplitudes of individual MPCs and the total received power varied smoothly and continuously over 0.35 m (the longest distance measured) \cite{Samimi16b}. Both statistical and deterministic channel models need to be spatially consistent for use in studying adaptive signal processing for mobile scenarios. Statistical channel models rely on large-scale parameters (shadow fading, the number of time clusters, the number of spatial lobes, delay spread, and angular spread) and small-scale parameters (time excess delay, power, AOA and AOD for each MPC) from measurements \cite{Shafi18a}, whereas deterministic channel models rely on geometry and ray-tracing techniques to acquire the channel information \cite{METIS15a,MiWeba14a}. \begin{figure}[] \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \includegraphics[width=0.4\textwidth]{route.eps} \caption{2-D map of TX and RX locations for local area measurements at 73 GHz in a UMi street canyon environment in downtown Brooklyn \cite{Rap17b}. The yellow star is the TX location, blue dots represent LOS RX locations, and red squares indicate NLOS RX locations. North represents 0$^\circ$.} \label{fig:route} \end{figure} \subsection{Deterministic Channel Models with Spatial Consistency} Spatial consistency is more easily defined and maintained in deterministic channel models, since the locations of the scatterers in the environment are identified in these site-specific channel models \cite{A5GCM15}. The powers, angles, and delays of MPCs can be easily calculated from the relative change in the locations of the RX and scatterers based on geometry, generally through ray tracing \cite{Seidel94a}. The MiWEBA channel model \cite{MiWeba14a} is quasi-deterministic at 60 GHz, and uses a few strong MPCs obtained from ray-tracing techniques. Several relatively weak statistical MPCs are added to the ray-tracing results to maintain some randomness in the channel. The METIS map-based channel model \cite{METIS15a} also uses ray-tracing techniques to acquire large-scale parameters for a specific environment, and combines these map-based large-scale parameters with measurement-based statistical small-scale parameters \cite{METIS15a}. \begin{figure*}[] \setlength{\abovecaptionskip}{-1cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \includegraphics[width=0.9\textwidth]{omni_16.eps} \caption{Omnidirectional PDPs at 16 RX locations in a UMi street canyon in downtown Brooklyn at 73 GHz.
Referring to Fig. \ref{fig:route}, the receiver moved from RX81 (`1' on the `RX locations' axis) to RX96 (`16' on the `RX locations' axis). The distance between two successive RX locations was 5 m. The T-R separation distance varied from 81.5 m to 29.6 m. The visibility condition changed from NLOS to LOS between RX91 and RX92. Absolute time delays are removed to show the differences in time excess delay and delay spread \cite{Rap17b}.} \label{fig:omni_16} \end{figure*} \subsection{Statistical Channel Models with Spatial Consistency} For statistical (i.e., stochastic) channel models, spatial consistency is a challenge since they tend to be drop-based and cannot generate time-evolved CIRs in a local area. Thus, geometric information and correlation statistics are necessary for these models to obtain properly correlated values of large-scale and small-scale parameters for closely spaced locations. 5GCM \cite{A5GCM15} proposed three approaches for spatial consistency. The first approach uses spatially correlated random variables to generate small-scale parameters such as excess delays, powers, and angles. Users located nearby share correlated values of small-scale parameters. Four independent and identically distributed (i.i.d.) complex Gaussian random variables on the four vertices of a grid having a side length equal to the correlation distance are generated first. Then, spatially consistent uniform random variables at any location within the grid are formed by interpolating from these four Gaussian random variables \cite{A5GCM15}. The problem with this method of ensuring spatial consistency is that the system needs to store the values of the random variables for the grids around the user in advance, which requires a large storage space \cite{mmMAGIC17a}. The second approach is the geometric stochastic approach \cite{A5GCM15}. In this approach, large-scale parameters are pre-computed for each grid having a side length equal to the correlation distance of the corresponding large-scale parameter. The small-scale parameters are dynamically evolved in both the temporal and spatial domains, based on the time-variant angles of arrival (AOAs) and angles of departure (AODs), and on cluster birth and death \cite{Wang16a}. The third approach, the grid-based geometric stochastic channel model (GGSCM) \cite{A5GCM15}, uses the geometric locations of scatterers (i.e., clusters). A cluster is defined as a group of rays coming from the same scatterer, and these rays have similar angles and delays. The angles and delays of the cluster and of the multipath components in the cluster can be translated into the geometrical positions of the corresponding scatterers. Thus, the time evolution of angles and delays can be straightforwardly computed from the relative changes in the user position, and exhibits very realistic variations. The mmMAGIC channel model has adopted the three aforementioned spatial consistency approaches, setting the first one as the default since it provides a more accurate realization of the mmWave channel \cite{mmMAGIC17a}. The COST 2100 model is also a geometry-based stochastic channel model \cite{COST2100}, and introduces a critical concept, the \textit{visibility region}. The visibility region refers to a region both in time and space where a group of multipath components is visible to the user. The multipath components in the visibility region constitute the CIRs experienced by the user.
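To make the first of these approaches concrete, the following sketch generates a spatially consistent uniform random variable at an arbitrary position inside one grid cell by interpolating i.i.d. Gaussians placed on the cell vertices; this is a simplified reading of the procedure using real-valued Gaussians, and the standardized method may differ in detail.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
d_corr = 15.0                  # correlation distance (NLOS UMi, 3GPP)
gvals = rng.standard_normal(4) # Gaussians at vertices (0,0),(d,0),(0,d),(d,d)

def correlated_uniform(x, y):
    """Spatially consistent uniform variable at (x, y), 0 <= x, y <= d_corr."""
    u, v = x / d_corr, y / d_corr   # local coordinates in [0, 1]
    w = np.array([(1-u)*(1-v), u*(1-v), (1-u)*v, u*v])  # bilinear weights
    # restandardize: a weighted sum of unit Gaussians has variance sum(w^2)
    g = (w @ gvals) / np.sqrt(np.sum(w**2))
    return norm.cdf(g)              # Gaussian-to-uniform transform
\end{verbatim}
Two nearby positions then receive nearly identical uniform variables, which can in turn drive the generation of correlated delays, powers, and angles.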
\section{Spatial Consistency Extension for NYUSIM Channel Model} \label{sec:nyusim} As discussed earlier and in \cite{Ju18a,Sun18a,Shafi18a}, the large-scale and small-scale parameters should vary continuously as a function of the user location in a channel realization over a local area. Under the framework of the NYUSIM channel model \cite{Samimi16a}, a spatial consistency extension is proposed for the NYUSIM channel model and the associated simulator \cite{Ju18a}. A spatial exponential filter is applied to make the large-scale parameters spatially correlated within their correlation distance. The modeling of time-variant small-scale parameters is motivated by the stochastic geometry approach \cite{Wang16a} and the CIR generation procedure in 3GPP Release 14 \cite{3GPP.38.901}. The large-scale path loss is made time-variant, and the shadow fading is made spatially consistent over a local area. Thus, the NYUSIM channel model is extended from a static, drop-based model to a dynamic, time-variant channel model, which fits well with the natural evolution of NYUSIM and other drop-based statistical models. Two distances should be clarified first: the \textit{correlation distance} and the \textit{update distance}. The correlation distance determines the size of the grid that maintains spatial consistency of channel conditions. The CIRs of a user moving beyond the correlation distance, or of multiple users separated by more than the correlation distance, can be regarded as independent. Each large-scale parameter has its own particular correlation distance, and the correlation distance varies according to scenarios and frequencies. For example, the correlation distance of a large-scale parameter in the UMi scenario is shorter than in the RMa scenario because of the higher building density. Thus, extensive propagation measurements for various scenarios and frequencies are necessary to provide accurate values of the correlation distances of large-scale parameters. 3GPP Release 14 \cite{3GPP.38.901} specifies that the correlation distances of large-scale parameters in the LOS and NLOS UMi scenarios are 12 m and 15 m, respectively. Some 73 GHz measurements in a LOS street canyon scenario suggest that the correlation distance of large-scale parameters at 73 GHz is 3-5 m \cite{Wang16a}. From the local area measurements at 73 GHz introduced in Sec. \ref{sec:meas}, the correlation distance of the number of time clusters is 5-10 m.
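The spatial exponential filtering of the large-scale parameters mentioned above can be sketched as a first-order autoregressive update along the user trajectory; this is one common form of such a filter, and the exact implementation in the simulator may differ.
\begin{verbatim}
import numpy as np

def correlated_shadow_fading(n_steps, step_m, sigma_db, d_corr_m, seed=0):
    """Spatially consistent shadow fading (in dB) along a trajectory
    sampled every step_m meters, with correlation distance d_corr_m."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-step_m / d_corr_m)   # correlation between updates
    sf = np.empty(n_steps)
    sf[0] = sigma_db * rng.standard_normal()
    for k in range(1, n_steps):
        eps = sigma_db * rng.standard_normal()
        sf[k] = rho * sf[k - 1] + np.sqrt(1.0 - rho**2) * eps
    return sf   # the variance sigma_db**2 is preserved at every step

# e.g. 1 m updates over 50 m, 8 dB shadow fading, 15 m correlation distance
trace = correlated_shadow_fading(50, 1.0, 8.0, 15.0)
\end{verbatim}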
\begin{table} \centering \caption{\textsc{Hardware Specifications of Local Area Measurements}} \label{tab:sys}
\begin{tabular}{|c|c|} \hline
\textbf{Campaign} & 73 GHz Local Area Measurements \\ \hline
\textbf{Transmit Signal} & 11$^{\text{th}}$ order PN sequence (length of 2047) \\ \hline
\textbf{TX Antenna Setting} & 27 dBi horn antenna with 7$^\circ$ HPBW \\ \hline
\textbf{RX Antenna Setting} & 20 dBi horn antenna with 15$^\circ$ HPBW \\ \hline
\textbf{TX/RX Chip Rate} & 500 Mcps/499.9375 Mcps \\ \hline
\textbf{RF Null-to-Null Bandwidth} & 1 GHz \\ \hline
\textbf{TX/RX Intermediate Freq.} & 5.625 GHz \\ \hline
\textbf{TX/RX Local Oscillator} & 67.875 GHz (22.635 GHz $\times$ 3) \\ \hline
\textbf{RX ADC Sampling Rate} & 2.5 Msamples/s \\ \hline
\textbf{Carrier Freq.} & 73.5 GHz \\ \hline
\textbf{Max TX Power/EIRP} & 14.3 dBm/41.3 dBm \\ \hline
\textbf{TX-RX Antenna Pol.} & vertical-to-vertical \\ \hline
\textbf{Max Measurable Path Loss} & 180 dB \\ \hline
\end{tabular} \end{table}
The update distance is the distance interval at which the model updates the small-scale parameters for MPCs and renews a CIR. Since the small-scale parameters are time-variant and not grid-based, the update distance should be much shorter than the correlation distance of the large-scale parameters to ensure accurate modeling while sampling at an arbitrary time or distance within a local area \cite{Nuckols99}. 3GPP Release 14 suggested that the update distance should be within 1 m. During each update period, the channel can be considered static. That is to say, the update period is 2 s when the user moves at 0.5 m/s, and 0.2 s when the user moves at 5 m/s. The update distance is set to 1 m in the NYUSIM channel model for simplicity. The details of the generation method for each large-scale and small-scale parameter are described below. \begin{itemize} \begin{figure}[] \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \includegraphics[width=0.5\textwidth]{816.eps} \caption{Omnidirectional PDPs at RX81-RX86 used to study the correlation distance of large-scale parameters. The distance between two successive RX locations was 5 m. The PDPs at RX81 and RX82 are similar; the PDPs at RX83 and RX84 are similar; the PDPs at RX85 and RX86 are similar. } \label{fig:816} \end{figure} \item \textit{Time-variant path loss:} The path loss varies smoothly as the user moves in a local area since the shadow fading is spatially consistent. The path loss is obtained from the close-in (CI) path loss model with a 1 m free space reference distance \cite{Sun18a}, and is calculated in every update period based on the locations of the moving user. The path loss and shadow fading are critical to the evaluation of massive MIMO and multi-user MIMO system performance, and have a large impact on the received power. \item \textit{LOS/NLOS transition:} The LOS/NLOS condition (LOS probability) determines the values of the path loss exponent and shadow fading. Thus, the path losses in LOS and NLOS scenarios are very different. The NYUSIM channel model, as a stochastic model, models the LOS probability with a distance-squared model \cite{Rap17a}. The conventional NYUSIM channel model generates the LOS/NLOS condition independently in each simulation, and the condition does not change during a simulation. A spatial exponential filter is applied to make the LOS/NLOS condition spatially correlated based on the correlation distance of the LOS probability \cite{Ju18a}.
Furthermore, when the LOS/NLOS condition changes, the values of the corresponding parameters change during the simulation. For Monte Carlo simulations, a statistical, spatially correlated LOS/NLOS condition map for a local area is sufficient to evaluate the system capacity. However, in real-world transmission, information about the LOS/NLOS condition in the channel state information (CSI) would be very important for the base station to decide on the transmission scheme. \item \textit{The number of time clusters, the number of spatial lobes, and the number of MPCs in each time cluster:} These three large-scale parameters are pre-computed for each grid, since the surrounding scatterers do not change rapidly within the correlation distance of the large-scale parameters. \item \textit{Cluster birth and death:} This concept, first presented in \cite{Wang16a}, describes the time evolution of time clusters. When the user moves across grids from location A to location B in the real world, the time clusters that appear at location A may disappear at location B as the clusters observed at A become very weak. The extension for the NYUSIM channel model generates grid-based large-scale parameters, including the number of time clusters. Thus, the clusters of A should be discarded, and the clusters of B should be generated gradually during the movement. This procedure can be modeled as a Poisson process with a rate of cluster birth and death. The probability of the occurrence of cluster birth and death is given by \begin{equation} Pr(t)=1-\exp{(-\lambda_c(t-t_0))} \end{equation} where $t_0$ is the most recent update time, and $\lambda_c$ is the mean rate of cluster birth and death per second. This rate varies according to the scenario and can only be obtained from field measurements. Birth and death always happen to the weakest cluster at the location. If the numbers of clusters in two grids, A and B, are the same, only the replacement of an old cluster by a new one will occur: the weakest cluster of A will be replaced by the weakest cluster of B as one moves from grid A to grid B. Note that when cluster birth and death occurs, only one cluster of A and one cluster of B are involved. If the numbers of clusters in the two grids are not the same, the cluster birth or death occurs alone. This gradual replacement of time clusters ensures spatial consistency in the NYUSIM channel model. \end{itemize} \section{Local Area Measurements for Spatial Consistency} \label{sec:meas} \subsection{Measurement Environment and Procedure} Local area measurements were conducted at 73 GHz using a null-to-null RF bandwidth of 1 GHz \cite{Rap17b} to study spatial consistency and to provide reference values for several parameters, such as the correlation distance of large-scale parameters. Table \ref{tab:sys} provides the specifications of the measurement system \cite{Rap17b}. The measurements were conducted in a street canyon (18 m wide) between 2 and 3 MetroTech Center in downtown Brooklyn, NY. During the measurements, the TX and RX antenna heights were set to 4.0 m and 1.5 m to emulate an access point and a user terminal, respectively. The TX and RX locations are shown in Fig. \ref{fig:route}, where the RX moved from location RX81 to location RX96 (NLOS to LOS). The T-R separation distance varied from 81.5 m to 29.6 m.
Specifically, the T-R separation distance of the NLOS locations (RX81 to RX91) varied from 81.5 m to 50.8 m, and that of the LOS locations (RX92 to RX96) varied from 49.1 m to 29.6 m. The distance between two successive RX locations was 5 m. Note that the TX antenna pointing angle was the direction that resulted in the strongest received power at the starting location, RX81, and was fixed during the measurements. For each TX-RX combination, the RX swept five times in the azimuth plane. Each sweep took 3 min, with a 2-min interval between sweeps. The RX antenna swept in half-power beamwidth (HPBW) step increments (15$^\circ$). A power delay profile (PDP) was recorded at each RX azimuth pointing angle, and the measurements at each location resulted in at most 120 PDPs (some angles did not have a detectable signal above the noise floor). The best RX pointing angle (the direction in which the RX received the maximum power) in the azimuth plane was selected as the starting direction for the RX azimuth sweeps (elevation remained fixed for all RXs) at each RX location measured \cite{Rap17b}. \subsection{Measurement Data Processing and Analysis} \begin{table} \centering \caption{\textsc{The Number of Time Clusters in the First 6 RX Locations}} \label{tab:ntc} \begin{tabular}{|c|c|} \hline \# of time clusters & RX locations \\ \hline 3 & 81,82 \\ \hline 4 & 83,84 \\ \hline 6 & 85,86 \\ \hline \end{tabular} \end{table} The 24 directional PDPs (HPBW step increments in the azimuth plane, 360/15 = 24) \cite{Rap17b} of one sweep at each location were combined to form one omnidirectional PDP to better illustrate spatial consistency. Denoising was performed before this synthesis, with a threshold of 20 dB below the peak power of each directional PDP. All 16 omnidirectional PDPs were aligned only for illustration purposes, and the time excess delays of these PDPs are shown in Fig. \ref{fig:omni_16}. As the RX moved towards the TX, the received power increased, and the number of time clusters also increased from 1 up to 6. To study the correlation distance of large-scale parameters, the PDPs of the first six NLOS RX locations were studied. These PDPs are shown in Fig. \ref{fig:816}. The number of time clusters is summarized in Table \ref{tab:ntc}, based on the time-clustering algorithm described in \cite{Samimi16c}. Thus, the correlation distance of the number of time clusters is about 5-10 m, and the correlation distance of the delay spread is also about 5-10 m. Similar correlation distances can be found from the remaining PDPs at the other LOS and NLOS locations in Fig. \ref{fig:omni_16}. These local area measurements also showed the impact of the LOS/NLOS condition on the resulting PDPs. When the RX moved from RX91 to RX92, the visibility condition changed from NLOS to LOS. The PDPs at RX91, RX92, and RX93 are shown in Fig. \ref{fig:913}. The received power at RX92 was much stronger than at RX91, and there were more MPCs at RX92 than at RX91. These results indicate that the LOS/NLOS condition is particularly critical to the CIRs and cannot be generated independently for nearby locations, as is currently done in conventional statistical channel models. A spatially consistent LOS/NLOS condition would help to predict the CIRs more accurately. \begin{figure}[] \setlength{\abovecaptionskip}{0cm} \setlength{\belowcaptionskip}{-0.5cm} \centering \includegraphics[width=0.5\textwidth]{913.eps} \caption{Omnidirectional PDPs at RX91-RX93. The distance between two successive RX locations was 5 m.
The receiver at RX91 was in NLOS condition; the receivers at RX92 and later locations were in LOS condition.} \label{fig:913} \end{figure} \section{Conclusion}\label{sec:conclusion} The spatial consistency extension for the outdoor NYUSIM channel model has been presented in this paper. The generation procedures for both large-scale and small-scale parameters were modified to make these parameters spatially consistent and time-variant. Spatially correlated random variables were applied to characterize the grid-based large-scale parameters; a geometry-based approach was applied to obtain the time-variant small-scale parameters, such as time-variant AODs and AOAs, and time cluster birth and death. The static large-scale path loss of drop-based simulations was transformed into a time-variant parameter. The local area measurements in a street canyon were also presented and analyzed in this paper, indicating that the correlation distance of the number of time clusters and of the delay spread is about 5-10 m in a UMi street canyon scenario. More field measurements should be conducted to obtain parameters for spatial consistency in various scenarios. Modern channel models with spatial consistency will help in the design of beam tracking and beamforming at the system level, and will enable more accurate channel estimation for transient simulations. \section*{Acknowledgment} This work is supported in part by the NYU WIRELESS Industrial Affiliates, and in part by the National Science Foundation under Grants 1702967 and 1731290. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} When the Galerkin weak formulation of a boundary-value problem such as the linear elastostatic problem is solved numerically, the trial and test displacements are replaced by their discrete representations using basis functions. Herein, we consider basis functions that span the space of functions of degree 1 (i.e., affine functions). Due to the nature of some basis functions, the discrete trial and test displacement fields may represent linear fields plus additional functions that are non-polynomials or high-order monomials. Such additional terms cause inhomogeneous deformations and, when present, integration errors appear in the numerical integration of the stiffness matrix, leading to stability issues that affect the convergence of the approximation method. This is the case for polygonal and polyhedral finite element methods~\cite{Talischi-Paulino:2014,talischi:2015,francis:LSP:2016} and meshfree Galerkin methods~\cite{dolbow:1999:NIO,chen:2001:ASC,babuska:2008:QMM,babuska:2009:NIM,ortiz:2010:MEM,ortiz:2011:MEI,duan:2012:SOI,duan:2014:CEF,duan:2014:FPI,ortiz:VAN:2015,ortiz:IRN:2015}. The virtual element method~\cite{BeiraoDaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} (VEM) was introduced to deal with these integration issues. In short, the method consists of constructing an algebraic (exact) representation of the stiffness matrix without the explicit evaluation of basis functions (basis functions are {\em virtual}). In the VEM, the stiffness matrix is decomposed into two parts: a consistency matrix that guarantees the exact reproduction of a linear displacement field and a correction matrix that provides stability. Such a decomposition is formulated in the spirit of the Lax equivalence theorem (consistency $+$ stability $\to$ convergence) for finite-difference schemes and is sufficient for the method to pass the patch test~\cite{cangiani:2015}. Recently, the virtual element framework has been used to correct integration errors in polygonal finite element methods~\cite{Gain-Talischi-Paulino:2014,Manzini-Russo-Sukumar:2014,BeiraodaVeiga-Lovadina-Mora:2015} and in meshfree Galerkin methods~\cite{ortiz:CSMGMVEM:2017}. Some of the advantages that the VEM exhibits over the standard finite element method (FEM) are: \begin{itemize} \item The ability to perform simulations using meshes formed by elements with an arbitrary number of edges, not necessarily convex, having coplanar edges and collapsing nodes, while retaining the same approximation properties of the FEM. \item The possibility of formulating high-order approximations with arbitrary order of global regularity~\cite{daVeiga:VEMAR:2013}. \item Adaptive mesh refinement techniques are greatly facilitated, since hanging nodes are automatically included as elements with coplanar edges are accepted~\cite{cangiani:PEE:2017}. \end{itemize} In this paper, object-oriented programming concepts are adopted to develop a C++ library, named \texttt{Veamy}, that implements the VEM on general polygonal meshes. The current release of this library focuses on the linear elastostatic and Poisson problems in two dimensions, but its design is geared towards extensibility. \texttt{Veamy} uses the Eigen library~\cite{eigenweb} for linear algebra, and Triangle~\cite{shewchuk96b} and Clipper~\cite{clipperweb} are used in the implementation of its polygonal mesh generator, \texttt{Delynoi}~\cite{delynoiweb}, which is based on the constrained Voronoi diagram.
In addition to this built-in polygonal mesh generator, \texttt{Veamy} can interact straightforwardly with \texttt{PolyMesher}~\cite{Talischi:POLYM:2012}, a polygonal mesh generator that is widely used in the VEM and polygonal finite elements communities. In presenting the theory of the VEM, upon which \texttt{Veamy} is built, we adopt a notation and a terminology that resemble the language of the FEM in engineering analysis. The work of Gain et al.~\cite{Gain-Talischi-Paulino:2014} is in line with this aim and has inspired most of the notation and terminology used in this paper. In \texttt{Veamy}'s programming philosophy, entities commonly found in the VEM and FEM literature, such as mesh, degree of freedom, element, element stiffness matrix and element force vector, are represented by objects. In contrast to some of the well-established free and open source object-oriented FEM codes, such as FreeFEM++~\cite{hecht:FFEM:2012}, FEniCS~\cite{alnaes:FENI:2015} and Feel++~\cite{prud:FEEL:2012}, \texttt{Veamy} does not generate code from the variational form of a particular problem, since that kind of software design tends to hide the implementation details that are fundamental to understanding the method. On the contrary, since \texttt{Veamy}'s scope is research and teaching, in its design we wanted a direct and balanced correspondence between theory and implementation. In this sense, \texttt{Veamy} is very similar in spirit to the 50-line MATLAB implementation of the VEM~\cite{Sutton:VEM:2017}. However, compared to this MATLAB implementation, \texttt{Veamy} is an improvement in the following aspects: \begin{itemize} \item Its core VEM numerical implementation is entirely built on free and open source libraries. \item It offers the possibility of using a built-in polygonal mesh generator, whose implementation is also entirely built on free and open source libraries. In addition, it allows a straightforward interaction with \texttt{PolyMesher}~\cite{Talischi:POLYM:2012}, a popular and widely used MATLAB-based polygonal mesh generator. \item It is designed using the object-oriented paradigm, which allows a safer and better code design, and facilitates code reuse, maintenance, and therefore code extension. \item Its initial release implements both the two-dimensional linear elastostatic problem and the two-dimensional Poisson problem. \end{itemize} We are also aware of the MATLAB Reservoir Simulation Toolbox~\cite{lie:MRS:2016}, which provides a module for first- and second-order virtual element methods for Poisson-type flow equations that was developed as part of a master's thesis~\cite{klemetsdal:VEM:2016}. The toolbox also implements a module dedicated to the VEM in linear elasticity for geomechanics simulations. \texttt{Veamy} is free and open source software and, to the best of our knowledge, is the first object-oriented C++ implementation of the VEM. The remainder of this paper is structured as follows. The model problem for two-dimensional linear elastostatics is presented in Section~\ref{sec:modelproblem}. Section~\ref{sec:vem} summarizes the theoretical framework of the VEM for the two-dimensional linear elastostatic problem; the VEM element stiffness matrix for the two-dimensional Poisson problem is also given in this section. The object-oriented implementation of \texttt{Veamy} is described and explained in Section~\ref{sec:implementation}. In Section~\ref{sec:meshgenerator}, some guidelines for the usage of \texttt{Veamy}'s built-in polygonal mesh generator are given.
Several examples that demonstrate the usage of \texttt{Veamy} and a performance comparison between the VEM and the FEM are presented in Section~\ref{sec:sampleusage}. The paper ends with some concluding remarks in Section~\ref{sec:conclusions}. \section{Model problem} \label{sec:modelproblem} The Galerkin weak formulation for the linear elastostatic problem is considered for presenting the main ingredients of the VEM. Consider an elastic body that occupies the open domain $\Omega \subset \Re^2$ and is bounded by the one-dimensional boundary $\Gamma$ whose unit outward normal is $\vm{n}_\Gamma$. The boundary is assumed to admit the decomposition $\Gamma=\Gamma_g\cup\Gamma_f$ with $\Gamma_g\cap\Gamma_f=\emptyset$, where $\Gamma_g$ is the essential (Dirichlet) boundary and $\Gamma_f$ is the natural (Neumann) boundary. The closure of the domain is $\overline{\Omega}\equiv\Omega\cup\Gamma$. Let $\vm{u}(\vm{x}) : \Omega \rightarrow \Re^2$ be the displacement field at a point $\vm{x}$ of the elastic body when the body is subjected to external tractions $\vm{f}(\vm{x}):\Gamma_f\rightarrow \Re^2$ and body forces $\vm{b}(\vm{x}):\Omega\rightarrow\Re^2$. The imposed essential (Dirichlet) boundary conditions are $\vm{g}(\vm{x}):\Gamma_g\rightarrow \Re^2$. The Galerkin weak formulation, with $\vm{v}$ being the arbitrary test function, gives the following expression for the bilinear form: \begin{equation}\label{eq:bilinearform1} a(\vm{u},\vm{v})=\int_{\Omega}\boldsymbol{\sigma}(\vm{u}):\boldsymbol{\nabla}\vm{v}\,\mathrm{d}\vm{x}, \end{equation} where $\boldsymbol{\sigma}$ is the Cauchy stress tensor and $\boldsymbol{\nabla}$ is the gradient operator. The gradient of the displacement field can be decomposed into its symmetric ($\boldsymbol{\nabla}_\mathrm{S}\vm{v}$) and skew-symmetric ($\boldsymbol{\nabla}_\mathrm{AS}\vm{v}$) parts, as follows: \begin{equation}\label{eq:gradv} \boldsymbol{\nabla}\vm{v}=\boldsymbol{\nabla}_\mathrm{S}\vm{v}+\boldsymbol{\nabla}_\mathrm{AS}\vm{v}=\boldsymbol{\varepsilon}(\vm{v})+\boldsymbol{\omega}(\vm{v}), \end{equation} where \begin{equation}\label{eq:strain} \boldsymbol{\nabla}_\mathrm{S}\vm{v}=\boldsymbol{\varepsilon}(\vm{v})=\frac{1}{2}\left(\boldsymbol{\nabla}\vm{v}+\boldsymbol{\nabla}^\mathsf{T}\vm{v}\right) \end{equation} is the strain tensor, and \begin{equation}\label{eq:skew} \boldsymbol{\nabla}_\mathrm{AS}\vm{v}=\boldsymbol{\omega}(\vm{v})=\frac{1}{2}\left(\boldsymbol{\nabla}\vm{v}-\boldsymbol{\nabla}^\mathsf{T}\vm{v}\right) \end{equation} is the skew-symmetric gradient tensor that represents rotations. The Cauchy stress tensor is related to the strain tensor by \begin{equation}\label{eq:cauchystress} \boldsymbol{\sigma}=\mat{D}:\boldsymbol{\varepsilon}(\vm{u}), \end{equation} where $\mat{D}$ is a fourth-order constant tensor that depends on the material of the elastic body.
Substituting~\eref{eq:gradv} into~\eref{eq:bilinearform1} and noting that $\boldsymbol{\sigma}(\vm{u}):\boldsymbol{\omega}(\vm{v})=0$ because of the symmetry of the stress tensor, results in the following simplification of the bilinear form: \begin{equation}\label{eq:simpbilinearform} a(\vm{u},\vm{v})=\int_{\Omega}\boldsymbol{\sigma}(\vm{u}):\boldsymbol{\varepsilon}(\vm{v})\,\mathrm{d}\vm{x}, \end{equation} which leads to the standard form of presenting the weak formulation: find $\vm{u}(\vm{x})\in V$ such that \begin{subequations}\label{eq:weakform} \begin{align} a(\vm{u},\vm{v}) &= \ell_{b}(\vm{v}) + \ell_{f}(\vm{v}) \quad \forall \vm{v}(\vm{x})\in W,\label{eq:weakform_a}\\ a(\vm{u},\vm{v}) &= \int_{\Omega}\boldsymbol{\sigma}(\vm{u}):\boldsymbol{\varepsilon}(\vm{v})\,\mathrm{d}\vm{x},\label{eq:weakform_b}\\ \ell_{b}(\vm{v}) &= \int_{\Omega}\vm{b}\cdot\vm{v}\,\mathrm{d}\vm{x}, \quad \ell_{f}(\vm{v})=\int_{\Gamma_f}\vm{f}\cdot\vm{v}\,\mathrm{d}s,\label{eq:weakform_c} \end{align} \end{subequations} where $V$ and $W$ are the displacement trial and test spaces defined as follows: \begin{align*} V &:= \left\{\vm{u}(\vm{x}): \vm{u} \in \mathcal{W}(\Omega) \subseteq [ H^{1}(\Omega)]^2, \ \vm{u} = \vm{g} \ \textrm{on } \Gamma_g \right\},\\ W &:= \left\{\vm{v}(\vm{x}): \vm{v} \in \mathcal{W}(\Omega) \subseteq [ H^{1}(\Omega) ]^2, \ \vm{v} = \vm{0} \ \textrm{on } \Gamma_g \right\}, \end{align*} where the space $\mathcal{W}(\Omega)$ includes linear displacement fields. In the Galerkin approximation, the domain $\Omega$ is partitioned into non-overlapping elements. This partition is known as a mesh. We denote by $E$ an element having an area $|E|$ and a boundary $\partial E$ that is formed by edges $e$ of length $|e|$. The partition formed by these elements is denoted by $\mathcal{T}^h$, where $h$ is the maximum diameter of any element in the partition. The set formed by the union of all the element edges in this partition is denoted by $\mathcal{E}^h$, and the set formed by all the element edges lying on $\Gamma_f$ is denoted by $\mathcal{E}_f^h$. On this partition, the trial and test displacement fields are approximated using basis functions, and hence $\vm{u}$ and $\vm{v}$ are replaced by the approximations $\vm{u}^h$ and $\vm{v}^h$, respectively. The bilinear and linear forms are then obtained by summation of the contributions from the elements in the mesh, as follows: \begin{equation*} a(\vm{u}^h,\vm{v}^h)=\sum_{E\in\mathcal{T}^h}a_E(\vm{u}^h,\vm{v}^h),\quad \ell_b(\vm{v}^h)=\sum_{E\in\mathcal{T}^h}\ell_{b,E}(\vm{v}^h)\quad \textrm{and} \quad \ell_f(\vm{v}^h)=\sum_{e\in\mathcal{E}_f^h}\ell_{f,e}(\vm{v}^h). \end{equation*} In general, the weak form integrals are not available in closed form, since functions in $\mathcal{W}(E)$, and in particular its basis, are not necessarily polynomial functions. Therefore, these integrals are evaluated using quadrature, thereby potentially introducing quadrature errors that make the integrals mesh-dependent. If that is the case, the convergence of the numerical solution will be affected. To reflect this, a superscript $h$ is added to the symbols that represent the bilinear and linear forms.
Thus, the Galerkin solution is sought as the solution of the global system that results from the weak formulation described by the following discrete bilinear and linear forms: \begin{equation*} a^h(\vm{u}^h,\vm{v}^h)=\sum_{E\in\mathcal{T}^h}a_E^h(\vm{u}^h,\vm{v}^h), \quad \ell_b^h(\vm{v}^h)=\sum_{E\in\mathcal{T}^h}\ell_{b,E}^h(\vm{v}^h) \quad \textrm{and} \quad \ell_f^h(\vm{v}^h)=\sum_{e\in\mathcal{E}_f^h}\ell_{f,e}^h(\vm{v}^h), \end{equation*} with the corresponding discrete global trial and test spaces defined, respectively, as follows: \begin{align*} V^h &:= \left\{\vm{u}^h(\vm{x})\in V: \vm{u}^h|_E \in \mathcal{W}(E) \subseteq [ H^{1}(E)]^2 \ \forall E \in \mathcal{T}^h\right\},\\ W^h &:= \left\{\vm{v}^h(\vm{x})\in W: \vm{v}^h|_E \in \mathcal{W}(E) \subseteq [ H^{1}(E)]^2 \ \forall E \in \mathcal{T}^h\right\}. \end{align*} In the preceding discussion, we have implied that $a_E^h$ is inexact due to its evaluation using numerical quadrature --- in this case, $a_E^h$ is said to be \textit{uncomputable}. The situation is completely different in the VEM approach: $a_E^h$ is not evaluated using numerical quadrature. Instead, the displacement field is computed through projection operators that are tailored to achieve an algebraic (exact) evaluation of $a_E^h$ --- in this case, $a_E^h$ is said to be \textit{computable}. \section{The virtual element method} \label{sec:vem} In standard two-dimensional finite element methods, the partition $\mathcal{T}^h$ is usually formed by triangles and quadrilaterals. In the VEM, the partition is formed by elements with an arbitrary number of edges, of which triangles and quadrilaterals are particular instances. We refer to these more general elements as polygonal elements. \subsection{The polygonal element} Let the domain $\Omega$ be partitioned into non-overlapping polygonal elements with straight edges. The number of edges of a polygonal element, which equals its number of nodes, is denoted by $N$. The unit outward normal to the element boundary in the Cartesian coordinate system is denoted by $\vm{n}=[n_1 \quad n_2]^\mathsf{T}$. \fref{fig:1} presents a schematic representation of a polygonal element for $N=5$, where the edge $e_a$ of length $|e_a|$ and the edge $e_{a-1}$ of length $|e_{a-1}|$ are the element edges incident to node $a$, and $\vm{n}_a$ and $\vm{n}_{a-1}$ are the unit outward normals to these edges, respectively. \begin{figure}[!tbhp] \centering \epsfig{file = fig1.eps, width = 0.37\textwidth} \caption{Schematic representation of a polygonal element of $N=5$ edges} \label{fig:1} \end{figure} \subsection{Projection operators} As in finite elements, for the numerical solution to converge monotonically it is required that the displacement approximation in the polygonal element can represent rigid body modes and constant strain states. This demands that the displacement approximation in the element be at least a linear polynomial~\cite{strang:2008:AAO}. In the VEM, projection operators are devised to extract the rigid body modes, the constant strain states and the linear polynomial part of the motion at the element level. The spaces where these components of the motion reside are given next.
The space of linear displacements over $E$ is defined as \begin{equation} \mathcal{P}(E):=\left\{\vm{a}+\mat{B}(\vm{x}-\overline{\vm{x}}):\vm{a}\in\Re^2,\,\mat{B}\in \Re^{2\times 2}\right\}, \label{eq:linearspace} \end{equation} where $\overline{\vm{x}}$ is defined through the mean value of a function $h$ over the element nodes given by \begin{equation} \overline{h}=\frac{1}{N}\sum_{j=1}^{N}h(\vm{x}_j), \label{eq:h_bar} \end{equation} where $N$ is the number of nodes of coordinates $\vm{x}_j$ that define the polygonal element\footnote[1]{Eq.~\eref{eq:h_bar} in fact defines any `barred' term that appears in this paper.}; $\mat{B}$ is a second-order tensor and thus can be uniquely expressed as the sum of a symmetric and a skew-symmetric tensor. Let the symmetric and skew-symmetric tensors be denoted by $\mat{B}_\mathrm{S}$ and $\mat{B}_\mathrm{AS}$, respectively. The spaces of rigid body modes and constant strain states over $E$ are defined, respectively, as follows: \begin{equation} \mathcal{R}(E):=\left\{\vm{a}+\mat{B}_\mathrm{AS}\cdot(\vm{x}-\overline{\vm{x}}):\vm{a}\in \Re^2,\,\mat{B}_\mathrm{AS}\in \Re^{2\times 2},\, \mat{B}_\mathrm{AS}^\mathsf{T}= -\mat{B}_\mathrm{AS}\right\}, \label{eq:rigidbodyspace} \end{equation} \begin{equation} \mathcal{C}(E):=\left\{\mat{B}_\mathrm{S}\cdot(\vm{x}-\overline{\vm{x}}):\mat{B}_\mathrm{S}\in \Re^{2\times 2},\, \mat{B}_\mathrm{S}^\mathsf{T}= \mat{B}_\mathrm{S}\right\}. \label{eq:constantstrainspace} \end{equation} Note that the space of linear displacements is the direct sum of the spaces given in~\eref{eq:rigidbodyspace} and~\eref{eq:constantstrainspace}, that is, $\mathcal{P}(E)=\mathcal{R}(E)+\mathcal{C}(E)$. The extraction of the components of the displacement field in the three aforementioned spaces is achieved through the following projection operators: \begin{equation}\label{eq:pir} \Pi_\mathcal{R}: \mathcal{W}(E) \to \mathcal{R}(E), \quad \Pi_\mathcal{R} \vm{r}=\vm{r}, \quad \forall \vm{r}\in \mathcal{R}(E) \end{equation} for extracting the rigid body modes, \begin{equation}\label{eq:pic} \Pi_\mathcal{C}: \mathcal{W}(E) \to \mathcal{C}(E), \quad \Pi_\mathcal{C} \vm{c}=\vm{c}, \quad \forall \vm{c}\in \mathcal{C}(E) \end{equation} for extracting the constant strain states, and \begin{equation}\label{eq:pip} \Pi_\mathcal{P}: \mathcal{W}(E) \to \mathcal{P}(E), \quad \Pi_\mathcal{P} \vm{p}=\vm{p}, \quad \forall \vm{p}\in \mathcal{P}(E) \end{equation} for extracting the linear polynomial part. Since $\mathcal{P}(E)=\mathcal{R}(E)+\mathcal{C}(E)$, the projection operators satisfy the relation \begin{equation}\label{eq:rcsum} \Pi_\mathcal{P}=\Pi_\mathcal{R}+\Pi_\mathcal{C}. \end{equation} By definition, the space $\mathcal{W}(E)$ includes linear displacements, which means that $\mathcal{W}(E) \supseteq \mathcal{P}(E)$. Thus, any $\vm{u},\vm{v}\in \mathcal{W}(E)$ can be decomposed into three terms, as follows: \begin{subequations}\label{eq:usplit} \begin{align} \vm{u}&=\Pi_\mathcal{R}\vm{u}+\Pi_\mathcal{C}\vm{u}+(\vm{u}-\Pi_\mathcal{P}\vm{u}),\\ \vm{v}&=\Pi_\mathcal{R}\vm{v}+\Pi_\mathcal{C}\vm{v}+(\vm{v}-\Pi_\mathcal{P}\vm{v}), \end{align} \end{subequations} that is, into their rigid body modes, their constant strain states and their additional non-polynomial or high-order functions, respectively.
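To make the decomposition concrete, consider, as an illustrative example of our own, the linear field $\vm{v}=\vm{a}+\mat{B}(\vm{x}-\overline{\vm{x}})$ with \begin{equation*} \mat{B}=\smat{1 & 3\\ -1 & 2}, \quad\textrm{so that}\quad \mat{B}_\mathrm{S}=\frac{1}{2}\left(\mat{B}+\mat{B}^\mathsf{T}\right)=\smat{1 & 1\\ 1 & 2}, \quad \mat{B}_\mathrm{AS}=\frac{1}{2}\left(\mat{B}-\mat{B}^\mathsf{T}\right)=\smat{0 & 2\\ -2 & 0}. \end{equation*} The part $\vm{a}+\mat{B}_\mathrm{AS}\cdot(\vm{x}-\overline{\vm{x}})$ lies in $\mathcal{R}(E)$ and the part $\mat{B}_\mathrm{S}\cdot(\vm{x}-\overline{\vm{x}})$ lies in $\mathcal{C}(E)$; for such a purely linear field the third term in~\eref{eq:usplit} vanishes.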
The explicit forms of the projection operators that are defined through~\eref{eq:pir}-\eref{eq:pip} are given in Ref.~\cite{Gain-Talischi-Paulino:2014} and are summarized as follows: let the cell-average of the strain tensor be defined as \begin{equation}\label{eq:volavg_strain} \widehat{\boldsymbol{\varepsilon}}(\vm{v})=\frac{1}{|E|}\int_E \boldsymbol{\varepsilon}(\vm{v})\,\mathrm{d}\vm{x} = \frac{1}{2|E|}\int_{\partial E}\left(\vm{v}\otimes\vm{n}+\vm{n}\otimes\vm{v}\right)\,\mathrm{d}s, \end{equation} where the divergence theorem has been used to transform the volume integral into a surface integral. Similarly, the cell-average of the skew-symmetric gradient tensor is defined as \begin{equation}\label{eq:volavg_skewstrain} \widehat{\boldsymbol{\omega}}(\vm{v})=\frac{1}{|E|}\int_E \boldsymbol{\omega}(\vm{v})\,\mathrm{d}\vm{x}=\frac{1}{2|E|}\int_{\partial E}\left(\vm{v}\otimes\vm{n}-\vm{n}\otimes\vm{v}\right)\,\mathrm{d}s. \end{equation} Note that $\widehat{\boldsymbol{\varepsilon}}(\vm{v})$ and $\widehat{\boldsymbol{\omega}}(\vm{v})$ are constant tensors in the element. On using the preceding definitions, the projection of $\vm{v}$ onto the space of rigid body modes is written as \begin{equation}\label{eq:pirv_final} \Pi_\mathcal{R}\vm{v}=\widehat{\boldsymbol{\omega}}(\vm{v})\cdot(\vm{x}-\overline{\vm{x}})+\overline{\vm{v}}, \end{equation} where $\widehat{\boldsymbol{\omega}}(\vm{v})\cdot(\vm{x}-\overline{\vm{x}})$ and $\overline{\vm{v}}$ are the rotation and translation modes of $\vm{v}$, respectively. The projection of $\vm{v}$ onto the space of constant strain states is given by \begin{equation}\label{eq:picv_final} \Pi_\mathcal{C}\vm{v}=\widehat{\boldsymbol{\varepsilon}}(\vm{v})\cdot(\vm{x}-\overline{\vm{x}}). \end{equation} Hence, by~\eref{eq:rcsum} the projection of $\vm{v}$ onto the space of linear displacements is written as \begin{equation}\label{eq:pipv_final} \Pi_\mathcal{P}\vm{v}=\Pi_\mathcal{R}\vm{v}+\Pi_\mathcal{C}\vm{v}=\widehat{\boldsymbol{\varepsilon}}(\vm{v})\cdot(\vm{x}-\overline{\vm{x}})+\widehat{\boldsymbol{\omega}}(\vm{v})\cdot(\vm{x}-\overline{\vm{x}})+\overline{\vm{v}}. \end{equation} The projection operator $\Pi_\mathcal{P}$ satisfies some important energy-orthogonality conditions that are invoked when constructing the VEM bilinear form; the proofs can be found in Ref.~\cite{Gain-Talischi-Paulino:2014}. Specifically, the projection $\Pi_\mathcal{P}$ satisfies: \begin{subequations} \begin{align} a_E(\vm{p},\vm{v}-\Pi_\mathcal{P}\vm{v}) &= 0 \quad \forall\vm{p}\in \mathcal{P}(E), \ \ \vm{v}\in \mathcal{W}(E), \label{eq:ortoprop_pip} \\ a_E(\vm{c},\vm{v}-\Pi_\mathcal{P}\vm{v}) &= 0 \quad \forall\vm{c}\in \mathcal{C}(E), \ \ \vm{v}\in \mathcal{W}(E). \label{eq:ortoprop_pip_c} \end{align} \end{subequations} The condition~\eref{eq:ortoprop_pip} means that $\vm{v}-\Pi_\mathcal{P}\vm{v}$ is energetically orthogonal to $\mathcal{P}$. The condition~\eref{eq:ortoprop_pip_c} emanates from condition~\eref{eq:ortoprop_pip} after replacing $\vm{p}=\vm{r}+\vm{c}$ and using the fact that rigid body modes have zero strain, that is, $a_E(\vm{r},\cdot)=0$.
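As a quick sanity check of the projection property in~\eref{eq:pip}, take a linear field $\vm{p}=\vm{a}+\mat{B}(\vm{x}-\overline{\vm{x}})\in\mathcal{P}(E)$. Its strain and skew-symmetric gradient are the constant tensors $\mat{B}_\mathrm{S}$ and $\mat{B}_\mathrm{AS}$, so that $\widehat{\boldsymbol{\varepsilon}}(\vm{p})=\mat{B}_\mathrm{S}$ and $\widehat{\boldsymbol{\omega}}(\vm{p})=\mat{B}_\mathrm{AS}$, and, by~\eref{eq:h_bar}, $\overline{\vm{p}}=\vm{a}+\mat{B}(\overline{\vm{x}}-\overline{\vm{x}})=\vm{a}$. Substituting these values into~\eref{eq:pipv_final} gives \begin{equation*} \Pi_\mathcal{P}\vm{p}=\mat{B}_\mathrm{S}\cdot(\vm{x}-\overline{\vm{x}})+\mat{B}_\mathrm{AS}\cdot(\vm{x}-\overline{\vm{x}})+\vm{a}=\vm{p}, \end{equation*} so linear fields are reproduced exactly, as required.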
\subsection{The VEM bilinear form} Substituting the VEM decomposition~\eref{eq:usplit} into the bilinear form~\eref{eq:simpbilinearform} leads to the following splitting of the bilinear form at the element level: \begin{align}\label{eq:vem_strain_energy} a_E(\vm{u},\vm{v}) &= a_E(\Pi_\mathcal{R}\vm{u}+\Pi_\mathcal{C}\vm{u}+(\vm{u}-\Pi_\mathcal{P}\vm{u}),\Pi_\mathcal{R}\vm{v}+\Pi_\mathcal{C}\vm{v}+(\vm{v}-\Pi_\mathcal{P}\vm{v}))\nonumber\\ &= a_E(\Pi_\mathcal{C}\vm{u},\Pi_\mathcal{C}\vm{v}) + a_E(\vm{u}-\Pi_\mathcal{P}\vm{u},\vm{v}-\Pi_\mathcal{P}\vm{v}), \end{align} where the symmetry of the bilinear form, the fact that $\Pi_\mathcal{R}\vm{u}$ and $\Pi_\mathcal{R}\vm{v}$ do not contribute to the bilinear form (both have zero strain as they belong to the space of rigid body modes), and the energy-orthogonality condition~\eref{eq:ortoprop_pip_c} have been used. The first term on the right-hand side of~\eref{eq:vem_strain_energy} is the bilinear form associated with the constant strain states, which provides consistency (it leads to the \textit{consistency} stiffness); the second term is the bilinear form associated with the additional non-polynomial or high-order functions, which provides stability (it leads to the \textit{stability} stiffness). We return to these concepts later in this section. \subsection{Projection matrices} \label{sec:projection_matrices} The projection matrices are constructed by discretizing the projection operators. We begin by writing the projections $\Pi_\mathcal{R}\vm{v}$ and $\Pi_\mathcal{C}\vm{v}$ in terms of the bases of their respective spaces. To this end, consider the two-dimensional Cartesian space and the skew-symmetry of $\widehat{\boldsymbol{\omega}}\equiv\widehat{\boldsymbol{\omega}}(\vm{v})$\footnote[3]{Note that $\widehat{\omega}_{11}=\widehat{\omega}_{22}=0$ and $\widehat{\omega}_{21}=-\widehat{\omega}_{12}$.}. The projection~\eref{eq:pirv_final} can be written as follows: \begin{equation}\label{eq:pirv_final_alt} \Pi_\mathcal{R}\vm{v}=\vm{r}_1\overline{v}_1+\vm{r}_2\overline{v}_2+\vm{r}_3\widehat{\omega}_{12}, \end{equation} where the basis for the space of rigid body modes is: \begin{equation}\label{eq:basis_rigid_body} \vm{r}_1 = \smat{1 & 0}^\mathsf{T},\,\, \vm{r}_2 = \smat{0 & 1}^\mathsf{T},\,\, \vm{r}_3 = \smat{(x_2-\overline{x}_2) & -(x_1-\overline{x}_1)}^\mathsf{T}. \end{equation} Similarly, on considering the symmetry of $\widehat{\boldsymbol{\varepsilon}}\equiv\widehat{\boldsymbol{\varepsilon}}(\vm{v})$, the projection~\eref{eq:picv_final} can be written as \begin{equation}\label{eq:picv_final_alt} \Pi_\mathcal{C}\vm{v}=\vm{c}_1\widehat{\varepsilon}_{11}+\vm{c}_2\widehat{\varepsilon}_{22}+\vm{c}_3\widehat{\varepsilon}_{12}, \end{equation} where the basis for the space of constant strain states is: \begin{equation}\label{eq:basis_const_strain} \vm{c}_1 = \smat{(x_1-\overline{x}_1) & 0}^\mathsf{T},\,\, \vm{c}_2 = \smat{0 & (x_2-\overline{x}_2)}^\mathsf{T},\,\, \vm{c}_3 = \smat{(x_2-\overline{x}_2) & (x_1-\overline{x}_1)}^\mathsf{T}.
\end{equation} On each polygonal element of $N$ edges with nodal coordinates denoted by $\vm{x}_a=[x_{1a} \quad x_{2a}]^\mathsf{T}$, the trial and test displacements are locally approximated as \begin{equation}\label{eq:disc_displacement} \vm{u}^h(\vm{x})=\sum_{a=1}^N\phi_a(\vm{x}) \vm{u}_a,\quad\vm{v}^h(\vm{x})=\sum_{b=1}^N\phi_b(\vm{x}) \vm{v}_b, \end{equation} where $\phi_a(\vm{x})$ and $\phi_b(\vm{x})$ are assumed to be the canonical basis functions having the Kronecker delta property (i.e., Lagrange-type functions), and $\vm{u}_a = [u_{1a} \quad u_{2a}]^\mathsf{T}$ and $\vm{v}_b = [v_{1b} \quad v_{2b}]^\mathsf{T}$ are nodal displacements. The canonical basis functions are also used to locally approximate the components of the basis for the space of rigid body modes: \begin{equation}\label{eq:disc_r_basis} \vm{r}_\alpha^h(\vm{x}) =\sum_{a=1}^N\phi_a(\vm{x})\vm{r}_\alpha(\vm{x}_a), \quad \alpha=1,\ldots,3 \end{equation} and the components of the basis for the space of constant strain states: \begin{equation}\label{eq:disc_c_basis} \vm{c}_\beta^h(\vm{x}) = \sum_{a=1}^N \phi_a(\vm{x})\vm{c}_\beta(\vm{x}_a),\quad \beta=1,\ldots,3. \end{equation} The discrete version of the projection to extract the rigid body modes is obtained by substituting~\eref{eq:disc_displacement} and \eref{eq:disc_r_basis} into~\eref{eq:pirv_final_alt}, which yields \begin{equation}\label{eq:disc_proj_rigid_body} \Pi_\mathcal{R}\vm{v}^h = \mat{N}\mat{P}_\mathcal{R}\mat{q}, \end{equation} where \begin{equation}\label{eq:matrix_N} \mat{N} = \left[(\mat{N})_1 \quad \cdots \quad (\mat{N})_a \quad \cdots \quad (\mat{N})_N\right] \, , \,\,\, (\mat{N})_a=\smat{\phi_a & 0 \\ 0 & \phi_a}, \end{equation} \begin{equation} \mat{q} = \left[\vm{v}_1^\mathsf{T} \quad \cdots \quad \vm{v}_a^\mathsf{T} \quad \cdots \quad \vm{v}_N^\mathsf{T}\right]^\mathsf{T} \, , \,\,\, \vm{v}_a=[v_{1a} \quad v_{2a}]^\mathsf{T} \end{equation} and \begin{equation}\label{eq:matrix_pr} \mat{P}_\mathcal{R}=\mat{H}_\mathcal{R}\mat{W}_\mathcal{R}^\mathsf{T} \end{equation} with \begin{equation}\label{eq:matrix_hr} \mat{H}_\mathcal{R}=\smat{ (\mat{H}_\mathcal{R})_1 & \cdots & (\mat{H}_\mathcal{R})_a & \cdots & (\mat{H}_\mathcal{R})_N }^\mathsf{T}, \quad (\mat{H}_\mathcal{R})_a = \smat{ 1 & 0 \\ 0 & 1\\ (x_{2a}-\overline{x}_2) & -(x_{1a}-\overline{x}_1)}^\mathsf{T} \end{equation} and \begin{equation}\label{eq:matrix_wr} \mat{W}_\mathcal{R}=\smat{ (\mat{W}_\mathcal{R})_1 & \cdots & (\mat{W}_\mathcal{R})_a & \cdots & (\mat{W}_\mathcal{R})_N}^\mathsf{T}, \quad (\mat{W}_\mathcal{R})_a = \smat{ \overline{\phi}_a & 0 \\ 0 & \overline{\phi}_a \\ q_{2a} & -q_{1a}}^\mathsf{T}. \end{equation} In~\eref{eq:matrix_wr}, $q_{ia}$ appears due to the discretization of $\widehat{\omega}_{12}$ (see~\eref{eq:volavg_skewstrain}) and is given by \begin{equation}\label{eq:qia} q_{ia}=\frac{1}{2|E|}\int_{\partial E}\phi_a n_i\,\mathrm{d}s,\quad i=1,2.
\end{equation} Similarly, substituting~\eref{eq:disc_displacement} and \eref{eq:disc_c_basis} into~\eref{eq:picv_final_alt} leads to the following discrete version of the projection to extract the constant strain states: \begin{equation}\label{eq:disc_proj_const_strains} \Pi_\mathcal{C}\vm{v}^h = \mat{N}\mat{P}_\mathcal{C}\mat{q}, \end{equation} where \begin{equation}\label{eq:matrix_pc} \mat{P}_\mathcal{C}=\mat{H}_\mathcal{C}\mat{W}_\mathcal{C}^\mathsf{T} \end{equation} with \begin{equation}\label{eq:matrix_hc} \mat{H}_\mathcal{C}=\smat{ (\mat{H}_\mathcal{C})_1 & \cdots & (\mat{H}_\mathcal{C})_a & \cdots & (\mat{H}_\mathcal{C})_N}^\mathsf{T}, \quad (\mat{H}_\mathcal{C})_a = \smat{ (x_{1a}-\overline{x}_1) & 0 \\ 0 & (x_{2a}-\overline{x}_2) \\ (x_{2a}-\overline{x}_2) & (x_{1a}-\overline{x}_1) }^\mathsf{T} \end{equation} and \begin{equation}\label{eq:matrix_wc} \mat{W}_\mathcal{C}=\smat{ (\mat{W}_\mathcal{C})_1 & \cdots & (\mat{W}_\mathcal{C})_a & \cdots & (\mat{W}_\mathcal{C})_N}^\mathsf{T}, \quad (\mat{W}_\mathcal{C})_a = \smat{ 2q_{1a} & 0\\ 0 & 2q_{2a}\\ q_{2a} & q_{1a}}^\mathsf{T}. \end{equation} In~\eref{eq:matrix_wc}, $q_{ia}$ is also given by~\eref{eq:qia} but in this case it stems from the discretization of $\widehat{\varepsilon}_{ij}$ (see~\eref{eq:volavg_strain}). The matrix form of the projection to extract the polynomial part of the displacement field is then $\mat{P}_\mathcal{P} = \mat{P}_\mathcal{R} + \mat{P}_\mathcal{C}$. For the development of the element \textit{consistency} stiffness matrix, it will be useful to have the following alternative expression for the discrete projection to extract the constant strain states: \begin{align}\label{eq:disc_proj_const_strains_alt_b} \Pi_\mathcal{C}\vm{v}^h &= \vm{c}_1\widehat{\varepsilon}_{11}+\vm{c}_2\widehat{\varepsilon}_{22}+\vm{c}_3\widehat{\varepsilon}_{12}\nonumber\\ &=\smat{\vm{c}_1 & \vm{c}_2 & \vm{c}_3}\sum_{b=1}^N\smat{2q_{1b} & 0\\ 0 & 2q_{2b}\\ q_{2b} & q_{1b}}\smat{v_{1b} \\ v_{2b}}\nonumber\\ &=\mat{c}\,\mat{W}_\mathcal{C}^\mathsf{T}\,\mat{q}. \end{align} \subsection{VEM element stiffness matrix} \label{sec:vem_element_stiffness} The decomposition given in~\eref{eq:vem_strain_energy} is used to construct the approximate mesh-dependent bilinear form $a_E^h(\vm{u},\vm{v})$ in a way that is computable at the element level. To this end, we approximate the quantity $a_E(\vm{u}-\Pi_\mathcal{P}\vm{u},\vm{v}-\Pi_\mathcal{P}\vm{v})$, which is uncomputable, with a computable one given by $s_E(\vm{u}-\Pi_\mathcal{P}\vm{u},\vm{v}-\Pi_\mathcal{P}\vm{v})$ and define \begin{align}\label{eq:vem_element_decomposition} a_E^h(\vm{u},\vm{v}) := a_E(\Pi_\mathcal{C}\vm{u},\Pi_\mathcal{C}\vm{v}) + s_E(\vm{u}-\Pi_\mathcal{P}\vm{u},\vm{v}-\Pi_\mathcal{P}\vm{v}), \end{align} where its right-hand side, as it will be revealed in the sequel, is computed algebraically. The decomposition~\eref{eq:vem_element_decomposition} has been proved to be endowed with the following crucial properties for establishing convergence~\cite{BeiraoDaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013,BeiraodaVeiga-Brezzi-Marini:2013}: For all $h$ and for all $E$ in $\mathcal{T}^h$ \begin{itemize} \item \textit{Consistency}: $\forall \vm{p} \in \mathcal{P}(E)$ and $\forall \vm{v}^h\in V^h|_E$ \begin{equation}\label{eq:consistency_cond} a_E^h(\vm{p},\vm{v}^h)=a_E(\vm{p},\vm{v}^h). 
\end{equation} \item \textit{Stability}: $\exists$ two constants $\alpha_*>0$ and $\alpha^*>0$, independent of $h$ and of $E$, such that \begin{equation}\label{eq:stability_cond} \forall\vm{v}^h\in V^h|_E, \quad \alpha_*a_E(\vm{v}^h,\vm{v}^h)\leq a_E^h(\vm{v}^h,\vm{v}^h)\leq \alpha^*a_E(\vm{v}^h,\vm{v}^h). \end{equation} \end{itemize} The discrete version of the VEM element bilinear form~\eref{eq:vem_element_decomposition} is constructed as follows. Substitute~\eref{eq:disc_proj_const_strains_alt_b} into the first term of the right-hand side of~\eref{eq:vem_element_decomposition} (note that when $\vm{u}^h$ is used instead of $\vm{v}^h$, $\mat{q}$ is replaced by the column vector of nodal displacements $\mat{d}$, which has the same structure as $\mat{q}$); use~\eref{eq:disc_proj_rigid_body} and \eref{eq:disc_proj_const_strains} to obtain $\Pi_\mathcal{P}\vm{v}^h=\Pi_\mathcal{R}\vm{v}^h+\Pi_\mathcal{C}\vm{v}^h=\mat{N}\mat{P}_\mathcal{P}\mat{q}$, where $\mat{P}_\mathcal{P} = \mat{H}_\mathcal{R}\mat{W}_\mathcal{R}^\mathsf{T}+\mat{H}_\mathcal{C}\mat{W}_\mathcal{C}^\mathsf{T}$. Also, note that $\vm{v}^h=\mat{N}\mat{q}$. Then, substitute the expressions for $\Pi_\mathcal{P}\vm{v}^h$ and $\vm{v}^h$ into the second term of the right-hand side of~\eref{eq:vem_element_decomposition}. This yields \begin{align}\label{eq:disc_vem_strain_energy} a_E^h(\vm{u}^h,\vm{v}^h) &= a_E(\mat{c}\,\mat{W}_\mathcal{C}^\mathsf{T}\,\mat{d},\mat{c}\,\mat{W}_\mathcal{C}^\mathsf{T}\,\mat{q}) + s_E(\mat{N}\mat{d}-\mat{N}\mat{P}_\mathcal{P}\mat{d},\mat{N}\mat{q}-\mat{N}\mat{P}_\mathcal{P}\mat{q}) \nonumber\\ &=\mat{q}^\mathsf{T}\mat{W}_\mathcal{C}\,a_E(\mat{c}^\mathsf{T},\mat{c})\,\mat{W}_\mathcal{C}^\mathsf{T}\mat{d} + \mat{q}^\mathsf{T}(\mat{I}_{2N}-\mat{P}_\mathcal{P})^\mathsf{T}\, s_E(\mat{N}^\mathsf{T},\mat{N})\,(\mat{I}_{2N}-\mat{P}_\mathcal{P})\,\mat{d}\nonumber\\ &= \mat{q}^\mathsf{T}|E|\,\mat{W}_\mathcal{C}\,\mat{D}\,\mat{W}_\mathcal{C}^\mathsf{T}\mat{d} + \mat{q}^\mathsf{T}(\mat{I}_{2N}-\mat{P}_\mathcal{P})^\mathsf{T}\,\mat{S}_E\,(\mat{I}_{2N}-\mat{P}_\mathcal{P})\,\mat{d}, \end{align} where $\mat{I}_{2N}$ is the ($2N \times 2N$) identity matrix and $\mat{S}_E=s_E(\mat{N}^\mathsf{T},\mat{N})$. Using Voigt notation and observing that $\upvarepsilon(\mat{c})=\smat{\varepsilon_{11}(\mat{c}) & \varepsilon_{22}(\mat{c}) & \varepsilon_{12}(\mat{c})}^\mathsf{T}=\mat{I}_{3}$ (the ($3\times 3$) identity matrix), in~\eref{eq:disc_vem_strain_energy} we have used $a_E(\mat{c}^\mathsf{T},\mat{c})=\int_E\upvarepsilon^\mathsf{T}(\mat{c}) \mat{D}\upvarepsilon(\mat{c})\,\mathrm{d}\vm{x}=\mat{D}\int_E\,\mathrm{d}\vm{x}=|E|\mat{D}$, where $\mat{D}$ is the constitutive matrix for an isotropic linear elastic material, given by \begin{equation} \mat{D} = \frac{E_Y}{(1+\nu)(1-2\nu)}\smat{1-\nu & \nu & 0\\ \nu & 1-\nu & 0\\ 0 & 0 & 2(1-2\nu)} \end{equation} for the plane strain condition, and \begin{equation} \mat{D} = \frac{E_Y}{(1-\nu^2)}\smat{1 & \nu & 0\\ \nu & 1 & 0\\ 0 & 0 & 2(1-\nu)} \end{equation} for the plane stress condition, where $E_Y$ is Young's modulus and $\nu$ is Poisson's ratio. The first term on the right-hand side of~\eref{eq:disc_vem_strain_energy} is the \textit{consistency} part of the discrete VEM element bilinear form, which provides patch test satisfaction when the solution is a linear displacement field (condition~\eref{eq:consistency_cond} is satisfied).
The second term on the right-hand side of~\eref{eq:disc_vem_strain_energy} is the \textit{stability} part of the discrete VEM element bilinear form and depends on the matrix $\mat{S}_E=s_E(\mat{N}^\mathsf{T},\mat{N})$. This matrix must be chosen such that condition~\eref{eq:stability_cond} holds without putting at risk condition~\eref{eq:consistency_cond}, which is already taken care of by the consistency part. There are quite a few possibilities for this matrix (see, for instance,~\cite{BeiraoDaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013,BeiraodaVeiga-Brezzi-Marini:2013,Gain-Talischi-Paulino:2014}). Herein, we adopt the $\mat{S}_E$ given by~\cite{Gain-Talischi-Paulino:2014} \begin{equation}\label{eq:alt_stability} \mat{S}_E=\alpha_E\,\mat{I}_{2N},\quad \alpha_E=\gamma\frac{|E|\textrm{trace}(\mat{D})}{\textrm{trace}(\mat{H}_\mathcal{C}^\mathsf{T}\mat{H}_\mathcal{C})}, \end{equation} where $\alpha_E$ is the scaling parameter and $\gamma$ is typically set to 1. From~\eref{eq:disc_vem_strain_energy}, the final expression for the VEM element stiffness matrix is obtained as the summation of the element \textit{consistency} and \textit{stability} stiffness matrices, as follows: \begin{equation}\label{eq:disc_stiffness} \mat{K}_E= |E|\,\mat{W}_\mathcal{C}\,\mat{D}\,\mat{W}_\mathcal{C}^\mathsf{T}+(\mat{I}_{2N}-\mat{P}_\mathcal{P})^\mathsf{T}\,\mat{S}_E\,(\mat{I}_{2N}-\mat{P}_\mathcal{P}), \end{equation} where we recall that $\mat{P}_\mathcal{P} = \mat{H}_\mathcal{R}\mat{W}_\mathcal{R}^\mathsf{T}+\mat{H}_\mathcal{C}\mat{W}_\mathcal{C}^\mathsf{T}$. Note that $\mat{H}_\mathcal{R}$ and $\mat{H}_\mathcal{C}$, which are given in~\eref{eq:matrix_hr} and~\eref{eq:matrix_hc}, respectively, are easily computed using the nodal coordinates of the element. However, in order to compute $\mat{W}_\mathcal{R}$ and $\mat{W}_\mathcal{C}$ (see their expressions in~\eref{eq:matrix_wr} and~\eref{eq:matrix_wc}, respectively), we need some knowledge of the basis functions so that $\overline{\phi}_a$ and $q_{ia}$ can be determined. Observe that $\overline{\phi}_a$ is computed using~\eref{eq:h_bar}, which requires the knowledge of the basis functions at the element nodes, and $q_{ia}$ is computed using~\eref{eq:qia}, which requires the knowledge of the basis functions on the element edges. Hence, everything we need to know about the basis functions is their behavior on the element boundary. We have already mentioned that the basis functions in the VEM are assumed to be Lagrange-type functions. This provides everything we need to know about them on the boundary of an element: basis functions are piecewise linear (edge by edge) and continuous on the element edges, and have the Kronecker delta property. Therefore, $\overline{\phi}_a$ can be computed simply as \begin{equation}\label{eq:known_basisfunctions_1} \overline{\phi}_a=\frac{1}{N}\sum_{j=1}^N\phi_a(\vm{x}_j)=\frac{1}{N}, \end{equation} and $q_{ia}$ can be computed exactly using a trapezoidal rule, which gives \begin{equation}\label{eq:known_basisfunctions_2} q_{ia}=\frac{1}{2|E|}\int_{\partial E}\phi_a n_i\,\mathrm{d}s= \frac{1}{4|E|}\left(|e_{a-1}|\{n_i\}_{a-1}+|e_a|\{n_i\}_a\right),\quad i=1,2, \end{equation} where $\{n_i\}_a$ are the components of $\vm{n}_a$ and $|e_a|$ is the length of the edge incident to node $a$, as defined in~\fref{fig:1}. The adoption of~\eref{eq:known_basisfunctions_1} and \eref{eq:known_basisfunctions_2} in the VEM results in an algebraic evaluation of the element stiffness matrix.
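To illustrate how direct this algebraic evaluation is in practice, the following self-contained C++ sketch computes $q_{ia}$ for all nodes of a counterclockwise-oriented polygon using~\eref{eq:known_basisfunctions_2}. It is a simplified illustration of our own and does not reproduce \texttt{Veamy}'s actual classes.
\begin{lstlisting}[language=C++, caption={Illustrative computation of $q_{ia}$ via the trapezoidal rule}, label=lst:qia_sketch]
#include <array>
#include <cstddef>
#include <vector>

// Simplified sketch (not Veamy's actual classes): compute
// q_{ia} = (|e_{a-1}| n_i^{a-1} + |e_a| n_i^{a}) / (4|E|)
// for every node a of a polygon with counterclockwise vertices.
struct Node { double x, y; };

std::vector<std::array<double, 2>> computeQ(const std::vector<Node> &v,
                                            double area) {
  const std::size_t N = v.size();
  std::vector<std::array<double, 2>> q(N);
  for (std::size_t a = 0; a < N; ++a) {
    const Node &prev = v[(a + N - 1) % N];
    const Node &curr = v[a];
    const Node &next = v[(a + 1) % N];
    // For a CCW polygon, an edge with direction (dx,dy) satisfies
    // |e| * (unit outward normal) = (dy,-dx).
    const double dx1 = curr.x - prev.x, dy1 = curr.y - prev.y; // edge e_{a-1}
    const double dx2 = next.x - curr.x, dy2 = next.y - curr.y; // edge e_a
    q[a][0] = (dy1 + dy2) / (4.0 * area);  // q_{1a}
    q[a][1] = (-dx1 - dx2) / (4.0 * area); // q_{2a}
  }
  return q;
}
\end{lstlisting}
With the $q_{ia}$ and the nodal coordinates at hand, the matrices $\mat{W}_\mathcal{R}$, $\mat{W}_\mathcal{C}$, $\mat{H}_\mathcal{R}$ and $\mat{H}_\mathcal{C}$, and therefore $\mat{K}_E$, follow by direct matrix algebra.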
This algebraic evaluation also means that the basis functions are not evaluated explicitly --- in fact, they are never computed. Thus, the basis functions are said to be \textit{virtual}. In addition, the knowledge of the basis functions in the interior of the element is not required, although the linear approximation of the displacement field everywhere in the element is computable through the projection~\eref{eq:pipv_final}. Therefore, a more specific discrete global trial space than the one already given in Section~\ref{sec:modelproblem} can be built by assembling, element by element, the local space defined as~\cite{BeiraoDaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013,BeiraodaVeiga-Lovadina-Mora:2015} \begin{equation} V^h|_E := \left\{\vm{v}^h \in [H^1(E)\cap C^0(E)]^2:\Delta \vm{v}^h=\vm{0}\,\,\textrm{in}\,\, E, \, \vm{v}^h|_e = \mathcal{P}(e)\,\,\,\forall e\in\partial E\right\}. \end{equation} \subsection{VEM element body and traction force vectors} \label{sec:vembodyforcevector} For linear displacements, the body force can be approximated by a piecewise constant function. Typically, this piecewise constant approximation is defined as the cell-average $\vm{b}^h=\frac{1}{|E|}\int_E\vm{b}\,\mathrm{d}\vm{x}=\widehat{\vm{b}}$. Thus, the body force part of the discrete VEM element linear form can be computed as follows~\cite{BeiraoDaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013,BeiraodaVeiga-Brezzi-Marini:2013,artioli:AO2DVEM:2017}: \begin{equation}\label{eq:discrete_element_body_force_linear_form} \ell_{b,E}^h(\vm{v}^h)=\int_E\vm{b}^h\cdot\overline{\vm{v}}^h\,\mathrm{d}\vm{x}=|E|\widehat{\vm{b}}\cdot\overline{\vm{v}}^h= \mat{q}^\mathsf{T}|E|\,\overline{\mat{N}}^\mathsf{T}\widehat{\vm{b}}, \end{equation} where \begin{equation}\label{eq:matrix_N_bar} \overline{\mat{N}} = \left[(\overline{\mat{N}})_1 \quad \cdots \quad (\overline{\mat{N}})_a \quad \cdots \quad (\overline{\mat{N}})_N\right] \, , \,\,\, (\overline{\mat{N}})_a=\smat{\overline{\phi}_a & 0 \\ 0 & \overline{\phi}_a}. \end{equation} Hence, the VEM element body force vector is given by \begin{equation} \mat{f}_{b,E}=|E|\,\overline{\mat{N}}^\mathsf{T}\widehat{\vm{b}}. \end{equation} The traction force part of the VEM element linear form is similar to the integral expression given in~\eref{eq:discrete_element_body_force_linear_form}, but the integral is one dimension lower. Therefore, on considering the element edge as a two-node one-dimensional element, the VEM element traction force vector can be computed on an element edge lying on the natural (Neumann) boundary, as follows: \begin{equation} \mat{f}_{f,e}=|e|\,\overline{\mat{N}}_\Gamma^\mathsf{T}\,\widehat{\vm{f}}, \end{equation} where, since the edge element has two nodes ($N=2$), \begin{equation} \overline{\mat{N}}_\Gamma=\smat{\overline{\phi}_1 & 0 & \overline{\phi}_2 & 0\\ 0 & \overline{\phi}_1 & 0 & \overline{\phi}_2}=\smat{\frac{1}{2} & 0 & \frac{1}{2} & 0\\ 0 & \frac{1}{2} & 0 & \frac{1}{2}} \end{equation} and $\widehat{\vm{f}}=\frac{1}{|e|}\int_e\vm{f}\,\mathrm{d}s$.
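Because $\overline{\phi}_a=1/N$, both force vectors reduce to an equal splitting of the averaged load among the element (or edge) nodes. The following minimal sketch makes this explicit for the body force vector; the helper function is illustrative and is not part of \texttt{Veamy}'s API.
\begin{lstlisting}[language=C++, caption={Illustrative sketch of the VEM element body force vector}, label=lst:bodyforce_sketch]
#include <vector>

// f_b = |E| * Nbar^T * bhat with phibar_a = 1/N: every node receives
// the share (|E|/N) of the cell-averaged body force bhat.
std::vector<double> bodyForceVector(double area, int N,
                                    double bhat1, double bhat2) {
  std::vector<double> fb(2 * N);
  for (int a = 0; a < N; ++a) {
    fb[2 * a]     = (area / N) * bhat1; // x-dof of node a
    fb[2 * a + 1] = (area / N) * bhat2; // y-dof of node a
  }
  return fb;
}
\end{lstlisting}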
\subsection{$L^2$-norm and $H^1$-seminorm of the error} \label{sec:norms} To assess the accuracy and convergence of the VEM, two global error measures are used. The relative $L^2$-norm of the displacement error is defined as \begin{equation} \frac{||\vm{u}-\Pi_\mathcal{P}\vm{u}^h||_{L^2(\Omega)}}{||\vm{u}||_{L^2(\Omega)}} =\sqrt{\frac{\sum_E\int_E\left(\vm{u}-\Pi_\mathcal{P}\vm{u}^h\right)^\mathsf{T} \left(\vm{u}-\Pi_\mathcal{P}\vm{u}^h\right)\,\mathrm{d}\vm{x}} {\sum_E\int_E\vm{u}^\mathsf{T}\vm{u}\,\mathrm{d}\vm{x}}}, \end{equation} and the relative $H^1$-seminorm of the displacement error is given by \begin{equation} \frac{||\vm{u}-\Pi_\mathcal{P}\vm{u}^h||_{H^1(\Omega)}}{||\vm{u}||_{H^1(\Omega)}} =\sqrt{\frac{\sum_E\int_E\left(\upvarepsilon(\vm{u})-\upvarepsilon(\Pi_\mathcal{C}\vm{u}^h)\right)^\mathsf{T} \mat{D}\left(\upvarepsilon(\vm{u})-\upvarepsilon(\Pi_\mathcal{C}\vm{u}^h)\right)\,\mathrm{d}\vm{x}} {\sum_E\int_E\upvarepsilon(\vm{u})^\mathsf{T}\mat{D}\upvarepsilon(\vm{u})\,\mathrm{d}\vm{x}}}, \end{equation} where the strain appears in Voigt notation and $\upvarepsilon(\Pi_\mathcal{C}\vm{u}^h)=\boldsymbol{\nabla}_\mathrm{S}(\Pi_\mathcal{C}\vm{u}^h)=\widehat{\upvarepsilon}(\vm{u}^h)$ (see~\eref{eq:strain} and~\eref{eq:picv_final}). \subsection{VEM element stiffness matrix for the Poisson problem} \label{sec:vempoisson} The VEM formulation for the Poisson problem is derived similarly to the VEM formulation for the linear elastostatic problem. However, herein we give the VEM stiffness matrix for the Poisson problem by reducing the solution dimension in the two-dimensional linear elastostatic VEM formulation. The following reductions are used: the displacement field reduces to the scalar field $v(\vm{x})$, the strain is simplified to $\boldsymbol{\varepsilon}(v)=\boldsymbol{\nabla} v$, the rotations become $\boldsymbol{\omega}(v)=\vm{0}$, and the constitutive matrix is replaced by the ($2\times 2$) identity matrix. Hence, the VEM projections for the Poisson problem become $\Pi_\mathcal{R}v=\overline{v}$ and $\Pi_\mathcal{C}v=\widehat{\boldsymbol{\nabla}}(v)\cdot(\vm{x}-\overline{\vm{x}})$. The matrices that result from the discretization of the projection operators are simplified to \begin{equation}\label{eq:matrix_hr_poisson} \mat{H}_\mathcal{R}=\smat{ (\mat{H}_\mathcal{R})_1 & \cdots & (\mat{H}_\mathcal{R})_a & \cdots & (\mat{H}_\mathcal{R})_N}^\mathsf{T}, \quad (\mat{H}_\mathcal{R})_a = 1, \end{equation} \begin{equation}\label{eq:matrix_wr_poisson} \mat{W}_\mathcal{R}=\smat{ (\mat{W}_\mathcal{R})_1 & \cdots & (\mat{W}_\mathcal{R})_a & \cdots & (\mat{W}_\mathcal{R})_N}^\mathsf{T}, \quad (\mat{W}_\mathcal{R})_a = \frac{1}{N}, \end{equation} \begin{equation}\label{eq:matrix_hc_poisson} \mat{H}_\mathcal{C}=\smat{ (\mat{H}_\mathcal{C})_1 & \cdots & (\mat{H}_\mathcal{C})_a & \cdots & (\mat{H}_\mathcal{C})_N}^\mathsf{T}, \quad (\mat{H}_\mathcal{C})_a = \smat{ (x_{1a}-\overline{x}_1) & (x_{2a}-\overline{x}_2) }, \end{equation} \begin{equation}\label{eq:matrix_wc_poisson} \mat{W}_\mathcal{C}=\smat{ (\mat{W}_\mathcal{C})_1 & \cdots & (\mat{W}_\mathcal{C})_a & \cdots & (\mat{W}_\mathcal{C})_N}^\mathsf{T}, \quad (\mat{W}_\mathcal{C})_a = \smat{ 2q_{1a} & 2q_{2a} }.
\end{equation} On using the preceding matrices, the projection matrix is $\mat{P}_\mathcal{P} = \mat{H}_\mathcal{R}\mat{W}_\mathcal{R}^\mathsf{T}+\mat{H}_\mathcal{C}\mat{W}_\mathcal{C}^\mathsf{T}$ and the final expression for the VEM element stiffness matrix is written as \begin{equation}\label{eq:disc_stiffness_poisson} \mat{K}_E= |E|\,\mat{W}_\mathcal{C}\mat{W}_\mathcal{C}^\mathsf{T}+(\mat{I}_{N}-\mat{P}_\mathcal{P})^\mathsf{T}(\mat{I}_{N}-\mat{P}_\mathcal{P}), \end{equation} where $\mat{I}_{N}$ is the ($N\times N$) identity matrix and $\mat{S}_E=\mat{I}_{N}$ has been used in the stability stiffness, as this represents a suitable choice for $\mat{S}_E$ in the Poisson problem~\cite{BeiraoDaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013}.
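Given these matrices, the Poisson element stiffness matrix can be assembled in a few lines. The following Eigen-based sketch is a simplified illustration of our own (function and argument names are not \texttt{Veamy}'s API):
\begin{lstlisting}[language=C++, caption={Illustrative assembly of the Poisson VEM element stiffness matrix}, label=lst:poisson_ke_sketch]
#include <Eigen/Dense>

// Simplified sketch: K_E = |E| Wc Wc^T + (I - Pp)^T (I - Pp), with S_E = I_N.
// Hc is N x 2 with rows (x_{1a}-xbar_1, x_{2a}-xbar_2);
// Wc is N x 2 with rows (2q_{1a}, 2q_{2a}).
Eigen::MatrixXd poissonStiffness(const Eigen::MatrixXd &Hc,
                                 const Eigen::MatrixXd &Wc,
                                 double area) {
  const int N = static_cast<int>(Hc.rows());
  const Eigen::VectorXd Hr = Eigen::VectorXd::Ones(N);               // (H_R)_a = 1
  const Eigen::VectorXd Wr = Eigen::VectorXd::Constant(N, 1.0 / N);  // (W_R)_a = 1/N
  const Eigen::MatrixXd Pp = Hr * Wr.transpose() + Hc * Wc.transpose();
  const Eigen::MatrixXd I = Eigen::MatrixXd::Identity(N, N);
  return area * Wc * Wc.transpose() + (I - Pp).transpose() * (I - Pp);
}
\end{lstlisting}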
\section{Object-oriented implementation of VEM in C++} \label{sec:implementation} In this section, we introduce \texttt{Veamy}, a library that implements the VEM for the linear elastostatic and Poisson problems in two dimensions using object-oriented programming in C++. For the purpose of comparison with the VEM, a module implementing the standard FEM is available within \texttt{Veamy} for the solution of the two-dimensional linear elastostatic problem using three-node triangular finite elements. In \texttt{Veamy}, entities such as element, degree of freedom and constraint, among others, are represented by C++ classes. \texttt{Veamy} uses the following external libraries: \begin{itemize} \item Triangle~\cite{shewchuk96b}, a two-dimensional quality mesh generator and Delaunay triangulator. \item Clipper~\cite{clipperweb}, an open source freeware library for clipping and offsetting lines and polygons. \item Eigen~\cite{eigenweb}, a C++ template library for linear algebra. \end{itemize} Triangle and Clipper are used in the implementation of \texttt{Delynoi}~\cite{delynoiweb}, a polygonal mesh generator that is based on the constrained Voronoi diagram. The usage of our polygonal mesh generator is covered in Section~\ref{sec:meshgenerator}. \texttt{Veamy} is free and open source software and is available from Netlib (\url{http://www.netlib.org/numeralgo/}) as the na51 package. In addition, a website (\url{http://camlab.cl/software/veamy/}) is available, where the software is maintained. After downloading and uncompressing the software, the main directory ``Veamy-2.1/'' is created. This directory is organized as follows. The source code that implements the VEM is provided in the folder ``veamy/'' and the subfolders therein. External libraries that are used by \texttt{Veamy} are provided in the folder ``lib/.'' The folder ``matplots/'' contains MATLAB functions that are useful for plotting meshes and the VEM solution, and for writing a \texttt{PolyMesher}~\cite{Talischi:POLYM:2012} mesh and boundary conditions to a text file that is readable by \texttt{Veamy}. A detailed software documentation with graphical content can be found in the tutorial manual that is provided in the folder ``docs/.'' Several tests are located in the folder ``test/.'' Some of these tests are covered in the tutorial manual and in Section~\ref{sec:sampleusage} of this paper. \texttt{Veamy} supports Linux and Mac OS machines only, and compiles with g++, the GNU C++ compiler (GCC 7.3 or newer versions should be used). The installation procedure and the content that comprises the software are described in detail in the README.txt file (and also in the tutorial manual), which can be found in the main directory. The core design of \texttt{Veamy} is presented in three UML diagrams that are intended to explain the numerical methods implemented (\fref{fig:2}), the problem conditions inherent to the linear elastostatic and Poisson problems (\fref{fig:3}), and the computation of the $L^2$-norm and $H^1$-seminorm of the error (\fref{fig:4}). \subsection{Numerical methods} The \texttt{Veamy} library is divided into two modules, one that implements the VEM and another that implements the FEM. \fref{fig:2} summarizes the implementation of these methods. Two abstract classes are central to the \texttt{Veamy} library: \texttt{Calculator2D} and \texttt{Element}. \texttt{Calculator2D} is designed in the spirit of the controller design pattern. It receives the \texttt{ProblemDiscretization} subclasses with all their associated problem conditions, creates the required structures, applies the boundary conditions and runs the simulation. \texttt{Calculator2D}, as an abstract class, has a number of methods that all inherited classes must implement; the two most important are the one in charge of creating the elements and the one in charge of computing the element stiffness matrix and the element (body and traction) force vector. We implement two concrete \texttt{Calculator2D} classes, called \texttt{Veamer} and \texttt{Feamer}, with the former representing the controller for the VEM and the latter for the FEM. \begin{figure}[!tbhp] \centering \epsfig{file = fig2.eps, width = 0.85\textwidth} \caption{UML diagram for the \texttt{Veamy} library. VEM and FEM modules} \label{fig:2} \end{figure} On the other hand, \texttt{Element} is the class that encapsulates the behavior of each element in the domain. It is in charge of keeping the degrees of freedom of the element and its associated stiffness matrix and force vector. \texttt{Element} contains methods to create and assign degrees of freedom, and to assemble the element stiffness matrix and the element force vector into the global ones. An \texttt{Element} has the information of its defining polygon (the three-node triangle is the lowest-order polygon) along with its degrees of freedom. \texttt{Element} has two inherited classes, \texttt{VeamyElement} and \texttt{FeamyElement}, which represent elements of the VEM and the FEM, respectively; they are in charge of the computation of the element stiffness matrix and the element force vector. Algorithm~\ref{algo:1} summarizes the implementation of the linear elastostatic VEM element stiffness matrix in the \texttt{VeamyElement} class using the notation presented in Sections~\ref{sec:projection_matrices} and \ref{sec:vem_element_stiffness}.
\begin{algorithm}[H] \SetAlgoCaptionSeparator{\quad} \DontPrintSemicolon \SetArgSty{textrm} \SetAlgoLined $\mat{H}_\mathcal{R}=\mat{0}$, $\mat{W}_\mathcal{R}=\mat{0}$, $\mat{H}_\mathcal{C}=\mat{0}$, $\mat{W}_\mathcal{C}=\mat{0}$\; \For{each node in the polygonal element}{Get incident edges\; Compute the unit outward normal vector to each incident edge\; Compute $(\mat{H}_\mathcal{R})_a$ and $(\mat{H}_\mathcal{C})_a$, and insert them into $\mat{H}_\mathcal{R}$ and $\mat{H}_\mathcal{C}$, respectively\; Compute $(\mat{W}_\mathcal{R})_a$ and $(\mat{W}_\mathcal{C})_a$, and insert them into $\mat{W}_\mathcal{R}$ and $\mat{W}_\mathcal{C}$, respectively} Compute $\mat{I}_{2N}$, $\mat{P}_\mathcal{R}$, $\mat{P}_\mathcal{C}$, $\mat{P}_\mathcal{P}$, $\mat{D}$\; Compute $\mat{S}_E$\; \KwOut{$\mat{K}_E=|E|\,\mat{W}_\mathcal{C}\,\mat{D}\,\mat{W}_\mathcal{C}^\mathsf{T}+ (\mat{I}_{2N}-\mat{P}_\mathcal{P})^\mathsf{T}\,\mat{S}_E\,(\mat{I}_{2N}-\mat{P}_\mathcal{P})$} \label{algo:1} \caption{Implementation of the VEM element stiffness matrix for the linear elastostatic problem in the \texttt{VeamyElement} class} \end{algorithm} The element force vector is computed with the aid of the abstract classes \texttt{BodyForceVector} and \texttt{TractionVector}. Each of them has two concrete subclasses named \texttt{VeamyBodyForceVector} and \texttt{FeamyBodyForceVector}, and \texttt{VeamyTractionVector} and \texttt{FeamyTractionVector}, respectively. Even though we have implemented the three-node triangular finite element only as a means of comparison with the VEM, we decided to define \texttt{FeamyElement} as an abstract class so that more advanced elements can be implemented if desired. Finally, each \texttt{FeamyElement} concrete implementation has a \texttt{ShapeFunction} concrete subclass, representing the shape functions that are used to interpolate the solution inside the element. For the three-node triangular finite element, we include the \texttt{Tri3ShapeFunctions} class. One of the structures related to all \texttt{Element} classes is called \texttt{DOF}; it describes a single degree of freedom. The degree of freedom is associated with the nodal points of the mesh according to the \texttt{ProblemDiscretization} subclasses: in the linear elastostatic problem each nodal point has two associated \texttt{DOF} instances, and in the Poisson problem just one. The \texttt{DOF} instances are kept in a list inside a container class called \texttt{DOFS}. Although the VEM matrices are computed algebraically, the FEM matrices in general require numerical integration both inside the element (area integration) and on the edges that lie on the natural boundary (line integration). Thus, we have implemented two classes, \texttt{AreaIntegrator} and \texttt{LineIntegrator}, which contain methods that integrate a given function inside the element and on its boundary. Several classes are related to the numerical integration. \texttt{IntegrableFunction} is a template interface with a method called \texttt{apply} that must be implemented so that, given a sample point, it returns the evaluation of the represented function at that point. We include three concrete \texttt{IntegrableFunction} implementations, one for the body force, another one for the stiffness matrix and the last one for the boundary vector.
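A minimal sketch of what such a template interface might look like is given below; the exact signatures in \texttt{Veamy} may differ, and \texttt{Point} stands for the library's point class.
\begin{lstlisting}[language=C++, caption={Illustrative sketch of an \texttt{IntegrableFunction}-style interface}, label=lst:integrablefunction_sketch]
// Illustrative sketch only; Veamy's actual signatures may differ.
template <typename T>
class IntegrableFunction {
public:
    virtual ~IntegrableFunction() = default;
    // Return the value of the represented function at the sample point.
    virtual T apply(const Point &samplePoint) = 0;
};

// Example: a constant scalar source term for the Poisson problem.
class ConstantSource : public IntegrableFunction<double> {
public:
    double apply(const Point &samplePoint) override { return 1.0; }
};
\end{lstlisting}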
\subsection{Problem conditions} \fref{fig:3} presents the classes for the problem conditions used in the linear elastostatic and Poisson problems. The problem conditions are kept in a structure called \texttt{Conditions} that contains the physical properties of the material (\texttt{Material} class), the boundary conditions and the body force. \texttt{BodyForce} is a class that contains two function pointers that represent the body force along each of the two axes of the Cartesian coordinate system. These two functions must be instantiated by the user to include a body force in the problem. By default, \texttt{Conditions} creates an instance of the \texttt{None} class, a subclass of \texttt{BodyForce} that represents the absence of body forces. \texttt{Material} is an abstract class that keeps the elastic constants associated with the material properties (Young's modulus and Poisson's ratio) and has an abstract function that computes the material matrix; \texttt{Material} has two subclasses, \texttt{MaterialPlaneStress} and \texttt{MaterialPlaneStrain}, which return the material matrix for the plane stress and plane strain states, respectively. \begin{figure}[!tbhp] \centering \epsfig{file = fig3.eps, width = 0.85\textwidth} \caption{UML diagram for the \texttt{Veamy} library. Problem conditions} \label{fig:3} \end{figure} To model the boundary conditions, we have created a number of classes: \texttt{Constraint} is an abstract class that represents a single constraint --- a constraint can be an essential (Dirichlet) boundary condition or a natural (Neumann) boundary condition. \texttt{PointConstraint} and \texttt{SegmentConstraint} are concrete classes implementing \texttt{Constraint} and representing a constraint at a point and on a segment of the domain, respectively. \texttt{Constraints} is the class that manages all the constraints in the system and the relationship between them and the degrees of freedom; \texttt{EssentialConstraints} and \texttt{NaturalConstraints} inherit from \texttt{Constraints}. Finally, \texttt{ConstraintsContainers} is a utility class that contains \texttt{EssentialConstraints} and \texttt{NaturalConstraints} instances. \texttt{Constraint} keeps a list of domain segments subjected to a given condition, the value of this condition, and a certain direction (vertical, horizontal or both). The \texttt{ConstraintValue} interface controls the way the user inputs the constraint values: to add any constraint, the user must choose between a constant value (\texttt{Constant} class) and a function (\texttt{Function} class), or implement a new class inheriting from \texttt{ConstraintValue}. \subsection{Norms of the error} As shown in \fref{fig:4}, \texttt{Veamy} provides functionalities for computing the relative $L^2$-norm and $H^1$-seminorm of the error through the classes \texttt{L2NormCalculator} and \texttt{H1NormCalculator}, respectively, which inherit from the abstract class \texttt{NormCalculator}. Each \texttt{NormCalculator} instance has two instances of what we call the \texttt{NormIntegrator} classes: \texttt{VeamyIntegrator} and \texttt{FeamyIntegrator}. These are in charge of integrating the norm integrals in the VEM and FEM approaches, respectively. In these \texttt{NormIntegrator} classes, the integrands of the norm integrals are represented by the \texttt{Computable} class.
Depending on the integrand, we define various \texttt{Computable} subclasses: \texttt{DisplacementComputable}, \texttt{DisplacementDifferenceComputable}, \texttt{H1Computable} and its subclasses, \texttt{StrainDifferenceComputable}, \texttt{StrainStressDifferenceComputable}, \texttt{StrainComputable} and \texttt{StrainStressComputable}. Finally, \texttt{DisplacementCalculator} and \texttt{StrainCalculator} (and their subclasses) allow one to obtain the numerical displacement and the numerical strain, respectively; the \texttt{StrainValue} and \texttt{StressValue} classes represent the exact values of the strains and stresses at the quadrature points, respectively. \begin{figure}[!tbhp] \centering \epsfig{file = fig4.eps, width = 0.85\textwidth} \caption{UML diagram for the \texttt{Veamy} library. Computation of the $L^2$-norm and $H^1$-seminorm of the error} \label{fig:4} \end{figure} \subsection{Computation of nodal displacements} \label{subsec:displacementcomputation} Each simulation is represented by a single \texttt{Calculator2D} instance, which is in charge of conducting the simulation through its \texttt{simulate} method until the displacement solution is obtained. The procedure is similar to a finite element simulation. The implementation of the \texttt{simulate} method is summarized in Algorithm~\ref{algo:2}. \begin{algorithm}[H] \SetAlgoCaptionSeparator{\quad} \DontPrintSemicolon \SetArgSty{textrm} \SetAlgoLined \KwIn{Mesh} Initialization of the global stiffness matrix and the global force vector\; \For{each element in the mesh}{Compute the element stiffness matrix\; Compute the element force vector\; Assemble the element stiffness matrix and the element force vector into global ones} Apply natural boundary conditions to the global force vector\; Impose the essential boundary conditions into the global matrix system\; Solve the resulting global matrix system of linear equations\; \KwOut{Column vector containing the nodal displacements solution} \label{algo:2} \caption{Implementation of the \texttt{simulate} method in the \texttt{Calculator2D} class} \end{algorithm} The resulting matrix system of linear equations is solved using appropriate solvers available in the Eigen library~\cite{eigenweb} for linear algebra. \section{Polygonal mesh generator} \label{sec:meshgenerator} In this section, we provide some guidelines for the usage of our polygonal mesh generator \texttt{Delynoi}~\cite{delynoiweb}. \subsection{Domain definition} The domain is defined by creating its boundary from a counterclockwise list of points. Some examples of domains created in \texttt{Delynoi} are shown in \fref{fig:5}. We include the possibility of adding internal or intersecting holes to the domain as additional objects that are independent of the domain boundary. Some examples of domains created in \texttt{Delynoi} with one and several intersecting holes are shown in \fref{fig:6}. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:5a} \epsfig{file = fig5a.eps, width = 0.16\textwidth}} \subfigure[]{\label{fig:5b} \epsfig{file = fig5b.eps, width = 0.2\textwidth}} \subfigure[]{\label{fig:5c} \epsfig{file = fig5c.eps, width = 0.16\textwidth}} \subfigure[]{\label{fig:5d} \epsfig{file = fig5d.eps, width = 0.16\textwidth}} } \caption{Domain examples. (a) Square domain, (b) rhomboid domain, (c) quarter circle domain, (d) unicorn-shaped domain} \label{fig:5} \end{figure} Listing~\ref{lst:square_domain} shows the code to generate a square domain and a quarter circle domain.
More domain definitions are given in Section~\ref{sec:sampleusage} as part of \texttt{Veamy}'s sample usage problems. \begin{lstlisting}[language=C++, caption={Definition of square and quarter circle domains}, label=lst:square_domain] std::vector<Point> square_points = {Point(0,0), Point(10,0), Point(10,10), Point(0,10)}; Region square(square_points); std::vector<Point> qc_points = {Point(0,0), Point(10,0), Point(10,10)}; std::vector<Point> quarter = delynoi_utilities::generateArcPoints(Point(10,0), 10, 90.0, 180.0); qc_points.insert(qc_points.end(), quarter.begin(), quarter.end()); Region quarter_circle(qc_points); \end{lstlisting} \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:6a} \epsfig{file = fig6a.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:6b} \epsfig{file = fig6b.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:6c} \epsfig{file = fig6c.eps, width = 0.16\textwidth}} \subfigure[]{\label{fig:6d} \epsfig{file = fig6d.eps, width = 0.16\textwidth}} } \caption{Examples of domains with holes. (a) Square with an inner hole, (b) square with four intersecting holes, (c) unicorn-shaped domain with an inner hole, (d) unicorn-shaped domain with an intersecting hole} \label{fig:6} \end{figure} To add a circular hole to the center of the square domain already defined, first the required hole is created and then added to the domain as shown in Listing~\ref{lst:square_domain_hole}. \begin{lstlisting}[language=C++, caption={Adding a circular hole to the center of the square domain}, label=lst:square_domain_hole] Hole circular = CircularHole(Point(5,5), 2); square.addHole(circular); \end{lstlisting} \subsection{Mesh generation rules} We include a number of different rules for the generation of the seed points for the Voronoi diagram. These rules are \texttt{constant}, \texttt{random\_double}, \texttt{ConstantAlternating} and \texttt{sine}. The \texttt{constant} method generates uniformly distributed seed points; the \texttt{random\_double} method generates random seed points; the \texttt{ConstantAlternating} method generates seed points by alternately displacing the points along one Cartesian axis. \fref{fig:7} presents some examples of meshes generated on a square domain using different rules. We show how to generate constant (uniform) and random points for a given domain in Listing~\ref{lst:generation_tests_points}. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:7a} \epsfig{file = fig7a.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:7b} \epsfig{file = fig7b.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:7c} \epsfig{file = fig7c.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:7d} \epsfig{file = fig7d.eps, width = 0.18\textwidth}} } \caption{Polygonal mesh generation on a square domain using different rules. (a) \texttt{constant}, (b) \texttt{random\_double}, (c) \texttt{ConstantAlternating}, (d) \texttt{sine}} \label{fig:7} \end{figure} \begin{lstlisting}[language=C++, caption={Generation of constant (uniform) and random points}, label=lst:generation_tests_points] dom1.generateSeedPoints(PointGenerator(functions::constant(), functions::constant()), nX, nY); dom2.generateSeedPoints(PointGenerator(functions::random_double(0,maxX), functions::random_double(0,maxY)), nX, nY); // nX, nY: horizontal and vertical divisions along sides of the bounding box \end{lstlisting} We also include the possibility of adding noise to the generation rules.
For this, we implement a random noise function that adds a random displacement to each seed point. \fref{fig:8} depicts some examples of generation rules with random noise. Listing~\ref{lst:generation_with_noise} presents the code to add random noise to the \texttt{constant} generation rule on a square domain. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:8a} \epsfig{file = fig8a.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:8b} \epsfig{file = fig8b.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:8c} \epsfig{file = fig8c.eps, width = 0.18\textwidth}} \subfigure[]{\label{fig:8d} \epsfig{file = fig8d.eps, width = 0.18\textwidth}} } \caption{Polygonal mesh generation on a square domain using different rules with random noise. (a) \texttt{constant} with noise, (b) \texttt{random\_double} with noise, (c) \texttt{ConstantAlternating} with noise, (d) \texttt{sine} with noise} \label{fig:8} \end{figure} \begin{lstlisting}[language=C++, caption={Generation of constant (uniform) points with random noise}, label=lst:generation_with_noise] Functor* n = noise::random_double_noise(functions::constant(), minNoise, maxNoise); square.generateSeedPoints(PointGenerator(n, n), nX, nY); // nX, nY: horizontal and vertical divisions along sides of the bounding box \end{lstlisting} \subsection{Mesh generation on complicated domains} Finally, we present some examples of meshes generated on more complicated domains using the \texttt{constant} and \texttt{random\_double} rules. \fref{fig:9} shows polygonal meshes for a square domain with four intersecting holes and \fref{fig:10} depicts polygonal meshes for the unicorn-shaped domain without holes and with different configurations of holes. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:9a} \epsfig{file = fig9a.eps, width = 0.2\textwidth}} \subfigure[]{\label{fig:9b} \epsfig{file = fig9b.eps, width = 0.2\textwidth}} } \caption{Examples of polygonal meshes in complicated domains. (a) Square with four intersecting holes and \texttt{constant} generation rule, and (b) square with four intersecting holes and \texttt{random\_double} generation rule} \label{fig:9} \end{figure} \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:10a} \epsfig{file = fig10a.eps, width = 0.25\textwidth}} \subfigure[]{\label{fig:10b} \epsfig{file = fig10b.eps, width = 0.25\textwidth}} \subfigure[]{\label{fig:10c} \epsfig{file = fig10c.eps, width = 0.18\textwidth}} } \mbox{ \subfigure[]{\label{fig:10d} \epsfig{file = fig10d.eps, width = 0.25\textwidth}} \subfigure[]{\label{fig:10e} \epsfig{file = fig10e.eps, width = 0.19\textwidth}} \subfigure[]{\label{fig:10f} \epsfig{file = fig10f.eps, width = 0.24\textwidth}} } \caption{Examples of polygonal meshes in complicated domains. Unicorn-shaped domain with (a) \texttt{constant} generation rule, (b) \texttt{random\_double} generation rule, (c) inner hole and \texttt{constant} generation rule, (d) inner hole and \texttt{random\_double} generation rule, (e) intersecting hole and \texttt{constant} generation rule, and (f) intersecting hole and \texttt{random\_double} generation rule} \label{fig:10} \end{figure} \section{Sample usage} \label{sec:sampleusage} This section illustrates the usage of \texttt{Veamy} through several examples. For each example, a main C++ file is written to set up the problem. This is the only file that needs to be written by the user in order to run a simulation in \texttt{Veamy}.
The setup file for each of the examples that are considered in this section is included in the folder ``test/.'' To be able to run these examples, it is necessary to compile the source code. A tutorial manual that provides complete instructions on how to prepare, compile and run the examples is included in the folder ``docs/.'' \subsection{Cantilever beam subjected to a parabolic end load} \label{sec:beamexample} The VEM solution for the displacement field on a cantilever beam of unit thickness subjected to a parabolic end load $P$ is computed using \texttt{Veamy}. \fref{fig:11} illustrates the geometry and boundary conditions. A plane strain state is assumed. The essential boundary conditions on the clamped edge are applied according to the analytical solution given by Timoshenko and Goodier~\cite{timoshenko:1970:TOE}: \begin{subequations}\label{beam_exact_sol} \begin{align} u_{x} &= -\frac{Py}{6\overline{E}_Y I}\left((6L-3x)x + (2+\overline{\nu})y^2 - \frac{3D^2}{2}(1+\overline{\nu})\right),\\ u_{y} &= \frac{P}{6\overline{E}_Y I}\left(3\overline{\nu}y^{2}(L-x)+(3L-x)x^{2}\right), \end{align} \end{subequations} where $\overline{E}_Y=E_Y/\left(1-\nu^{2}\right)$ with Young's modulus set to $E_Y=1\times 10^7$ psi, and $\overline{\nu}=\nu/\left(1-\nu\right)$ with Poisson's ratio set to $\nu=0.3$; $L=8$ in. is the length of the beam, $D=4$ in. is the height of the beam, and $I$ is the second moment of area of the beam section. The total load on the traction boundary is $P=-1000$ lbf. \begin{figure}[!tbhp] \centering \epsfig{file = fig11.eps, width = 0.5\textwidth} \caption{Model geometry and boundary conditions for the cantilever beam problem} \label{fig:11} \end{figure} \subsubsection{Setup file} The setup instructions for this problem are provided in the main C++ file ``ParabolicMain.cpp'' that is located in the folder ``test/.'' Additionally, the structure of the setup file is explained in detail in the tutorial manual that is located in the folder ``docs/.'' The interested reader is referred there for more details on this setup file. \subsubsection{Post processing} \texttt{Veamy} does not provide a post-processing interface. The user may opt for a post-processing tool of their choice. Here we visualize the displacements using a MATLAB function written for this purpose. This MATLAB function is provided in the folder ``matplots/'' as the file ``plotPolyMeshDisplacements.m.'' In addition, a file named ``plotPolyMesh.m'' that serves to plot the mesh is provided in the same folder. \fref{fig:12} presents the polygonal mesh used and the VEM solutions. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:12a} \epsfig{file = fig12a.eps, width = 0.42\textwidth} } \subfigure[]{\label{fig:12b} \epsfig{file = fig12b.eps, width = 0.44\textwidth} } } \mbox{ \subfigure[]{\label{fig:12c} \epsfig{file = fig12c.eps, width = 0.44\textwidth} } \subfigure[]{\label{fig:12d} \epsfig{file = fig12d.eps, width = 0.44\textwidth} } } \caption{Solution for the cantilever beam subjected to a parabolic end load using \texttt{Veamy}. (a) Polygonal mesh, (b) VEM horizontal displacements, (c) VEM vertical displacements, (d) norm of the VEM displacements} \label{fig:12} \end{figure} \subsubsection{VEM performance} A performance comparison between the VEM and the FEM is conducted. For the FEM simulations, the \texttt{Feamy} module is used.
The main C++ setup files for these tests are located in the folder ``test/'' and named ``ParabolicMainVEMnorms.cpp'' for the VEM and ``FeamyParabolicMainNorms.cpp'' for the FEM using three-node triangles ($T3$). The meshes used for these tests were written to text files, which are located in the folder ``test/test\_files/.'' \texttt{Veamy} implements a function named \texttt{createFromFile} that is used to read these mesh files. The performance of the two methods is compared in~\fref{fig:13}, where the $H^1$-seminorm of the error and the normalized CPU time are each plotted as a function of the number of degrees of freedom (DOF). The normalized CPU time is defined as the ratio of the CPU time of a particular model analyzed to the maximum CPU time found for any of the models analyzed. From \fref{fig:13} it is observed that for an equal number of degrees of freedom both methods deliver similar accuracy and the computational costs are about the same. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:13a} \epsfig{file = fig13a.eps, width = 0.48\textwidth} } \subfigure[]{\label{fig:13b} \epsfig{file = fig13b.eps, width = 0.48\textwidth} } } \caption{Cantilever beam subjected to a parabolic end load. Performance comparison between the VEM (polygonal elements) and the FEM (three-node triangles ($T3$)). (a) $H^1$-seminorm of the error as a function of the number of degrees of freedom and (b) normalized CPU time as a function of the number of degrees of freedom} \label{fig:13} \end{figure} \subsection{Using a \texttt{PolyMesher} mesh} \label{sec:mbbbeamexample} In order to conduct a simulation in \texttt{Veamy} using a \texttt{PolyMesher} mesh, a MATLAB function named \texttt{PolyMesher2Veamy}, which needs to be called within \texttt{PolyMesher}, was especially devised to read this mesh and write it to a text file that is readable by \texttt{Veamy}. This MATLAB function is provided in the folder ``matplots/.'' Function \texttt{PolyMesher2Veamy} receives five \texttt{PolyMesher} data structures (\texttt{Node}, \texttt{Element}, \texttt{NElem}, \texttt{Supp}, \texttt{Load}) and writes a text file containing the mesh and boundary conditions. \texttt{Veamy} implements a function named \texttt{initProblemFromFile} that is able to read this text file and solve the problem straightforwardly. As a demonstration of the potential offered by the interaction of \texttt{Veamy} with \texttt{PolyMesher}, the MBB beam problem of Section 6.1 in Ref.~\cite{Talischi:POLYM:2012} is considered. The MBB problem is shown in~\fref{fig:14}, where $L=3$ in., $D=1$ in. and $P=0.5$ lbf. The following material parameters are considered: $E_Y = 1\times 10^7$ psi, $\nu = 0.3$, and a plane strain condition is assumed. The file containing the translated mesh with boundary conditions is provided in the folder ``test/test\_files/'' under the name ``\texttt{polymesher2veamy.txt}.'' The complete setup instructions for this problem are provided in the file ``PolyMesherMain.cpp'' that is located in the folder ``test/.'' The polygonal mesh and the VEM solution are presented in \fref{fig:15}.
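To fix ideas, Listing~\ref{lst:polymesher_sketch} sketches what a setup file along these lines might look like. The class and function names follow the ones introduced in the previous sections, but the constructors and signatures shown here are illustrative assumptions rather than the library's actual interface; the authoritative setup is the file ``PolyMesherMain.cpp'' in the folder ``test/.'' \begin{lstlisting}[language=C++, caption={Illustrative sketch of a setup file for the MBB beam problem (the exact signatures are assumptions; see ``PolyMesherMain.cpp'' for the actual usage)}, label=lst:polymesher_sketch] // Illustrative only: class names follow the text, signatures are assumed. int main() { // Plane strain material for the MBB beam: E_Y = 1e7 psi, nu = 0.3 Material* material = new MaterialPlaneStrain(1e7, 0.3); Conditions conditions(material); // "polymesher2veamy.txt" was written by the MATLAB function // PolyMesher2Veamy and contains both the mesh and the boundary // conditions, so no further constraints need to be declared here. VeamyLinElastostaticCalculator calculator(conditions); // hypothetical name calculator.initProblemFromFile("test/test_files/polymesher2veamy.txt"); // Assemble and solve as in Algorithm 2, then write the nodal displacements calculator.simulate(); calculator.writeDisplacements("mbb_displacements.txt"); // hypothetical name return 0; } \end{lstlisting}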
\begin{figure}[!tbhp] \centering \epsfig{file = fig14.eps, width = 0.5\textwidth} \caption{MBB beam problem definition as per Section 6.1 in Ref.~\cite{Talischi:POLYM:2012}} \label{fig:14} \end{figure} \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:15a} \epsfig{file = fig15a.eps, width = 0.43\textwidth} } \subfigure[]{\label{fig:15b} \epsfig{file = fig15b.eps, width = 0.48\textwidth} } } \caption{Solution for the MBB beam problem using \texttt{Veamy}. (a) Polygonal mesh and (b) norm of the VEM displacements} \label{fig:15} \end{figure} \subsection{Perforated Cook's membrane} \label{sec:cook} In this example, a perforated Cook's membrane is considered. The objective of this problem is to show more advanced domain definitions and mesh generation capabilities offered by \texttt{Veamy}. The complete setup instructions for this problem are provided in the file ``CookTestMain.cpp'' that is located in the folder ``test/.'' The following material parameters are considered: $E_Y = 250$ MPa, $\nu = 0.3$, and a plane strain condition is assumed. The model geometry, the polygonal mesh and boundary conditions, and the VEM solution are presented in \fref{fig:16}. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:16a} \epsfig{file = fig16a.eps, width = 0.35\textwidth} } \subfigure[]{\label{fig:16b} \epsfig{file = fig16b.eps, width = 0.48\textwidth} } } \caption{Solution for the perforated Cook's membrane problem using \texttt{Veamy}. (a) Model geometry, polygonal mesh and boundary conditions, and (b) norm of the VEM displacements} \label{fig:16} \end{figure} \subsection{A toy problem} \label{sec:toy} A toy problem consisting of a unicorn loaded on its back and fixed at its feet is modeled and solved using \texttt{Veamy}. The objective of this problem is to show additional capabilities for domain definition and mesh generation that are available in \texttt{Veamy}. The complete setup instructions for this problem are provided in the file ``UnicornTestMain.cpp'' that is located in the folder ``test/.'' The following material parameters are considered: $E_Y = 1\times 10^4$ psi, $\nu = 0.25$, and a plane strain condition is assumed. The model geometry, the polygonal mesh and boundary conditions, and the VEM solution are shown in \fref{fig:17}. \begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:17a} \epsfig{file = fig17a.eps, width = 0.35\textwidth} } \subfigure[]{\label{fig:17b} \epsfig{file = fig17b.eps, width = 0.43\textwidth} } } \caption{Solution for the toy problem using \texttt{Veamy}. (a) Model geometry, polygonal mesh and boundary conditions, and (b) norm of the VEM displacements} \label{fig:17} \end{figure} \subsection{Poisson problem with a manufactured solution} \label{sec:poissonproblem} We conclude the examples by solving a Poisson problem with a source term given by $f(\vm{x})= 32y(1-y)+32x(1-x)$, which is the outcome of letting the solution field be $u(\vm{x})=16xy(1-x)(1-y)$. A unit square domain is considered and $u(\vm{x})$ is imposed along the entire boundary of the domain, resulting in the essential (Dirichlet) boundary condition $g(\vm{x})=0$. The complete setup instructions for this problem are provided in the file ``PoissonSourceTestMain.cpp'' that is located in the folder ``test/.'' The polygonal mesh and the VEM solution are shown in \fref{fig:18}. The relative $L^2$-norm of the error and the relative $H^1$-seminorm of the error obtained for the mesh shown in \fref{fig:18a} are $2.6695\times 10^{-3}$ and $6.7834\times 10^{-2}$, respectively.
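Since the exact solution is known in closed form, the consistency of this setup is easily verified: a direct computation gives \begin{equation*} \frac{\partial^2 u}{\partial x^2} = -32y(1-y), \qquad \frac{\partial^2 u}{\partial y^2} = -32x(1-x), \end{equation*} so that $-\Delta u = 32y(1-y)+32x(1-x) = f(\vm{x})$, and $u(\vm{x})$ indeed vanishes on the entire boundary of the unit square, in agreement with the essential boundary condition $g(\vm{x})=0$.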
\begin{figure}[!tbhp] \centering \mbox{ \subfigure[]{\label{fig:18a} \epsfig{file = fig18a.eps, width = 0.42\textwidth} } \subfigure[]{\label{fig:18b} \epsfig{file = fig18b.eps, width = 0.45\textwidth} } } \caption{Solution for the Poisson problem using \texttt{Veamy}. (a) Polygonal mesh and (b) VEM solution. The relative $L^2$-norm of the error is $2.6695\times 10^{-3}$ and the relative $H^1$-seminorm of the error is $6.7834\times 10^{-2}$} \label{fig:18} \end{figure} \section{Concluding remarks} \label{sec:conclusions} In this paper, an object-oriented C++ library for the virtual element method was presented for the linear elastostatic and Poisson problems in two dimensions. The usage of the library, named \texttt{Veamy}, was demonstrated through several examples. Possible extensions of this library that are of interest include three-dimensional linear elastostatics, where an interaction with the polyhedral mesh generator Voro++~\cite{rycroft:VORO:2009} seems very appealing, and nonlinear solid mechanics~\cite{BeiraodaVeiga-Lovadina-Mora:2015,chi:VEMFD:2017,artioli:AOVEMIN:2017,wriggers:VEMCIFD:2017}. \texttt{Veamy} is free and open source software. \begin{acknowledgements} AOB acknowledges the support provided by Universidad de Chile through the ``Programa VID Ayuda de Viaje 2017'' and the Chilean National Fund for Scientific and Technological Development (FONDECYT) through grant CONICYT/FONDECYT No. 1181192. The work of CA is supported by CONICYT-PCHA/Mag\'ister Nacional/2016-22161437. NHK is grateful for the support provided by the Chilean National Fund for Scientific and Technological Development (FONDECYT) through grant CONICYT/FONDECYT No. 1181506. \end{acknowledgements}
\section{Introduction} This paper is one of many recent studies of arithmetic quantities in short intervals in the function field setting \cite{bank2015, bank2018sums, keating2014variance, keating2016squarefree, keating2018sums, rodgers2018, rudnick2019angles}. Here we will be interested in the variance of a quantity to be defined below. The common strategy is to express the variance in terms of functions of coefficients of suitable $L$-functions and to use an \emph{equidistribution} result in order to reduce these sums to a matrix integral in the limit $q \rightarrow \infty$, as was done in \cite{keating2014variance, keating2016squarefree, keating2018sums, rodgers2018, rudnick2019angles}. In our case, the relevant equidistribution result is provided for us by Sawin \cite{sawin2018equidistribution}. Let $\mathbb{F}_q$ be the finite field with $q$ elements, where $q = p^k$ for some positive integer $k$ and a rational prime $p$. We will assume the characteristic $p$ to be fixed throughout, and when we write $q \rightarrow \infty$ we always mean $k \rightarrow \infty$. In analogy with the integer case, where we are interested in the distribution of prime numbers, one can ask questions about the distribution of irreducible polynomials in $\mathbb{F}_q[t]$ in short intervals and arithmetic progressions. For this we define the polynomial von Mangoldt function $\Lambda \from \mathbb{F}_q[t] \to \mathbb{Z}_{\geq 0}$ as \begin{equation} \Lambda(f) \coloneqq \begin{cases} \deg P \quad &\text{if } f = c \cdot P^k \text{ where } P \text{ is irreducible and } c \in \mathbb{F}_q^\times, \\ 0 \quad &\text{otherwise}, \end{cases} \end{equation} and for a polynomial $A \in \mathbb{F}_q[t]$ of degree $n>h$, we define the interval $I(A;h)$ around $A$ of length $h$ to be \begin{equation} I(A;h) \coloneqq \{ f \in \mathbb{F}_q[t] \mid \nnorm{f-A} \leq q^h \}, \end{equation} where the usual norm is $\nnorm{f} \coloneqq q^{\deg(f)}$ for $f \neq 0$. Writing $\mathcal{M}_n \subset \mathbb{F}_q[t]$ for the monic polynomials of degree $n$ we thus define the two quantities \begin{align} \nu(A;h) &= \sum_{\substack{f \in I(A,h) \\ f(0) \neq 0}} \Lambda(f), \quad \text{and} \\ \Psi(n;A,Q) &= \sum_{\substack{f \in \mathcal{M}_n \\ f \equiv A \; \mathrm{mod} \; Q}} \Lambda(f), \end{align} which essentially count the number of irreducible polynomials in short intervals and arithmetic progressions, respectively. We define the expectation of $\nu(A;h)$ by \begin{equation} \mathbb{E}_{A,n}[\nu(A;h)] = \frac{1}{q^n} \sum_{A \in \mathcal{M}_n}\nu(A;h), \end{equation} and the variance of $\nu(A;h)$ by \begin{equation} \mathrm{Var}_{A,n}[\nu(A;h)] = \frac{1}{q^n} \sum_{A \in \mathcal{M}_n} \norm{\nu(A;h) - \mathbb{E}_{B,n}[\nu(B;h)]}^2. \end{equation} The following two theorems were proved by Keating and Rudnick \cite[Theorem 2.1, Theorem 2.2]{keating2014variance}. \begin{theorem}[Keating and Rudnick] \label{thm:keating_rudnick_short_intervals} Let $h$ and $n$ be non-negative integers with $h \leq n-5$. Then \begin{equation} \lim_{q \to \infty}\frac{1}{q^{h+1}} \mathrm{Var}_{A,n}[\nu(A;h)] = n-h-2. \end{equation} \end{theorem} \begin{theorem}[Keating and Rudnick] \label{thm:keating_rudnick_arith_prog} Let $Q$ be a square-free polynomial of positive degree and let $n$ be a positive integer such that $1 \leq \deg Q < n$. Then \begin{equation} \lim_{q \rightarrow \infty} \frac{1}{q^n}\sum_{\substack{A \; \mathrm{mod} \; Q \\ (A,Q) = 1}} \norm{\Psi(n;A,Q) - \frac{q^n}{\norm{\Phi(Q)}}}^2 = \deg(Q) -1.
\end{equation} \end{theorem} These results were obtained using two equidistribution results, which Katz proved in \cite{katz2013square_free} and \cite{katz2013witt}, respectively. The two theorems above correspond to and provide further evidence for the well-known conjectures of Goldston-Montgomery \cite{goldston1987pair} and Hooley \cite{hooley1974distribution} in the number field setting, which make predictions about the variance of rational primes in short intervals and arithmetic progressions, respectively. One can consider the same problem, but where we replace the polynomial von Mangoldt function $\Lambda$ with the von Mangoldt function $\Lambda_\rho$ attached to a Galois representation $\rho$ of the absolute Galois group of $K = \mathbb{F}_q(t)$. The following is a rough sketch of how $\Lambda_\rho$ is defined. The precise definition is given in Definition \ref{def.von_mangoldt_rho}. Let $\mathcal{Z}(s) = \sum_{f \text{ monic}} \frac{1}{\nnorm{f}^s} = \frac{1}{1-q^{1-s}}$ be the zeta function of $\mathbb{F}_q[t]$. It is not hard to show the following identity \begin{equation} -\frac{1}{\log q}\frac{\mathcal{Z}'(s)}{\mathcal{Z}(s)} = \sum_{\substack{f \in \mathbb{F}_q[t] \\ f \text{ monic}}} \frac{\Lambda(f)}{\nnorm{f}^s}. \end{equation} Thus, given the Artin $L$-function $L_\rho(s)$ attached to $\rho$, we might naturally define the von Mangoldt function attached to $\rho$ by the coefficients of the logarithmic derivative $-L_\rho'(s)/L_\rho(s)$. Having defined the von Mangoldt function $\Lambda_\rho$, we can analogously to the above define \begin{align*} \Psi_\rho(n;A,Q) &\coloneqq \sum_{\substack{f \equiv A \; \mathrm{mod} \; Q \\ f \in \mathcal{M}_n}}\Lambda_\rho(f), \; \text{and} \\ \nu_\rho(A;h) &\coloneqq \sum_{\substack{f \in I(A;h) \\ f(0) \neq 0}}\Lambda_\rho(f), \end{align*} and the variances of these quantities. In particular we write \begin{align*} \mathrm{Var}_{A}[\Psi_\rho(n;Q,A)] &= \frac{1}{\norm{\Phi(Q)}} \sum_{\substack{A \; \mathrm{mod} \; Q \\ (A,Q) = 1}} \norm{\Psi_\rho(n;A,Q) - \mathbb{E}_{B}[\Psi_\rho(n;B,Q)] }^2, \; \text{and} \\ \mathrm{Var}_{A,n} \left[ \nu_\rho(A;h) \right] &= \frac{1}{q^n} \sum_{A \in \mathcal{M}_n} \norm{\nu_\rho(A;h) - \mathbb{E}_{B,n}[\nu_\rho(B;h)]}^2. \end{align*} In this context, Hall, Keating and Roditty-Gershon proved the corresponding result for arithmetic progressions \cite{HKRG17}. \begin{theorem}[Hall, Keating and Roditty-Gershon] \label{thm:arithmetic_progressions_galois} Let $\rho$ be a 'suitably nice' Galois representation of $G_K$ with $q$-weight $w$, depending on a square-free polynomial $Q \in \mathbb{F}_q[t]$, and write $\Phi(Q)$ for the set of Dirichlet characters of modulus $Q$. Then there exists a positive integer $r_{\mathcal{Q}}(\rho)$ depending on $Q$ and $\rho$ such that \begin{equation} \lim_{q \rightarrow \infty} \frac{\norm{\Phi(Q)}}{q^{n(1+w)}} \mathrm{Var}_{A}[\Psi_\rho(n;Q,A)] = \mathrm{min}\{n,r_\mathcal{Q}(\rho) \}. \end{equation} \end{theorem} The main result of this manuscript is the corresponding asymptotic for short intervals, using a recent equidistribution result by Sawin \cite{sawin2018equidistribution}. Using this, one ultimately obtains the matrix integral \begin{equation*} \int_{\mathrm{U}(S)} \norm{\mathrm{Tr}(g^n)}^2 dg, \end{equation*} which can be shown to equal $\mathrm{min}\{ n,S \}$ \cite[Theorem 2.1]{diaconis2001linear}.
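Two elementary consistency checks of this evaluation, which are standard and not specific to our setting, may be helpful. For $S = 1$ one has $g = e^{i\theta}$ and $\norm{\mathrm{Tr}(g^n)}^2 = 1$ for every $n \geq 1$, so the integral equals $1 = \mathrm{min}\{n,1\}$. For $n = 1$ and arbitrary $S$, the function $g \mapsto \mathrm{Tr}(g)$ is the character of the standard representation of $\mathrm{U}(S)$, which is irreducible, so Schur orthogonality gives \begin{equation*} \int_{\mathrm{U}(S)} \norm{\mathrm{Tr}(g)}^2 dg = 1 = \mathrm{min}\{1,S\}. \end{equation*}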
\begin{theorem} \label{thm:main_result_intro_form} Let $\rho$ be a 'suitably nice' Galois representation of $G_K$ of $q$-weight $w$ depending on $Q = t^{n-h}$ where $h$ is a nonnegative integer such that $h \leq n-5$. Then there exists a positive integer $s_{\mathcal{Q}}(\rho)$ depending on $Q$ and $\rho$ such that \begin{equation} \lim_{q \to \infty} \frac{1}{q^{nw+h+1}} \mathrm{Var}_{A,n} \left[ \nu_\rho(A;h) \right] = \mathrm{min}\{n,s_\mathcal{Q}(\rho) \}. \end{equation} \end{theorem} Later we will make precise what we mean by 'suitably nice', define what it means for $\rho$ to have $q$-weight $w$, and define the integer $s_{\mathcal{Q}}(\rho)$, once the necessary notions have been introduced. For reference, the above theorem is stated in its complete form and with all assumptions in Theorem~\ref{thm:main_result_full_form}. \begin{example} By taking $\rho = \bm{1}$ it is possible to deduce Theorem~\ref{thm:keating_rudnick_short_intervals} and Theorem~\ref{thm:keating_rudnick_arith_prog} from Theorem~\ref{thm:main_result_intro_form} and Theorem~\ref{thm:arithmetic_progressions_galois}, respectively. In fact, then $w=0$ and one can check using \cite[Lemma 4.10]{sawin2018equidistribution} that $r_{\mathcal{Q}}(\rho) = \deg Q -1$ and $s_{\mathcal{Q}}(\rho) = n-h-2$. \end{example} \begin{example} \label{ex.legendre_curve} Let $E$ be the Legendre curve \begin{equation*} E \colon \; y^2 = x(x-1)(x-t) \end{equation*} over $\mathbb{F}_{q_0}(t)$ where the characteristic of $\mathbb{F}_{q_0}$ exceeds $3$. Its associated Tate module defines a tamely ramified rank $2$ Galois representation $\rho_E$. Then Lemma 4.10 and Example 4.11 in \cite{sawin2018equidistribution} show \begin{equation*} s_{\mathcal{Q}}(\rho_E) = 2(n-h-1). \end{equation*} It can be shown that $\rho_E$ satisfies the conditions of Theorem~\ref{thm:main_result_intro_form}, and in this case $\rho_E$ has weight $w = 1$. Thus for the variance of the associated von Mangoldt function $\Lambda_{\rho_E}$ we obtain \begin{equation*} \lim_{q \to \infty} \frac{1}{q^{n+h+1}} \mathrm{Var}_{A,n} \left[ \nu_{\rho_E}(A;h) \right] = \mathrm{min}\{n, 2(n-h-1) \}. \end{equation*} \end{example} \subsection*{Comparison with number field setting} Let $F$ be a primitive $L$-function in the Selberg class of degree $d_F$. One can define the von Mangoldt function $\Lambda_F$ attached to $F$ via the following equation \begin{equation*} -\frac{F'(s)}{F(s)} = \sum_{n=1}^\infty \frac{\Lambda_F(n)}{n^s}. \end{equation*} Writing \begin{equation*} \psi_F(x) = \sum_{n \leq x} \Lambda_F(n), \end{equation*} we expect a general prime number theorem to hold \begin{equation*} \psi_F(x) = m_F x + o(x), \end{equation*} where $m_F$ is the order of the pole of $F(s)$ at $s=1$. We study the variance of the von Mangoldt function $\Lambda_F$ by considering \begin{equation} \label{eq.variance_general_L} \tilde{V}_F(X,h) = \int_1^X \norm{\psi_F(x+h) - \psi_F(x) - m_Fh}^2dx. \end{equation} If one takes $F(s) = \zeta(s)$, the Riemann zeta function, then the expression $\tilde{V}_F(X,h)$ measures the variance of the number of rational primes in short intervals. For general $F$ the quantity defined in \eqref{eq.variance_general_L} has been studied in \cite{bui2016variance}. There it is shown under the Generalised Riemann Hypothesis (GRH) that the analogous results in the number field setting are equivalent to extensions of pair correlation conjectures regarding the zeroes of $F(s)$ on the critical line.
In particular, assuming GRH and these pair correlation conjectures, the following was shown \cite[Theorem C1, Theorem C2]{bui2016variance}. We write $\mathfrak{q}_F$ for the conductor of $F(s)$ and $\gamma_0$ for the Euler-Mascheroni constant. Let $\varepsilon > 0$. If $0<B_1 < B_2 \leq B_3 < 1/d_F$ then there exists some $c>0$ such that \begin{multline} \label{eq.number_field_degree_1} \tilde{V}_F(X,h) = hX \left( d_F \log \frac{X}{h} + \log \mathfrak{q}_F - (\gamma_0 + \log 2 \pi)d_F \right) \\ + O_\varepsilon \left( hX^{1+\varepsilon}(h/X)^{c/3} \right) + O_\varepsilon \left( hX^{1+\varepsilon} \left( hX^{B_1-1} \right)^{1/3(1-B_1)} \right) \end{multline} uniformly for $X^{1-B_3} \ll h \ll X^{1-B_2}$. If $1/d_F<B_1 < B_2 \leq B_3 < 1$ then there exists some $c > 0$ such that \begin{multline} \label{eq.number_field_degree_large} \tilde{V}_F(X,h) = \frac{1}{6} hX \left( 6\log X - (3 + 8 \log 2) \right) \\ + O_\varepsilon \left( hX^{1+\varepsilon}(h/X)^{c/3} \right) + O_\varepsilon \left( hX^{1+\varepsilon} \left( hX^{B_1-1} \right)^{1/3(1-B_1)} \right) \end{multline} uniformly for $X^{1-B_3} \ll h \ll X^{1-B_2}$. If $d_F = 1$ then the second condition is never satisfied, so $\tilde{V}_F(X,h)$ is always given by \eqref{eq.number_field_degree_1}. However, if $d_F \geq 2$ there are two different regimes governing the growth of $\tilde{V}_F(X,h)$ depending on the length of the interval $h$. Note that the asymptotic behaviour of \eqref{eq.number_field_degree_1} and \eqref{eq.number_field_degree_large} are qualitatively different; in the range governed by \eqref{eq.number_field_degree_1} the leading term of $\tilde{V}_F(X,h)/Xh$ is proportional to $\log h$, while in the range governed by \eqref{eq.number_field_degree_large} the leading term of $\tilde{V}_F(X,h)/Xh$ is independent of $h$. This corresponds to what we obtain in Theorem \ref{thm:main_result_intro_form}; if the degree of the considered $L$-function exceeds $1$ then the leading order coefficient is governed by two different regimes depending on the length of the interval. Example~\ref{ex.legendre_curve} provides an explicit case in which this can be clearly seen. \subsection*{Acknowledgements} The author would like to thank Jon Keating for suggesting this area of research and for many useful comments. Further the author is grateful to Chris Hall for providing help in Section \ref{section.weights_purity} and pointing out some relevant results. Finally the author thanks Theo Assiotis, Ofir Gorodetsky, Edva Roditty-Gershon, Zeev Rudnick, Will Sawin and Damaris Schindler for helpful conversations and comments. \section{Preliminaries} \label{sec.preliminaries} Fix a prime $p$ and a prime power $q_0 = p^r$ for some $r$. When we write $q \rightarrow \infty$ we always mean $q = q_0^k$ and $k \rightarrow \infty$. Consider the field $K = \mathbb{F}_q(t)$ of rational functions with coefficients in $\mathbb{F}_q$. Fix an algebraic closure $\overbar{K}$ of $K$ and a separable closure $K_s$ of $K$. Denote by $G_K = \mathrm{Gal}(K_s/K)$ the \emph{absolute Galois group of} $K$. Let $\mathcal{P}$ be the set of places of $K$, and let $\mathcal{I}$ be the set of monic irreducible polynomials in $\mathbb{F}_q[t]$.
By Ostrowski's Theorem we have a correspondence \begin{align*} \mathcal{I} \cup \{ \infty \} &\longleftrightarrow \mathcal{P} \quad \quad \\ P &\longrightarrow v_P \\ P_v &\longleftarrow v \end{align*} Given a subset of places $\mathcal{Q} \subset \mathcal{P}$ we denote by $K_{\mathcal{Q}} \subseteq K_s$ the maximal subextension subject to all places in $\mathcal{P} \setminus \mathcal{Q}$ being unramified. We denote the Galois group by \begin{equation} G_{K,\mathcal{Q}} \coloneqq \mathrm{Gal}(K_{\mathcal{Q}}/K). \end{equation} Given a place $v$ we define as usual \begin{equation*} \mathcal{O}_v = \{ x \in K \mid v(x) \leq 1 \}, \end{equation*} which is a ring with maximal ideal \begin{equation*} \mathfrak{m}_v = \{ x \in K \mid v(x) < 1\}. \end{equation*} We define the residue class field to be \begin{equation} \kappa_v \coloneqq \mathcal{O}_v/\mathfrak{m}_v, \end{equation} and the \emph{degree} of $v$ is defined to be \begin{equation} d_v \coloneqq [\kappa_v : \mathbb{F}_q]. \end{equation} Let $v$ be a place on $K = \mathbb{F}_q(t)$, and fix an extension of places $w\vert v$ on $K_s$. By definition of the decomposition group $D_w$, for any $\sigma \in D_w$ we have $\sigma(\mathcal{O}_w) = \mathcal{O}_w$ and $\sigma(\mathfrak{m}_w) = \mathfrak{m}_w$. Further, since $\sigma \vert_K = \mathrm{id}$ we also have $\sigma \vert_{\kappa_v} = \mathrm{id}$, so that any $\sigma \in D_w$ induces a $\kappa_v$-automorphism $\overbar{\sigma}$ as follows \begin{equation} \overbar{\sigma} \from \kappa_w \to \kappa_w, \quad x \; \; \mathrm{mod} \; \mathfrak{m}_w \mapsto \sigma(x) \; \mathrm{mod} \; \mathfrak{m}_w. \end{equation} Therefore we obtain a surjective homomorphism \begin{equation} D_w \to \mathrm{Gal}(\kappa_w/\kappa_v), \end{equation} whose kernel is the inertia group $I_w$. If we write $G_w = D_w/I_w$ we thus have \begin{equation} G_w \cong \mathrm{Gal}(\kappa_w/ \kappa_v). \end{equation} Since $\kappa_v$ is an extension of $\mathbb{F}_q$ of degree $d_v$, we know that $\kappa_v \cong \mathbb{F}_{q^{d_v}}$, and since $K_s$ is the separable closure it is not hard to see that we must have $\kappa_w \cong \overline{\mathbb{F}}_{q^{d_v}}$. As $\overline{\mathbb{F}}_{q^{d_v}}$ is the separable closure of $\mathbb{F}_{q^{d_v}}$, we therefore get \begin{equation} G_w \cong \mathrm{Gal}(\overline{\mathbb{F}}_{q^{d_v}}/ \mathbb{F}_{q^{d_v}}). \end{equation} Consider $\tau \in \mathrm{Gal} \left(\overline{\mathbb{F}}_{q^{d_v}}/ \mathbb{F}_{q^{d_v}}\right)$ given by $\tau = \left(a \mapsto a^{q^{d_v}}\right)$, for $a \in \overline{\mathbb{F}}_{q^{d_v}}$. By the isomorphism above, there exists an element in $G_w$ corresponding to $\tau$, the so-called \emph{Frobenius element} which we denote by $\mathrm{Frob}_w \in G_w$. Clearly there is a surjective map $D_w \twoheadrightarrow G_w$ with kernel $I_w$, and we denote the preimage of $\mathrm{Frob}_w$ under this map by $\mathrm{Frob}_w(I_w)$. If we had two different places $w, w'$ of $K_s$ extending $v$ then it is not hard to see that there exists $\sigma \in G_K$ such that simultaneously \begin{equation} \sigma^{-1}\mathrm{Frob}_w \sigma = \mathrm{Frob}_{w'}, \quad \sigma^{-1} G_w \sigma = G_{w'}, \quad \text{and} \quad \sigma^{-1} I_w \sigma = I_{w'}. \end{equation} In the following we will only be interested in the determinant or the trace of the Frobenius automorphism, and so any extension of $v$ works equally well. Therefore, since $\mathrm{Frob}_w$ is determined by $v$ up to conjugacy we will abuse notation and write $v=w$.
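It may help to record the basic example underlying all of the above, which is standard. If $v = v_P$ is the place attached to a monic irreducible polynomial $P \in \mathcal{I}$, then \begin{equation*} \kappa_{v_P} \cong \mathbb{F}_q[t]/P\mathbb{F}_q[t] \cong \mathbb{F}_{q^{\deg P}}, \end{equation*} so that $d_{v_P} = \deg P$, while the place at infinity has residue class field $\kappa_{v_\infty} \cong \mathbb{F}_q$ and hence $d_{v_\infty} = 1$.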
\section{Artin $L$-functions} \label{sec:artin_l} In this section we construct the $L$-function attached to a Galois representation of $\mathrm{Gal}(K_s/K)$. This will be our central tool when defining $\Lambda_\rho$. \subsection{$L$-functions} Let $\ell$ be a prime different from $p$, and consider an algebraic closure $\overbar{\mathbb{Q}}_\ell$ of $\mathbb{Q}_\ell$. Let $V$ be a finite-dimensional $\overbar{\mathbb{Q}}_\ell$-vector space, and let $\mathcal{S} \subset \mathcal{P}$ be a finite subset of places of $K$. Consider a continuous homomorphism \begin{equation} \rho \from G_{K,\mathcal{S}} \to \mathrm{GL}(V). \end{equation} Now fix a place $v \in \mathcal{P}$, and define \begin{equation} V_v \coloneqq \{ \mathbf{x} \in V \mid \rho(\sigma)(\mathbf{x}) = \mathbf{x}, \text{ for all } \sigma \in I_v \}. \end{equation} By construction, the inertia subgroup $I_v$ acts trivially on $V_v$, and the action by the decomposition group $D_v$ preserves $V_v$. Therefore $\rho$ induces a representation $\rho_v$ on the quotient $G_v = D_v/I_v$, \begin{equation} \rho_v \from G_v \to \mathrm{GL}(V_v). \end{equation} We will now define the $L$-function \emph{attached} to the representation $\rho$. \begin{definition} Let $\mathcal{Q} \subset \mathcal{P}$ be a finite subset of places of $K$. Define the \emph{local Euler factor} of $\rho$ at a place $v$ to be the inverse characteristic polynomial of $\rho_v(\mathrm{Frob}_v)$ acting on $V_v$, that is \begin{equation} \label{local_euler_factor_def} L(T,\rho_v) \coloneqq \det(I - T \rho_v(\mathrm{Frob}_v)\mid V_v) \in \overbar{\mathbb{Q}}_\ell[T]. \end{equation} We define the \emph{partial (Artin) $L$-function attached to $\rho$} by \begin{equation} \label{partial_l_defn} L_{\mathcal{Q}}(T,\rho) \coloneqq \prod_{v \notin \mathcal{Q}}L(T^{d_v},\rho_v)^{-1} \in \overbar{\mathbb{Q}}_\ell \llbracket T \rrbracket, \end{equation} and we define the \emph{complete (Artin) $L$-function attached to $\rho$} by \begin{equation} \label{complete_l_defn} L(T,\rho) \coloneqq \prod_{v \in \mathcal{P}}L(T^{d_v},\rho_v)^{-1} \in \overbar{\mathbb{Q}}_\ell \llbracket T \rrbracket, \end{equation} where $\overbar{\mathbb{Q}}_\ell \llbracket T \rrbracket$ is the ring of formal power series with coefficients in $\overbar{\mathbb{Q}}_\ell$. \end{definition} Using the theory of middle extension sheaves, one can deduce from Deligne's theorem that both the partial and the complete $L$-functions of $\rho$ are rational functions \cite[3.4]{HKRG17}. Therefore it makes sense to talk about $\deg(L_\mathcal{Q}(T,\rho))$ and $\deg(L(T,\rho))$. \subsection{Trace formula} \label{subsec:trace_formulas} We now want to compute the logarithmic derivative of $L(T,\rho)$ in order to define a von Mangoldt function $\Lambda_\rho$. Note that by definition of the local Euler factor \eqref{local_euler_factor_def}, we have \begin{equation} L(T,\rho_v) = (1-\lambda_1T) \cdots (1-\lambda_{r}T), \end{equation} where $r = \dim V_v$ and $\lambda_i \in \overbar{\mathbb{Q}}_\ell$ are the eigenvalues of $\rho_v(\mathrm{Frob}_v)$. Therefore computing the logarithmic derivative of $L(T,\rho_v)$ we get \begin{equation} T \frac{d}{dT}\log L(T,\rho_v)^{-1} = T\sum_{k=1}^{r}\lambda_k \left(\frac{\prod_{i\neq k}(1-\lambda_i T)}{\prod_{j=1}^{r}(1-\lambda_j T)}\right) = T\sum_{k=1}^{r} \lambda_k (1-\lambda_kT)^{-1}. \end{equation} From the above computation we obtain \begin{equation} T \frac{d}{dT}\log L(T,\rho_v)^{-1} = \sum_{m=1}^{\infty}\left( \sum_{k=1}^{r}\lambda_k^m \right)T^m.
\end{equation} Therefore, defining \begin{equation}\label{Lefschetz_trace} a_{\rho,v,m} \coloneqq \mathrm{Tr}(\rho_v(\mathrm{Frob}_v)^m \mid V_v), \end{equation} we can rewrite this as \begin{equation} \label{local_l_derivative} T \frac{d}{dT}\log L(T,\rho_v)^{-1} = \sum_{m=1}^{\infty}a_{\rho,v,m}T^m. \end{equation} Further, if we let $\mathcal{P}_d \subset \mathcal{P}$ be the places of degree $d$, and define \begin{equation} b_{\rho,n} \coloneqq \sum_{md = n}\sum_{v \in \mathcal{P}_d \setminus \mathcal{Q}} d \cdot a_{\rho,v,m}, \end{equation} then \eqref{local_l_derivative} together with \eqref{partial_l_defn} implies \begin{equation} T \frac{d}{dT}\log L_{\mathcal{Q}}(T,\rho) = \sum_{n=1}^{\infty}b_{\rho,n} T^n. \end{equation} The quantities $b_{\rho,n}$ have arithmetic meaning --- they are the so-called \emph{cohomological traces} of $\rho$, and the fact that they can be computed from the local traces \eqref{Lefschetz_trace} is an instance of the \emph{Grothendieck-Lefschetz trace formula}. Later we will make crucial use of this arithmetic origin. \section{The von Mangoldt function $\Lambda_\rho$} \label{sec:defn_of_von_mang} Let $\mathcal{M} \subset \mathbb{F}_q[t]$ be the set of monic polynomials, and let $\mathcal{M}_n \subset \mathcal{M}$ be the set of monic polynomials of degree $n$. Further, let $\mathcal{I} \subset \mathcal{M}$ be the set of irreducible monic polynomials. Recall we write $v_P$ for the finite place corresponding to $P \in \mathcal{I}$. \begin{definition} \label{def.von_mangoldt_rho} Let $\rho \from G_{K,\mathcal{Q}} \to \mathrm{GL}(V)$ be a continuous $\ell$-adic Galois representation, and let $a_{\rho,v,m} = \mathrm{Tr}\left( \rho_v(\mathrm{Frob}_v)^m \mid V_v \right)$ be as in \eqref{Lefschetz_trace}. We define the \emph{von Mangoldt function} $\Lambda_\rho \from \mathbb{F}_q[t] \to \overbar{\mathbb{Q}}_\ell$ by \begin{equation} \label{eq:von_mang_defn} \Lambda_\rho(f) = \begin{cases} d \cdot a_{\rho,v_P,m} &\text{ if } f = c \cdot P^m \text{ for some } c \in \mathbb{F}_q^{\times} \text{ and } P \in \mathcal{I} \text{ of degree } d, \\ 0 &\text{ otherwise}. \end{cases} \end{equation} \end{definition} If we compare this with the usual polynomial von Mangoldt function, recall that the polynomial zeta function is given by \begin{equation} Z(T) = \sum_{f \in \mathcal{M}} T^{\deg f} = \frac{1}{1-qT}. \end{equation} Defining local Euler factors $Z(T,P) = (1-T)$, just like in \eqref{complete_l_defn} we obtain \begin{equation} Z(T) = \prod_{P \in \mathcal{I}}Z(T^{\deg(P)},P)^{-1}. \end{equation} An easy computation shows \begin{equation} T\frac{d}{dT} \log Z(T,P)^{-1} = \sum_{m=1}^\infty T^m, \end{equation} and thus \begin{equation} T\frac{d}{dT} \log Z(T^{\deg(P)},P)^{-1} = \sum_{m=1}^\infty \deg(P) T^{\deg(P) m} = \sum_{n=1}^\infty\left( \sum_{m \deg(P) = n} \Lambda(P^m) \right) T^n, \end{equation} since $\Lambda(P^m) = \deg(P)$ for all $P \in \mathcal{I}$ and $m \geq 1$. This also shows that for the trivial representation $\rho = \mathbf{1}$ we recover $\Lambda_{\mathbf{1}} = \Lambda$. It is also worth noting that in general, for a representation $\rho$, the von Mangoldt function $\Lambda_\rho$ does \emph{not} satisfy $\Lambda_\rho(P^m) = \Lambda_{\rho}(P^k)$ for positive integers $m$ and $k$. A priori $\Lambda_\rho$ takes values in $\overbar{\mathbb{Q}}_\ell$; however, we will assume that the range lies within $\overbar{\mathbb{Q}}$, as was done in \cite{HKRG17}. Further, we fix embeddings $\iota \from \overbar{\mathbb{Q}} \to \mathbb{C}$, and $\overbar{\mathbb{Q}} \to \overbar{\mathbb{Q}}_\ell$ and later we will impose appropriate assumptions on $\iota$.
We note here that we suppress notation and mostly do not make the embedding $\iota$ explicit in the hope of improving readability. Let $A \in \mathbb{F}_q[t]$ be a polynomial of degree $n$. In analogy with the polynomial case, we define the following quantity \begin{equation} \nu_\rho(A;h) \coloneqq \sum_{\substack{f \in I(A;h) \\ f(0) \neq 0}}\Lambda_\rho(f), \end{equation} and also we define the expectation and variance of $\nu_\rho(A;h)$ in the usual way \begin{align} \label{variance_mangoldt_short_intervals_definition} \mathbb{E}_{A,n}[\nu_\rho(A;h)] &\coloneqq \frac{1}{q^n} \sum_{A \in \mathcal{M}_n}\nu_\rho(A;h), \quad \text{ and} \\ \mathrm{Var}_{A,n}[\nu_\rho(A;h)] &\coloneqq \frac{1}{q^n} \sum_{A \in \mathcal{M}_n} \lvert \nu_\rho(A;h) - \mathbb{E}_{B,n}[\nu_\rho(B;h)] \rvert^2, \end{align} respectively. Our aim is to establish an asymptotic for the variance. In order to do this we need to introduce some more ideas. In particular it will be very useful to express the variance as a combination of Dirichlet characters, which naturally have modulus $t^{n-h}$. \section{Dirichlet characters} Let $Q \in \mathbb{F}_q[t]$ be a non-constant polynomial and write $\Gamma(Q) = \left( \mathbb{F}_q[t]/Q\mathbb{F}_q[t]\right)^\times$. Recall that $\mathcal{P}$ is the set of places of $K$, and let $\mathcal{Q}$ be the set \begin{equation} \mathcal{Q} \coloneqq \{ v \in \mathcal{P} \mid v(Q) \neq 1 \}. \end{equation} Note that for any irreducible polynomial $P \in \mathcal{I}$ we have \begin{equation} P \vert Q \iff v_P \in \mathcal{Q}. \end{equation} Further, since we assumed $Q$ not to be constant, we have that $v_\infty(Q) \neq 1$, so that $v_\infty \in \mathcal{Q}$. Consider now the complement $\mathcal{U_Q} \coloneqq \mathcal{P} \setminus \mathcal{Q}$. Rephrasing the above, we have a bijective correspondence between the elements $u \in \mathcal{U_Q}$ and monic irreducible polynomials $P_u$, which do not divide $Q$. Recall that by definition $G_{K,\mathcal{Q}} = \mathrm{Gal}(K_{\mathcal{Q}}/K)$, where $K_\mathcal{Q} \subset K_s$ is the maximal extension subject to all places in $\mathcal{P} \setminus \mathcal{Q} = \mathcal{U_Q}$ being unramified. As was outlined in Section~\ref{sec.preliminaries}, every $u \in \mathcal{U_Q}$ corresponding to $P_u$ therefore determines a conjugacy class $(\mathrm{Frob}_u) \subset G_{K,\mathcal{Q}}$. In particular, if we consider the abelianization $G_{K,\mathcal{Q}}^{\mathrm{ab}}$ of $G_{K,\mathcal{Q}}$, then we obtain a well-defined element $\mathrm{Frob}_u \in G_{K,\mathcal{Q}}^{\mathrm{ab}}$, and a surjective homomorphism \begin{align} \alpha_\mathcal{Q} \from G_{K,\mathcal{Q}}^{\mathrm{ab}} &\twoheadrightarrow \Gamma(Q) \\ \mathrm{Frob}_u &\mapsto P_u \; \mathrm{mod} \; Q. \end{align} Let $\Phi(Q)$ be the set of Dirichlet characters of modulus $Q$. Clearly, there is also a canonical map $G_{K,\mathcal{Q}} \twoheadrightarrow G_{K,\mathcal{Q}}^{\mathrm{ab}}$ and so a character $\varphi \in \Phi(Q)$ induces a map \begin{equation} \label{lift_character} G_{K,\mathcal{Q}} \twoheadrightarrow G_{K,\mathcal{Q}}^\mathrm{ab} \overset{\alpha_{\mathcal{Q}}}\twoheadrightarrow \Gamma(Q) \overset{\varphi}\rightarrow \overbar{\mathbb{Q}}^\times. \end{equation} We may also view a character $\varphi \in \Phi(Q)$ as a map $\varphi \from \mathbb{F}_q[t] \to \overbar{\mathbb{Q}}$ in the usual way. This allows us to lift $\varphi$ to a map $G_{K,\mathcal{Q}} \to \overbar{\mathbb{Q}}$ in a similar fashion.
That is, we define it to be the composite \begin{equation} G_{K,\mathcal{Q}} \twoheadrightarrow G_{K,\mathcal{Q}}^\mathrm{ab} \overset{\alpha_{\mathcal{Q}}}\twoheadrightarrow \Gamma(Q) \overset{\varphi}\rightarrow \overbar{\mathbb{Q}}. \end{equation} We will abuse notation and denote the induced map $G_{K,\mathcal{Q}} \to \overbar{\mathbb{Q}}$ by $\varphi$. Recall that a character $\varphi \in \Phi(Q)$ is \emph{even} if $\varphi(a) = 1$ for all $a \in \mathbb{F}_q^\times$, and denote by $\Phi(Q)^{ev} \subset \Phi(Q)$ the subset of even Dirichlet characters of modulus $Q$. Let $c$ be a generator of $\mathbb{F}_q^\times$. An arbitrary Dirichlet character $\varphi \in \Phi(Q)$ can take the value $\varphi(c) = \zeta^i$ for $i = 0, \dots, q-2$, where $\zeta$ is a primitive $(q-1)$-th root of unity, whereas even characters are restricted to satisfy $\varphi(c) = 1$. From this we get the following. \begin{lemma} \label{lem:even_characters} Consider $Q \in \mathbb{F}_q[t]$. Then \begin{equation} \norm{\Phi(Q)^{ev}} = \frac{\norm{\Phi(Q)}}{q-1}. \end{equation} \end{lemma} \section{Twisted $L$-functions} \subsection{$L$-functions} As before, for $\mathcal{S} \subset \mathcal{P}$ finite, let \begin{equation} \rho \from G_{K,\mathcal{S}} \to \mathrm{GL}(V) \end{equation} be a continuous $\ell$-adic Galois representation. Let $\mathcal{Q} \subset \mathcal{P}$ be a finite subset of places. Consider now any group homomorphism \begin{equation} \varphi \from G_{K,\mathcal{Q}} \to \overbar{\mathbb{Q}}_\ell^\times, \end{equation} which we will call an \emph{$\ell$-adic character with conductor supported in $\mathcal{Q}$}. For our purposes $\varphi$ will usually be induced from a Dirichlet character. Consider $\mathcal{R} = \mathcal{S} \cup \mathcal{Q}$. Recall that the definition of the fields $K_\mathcal{S}$ and $K_\mathcal{Q}$ was that they are the maximal subextensions of $K_s/K$, unramified away from $\mathcal{S}$ and $\mathcal{Q}$, respectively. Since $\mathcal{R} \supset \mathcal{S}, \mathcal{Q}$, fewer places are subject to being unramified for the extension $K_\mathcal{R}$ compared to $K_\mathcal{S}$ and $K_\mathcal{Q}$. Therefore $K_\mathcal{R} \supset K_\mathcal{S},K_\mathcal{Q}$, whence by Galois theory, there exist canonical, surjective quotient maps \begin{equation} G_{K,\mathcal{R}} \twoheadrightarrow G_{K,\mathcal{Q}}, \quad \text{and} \quad G_{K,\mathcal{R}} \twoheadrightarrow G_{K,\mathcal{S}}. \end{equation} Therefore we can canonically define new maps $\rho_\mathcal{R}$ and $\varphi_\mathcal{R}$ as the composites \begin{equation} \rho_\mathcal{R} \from G_{K,\mathcal{R}} \twoheadrightarrow G_{K,\mathcal{S}} \overset{\rho}\rightarrow \mathrm{GL}(V), \quad \text{and} \quad \varphi_\mathcal{R} \from G_{K,\mathcal{R}} \twoheadrightarrow G_{K,\mathcal{Q}} \overset{\varphi}\rightarrow \overbar{\mathbb{Q}}_\ell^\times. \end{equation} We define the \textit{tensor product} $\rho \otimes \varphi$ to be the continuous homomorphism given by \begin{align} \rho \otimes \varphi \from G_{K,\mathcal{R}} &\to \mathrm{GL}(V) \quad \qquad \\ g &\mapsto \rho_\mathcal{R}(g)\varphi_\mathcal{R}(g). \end{align} This allows us to define twisted $L$-functions. \begin{definition} \label{def:twisted_l_functions} Let $\rho$ and $\varphi$ be as above. The \emph{Euler factors attached to $\rho$, twisted by $\varphi$ at a place $v$} are defined to be \begin{equation} L(T,(\rho \otimes \varphi)_v) \coloneqq \det(I-T(\rho \otimes \varphi)_v(\mathrm{Frob}_v) \mid V_v).
\end{equation} The \emph{partial} and \emph{complete} $L$-functions \emph{attached to} $\rho$, \emph{twisted by} $\varphi$ are respectively defined to be \begin{equation} L_\mathcal{Q}(T,\rho \otimes \varphi) \coloneqq \prod_{v \notin \mathcal{Q}} L(T^{d_v}, (\rho \otimes \varphi)_v)^{-1} \in \overbar{\mathbb{Q}}_\ell \llbracket T \rrbracket , \end{equation} and \begin{equation} L(T,\rho \otimes \varphi) \coloneqq \prod_{v \in \mathcal{P}} L(T^{d_v}, (\rho \otimes \varphi)_v)^{-1} \in \overbar{\mathbb{Q}}_\ell \llbracket T \rrbracket. \end{equation} \end{definition} \begin{remark} Here we mean \begin{equation} V_v \coloneqq \{ \mathbf{x} \in V \mid (\rho \otimes \varphi) (\sigma)(\mathbf{x}) = \mathbf{x}, \text{ for all } \sigma \in I_v \}. \end{equation} Further, for any $v \notin \mathcal{Q}$, we have $(\rho \otimes \varphi)_v = \rho_v \cdot \varphi$, so that \begin{equation} \label{eq:euler_factor_twist_nice} L(T,(\rho \otimes \varphi)_v) = L(\varphi(\mathrm{Frob}_v)T,\rho_v). \end{equation} \end{remark} \subsection{Trace formula for twists} Let $v \notin \mathcal{Q}$. Using the results from Section~\ref{subsec:trace_formulas} and \eqref{eq:euler_factor_twist_nice} we can find expressions for the logarithmic derivatives of the twisted Euler factors. Indeed, we obtain \begin{equation} T \frac{d}{dT} \log L(T,(\rho \otimes \varphi)_v )^{-1} = \sum_{m=1}^\infty \varphi(\mathrm{Frob}_v)^m a_{\rho,v,m}T^m. \end{equation} Let $\mathcal{P}_d \subset \mathcal{P}$ be the places of degree $d$. If we define \begin{equation} \label{eq:lefshetz_grothendieck_twisted} b_{\rho \otimes \varphi,n} \coloneqq \sum_{md = n} \sum_{v \in \mathcal{P}_d\setminus \mathcal{Q}} d \cdot \varphi(\mathrm{Frob}_v)^m a_{\rho,v,m}, \end{equation} then we get \begin{equation} \label{eq:twisted_L-function_in_terms_of_coh_traces} T \frac{d}{dT} \log L_\mathcal{Q}(T,\rho \otimes \varphi ) = \sum_{n=1}^\infty b_{\rho \otimes \varphi,n} T^n. \end{equation} Regarding a character $\varphi$ as a map $\varphi \from G_{K,\mathcal{Q}} \to \overbar{\mathbb{Q}}$, as before, note that for an irreducible polynomial $P \in \mathbb{F}_q[t]$ by definition we have $\varphi(P) = \varphi(\mathrm{Frob}_{v_P})$. Thus, from \eqref{eq:lefshetz_grothendieck_twisted} and the definition of $\Lambda_\rho$ \eqref{eq:von_mang_defn} we deduce \begin{equation} \label{eq:cohomological_traces_character_sum} b_{\rho \otimes \varphi,n} = \sum_{f \in \mathcal{M}_n} \varphi(f) \Lambda_\rho(f). \end{equation} \section{The variance as a sum of characters} \subsection{Relating short intervals to arithmetic progressions} We define sums of $\Lambda_\rho$ in arithmetic progressions by \begin{equation} \Psi_\rho(n;A,Q) \coloneqq \sum_{\substack{f \equiv A \; \mathrm{mod} \; Q \\ f \in \mathcal{M}_n}}\Lambda_\rho(f). \end{equation} We also define a related quantity $\widetilde{\Psi}_\rho(n;A,Q)$, where we consider sums of $\Lambda_\rho$ in arithmetic progressions; however, instead of summing over monic polynomials we now also admit non-monic ones. That is, we define \begin{equation} \widetilde{\Psi}_\rho(n;A,Q) \coloneqq \sum_{\substack{f \equiv A \; \mathrm{mod} \; Q \\ \deg(f) = n}}\Lambda_\rho(f). \end{equation} Our aim is to relate $\nu_\rho(A;h)$ and $\widetilde{\Psi}_\rho(n;A,Q)$ following the approach by Keating and Rudnick in \cite{keating2014variance}. In order to do this, we will first need to define the \emph{involution} of a polynomial. \begin{definition} Let $f(t) = a_dt^d + \dots + a_0 \in \mathbb{F}_q[t]$ be a non-zero polynomial of degree $d$.
We define the \emph{involution} $f^*(t)$ of $f(t)$ to be \begin{equation} f^*(t) \coloneqq t^d f\left(\frac{1}{t}\right) = a_d + a_{d-1}t + \dots + a_0t^d. \end{equation} Further, we define $0^* = 0$. \end{definition} We record the following properties of the involution, which follow directly from the definition: \begin{itemize} \item[(i)] $f^*(0) \neq 0$ and $f(0) \neq 0$ if and only if $\deg f = \deg f^*$, \item[(ii)] If $f(0) \neq 0$ then $(f^*)^* = f$, \item[(iii)] $(fg)^* = f^*g^*$, and \item[(iv)] $f$ is monic if and only if $f^*(0) = 1$. \end{itemize} The proof of the next lemma follows the proof of \cite{keating2014variance}, where the corresponding result for the case $\rho = \mathbf{1}$ is shown. \begin{lemma} Let $f \in \mathbb{F}_q[t]$ be a polynomial such that $f(0) \neq 0$. Then \begin{equation} \Lambda_\rho(f) = \Lambda_\rho(f^*). \end{equation} \end{lemma} \begin{proof} First we want to establish the following fact: Assuming $f(0) \neq 0$, we have that $f$ is irreducible if and only if $f^*$ is irreducible. Indeed, if we can factorise $f = ab$, where $a \in \mathbb{F}_q[t]$ and $b \in \mathbb{F}_q[t]$ are polynomials of positive degree, then by multiplicativity of the involution we have $f^* = a^*b^*$. Since $f(0) \neq 0$, we must also have $a(0) \neq 0$ and $b(0) \neq 0$, whence by (i) above, $a^*$ and $b^*$ are also non-constant. Since the involution is self-inverse and $f(0) \neq 0$, the other direction follows immediately. This establishes the claim. All that is left to show now is that, given two irreducible polynomials $P$ and $P'$ with $\deg P = \deg P'$, we have $\Lambda_\rho(P^m) = \Lambda_\rho((P')^m)$ for every positive integer $m$. Recall that $P$ and $P'$ induce places $v = v_P$ and $w = v_{P'}$ respectively. By definition, we have \begin{align} \Lambda_\rho(P^m) &= (\deg P) \cdot \mathrm{Tr}(\rho_v(\mathrm{Frob}_v)^m \mid V_v), \text{ and } \\ \Lambda_\rho((P')^m) &= (\deg P') \cdot \mathrm{Tr}(\rho_w(\mathrm{Frob}_w)^m \mid V_w). \end{align} Therefore it suffices to show \begin{equation} \label{eq:unimportant_1} \mathrm{Tr}(\rho_v(\mathrm{Frob}_v)^m \mid V_v) = \mathrm{Tr}(\rho_w(\mathrm{Frob}_w)^m \mid V_w), \end{equation} for all positive integers $m$. Since there exists only one finite field of a certain order up to isomorphism, we know \begin{equation} \mathbb{F}_q[t]/P \mathbb{F}_q[t] \cong \mathbb{F}_q[t]/P' \mathbb{F}_q[t], \end{equation} and therefore in particular also $G_v \cong G_w$, and $\rho_v \cong \rho_w$. Thus $\rho_v(\mathrm{Frob}_v)$ and $\rho_w(\mathrm{Frob}_w)$ lie in the same conjugacy class. Clearly this implies \eqref{eq:unimportant_1}, and hence the lemma follows. \end{proof} The proof of the next lemma follows that of \cite[Lemma 4.2]{keating2014variance}. \begin{lemma} Let $B \in \mathbb{F}_q[t]$ be a polynomial of degree $\deg B = n-h-1$. Then \begin{equation} \label{eq:fundamental_relation} \nu_\rho(t^{h+1}B;h) = \widetilde{\Psi}_\rho(n;B^*,t^{n-h}). \end{equation} \end{lemma} \begin{proof} Write $B(t) = b_{n-h-1}t^{n-h-1} + \dots + b_0$, and let $f(t) = a_dt^d + \dots + a_0$ be a polynomial such that $f(0) \neq 0$. By definition of intervals, we have $f \in I(t^{h+1}B,h)$ if and only if $\deg(f(t)-t^{h+1}B(t)) \leq h$. Comparing coefficients gives that we must have $\deg(f) = n$, and \begin{equation} a_n = b_{n-h-1}, \dots, a_{n-h-1} = b_0. \end{equation} It is easy to see that this is the same as requiring \begin{equation} f^* \equiv B^* \; \mathrm{mod} \; t^{n-h}.
\end{equation} Since $f(0) \neq 0$ it follows that $\deg(f^*)=n$, and so \begin{equation} \sum_{\substack{f \in I(t^{h+1}B,h) \\ f(0) \neq 0}}\Lambda_\rho(f) = \sum_{\substack{f^* \equiv B^* \; \mathrm{mod} \; t^{n-h} \\ \deg(f^*) = n}}\Lambda_\rho(f). \end{equation} Finally, as $\Lambda_\rho(f) = \Lambda_\rho(f^*)$ by the previous lemma, the result follows. \end{proof} \subsection{Mean and variance} Denote by $\mathcal{P}_{\leq h}$ the set of polynomials of degree $\leq h$. Note that every $A \in \mathcal{M}_n$ can be written uniquely as \begin{equation} A = t^{h+1}B + g, \end{equation} where $B \in \mathcal{M}_{n-h-1}$ and $g \in \mathcal{P}_{\leq h}$. Since $\deg(g) \leq h$ we have $A \in I(t^{h+1}B,h)$. By the uniqueness of this decomposition, we may write the monic polynomials of degree $n$ as a disjoint union of intervals $I(t^{h+1}B,h)$, where $B$ runs through polynomials in $\mathcal{M}_{n-h-1}$. That is, we obtain \begin{equation} \label{eq:disjoint_union} \mathcal{M}_n = \coprod_{B \in \mathcal{M}_{n-h-1}}I(t^{h+1}B,h). \end{equation} Note that if $A, A' \in I(t^{h+1}B,h)$ then $I(A,h) = I(A',h) = I(t^{h+1}B,h)$, and so $\nu_\rho(A;h) = \nu_\rho(A';h)$. By using \eqref{eq:disjoint_union} and \eqref{eq:fundamental_relation} we get for the expectation \begin{align} \mathbb{E}_{A,n}[\nu_\rho(A;h)] &= \frac{1}{q^n} \sum_{B \in \mathcal{M}_{n-h-1}} \sum_{A \in I(t^{h+1}B,h)} \nu_\rho(A;h) \\ &= \frac{1}{\norm{\mathcal{M}_{n-h-1}}} \sum_{B \in \mathcal{M}_{n-h-1}}\nu_\rho(t^{h+1}B;h) \label{eq:nu_rho_mean_first_expression} \\ &= \frac{1}{q^{n-h-1}} \sum_{\substack{B^* \; \mathrm{mod} \; t^{n-h} \\ B^*(0) = 1}} \widetilde{\Psi}_\rho(n;B^*,t^{n-h}), \end{align} and similarly for the variance we obtain \begin{align} \mathrm{Var}_{A,n}[\nu_\rho(A;h)] &= \frac{1}{q^{n-h-1}} \sum_{\substack{B^* \; \mathrm{mod} \; t^{n-h} \\ B^*(0) = 1}} \norm{\widetilde{\Psi}_\rho(n;B^*,t^{n-h}) - \mathbb{E}_{A,n}[\nu_\rho(A;h)]}^2. \label{eq:var_in_terms_of_arith} \end{align} \begin{lemma} \label{lem:expectation_short_intervals} Let $\rho \from G_{K,\mathcal{S}} \to \mathrm{GL}(V)$ be an $\ell$-adic Galois representation. Then, for polynomials $A \in \mathbb{F}_q[t]$ with $\deg A = n > h$ as above, we have \begin{equation} \mathbb{E}_{A,n}[\nu_\rho(A;h)] = \frac{1}{q^{n-h-1}}\left( \sum_{f \in \mathcal{M}_n} \Lambda_\rho(f) - \Lambda_\rho(t^n) \right). \end{equation} \end{lemma} \begin{proof} Using the expression for the expectation that we obtained in \eqref{eq:nu_rho_mean_first_expression}, we compute \begin{equation}\label{eq:lemma_expression_1} \mathbb{E}_{A,n}[\nu_\rho(A;h)] = \frac{1}{q^{n-h-1}} \sum_{B \in \mathcal{M}_{n-h-1}} \sum_{\substack{f \in I(t^{h+1}B,h) \\ f(0) \neq 0}} \Lambda_\rho(f). \end{equation} By the definition of the interval around a polynomial, we can decompose $\mathcal{M}_n$ into intervals as in \eqref{eq:disjoint_union}, and so we get \begin{equation} \label{eq:lemma_expression_3} \coprod_{B \in \mathcal{M}_{n-h-1}} \{ f \in I(t^{h+1}B,h) \colon f(0) \neq 0 \} = \{ f \in \mathcal{M}_n \colon f(0) \neq 0 \}. \end{equation} The only monic polynomial $f \in \mathcal{M}_n$ with $f(0) = 0$ which is a prime power is $f(t) = t^n$. Since $\Lambda_\rho$ vanishes away from prime powers, we get \begin{equation} \label{eq:lemma_expression_2} \sum_{\substack{f \in \mathcal{M}_n \\ f(0) = 0}} \Lambda_\rho(f) = \Lambda_\rho(t^n). \end{equation} Combining \eqref{eq:lemma_expression_1}, \eqref{eq:lemma_expression_3} and \eqref{eq:lemma_expression_2} we therefore conclude the result. \end{proof}
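The following remark records the special case $\rho = \mathbf{1}$ as a sanity check; the only inputs it uses are the prime polynomial theorem and the value $\Lambda_{\mathbf{1}}(t^n) = 1$. \begin{remark} For the trivial representation $\rho = \mathbf{1}$, the function $\Lambda_{\mathbf{1}} = \Lambda$ is the classical von Mangoldt function of $\mathbb{F}_q[t]$, for which the prime polynomial theorem gives $\sum_{f \in \mathcal{M}_n} \Lambda(f) = q^n$, while $\Lambda(t^n) = \deg(t) = 1$. Lemma \ref{lem:expectation_short_intervals} then yields \begin{equation} \mathbb{E}_{A,n}[\nu_{\mathbf{1}}(A;h)] = \frac{q^n - 1}{q^{n-h-1}} = q^{h+1}\left(1 - q^{-n}\right), \end{equation} consistent with the mean value computed for this case in \cite{keating2014variance}. \end{remark}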
The next result will give us an expression for the variance of $\nu_\rho$ in terms of the cohomological traces. Before we state and prove the result, let us briefly recall the orthogonality relations for Dirichlet characters. For $f,g \in \Gamma(Q)$ we have \begin{equation} \label{eq:char_orth_1} \frac{1}{\norm{\Phi(Q)}} \sum_{\varphi \in \Phi(Q)} \varphi(f) \overbar{\varphi(g)} = \begin{cases} 1 \quad &\text{if } f \equiv g \mod Q, \\ 0 \quad &\text{otherwise}, \end{cases} \end{equation} and for $\varphi_1, \varphi_2 \in \Phi(Q)$ we get \begin{equation} \label{eq:char_orth_2} \frac{1}{\norm{\Gamma(Q)}} \sum_{f \in \Gamma(Q)} \varphi_1(f) \overbar{\varphi_2(f)} = \begin{cases} 1 \quad &\text{if } \varphi_1 = \varphi_2, \\ 0 \quad &\text{otherwise}. \end{cases} \end{equation} From \eqref{eq:char_orth_2} it is not hard to deduce that when $Q = t^m$, for even characters $\varphi_1$ and $\varphi_2$, we have \begin{equation} \label{eq:char_orth_3_even} \frac{1}{\norm{\Phi(Q)^{ev}}} \sum_{\substack{f \in \Gamma(Q) \\ f(0) = 1}} \varphi_1(f) \overbar{\varphi_2(f)} = \begin{cases} 1 \quad &\text{if } \varphi_1 = \varphi_2, \\ 0 \quad &\text{otherwise}. \end{cases} \end{equation} For reference, a proof is given in \cite[Lemma 3.2]{keating2014variance}. In order to simplify notation slightly, we write $\Phi^*(Q)^{ev} = \Phi(Q)^{ev} \setminus \{ \varphi_{tr} \}$, where $\varphi_{tr}$ denotes the trivial Dirichlet character of modulus $Q$. \begin{proposition} Let $\rho \from G_{K,\mathcal{S}} \to \mathrm{GL}(V)$ be a continuous Galois representation, let $Q = t^{n-h}$, and let $A$ range over polynomials in $\mathbb{F}_q[t]$ with $\deg A = n > h$. Then \begin{equation} \mathbb{E}_{A,n}[\nu_\rho(A;h)] = \frac{b_{\rho \otimes \varphi_{tr},n}}{q^{n-h-1}} \quad \text{and} \quad \mathrm{Var}_{A,n}[\nu_\rho(A;h)] = \frac{1}{q^{2(n-h-1)}} \sum_{\varphi \in \Phi^*(Q)^{ev}} \norm{b_{\rho \otimes \varphi,n}}^2. \end{equation} \end{proposition} \begin{proof} Given $f \in \mathbb{F}_q[t]$, for the trivial character of modulus $Q = t^{n-h}$ we have \begin{equation} \varphi_{tr}(f) = \begin{cases} 1 \quad &\text{if } t \nmid f, \\ 0 \quad &\text{if } t \mid f. \end{cases} \end{equation} Thus, by the definition of the cohomological traces \eqref{eq:cohomological_traces_character_sum}, we get \begin{equation} b_{\rho \otimes \varphi_{tr},n} = \sum_{f \in \mathcal{M}_n} \Lambda_\rho(f) \varphi_{tr}(f) = \sum_{\substack{f \in \mathcal{M}_n \\ f(0) \neq 0}} \Lambda_\rho(f). \end{equation} Comparing this with Lemma \ref{lem:expectation_short_intervals} gives the first part of the proposition. For the second part, note that using the character orthogonality relation \eqref{eq:char_orth_1} we can rewrite $\widetilde{\Psi}_\rho(n;B^*,Q)$ as \begin{equation} \widetilde{\Psi}_\rho(n;B^*,Q) = \frac{1}{\norm{\Phi(Q)}} \sum_{\varphi \in \Phi(Q)} \overbar{\varphi(B^*)} \sum_{\deg(f) = n} \Lambda_\rho(f) \varphi(f). \end{equation} The term $\sum_{\deg(f) = n} \Lambda_\rho(f) \varphi(f)$ is non-zero only if $\varphi$ is an even character, and every even, non-trivial character contributes \begin{equation} \overbar{\varphi(B^*)} \frac{q-1}{\norm{\Phi(Q)}} \sum_{f \in \mathcal{M}_n} \Lambda_\rho(f) \varphi(f) = \frac{1}{q^{n-h-1}}\overbar{\varphi(B^*)} b_{\rho \otimes \varphi,n}, \end{equation} where we used Lemma \ref{lem:even_characters} for the last equality.
At the beginning of this proof we showed that the contribution of the trivial character is precisely $\mathbb{E}_{A,n}[\nu_\rho(A;h)]$, and therefore we obtain \begin{equation} \widetilde{\Psi}_\rho(n;B^*,Q) - \mathbb{E}_{A,n}[\nu_\rho(A;h)] = \frac{1}{q^{n-h-1}}\sum_{\varphi \in \Phi^*(Q)^{ev}} \overbar{\varphi(B^*)} b_{\rho \otimes \varphi,n}. \end{equation} Comparing this with \eqref{eq:var_in_terms_of_arith} we get \begin{equation} \mathrm{Var}_{A,n}\left[\nu_\rho(A;h)\right] = \frac{1}{q^{n-h-1}} \sum_{\substack{B^* \; \mathrm{mod} \; Q \\ B^*(0) = 1}} \frac{1}{q^{2(n-h-1)}} \norm{\sum_{\varphi \in \Phi^*(Q)^{ev}} \overbar{\varphi(B^*)} b_{\rho \otimes \varphi,n}}^2. \end{equation} Exchanging the order of summation and using the character orthogonality relation \eqref{eq:char_orth_3_even}, we therefore end up with \begin{equation} \label{eq:variance_in_terms_b_rho_phi} \mathrm{Var}_{A,n}[\nu_\rho(A;h)] = \frac{1}{q^{2(n-h-1)}} \sum_{\varphi \in \Phi^*(Q)^{ev}} \norm{b_{\rho \otimes \varphi,n}}^2, \end{equation} which concludes the proof. \end{proof} \section{Weights, purity and characters} \label{section.weights_purity} \subsection{Weights and purity of representations} Recall that we fixed embeddings $\overbar{\mathbb{Q}} \hookrightarrow \overbar{\mathbb{Q}}_\ell$ and $\iota \from \overbar{\mathbb{Q}} \hookrightarrow \mathbb{C}$, even though we suppress them in the notation. The following definition, which we adopt from \cite{HKRG17}, will allow us to assume a Riemann hypothesis in a precise way. \begin{definition} For a polynomial in $\overbar{\mathbb{Q}}_\ell[T]$, we say that it is \emph{$\iota$-pure of $q$-weight $w$} if it is non-zero, all of its zeroes $\alpha$ lie in $\overbar{\mathbb{Q}}$ and they satisfy \begin{equation} \norm{\iota(\alpha)}^2 = \frac{1}{q^w}. \end{equation} Further, we say that a polynomial is \emph{$\iota$-mixed of $q$-weights $\leq w$} if it is a product of polynomials, each of which is $\iota$-pure of $q$-weight $\leq w$. For an $\ell$-adic Galois representation $\rho \from G_{K,\mathcal{S}} \to \mathrm{GL}(V)$, unramified away from the finite set of places $\mathcal{S} \subset \mathcal{P}$, we say that $\rho$ is \emph{pointwise $\iota$-pure of $q$-weight $w$} if, for each $v \notin \mathcal{S}$, the Euler factor $L(T^{d_v},\rho_v)$ is $\iota$-pure of $q$-weight $w$. \end{definition} We would like to have a Riemann hypothesis satisfied not just for the $L$-function of $\rho$, but in particular for all twists $\rho \otimes \varphi$. The following result, which is shown in \cite{HKRG17}, ensures that we do not need too many assumptions. \begin{lemma} Let $\rho$ be a Galois representation as before, and let $\varphi \in \Phi(Q)$ be a Dirichlet character. If $\rho$ is pointwise $\iota$-pure of $q$-weight $w$, then so is $\rho \otimes \varphi$. \end{lemma} \begin{proof} As before, let $\mathcal{Q}$ be the set of places which divide $Q$. We wish to show that for $v \notin \mathcal{Q}$, the Euler factors $L(T^{d_v},(\rho \otimes \varphi)_v)$ are $\iota$-pure of $q$-weight $w$. For this, note that since $\Gamma(Q)$ has finite order, we must have that $\varphi_\mathcal{Q}(\mathrm{Frob}_{v_P}) = \varphi(P) = \zeta$ is a root of unity, whence also $\zeta \in \overbar{\mathbb{Q}}$. Comparing this with \eqref{eq:euler_factor_twist_nice}, we see immediately that $\alpha$ is a zero of $L(T,(\rho \otimes \varphi)_v)$ if and only if $\zeta\alpha$ is a zero of $L(T,\rho_v)$. The result follows.
\end{proof} Deligne famously proved the Riemann hypothesis for varieties over finite fields in the most general fashion in 1980. Theorem 1 and Theorem 2 of \cite{deligne1980conjecture} imply the following result. \begin{theorem}[Deligne] Let $\rho$ be a Galois representation as above, and let $\varphi \in \Phi(Q)$ be a Dirichlet character. Assume $\rho \otimes \varphi$ is pointwise $\iota$-pure of $q$-weight $w$. Then \begin{itemize} \item $L_\mathcal{Q}(T,\rho \otimes \varphi)$ is a ratio of polynomials, which are $\iota$-mixed of $q$-weights $\leq w+2$, and \item $L(T,\rho \otimes \varphi)$ is a ratio of polynomials in $\overbar{\mathbb{Q}}[T]$, which are $\iota$-pure of $q$-weight $w+1$. \end{itemize} \end{theorem} \subsection{Good, mixed and heavy characters} From now on we will always assume that $\rho$ is pointwise $\iota$-pure of $q$-weight $w$ and that $Q = t^{n-h}$, where $n-h \geq 5$. We would like to consider characters $\varphi$ for which $L_\mathcal{Q}(T,\rho \otimes \varphi)$ is a pure polynomial, so that we obtain a good estimate of $b_{\rho \otimes \varphi,n}$. Similarly to Hall, Keating and Roditty-Gershon in \cite{HKRG17}, we will distinguish between certain families of characters. \begin{definition} Let $\rho$ be pointwise $\iota$-pure of $q$-weight $w$. Let $Q \in \mathbb{F}_q[t]$ be a polynomial. For an even character $\varphi \in \Phi(Q)$ we say that $\varphi$ is \emph{good} if \begin{equation} M(T,\rho \otimes \varphi) \coloneqq L_\mathcal{Q}(T,\rho \otimes \varphi)/(1-T) \end{equation} is a polynomial that is $\iota$-pure of $q$-weight $w+1$. We denote the degree of $M$ by $s_\mathcal{Q}(\rho)$ or, if the context is clear, just by $S$. If an even character is not good, we call it \emph{bad}. We call a bad character \emph{heavy} if $L(T,\rho \otimes \varphi)$ is not a polynomial, and we call a bad character \emph{mixed} if it is not heavy. The sets of good, heavy and mixed even characters are respectively denoted by \begin{equation} \Phi(Q)_{\rho \; \mathrm{good}}^{ev}, \quad \Phi(Q)_{\rho \; \mathrm{heavy}}^{ev} \quad \text{and} \quad \Phi(Q)_{\rho \; \mathrm{mixed}}^{ev}. \end{equation} \end{definition} \begin{remark} A priori the degree $S$ is not well-defined, since it might vary across different Dirichlet characters $\varphi$. Fortunately, for our setting \cite[Theorem 1.2]{sawin2018equidistribution} provides that there is a single $S$ for $q^{n-h-1}(1-O(1/q))$ even characters. In fact, a precise value is given in \cite[Lemma 4.10]{sawin2018equidistribution}, provided certain conditions are satisfied. \end{remark} \subsection{Cohomological interpretation} Note that our Galois representation $\rho$ is finitely ramified and thus it can be regarded as a representation of the \'etale fundamental group of some open subset of $\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}$. This defines a middle extension sheaf $\mathrm{ME}(\rho)$ on $\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}$ that is lisse on a dense open subset. Assuming that $V$ is irreducible implies that $\mathrm{ME}(\rho)$ is irreducible too. For a more detailed discussion of $\mathrm{ME}(\rho)$ we refer the reader to \cite{HKRG17}. For a good or mixed even Dirichlet character $\varphi \in \Phi(Q)$, write $\mathcal{L}_\varphi$ for the unique rank one lisse sheaf whose monodromy representation is $\varphi$.
Further, write $C_{\varphi}$ for the scaled conjugacy class of $\mathrm{Frob}_q/q^{\frac{w+1}{2}}$ on $H_c^1(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi)$, where $H_c^i(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi)$ is the $i$-th \'etale cohomology group of the sheaf $\mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi$. Characteristic polynomials remain unchanged after semisimplification, and therefore $\det(1-C_\varphi q^{\frac{w+1}{2}} T ) = \det (1-T \; \mathrm{Frob}_q, H_c^1(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi))$. Recalling the general formula for an Artin $L$-function of a Galois representation in terms of the Frobenius action on the first cohomology of the associated middle extension sheaf, we find \begin{equation} \label{eq:twisted_l_cohomological} L_{\mathcal{Q}}(T, \rho \otimes \varphi)/(1-T) = \det \left( 1-T\; \mathrm{Frob}_q, H_c^1(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi) \right). \end{equation} The factor of $1/(1-T)$ accounts for the factor at the place at zero, since we consider the completed $L$-function. \subsection{Good characters dominate} For a character $\varphi \in \Phi(Q)^{ev}$ we say that the space $H_c^1(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi)$ is $\iota$-pure of $q$-weight $w$ if $\det \left( 1-T\; \mathrm{Frob}_q, H_c^1(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi) \right)$ is $\iota$-pure of $q$-weight $w$. We restate \cite[Lemma 2.7]{sawin2018equidistribution}. \begin{lemma} Let $\rho$ be pointwise $\iota$-pure with $q$-weight $w$. Then the space $H_c^1(\mathbb{A}^1_{\overbar{\mathbb{F}}_{q}}, \mathrm{ME}(\rho) \otimes \mathcal{L}_\varphi)$ is $\iota$-mixed of $q$-weights $\leq w+1$, and in particular it is $\iota$-pure of $q$-weight $w+1$ for all but $\dim V$ characters in $\Phi(Q)$. \end{lemma} Using \eqref{eq:twisted_l_cohomological} we therefore deduce the following. \begin{lemma} \label{lem.mixed_chars_finite} Let $\varphi \in \Phi(Q)_{\rho \; \mathrm{mixed}}^{ev}$. Then $L_{\mathcal{Q}}(T, \rho \otimes \varphi)/(1-T)$ is mixed of $q$-weights $\leq w+1$. Further, the number of mixed characters is finite; in particular, $\norm{\Phi(Q)_{\rho \; \mathrm{mixed}}^{ev}} = O(1)$ as $q \rightarrow \infty$. \end{lemma} It therefore only remains to deal with the heavy characters. In the final theorem we will assume that the only possible bad character is $\varphi_{tr}$. The following shows that this is usually not a very strong condition. \begin{lemma}[Corollary 8.3.3 in \cite{HKRG17}] Assume that $\rho$ is pointwise $\iota$-pure with $q$-weight $w$. Assume further that $\rho$ is irreducible and geometrically semisimple. Write $m = \dim V$. Then $\Phi(Q)_{\rho \; \mathrm{heavy}}^{ev} \subseteq \{ \varphi_{tr} \}$ if and only if one of the following holds: \begin{itemize} \item[(i)] $m > 1$. \item[(ii)] $m = 1$ and $\rho$ is geometrically isomorphic to the trivial representation. \item[(iii)] $m=1$ and $\rho$ is not geometrically isomorphic to a Dirichlet character in $\Phi(Q)$. \end{itemize} In particular, equality holds if and only if (ii) holds. \end{lemma} Assume that $\varphi \in \Phi(Q)$ is a good character. Then we can write \begin{equation} L_\mathcal{Q}(T,\rho \otimes \varphi) = (1-T) \prod_{i=1}^S (1-\alpha_i T), \end{equation} where $\norm{\alpha_i} = q^{(1+w)/2}$.
Therefore, there exists a conjugacy class of unitary matrices, depending only on $\rho$ and $\varphi$, a representative of which we denote by $\theta_{\rho,\varphi} \in \mathrm{U}(S)$, such that \begin{equation} L_{\mathcal{Q}}(T,\rho \otimes \varphi) = (1-T) \det(I - q^{\frac{1+w}{2}} \theta_{\rho,\varphi}T). \end{equation} Recall from \eqref{eq:twisted_L-function_in_terms_of_coh_traces} that we have \begin{equation} T \frac{d}{dT} \log L_\mathcal{Q}(T,\rho \otimes \varphi ) = \sum_{n=1}^\infty b_{\rho \otimes \varphi,n} T^n, \end{equation} so that for a good character $\varphi$ we obtain \begin{equation} b_{\rho \otimes \varphi,n} = - q^{\frac{n(1+w)}{2}}\mathrm{Tr}(\theta_{\rho,\varphi}^n) - 1. \end{equation} Thus, using \eqref{eq:variance_in_terms_b_rho_phi}, we obtain the following important identity: \begin{multline} \label{eq:variance_almost_there} \frac{1}{q^{nw+h+1}} \mathrm{Var}_{A,n} \left[ \nu_\rho(A;h) \right] = \frac{1}{q^{n-h-1}}\sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}} \norm{\mathrm{Tr}(\theta_{\rho,\varphi}^n)}^2 \\ + \frac{1}{q^{n(w+2)-h-1}} \sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{bad}}} \norm{b_{\rho \otimes \varphi,n}}^2 + O\left(q^{h+1-n-\frac{n(1+w)}{2}}\right). \end{multline} If we further assume that $\rho$ is $\iota$-pure of $q$-weight $w$, then from Lemma \ref{lem.mixed_chars_finite} we deduce \begin{equation} \label{eq:variance_good_chars_dominate} \frac{1}{q^{nw+h+1}} \mathrm{Var}_{A,n} \left[ \nu_\rho(A;h) \right] = \frac{1}{q^{n-h-1}}\sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}} \norm{\mathrm{Tr}(\theta_{\rho,\varphi}^n)}^2 + O\left( \frac{1}{q} \right). \end{equation} \section{Equidistribution \& proof of the main theorem} \begin{definition} Let $f \from \mathrm{U}(S) \to \mathbb{C}$ be a continuous and conjugacy-invariant function. We define the \emph{mean} of $f$ to be the unique continuous function $\langle f \rangle \from \mathrm{U}(1) \to \mathbb{C}$ satisfying \begin{equation} \int_{\mathrm{U}(S)} f(g) \psi(\det g) dg = \int_{\mathrm{U}(S)} \langle f \rangle (\det g) \psi(\det g) dg, \end{equation} for any continuous $\psi \from \mathrm{U}(1) \to \mathbb{C}$. \end{definition} As a special case adjusted to our setting, Theorem~1.2 of \cite{sawin2018equidistribution} can be stated as follows. \begin{theorem}[Sawin] \label{thm:equidistribution} Let $\rho$ be a Galois representation which is pointwise $\iota$-pure of $q$-weight $w$, let $f \from \mathrm{U}(S) \to \mathbb{C}$ be a continuous, conjugacy-invariant function, and let $Q = t^{n-h}$. For $\varphi \in \Phi^*(Q)_{\rho \; \mathrm{good}}^{ev}$ we have representatives $\theta_{\rho,\varphi} \in \mathrm{U}(S)$ of conjugacy classes such that $L_\mathcal{Q}(T,\rho \otimes \varphi) = (1-T) \det(I-q^{\frac{1+w}{2}}\theta_{\rho,\varphi}T)$. If $n-h \geq 5$, then \begin{equation} \lim_{q \to \infty} \left( \frac{\sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}} f(\theta_{\rho,\varphi})}{\norm{\Phi^*(Q)_{\rho \; \mathrm{good}}^{ev}}} - \frac{\sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}} \langle f \rangle (\det(\theta_{\rho,\varphi})) }{\norm{\Phi^*(Q)_{\rho \; \mathrm{good}}^{ev}}} \right) = 0. \end{equation} \end{theorem} In our case we are dealing with $f(g) = \norm{\mathrm{Tr}(g^n)}^2$. The proof of the next lemma is based on \cite[Lemma 3.3]{gorodetsky2018variance}. \begin{lemma} \label{lem:mean_of_traces} Let $f \from \mathrm{U}(S) \to \mathbb{C}$ be given by $f(g) = \norm{\mathrm{Tr}(g^n)}^2$.
Then \begin{equation} \langle f \rangle (z) = \int_{g \in \mathrm{U}(S)} f(g) dg = \mathrm{min}\{ n,S \}, \end{equation} for any $z \in \mathrm{U}(1)$. \end{lemma} \begin{proof} By the definition of $\langle f \rangle$ we need to show \begin{equation} \int_{\mathrm{U}(S)} f(g) \psi(\det g) dg = \left(\int_{\mathrm{U}(S)} f(g) dg \right) \left( \int_{\mathrm{U}(S)} \psi(\det g) dg \right), \end{equation} for any continuous function $\psi \from \mathrm{U}(1) \to \mathbb{C}$. By the complex Stone-Weierstrass theorem, polynomials are dense in the space of continuous functions on the compact set $\mathrm{U}(1) \subset \mathbb{C}$, so that it is enough to show the above equality for all $\psi(z) = z^k$, $k \in \mathbb{Z}$. Since the Haar measure has total unit mass, the case $k=0$ is trivial. Note that $f(g) = \norm{\mathrm{Tr}(g^n)}^2 = \mathrm{Tr}(g^n) \overbar{\mathrm{Tr}(g^n)}$ is a Laurent polynomial of degree $0$ in the eigenvalues of $g$. This implies that for any $\lambda \in \mathrm{U}(1)$ we have $f(\lambda g) = f(g)$. If $k \neq 0$, then by translation invariance of the Haar measure we have, for any $\lambda \in \mathrm{U}(1)$, \begin{equation} \int_{ \mathrm{U}(S)} f(g) (\det g)^k dg = \int_{\mathrm{U}(S)} f(\lambda g) (\det \lambda g)^k dg = \lambda^{kS} \int_{ \mathrm{U}(S)} f(g) (\det g)^k dg, \end{equation} so that choosing $\lambda$ with $\lambda^{kS} \neq 1$ implies \begin{equation} \int_{ \mathrm{U}(S)} f(g) (\det g)^k dg = \left(\int_{\mathrm{U}(S)} f(g) dg \right) \left( \int_{\mathrm{U}(S)} (\det g)^k dg \right) = 0. \end{equation} The last part of the proof follows since a standard matrix integral evaluation shows \begin{equation} \int_{\mathrm{U}(S)} \norm{\mathrm{Tr}(g^n)}^2 dg = \mathrm{min}\{ n,S \}. \end{equation} A proof of this fact can be found in \cite[Theorem 2.1]{diaconis2001linear}. \end{proof} We are now finally in a position to state and prove the main result of this chapter. \begin{theorem} \label{thm:main_result_full_form} Let $\rho$ be a Galois representation which is pointwise $\iota$-pure of $q$-weight $w$. Let $A$ range over polynomials of degree $n$, and let $h$ be a positive integer such that $h \leq n-5$. For $Q = t^{n-h}$, assume further that $\Phi(Q)^{ev}_{\rho \; \mathrm{heavy}} \subseteq \{ \varphi_{tr} \}$. Then \begin{equation} \lim_{q \to \infty} \frac{1}{q^{nw+h+1}} \mathrm{Var}_{A,n} \left[ \nu_\rho(A;h) \right] = \mathrm{min}\{n,s_\mathcal{Q}(\rho) \}. \end{equation} \end{theorem} \begin{proof} Write $Q=t^{n-h}$. From Lemma \ref{lem.mixed_chars_finite} and the assumption $\Phi(Q)^{ev}_{\rho \; \mathrm{heavy}} \subseteq \{ \varphi_{tr} \}$ we find \begin{equation} q^{n-h-1} = \norm{\Phi(Q)^{ev}} \sim \norm{\Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}}. \end{equation} Using this and \eqref{eq:variance_good_chars_dominate} thus gives \begin{equation} \lim_{q \to \infty}\frac{1}{q^{nw+h+1}} \mathrm{Var}_{A,n} \left[ \nu_\rho(A;h) \right] = \lim_{q \to \infty} \frac{1}{\norm{\Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}}}\sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}} \norm{\mathrm{Tr}(\theta_{\rho,\varphi}^n)}^2. \end{equation} Note that all conditions of Theorem~\ref{thm:equidistribution} are satisfied.
Thus, applying Theorem \ref{thm:equidistribution} together with Lemma \ref{lem:mean_of_traces}, we obtain \begin{equation} \lim_{q \to \infty} \frac{1}{\norm{\Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}}}\sum_{\varphi \in \Phi^*(Q)^{ev}_{\rho \; \mathrm{good}}} \norm{\mathrm{Tr}(\theta_{\rho,\varphi}^n)}^2 = \mathrm{min}\{ n,s_\mathcal{Q}(\rho) \}, \end{equation} which completes the proof. \end{proof} \bibliographystyle{alpha}
\section*{Introduction} \label{intro} While deforming, crystalline materials change irreversibly through discrete plastic events, i.e. avalanches, originating from the collective motion of dislocations -- the topological defects of the crystal lattice \cite{papanikolaou2017avalanches}. These dislocation avalanches exhibit scale invariance, with their distributions of sizes and durations encompassing several orders of magnitude \cite{zaiser2006scale}. This has led to the discussion of plastic deformation as a non-equilibrium phase transition: below critical loading, the dislocations merely jump from one configuration to another, and the actual yielding of the crystal occurs at the critical point of diverging avalanches and uninhibited flow of dislocations. However, the dislocation movement has a highly complex nature arising from the interplay of the evolving, anisotropic interaction field produced by other dislocations and a possible pinning field caused by disorder inside the crystal \cite{miguel2002dislocation,ardell1985precipitation}. Thus, the collective dislocation behaviour is dictated by two competing phenomena $-$ dislocation-dislocation interaction induced \textit{jamming} and dislocation-obstacle induced \textit{pinning} $-$ that can be hard to distinguish from each other although they have fundamental differences \cite{ispanovity2014avalanches,sparks2018nontrivial,salmenjoki2019presiavals}. Indeed, in the case of dislocation jamming of pure dislocation systems, the interacting dislocations enter a state of 'extended criticality' where the system shows no distinct critical point but seems to reside in the constant vicinity of the transition, independent of the loading force \cite{ispanovity2014avalanches,lehtinen2016glassy}. However, crystals are rarely completely pure, and introducing some disorder $-$ such as precipitates $-$ to the crystal to impede dislocation motion can increase the crystal's mechanical strength and alter the criticality and ensuing avalanche behaviour of the system \cite{zhang2017taming,pan2019rotatable}. The key point here is that obstacles to dislocation motion may change the system behaviour by inducing dislocation pinning, which, if strong enough, results in a well-defined critical point of a depinning transition of the dislocation assembly \cite{ovaska2015quenched}. Our recent study of 3D discrete dislocation dynamics (DDD) simulations of FCC aluminium with the inclusion of stationary fully coherent precipitates (see Fig.\ \ref{fig:-1}) showed that, by systematically increasing the strength or density of the precipitates, the system goes from the phase of dislocation-interaction dominated jamming to disorder-dominated pinning, and this transition can be observed in constant load simulations as well as when quasistatically ramping up the external stress \cite{salmenjoki2019presiavals}. The related phenomenology depends on the loading protocol employed. For the quasistatic stress ramp simulations, one observes in general a sequence of strain bursts with a broad size distribution. In the jamming-dominated regime, the average strain burst size grows exponentially with the applied stress, while in the pinning phase we found a critical stress value where the average strain burst size exhibits a power-law divergence. Here, we focus on the creep-like constant loading simulations with varying precipitate density $\rho_p$. There, the general behaviour in both of the phases, i.e.
jamming and pinning, is on the one hand similar: In both phases the systems appear to possess a critical stress $\sigma_c(\rho_p)$ where one observes a power-law relaxation of the shear rate, $\dot{\varepsilon} \sim t^{-\theta}$. On the other hand, the relaxation becomes more rapid (larger $\theta$) as the systems move further into the pinning phase \cite{miguel2002dislocation,salmenjoki2019presiavals}. This is illustrated in Fig.\ \ref{fig:0}. An open question regarding the phase transition from jamming to pinning is how exactly it alters the dislocation structures in the systems. Furthermore, despite the apparent similarities in the response (i.e. power-law relaxation) of the dislocation systems in the different phases, could one distinguish them by their dislocation structures without specific {\it a priori} knowledge of the transition? To address this problem, we use machine learning (ML). ML is proving to be a flexible and useful tool for physics and materials science \cite{zdeborova2017machine,mehta2019high,papanikolaou2018learning,papanikolaou2019spatial,steinberger2019machine,zhang2019extracting,yang2020learning}. Using ML for the detection of phase transitions in statistical physics has given fruitful results \cite{carrasquilla2017machine,hu2017discovering,shirinyan2019self}, and here we applied the unsupervised 'confusion' scheme introduced in \cite{van2017learning}. With the confusion algorithm, the only assumption one needs to make is that the system exhibits a phase transition in some control parameter range $-$ in our case, the control parameter being the precipitate density $\rho_p$ $-$ and the algorithm should be able to find the critical value $\rho_p^c$ by using the states of the systems as input. Here we followed the evolving systems by concentrating on both the fine details and the long-ranged structures of the dislocation network. To accomplish this, we computed the dislocation junction lengths, the geometrically necessary dislocation (GND) density and the dislocation correlation, and used these separately to describe the microstructure for the ML algorithm. Our results show that the algorithm was able to find the phase transition from all of the used descriptors, and the discovered values of $\rho_p^c$ are in perfect agreement. Therefore, as the dislocation structures in the two phases evolve in notably different ways, we were able to quantify some of the changes in the systems by analyzing the used structural descriptors. The rest of this paper is structured as follows: the implementation of the ML method, along with the details of the DDD simulations and our approaches to characterize the dislocation structures, is presented in the next section. After the methodology, we proceed to show results obtained with the ML algorithm, and we finish with some discussion. \section*{Methods \label{sec:methods}} \subsection*{DDD simulations} We study the effect of varying the precipitate density $\rho_p$ on the nature of the collective dislocation dynamics within 3D DDD simulations, using our modified version of the ParaDiS code \cite{arsenlis2007enabling,lehtinen2016multiscale}. ParaDiS implements the dislocation interactions by approximating the continuous dislocation lines by a set of straight dislocation segments. The segments interact through the stress fields arising from linear elasticity theory, while the diverging fields at dislocation cores are treated using results from molecular dynamics simulations.
To cope with the long-range elastic forces, ParaDiS uses a multipole expansion. Our version of ParaDiS also enables the inclusion of disorder in the system, in the form of spherical precipitates \cite{lehtinen2016multiscale}. The precipitates are frozen pinning sites for the dislocations, producing a short-range radial force \begin{equation} F(r) = \frac{2 A r e^{-r^2/r_p^2}}{r_p^2}, \end{equation} where $A$ is a constant, $r$ is the distance from the precipitate to the dislocation and $r_p$ is the radius of the precipitate. In the context of the transition from dislocation-dominated jamming to disorder-dominated pinning, the relevant parameters are the precipitate density $\rho_p$ and the precipitate strength $A$ \cite{salmenjoki2019presiavals}. For our simulations, we set the parameters to approximate those of FCC aluminium with precipitates of fixed strength and size in a simulation box with periodic boundaries $-$ $A$ was especially chosen so that the system exhibits both jamming and pinning-dominated response depending on $\rho_p$ \cite{salmenjoki2019presiavals}. The parameters are presented in Table \ref{table:simparams}. The simulations started with two relaxation periods, the first with only the dislocation networks and the second with also the precipitates present, to ensure the systems reached meta-stable states. After the initial relaxation, the systems were driven by applying a constant external stress $\sigma$. Depending on the magnitude of the driving force, the systems tend to either get stuck (exponential decay of strain rate $\dot{\varepsilon}$ with small $\sigma$) or reach linear creep-like conditions (constant $\dot{\varepsilon}$ with large $\sigma$). However, independent of the precipitate density, all of the systems also possess a critical value $\sigma_c$ (dependent on $\rho_p$, see Table \ref{table:sigmac}) that leads to a power-law relaxation of $\dot{\varepsilon}$, as seen in Fig.\, \ref{fig:0} \cite{miguel2002dislocation,salmenjoki2019presiavals}. The effect of the precipitate density is seen in the rate and starting time of the power-law decay: there is a transition between the behaviour of less disordered systems with $\dot{\varepsilon} \sim t^{-0.3}$ and more disordered systems with the more rapid decay starting earlier. To see how this transition affects the system, we characterized the dislocation structure and observed its evolution during the constant stress loading with $\sigma=\sigma_c$. \subsection*{Characterizing dislocation structures} In the characterization of the dislocation structures, we used three distinct descriptors. First, we exploited the fact that disorder causes the dislocations to stretch when parts of the dislocations get pinned. With this in mind, we measured the length of dislocation links between two junction nodes \cite{sills2018dislocation} along the dislocation segments, $l_{\mathrm{along}}$, and compared this to the shortest possible length between the nodes, $l_{\mathrm{shortest}}$. Thus, we define the parameter $J$, \begin{equation} J = l_{\mathrm{along}} - l_{\mathrm{shortest}}, \label{eq:j} \end{equation} which represents the roughness of a dislocation and, by collecting its distribution inside a system, provides information on the dislocation structure. As an example, Fig.\, \ref{fig:1}a shows the distribution of $J$ in the simulated systems.
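For concreteness, the following minimal Python sketch illustrates how $J$ of Eq. \ref{eq:j} can be evaluated for a single dislocation link. It is only an illustration: it assumes that the ordered node coordinates of the link are available (e.g. extracted from the ParaDiS output) and omits details such as unwrapping the coordinates across the periodic boundaries.
\begin{verbatim}
import numpy as np

def junction_roughness(nodes):
    # J = l_along - l_shortest for one link, given the ordered 3D
    # coordinates of its discretization nodes; the first and last
    # entries are the junction nodes delimiting the link.
    nodes = np.asarray(nodes, dtype=float)
    l_along = np.sum(np.linalg.norm(np.diff(nodes, axis=0), axis=1))
    l_shortest = np.linalg.norm(nodes[-1] - nodes[0])
    return l_along - l_shortest

# A bowed link gives J > 0, a perfectly straight one J = 0.
link = [(0, 0, 0), (1, 0.3, 0), (2, 0.5, 0), (3, 0.3, 0), (4, 0, 0)]
print(junction_roughness(link))
\end{verbatim}
Collecting $J$ over all links of a system then gives distributions such as those in Fig.\, \ref{fig:1}a.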
The second descriptor we used was the GND density \cite{arsenlis1999crystallographic,steinberger2019machine}. We computed the local GND density (the total GND density is constant throughout the simulation \cite{bulatov2000periodic}) by first evaluating the Nye tensor $\boldsymbol{\alpha}$ in voxels by \begin{equation} \boldsymbol{\alpha} = \frac{1}{V_{voxel}} \sum_i \mathbf{b}_i \otimes \mathbf{l}_i, \end{equation} where $V_{voxel}$ is the voxel volume, $\mathbf{b}$ is the Burgers vector, $\mathbf{l}$ is the line direction giving also the segment length, and the sum is over all dislocation segments $i$ inside the voxel. Then, the GND density $\rho_{GND}$ was calculated from the Nye tensor. The resulting GND density fields, for instance the one with $10\times 10 \times 10$ voxels illustrated in Fig.\, \ref{fig:1}b, are quite system-specific, and as we are especially interested in the changes in the dislocation structure, we focused on the evolution of the GND density, i.e. $\rho_{GND}'(t) = \rho_{GND}(t) - \rho_{GND}(0)$. Moreover, to remove the effect of the periodic boundaries, we took the Fourier transform of $\rho_{GND}'(t)$ as we collected the data. As the third and final descriptor, we calculated the dislocation spacing correlation according to \cite{csikor2007range} \begin{equation} C(r) = \left( \frac{\mathrm{d}}{\mathrm{d}r} L(r) \right) / (4 \pi r^2 \rho), \label{eq:correlation} \end{equation} where $\rho$ is the total dislocation density and $L(r)$ is approximated by computing the mean line length in spheres of radius $r$ centered at random points along the dislocation structure. We note that in the case of 2D DDD simulations, the average dislocation-dislocation correlation function changed drastically when mobile solutes (pinning points) were introduced to the system \cite{ovaska2016collective}. Here we focused on longer-range correlations to avoid possible effects caused by the assigned segment length restrictions of the computations. Fig.\, \ref{fig:1}c shows the dislocation correlation in systems with varying $\rho_p$. We proceed by collecting the descriptors listed above during the loading with $\sigma=\sigma_c(\rho_p)$ at intervals of $t = 10^{-9}\, \mathrm{s} = 2.6 \cdot 10^{5}\, G M$, where the times are given in the units of the shear modulus $G$ times the dislocation mobility $M = M_{\mathrm{edge}} = M_{\mathrm{screw}}$. Due to the computational challenges of 3D DDD simulations, we simulated only 19 systems for every value of $\rho_p$ and $\sigma_c$. The time reached in every simulation was at least $4.7 \cdot 10^{6}\, GM$, although some systems were able to run even longer in their allocated simulation time. \subsection*{Unsupervised learning of the phase transition} To observe the transition from dislocation jamming to pinning in an unsupervised manner, we used the confusion method presented in \cite{van2017learning}. The idea is that, assuming the studied system experiences a transition in a control parameter range (in our case $[\rho_p^0, \rho_p^1]$) at some value $\rho_p^c$, one expects the systems below and above $\rho_p^c$ to be distinguishable from each other. Thus, by appointing trial values $\rho_p'$ in the range $[\rho_p^0, \rho_p^1]$, the sample systems are assigned to classes depending on whether $\rho_p$ is below or above $\rho_p'$. This way, a machine learning classifier trained on the trial samples in supervised fashion should perform best near the critical point $\rho_p' \approx \rho_p^c$, where the systems are truly distinguishable. Correspondingly, further from $\rho_p^c$ the classifications should get worse, as some of the samples are wrongly labeled.
If, for instance, a system was in the jamming state (with $\rho_p<\rho_p^c$), a trial value $\rho_p'<\rho_p$ would lead to the system being mislabeled as pinning, grouped together with samples that actually belong to the pinning state. This labeling would then be especially challenging for the classifier to learn, because some of the samples in the jamming state should be classified as jamming but some as pinning $-$ hence the confusion. The accuracy of the classification observed across the range $[\rho_p^0, \rho_p^1]$ should therefore trace a somewhat $W$-shaped curve, as the accuracy is good at the transition but also at the beginning and at the end of the range (when a large majority of the samples are labeled to one class, the classifier gets a high score by simply always predicting the majority class). As we were dealing with a data set of 190 systems with more than one thousand collected features, we applied some dimensionality reduction before training any classifier. The three distinct data sets (different descriptors) were cast to lower dimensions by principal component analysis (PCA). In PCA, every feature of the data is first scaled to zero mean and unit variance, and then the entire dataset is represented by $n$ orthogonal linear combinations of the original data which maximize the amount of explained variance. This happens in descending order, so that with the first principal component (PC), the explained variance is the largest. Fig.\, \ref{fig:2} already shows that by projecting the data onto the space of the first two PCs, there is a rather smooth transition in dislocation structures from less to more disordered landscapes with all of the used descriptors.
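The whole scheme can be condensed into the following Python sketch, given here only as an illustration. It assumes the feature matrix (one row per system) and the corresponding precipitate densities have already been assembled as \texttt{numpy} arrays, and it uses the \textit{scikit-learn} implementations of PCA and of the LDA classifier described in the next subsection.
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def confusion_curve(features, rho_p, trial_values, n_pc=5):
    # Mean 2-fold cross-validated accuracy for each trial critical
    # density rho_p'; an interior maximum of the resulting W-shaped
    # curve signals the phase transition.
    Z = PCA(n_components=n_pc).fit_transform(
        StandardScaler().fit_transform(features))
    scores = []
    for rho_trial in trial_values:
        labels = rho_p > rho_trial          # the confusion labelling
        if labels.all() or (~labels).all():
            scores.append(1.0)              # trivial at the range ends
            continue
        scores.append(cross_val_score(
            LinearDiscriminantAnalysis(), Z, labels, cv=2).mean())
    return np.array(scores)
\end{verbatim}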
\subsection*{Supervised classifier for the confusion method} For a classifier, our choice was based on linear discriminant analysis (LDA) \cite{bishop2006pattern}. In this simple case of 2-class classification, LDA builds one linear decision boundary into the input space according to \begin{equation} y (\mathbf{x}) = \mathbf{w}^T \mathbf{x} + w_0, \label{eq:boundary} \end{equation} where $\mathbf{x}$ is the feature vector of a sample, $\mathbf{w}$ and $w_0$ are the weights and bias of the classifier, and $y(\mathbf{x}) = 0$ is the boundary. We used the implementation by \textit{scikit-learn} \cite{scikit-learn}, which computes the boundary parameters by assuming that the samples inside the different classes are Gaussian distributed, i.e. the probability of a sample with features $\mathbf{x}$ belonging to class $k$ is \begin{equation} P(\mathbf{x} | y=k) = \frac{1}{ (2 \pi)^{d/2} | \Sigma_k |^{1/2}} \exp \left( -\frac{1}{2} (\mathbf{x} - \mu_k)^T \Sigma_k^{-1} (\mathbf{x} - \mu_k) \right), \label{eq:gaussian} \end{equation} where $d$ is the length of $\mathbf{x}$, $\mu_k$ is the class-specific mean of the features and $\Sigma_k$ is the covariance matrix. Moreover, to obtain a linear boundary, the different classes are assumed to have identical covariance matrices, so in our case of two classes, $k=-1$ or $k=1$, $\Sigma_{-1} = \Sigma_1 = \Sigma$. The weights for the decision boundary are obtained by applying the Bayes theorem, as at the boundary the probabilities of the different classes given the sample are equal, $P(y=-1|\mathbf{x}) = P(y=1 |\mathbf{x})$, and, thus, the log-probability ratio is \begin{equation} \log \left( \frac{P(y=1 |\mathbf{x})}{P(y=-1 |\mathbf{x})} \right) = \log \left( \frac{P(\mathbf{x}| y=1) P (y=1)}{P(\mathbf{x} | y=-1) P (y=-1)} \right) = 0. \end{equation} From this, the final weights, $\mathbf{w}$ and $w_0$, are obtained by substituting the probability distribution of Eq. \ref{eq:gaussian} into this ratio and comparing to Eq. \ref{eq:boundary}, \begin{equation} (\mu_{1} - \mu_{-1})^T \Sigma^{-1} \mathbf{x} - \frac{1}{2} (\mu_{1}^T \Sigma^{-1} \mu_{1} - \mu_{-1}^T \Sigma^{-1} \mu_{-1}) + \log \left( \frac{P(y=1)}{P(y=-1)} \right) = 0. \end{equation} The LDA classifiers were evaluated by the straightforward accuracy, i.e. the score $S = $ (number of correctly predicted test samples)$/$(number of test samples), and trained with 2-fold cross-validation to provide some tentative confidence intervals.
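The boundary equation above can also be transcribed directly into code. The following sketch is ours (it estimates the shared $\Sigma$ by the pooled within-class covariance and the class priors by the class frequencies) and is conceptually equivalent to what the \textit{scikit-learn} estimator computes in the two-class case.
\begin{verbatim}
import numpy as np

def lda_boundary(X, y):
    # Closed-form weights w, w0 of the boundary y(x) = w^T x + w0
    # for two classes labelled -1/+1, under the shared-covariance
    # Gaussian assumption of the text.
    X1, X0 = X[y == 1], X[y == -1]
    n1, n0 = len(X1), len(X0)
    mu1, mu0 = X1.mean(axis=0), X0.mean(axis=0)
    # pooled within-class covariance as the common Sigma
    Sigma = (np.cov(X1.T) * (n1 - 1)
             + np.cov(X0.T) * (n0 - 1)) / (n1 + n0 - 2)
    Sinv = np.linalg.inv(Sigma)
    w = Sinv @ (mu1 - mu0)
    w0 = -0.5 * (mu1 @ Sinv @ mu1 - mu0 @ Sinv @ mu0) + np.log(n1 / n0)
    return w, w0  # classify x as +1 when w @ x + w0 > 0
\end{verbatim}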
\section*{Results \label{sec:results}} The confusion curves with the different dislocation structure descriptors in Fig.\ \ref{fig:3}a show the expected $W$-shape. What is striking is that every curve shows a possible transition in the form of a local maximum at the same spot, $\rho_p^c \approx 3\cdot 10^{19}\, \mathrm{m}^{-3}$. Moreover, the classifying accuracy there is extremely good, as every descriptor achieved a score larger than $0.95$ at the local maximum. Comparing the position of the transition to the relaxation curves with different $\rho_p$ of Fig.\, \ref{fig:0}b and their power-law part represented by the exponents $\theta$ presented in Fig.\ \ref{fig:3}b, we see that the relaxation behaviour is distributed nicely into the two phases, so that $\theta$ is close to constant on one side (jamming) of the transition, while on the other side it starts to increase (pinning) \cite{salmenjoki2019presiavals}. Admittedly, with $\rho_p = 5.1 \cdot 10^{19}\, \mathrm{m}^{-3}$, $\theta$ still seems to be close to the constant value of the jamming side of the transition, but there the error of $\theta$, arising from the fact that the $\sigma_c$ used in the simulations cannot be tuned exactly to the value producing power-law relaxation, is notably higher than with the other $\rho_p$. The used number of PCs for the best confusion curves (i.e. the curves with the highest maximum accuracy away from the ends of the range) was 5 for junctions and GND density, and 10 for correlations. Interestingly, the confusion curve obtained with the junction lengthening data shows another distinct maximum near $\rho_p \approx 3 \cdot 10^{20}\, \mathrm{m}^{-3}$, although there the accuracy is not as good as at $\rho_p^c$. Similar fluctuations from the pure $W$-shape are also observed in Fig.\, \ref{fig:4}, which shows the confusion curves with different amounts of PCs used for the classifying task. Essentially all of the secondary maxima are positioned on the more disordered side with $\rho_p > \rho_p^c$. Most likely this arises from the fact that, in the pinning phase, the systems get more and more pinned with growing $\rho_p$, yielding faster relaxation with larger $\theta$, so that these systems retain some distinguishability from each other despite being in the same phase. This also explains the tendency towards slightly asymmetric $W$-shaped curves in Fig. \ref{fig:4}, as the LDA score does not drop as much in the pinning phase as in the jamming phase. But as was ensured by the choice of the best confusion curves, the dominant maximum is indeed near $\rho_p^c$. We can also study how the ability of the confusion scheme to distinguish the two phases using the different microstructure descriptors evolves in time by computing the confusion curves based on single snapshot structures, presented in Fig.\, \ref{fig:5}. There the classifiers were trained by using two PCs of the dislocation structure at the specific times. Starting from the junction lengthening in Fig.\, \ref{fig:5}a, there seems to be a short transient time until the single time step curves have converged close to the shape of the best confusion curve in Fig.\, \ref{fig:3}. This indicates that the junction lengthening shows signs of the distinct jamming and pinning phases early on. On the other hand, the GND density in Fig.\, \ref{fig:5}b, which was measured as the difference to the initial density field, shows that the phases are separated well at the very beginning of the driving. However, at later times the information about the transition is lost when a momentary GND density field is compared to the one before loading. Applying the confusion scheme to the raw GND density field, without subtracting the initial field or taking the difference between subsequent time steps, yielded no observable phase transition (not shown here). Finally, with the observed dislocation correlation functions in Fig.\, \ref{fig:5}c, the behaviour is similar to that of the junction lengthening: there is now a longer time during which the transition is not observed, but after that the curves start to resemble the best confusion curve with the maximum at $\rho_p^c$. Again, this is quite expected, because the correlation functions focused on long-range structures, so it takes time until the systems have evolved structures that are noticeably different in the two phases. Notable here is also that the converged confusion curves are quite flat in the pinning phase. \section*{Discussion \label{sec:conlusions}} As the results with the unsupervised ML scheme showed, the dislocation configurations can be separated into two phases with different relaxation rates, even though the general response, i.e. the power-law relaxation, is similar in the two cases. The confusion scheme succeeded extremely well, as it was able to achieve accuracy $>0.95$ at the observed transition, indicating that the systems where the dislocation-dislocation interactions dominate are significantly different from the precipitate-dominated systems. This was further supported by the fact that all three dislocation structure characterization metrics considered captured the transition happening at the same value of $\rho_p^c$, where the relaxation also starts to turn more rapid. The success of all three descriptors reveals some of the notable differences between the dislocation structures in the two phases. Firstly, the distribution of the junction lengthening $J$ captures the bowing of the dislocation lines and, clearly, the pinning points cause more stretching and bowing of junctions than the other possible obstacles, namely the jamming dislocation structures, as depicted already in Fig.\, \ref{fig:1}a. Secondly, the spacing correlation of the dislocations, $C(r)$, shows that even long-range structures are slightly affected, although there the differences seem to arise more from the magnitude (and the scaling by the total dislocation density in Eq. \ref{eq:correlation}) than from the shape of the correlation functions, which are plotted in Fig.\, \ref{fig:1}c. Thirdly, the evolution of the local GND density finds similar structural changes as the other two descriptors: on one hand, the bowing dislocations are seen as a 'spreading' density of GND, while on the other hand, with only a few precipitates, dislocations tend to move more in their straight forms. This is illustrated in Fig.
\ref{fig:6}, which shows the probability of a computational voxel having a non-zero GND density as a function of simulation time for different $\rho_p$. The systems in the jamming phase show a more or less constant number of active voxels, as the dislocations keep their shape, while in the pinning phase the number clearly increases as the dislocations bow. This happens despite the fact that the total GND density stays constant during the simulations. Undoubtedly, the effectiveness of the GND density as a descriptor of the phase transition is also enhanced by the fact that in the pinning phase $\sigma_c$ is larger (faster changes in the dislocation structure right at the start of the simulation) but the relaxation is more rapid (more constant structures on longer time-scales). However, as Fig.\ \ref{fig:7} shows, the confusion scheme seems to be quite robust with respect to the resolution of the GND density computation: even a sparse number of voxels reveals the changes in the evolving structures. To conclude our findings, we have studied the transition between dislocation-dislocation interaction dominated jamming and disorder-dominated pinning. By tuning the disorder content through the precipitate density and strength, the mechanical response and yielding of the system change, which is also seen in the power-law relaxation rate during the plastic flow with constant loading. Here we have been able to distinguish the simulated systems into the two phases of jamming and pinning solely by their dislocation structures during the constant stress simulations and, thus, highlighted the changes in the microstructure caused by the phase transition. These results offer two obvious prospects for future study: the first is to conduct further simulations of the borderline case system where neither the dislocation-dislocation nor the dislocation-precipitate interaction dominates over the other. The second stems from the fact that our results show the dislocation structures to be different in the two phases. This means that one can correlate these with the most interesting engineering quantity, the yield strength, possibly on a sample-to-sample basis as well. One should thus use the dislocation-structure-oriented approach in the experimental verification of the different phases of crystal plasticity and for strength prediction. \begin{backmatter} \section*{Availability of data and materials} The data that support the findings of this study are available from the corresponding authors on reasonable request. \section*{Competing interests} The authors declare that they have no competing interests. \section*{Funding} The authors acknowledge support from the Academy of Finland Center of Excellence program, 278367. LL acknowledges the support of the Academy of Finland via the Academy Project COPLAST (project no. 322405), and HS acknowledges the support from Finnish Foundation for Technology Promotion. MA acknowledges support from the European Union Horizon 2020 research and innovation programme under grant agreement No 857470 and from European Regional Development Fund via Foundation for Polish Science International Research Agenda PLUS programme grant No MAB PLUS/2018/8. \section*{Authors' contributions} HS, LL and MA designed the study. HS performed the simulations and data analysis, and wrote the first version of the manuscript. All authors contributed to the final version of the manuscript.
\section*{Acknowledgements} The authors acknowledge the computational resources provided by the Aalto University School of Science ``Science-IT'' project, as well as those provided by CSC (Finland). \bibliographystyle{bmc-mathphys}
\section{Introduction} With the evolution of the next-generation Internet and the proliferation of wireless applications, the demand for network resources for data transmission, storage, and computation has been increasing rapidly. In particular, the maturity of technologies such as extended reality and digital twins accelerates the realization of the Metaverse and Web $3.0$ concepts. This consequently leads to a growing demand for communication and computing support. To meet stringent requirements such as low latency, high reliability, and high immersion for next-generation Internet applications, the semantic communication technique is proposed as one of the fundamental approaches for sixth-generation wireless communications~\cite{yang2022semantic}. By transmitting only task-related semantic information extracted from source messages, semantic communications are believed to break the conventional Shannon communication paradigm and bring higher quality of experience to users~\cite{seo2021semantics,farshbafan2021common}. While semantic communication techniques have demonstrated their significant effectiveness in processing source data in multiple modalities, e.g., audio~\cite{weng2022deep}, image~\cite{xie2021task}, video~\cite{zhu2021video}, and text~\cite{xie2021deep}, one of the most promising application scenarios for semantic communication could be the processing of wireless sensing data, which has not been thoroughly studied yet. The sensing data is important because wireless signals are ubiquitous in our daily life and can be used to accomplish various tasks requested by service providers. Specifically, wireless signals not only help users access the Internet, e.g., the Metaverse, more efficiently, but also enable more effective indoor positioning and activity detection. The wireless sensing data also facilitates the construction of virtual worlds such as digital twins. Unlike on-body sensor-based solutions~\cite{chen2019intelligent}, wireless sensing does not require the user to carry any devices or equipment, which is more practical and convenient. Additionally, the wireless sensing method is more robust than camera-based methods, particularly in cases of occlusion or inadequate illumination, while causing fewer privacy issues. However, the wireless sensing technique has one major limitation. The transmission and storage of the sensing data, such as signal amplitude and phase spectrums, consume a large amount of resources~\cite{yang2022efficientfi}. In particular, the development of communication technologies such as multiple-input multiple-output and orthogonal frequency-division multiplexing (OFDM) improves the sensing resolution in the spatial and time-frequency domains, which, however, further increases the sensing data volume. Therefore, the semantic communication technique is expected to achieve efficient sensing data transmission or storage while still accomplishing the sensing tasks. This vision is even more meaningful for applications that require long-term storage of sensing data, such as incremental learning for recognition \cite{ray2016survey}, healthcare services \cite{hassanalieragh2015health} and Internet-of-Things (IoT) systems and applications \cite{singh2020iot}. The reason that semantic communications can ``exceed'' the Shannon limit is the ``impairment'' of the transmitted data, i.e., an effective semantic encoder extracts only task-related semantic information from the source messages.
However, a potential pitfall here is that semantic encoding and decoding models well trained for one specific task may fail when the source messages are needed to accomplish several different tasks. As shown at the top of Fig. \ref{res}, instead of transmitting an image, the semantic encoder can extract sentences describing the content of the image. This greatly reduces the number of bits that are required to be transmitted. However, semantic communications would not work well when the task is not only to know the type and number of fruits in the images, but also their spatial locations. In this case, the semantic models have to be re-trained. In a word, semantic communications achieve efficient transmission while introducing task limitations. For the wireless sensing data, if we extract only the semantic information used for localization, the gesture detection task might not be accomplished. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{Show-eps-converted-to.pdf} \caption{The ideas of conventional, semantic, and inverse semantic communications. Motivated by the inverse semantic-aware communication, we propose an inverse semantic-aware encoding and decoding framework and show the results. Specifically, we select $10$ original signal amplitude/phase spectrums (Part I) by using our proposed {\bf{{Algorithm~\ref{Algorithm2}}}}, and encode them into one MetaSpectrum by using the RIS and {\bf{Algorithm~\ref{Algorithm1}}}. After wireless transmission, the reconstruction results (Part II) are obtained by decoding the MetaSpectrum using {\bf{Algorithm~\ref{Algorithm3}}}. The sensing data is collected by real experiments with an IEEE 802.11ax based test platform~\cite{gringoli2022ax}.} \label{res} \end{figure*} To fill this gap in semantic communications, we propose an inverse semantic-aware approach by treating the source messages as the semantic information of a {\textit{hyper-source message}}. As shown in Fig.~\ref{res}, the ``inverse'' means that the processing of source messages is no longer to extract semantic information, but to combine multiple source messages (Part I) into one hyper-source message (Part III) for transmission or storage. Subsequently, by decoding, the semantic information of the hyper-source message (Part II), i.e., the source messages, can be obtained to support multiple different tasks. Using the inverse semantic-aware approach, we reduce the data volume for transmission or storage, while avoiding the task limitations brought by semantic communications. For wireless sensing, the source messages are signal amplitude and phase spectrums, and we call the hyper-source messages amplitude and phase MetaSpectrums, respectively. We use the reconfigurable intelligent surface (RIS) to ensure efficient inverse semantic-aware encoding and decoding\footnote{Our scheme can alternatively be achieved by using active antennas and processors to simulate the same signal processing as the RIS. However, higher hardware costs are introduced compared to the scheme using the RIS.}. With the RIS's superior ability to modulate signals, our scheme can be implemented effectively by modifying a small number of elements on the RIS without affecting the RIS-aided communications.
{\textit{Unlike most RIS research works that consider only the phase response matrix of the RIS, to the best of our knowledge, this is the first paper to make full use of the amplitude response matrix of the RIS in the system design for wireless sensing.}} The amplitude response matrix is not only used to reduce the sensing data volume significantly, but also to encrypt the sensing data, because the amplitude response matrix is indispensable to the decoding process. A visual representation of the contributions of this paper is shown in Fig.~\ref{res}; they are summarized as follows:
\begin{itemize}
\item Following the paradigm of inverse semantic communications, we design a novel RIS hardware, in which $L$-shaped active sensors are placed behind transmissive elements, to achieve inverse semantic-aware wireless sensing that reduces the sensing data volume to 5\% of the original data volume.
\item We develop the inverse semantic-aware encoding and decoding methods. The amplitude response matrix of the RIS embeds prior knowledge in the encoded sensing signals. The decoding method is based on self-supervised learning, which can achieve high-quality recovery of the original sensing signals without consuming resources on pre-training.
\item We propose an effective semantic hash sampling algorithm to select the task-related sensing signal spectrums for encoding. The mean squared error (MSE) between the ground truth and the 2D angle-of-arrival (AoA) estimation results obtained by the semantic hash sampling scheme is $67\%$ lower than that of the typically used uniform sampling scheme.
\item We build an IEEE 802.11ax based test platform~\cite{gringoli2022ax} to collect real-world sensing data, and perform experiments to demonstrate the effectiveness of our proposed framework.
\end{itemize}
The remainder of the paper is organized as follows. In Section~\ref{Sre}, we review the related work in the literature. Section~\ref{Sre3} introduces the system model, which contains the novel RIS hardware and the sensing signal model. The inverse semantic-aware encoding and decoding methods are proposed in Section~\ref{S4ra} and Section~\ref{SS5}, respectively. Section~\ref{SS6} presents the experimental results. In Section~\ref{SF}, we present the conclusion and discuss some potential research directions.

\section{Related Work}\label{Sre}
In this section, we provide a brief review of three related techniques, i.e., wireless sensing, RIS, and spectral snapshot compressive imaging.

\subsection{Wireless Sensing}
Wireless signals such as WiFi~\cite{niu2022rethinking} have been used for a variety of sensing tasks, from large-scale intrusion detection and indoor localization to small-scale gesture recognition and breathing monitoring. Moreover, with the rapid advancement of wireless sensing techniques, next-generation Internet service providers (SPs) can construct digital models of the physical world (for digital twin services) or conduct analysis of users' behaviors (for Metaverse services)~\cite{ramadan2020efficient,liu2019wireless,hassanalieragh2015health}.

We introduce a complete sensing process. First, wireless IoT devices collect the sensing data. With frequency conversion and channel estimation, the channel state information (CSI) can be obtained as the sampled version of the channel frequency response (CFR), which is proven to be one of the most effective signal sources for sensing tasks such as human activity detection~\cite{yue2020bodycompass} and passive localization~\cite{gao2022towards}.
The CFR can be expressed as a complex matrix, e.g., rows correspond to sub-carrier frequencies and columns to active sensors. For easy transmission and storage, the IoT devices can decompose the CFR complex matrix into an amplitude spectrum and a phase spectrum. By using a three-dimensional multiple signal classification (3D-MUSIC) algorithm, a 3D spectrum can be obtained from the amplitude and phase spectrums, which contains the information of the 2D AoA and the time of flight (ToF). The 2D AoA refers to the elevation and azimuth AoA, as shown in Fig.~\ref{img} (Part I). The obtained 3D spectrum can then be used for several purposes, e.g., physical-world user localization~\cite{gong2021usability} or activity detection. A challenge in the above process is that storing or transmitting the large amount of sensing data causes excessive network resource consumption.

\subsection{Reconfigurable Intelligent Surface}
Significant developments in RIS-aided wireless communications have been witnessed over the past $3$ years, from hardware and algorithm design to deep integration with various technologies. One of the most important application scenarios is to enhance wireless sensing~\cite{zhang2022toward}, such as indoor localization~\cite{zhang2021metalocalization} and direction-of-arrival estimation~\cite{chen2022ris}. However, the existing methods typically aim to improve the sensing accuracy through signal enhancement by the RIS. The signal control capability of the RIS is not fully utilized, and most of the literature is limited to the study of reflective RISs, which cannot achieve complete coverage. Fortunately, with the deepening understanding of RIS hardware, transmissive and refractive RISs are gaining more and more attention~\cite{tang2022transmissive,mu2021simultaneously,zhang2022dual}. Simultaneously transmitting and reflecting (STAR) RIS~\cite{mu2021simultaneously} and intelligent omni-surface (IOS)~\cite{zhang2022dual} have been proposed as novel instances of RIS to achieve full-dimensional communications. We believe that STAR RIS or IOS can also bring further improvement to wireless sensing. In addition to improving sensing performance by intuitively enhancing signals, adjustment of the amplitude of transmissive signals can be used as prior knowledge to achieve efficient compression of wireless sensing data, which is discussed in this paper.

\subsection{Spectral Snapshot Compressive Imaging}
Capturing high-dimensional (HD) data is a long-term challenge in signal processing and related fields~\cite{yuan2021snapshot}. With theoretical guarantees, snapshot compressive imaging (SCI) uses two-dimensional (2D) detectors to capture HD, e.g., 3D, data in snapshot measurements using novel optical designs. Then, reconstruction algorithms are applied to obtain the required HD data cubes~\cite{figueiredo2007gradient,meng2021self}. SCI has been used in many fields such as hyper-spectral imaging, video, holography, tomography, focal depth imaging, polarization imaging, and microscopy~\cite{wang2016adaptive}. However, there is no prior work discussing how to apply SCI to compress sensing signals in the time dimension. The reason is that the highly dynamic nature of sensing signals brings difficulties to the detector hardware design, the coded aperture structure, and the decompression algorithms. To fill this gap, in Section~\ref{CM}, we use the novel RIS hardware to perform a special kind of SCI on the sensing data.
Using our proposed inverse semantic-aware encoding and decoding methods, the compression and self-supervised decompression of the sensing data can be achieved on the time scale. Note that our design is different from compressive sensing (CS) methods in wireless sensing, and in fact can be used to further improve the performance of wireless CS systems.

One primary objective of this study is to solve the important problem of overwhelming storage or transmission resource consumption in wireless sensing. Inspired by the SCI system, we propose an encoding and decoding framework using the RIS to achieve inverse semantic-aware sensing, which significantly reduces the data volume and does not affect the accomplishment of various sensing tasks.

\section{System Models}\label{Sre3}
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{systemmodel-eps-converted-to.pdf}
\caption{The framework of the proposed inverse semantic-aware wireless sensing system.}
\label{img}
\end{figure}
Wireless signals contain user information such as activities and walking trajectories, and can preserve user privacy better than camera-based methods. Thus, mobile application SPs can use wireless signals to provide better services to users. For example, healthcare SPs can provide medical advice by analyzing the user's sleeping postures, and Metaverse SPs can customize virtual traveling scenes by positioning the users. To meet the needs of ubiquitous sensing data collection, we consider a 3D indoor wireless communications scenario as an example. As shown in Fig.~\ref{img} (Part I), a multi-antenna transmitter, e.g., an IoT device or a WiFi router, transmits signals to multiple users with the help of an RIS. Different from the conventional scheme that uses the RIS to improve sensing accuracy by enhancing signal strength, in this section, we propose a novel RIS hardware design to equip the RIS with wireless sensing capability. Then, we analyze the mathematical formulation of the received sensing signals.

\subsection{Novel Hardware of Reconfigurable Intelligent Surface}
To enable the RIS to sense the environment, a widely used solution is to replace some reflecting elements on the RIS with active sensors, e.g., for channel estimation using CS~\cite{taha2021enabling}. Thus, a part of the RIS elements can switch between two operation modes, i.e., i) the channel sensing mode, which is used to estimate the channels, and ii) the reflection mode, which reflects the signal. However, the RIS cannot assist communications in the first mode. We do not directly adopt the aforementioned solution since our goal is not merely to estimate the channel, but to constantly sense the environment for the purposes of localization and user activity detection. To enable the RIS to assist the sensing function without affecting its communications auxiliary function, we first integrate the RIS with a small number of simultaneously transmitting and reflecting patches~\cite{mu2021simultaneously}, which are called transmissive elements in this paper for convenience. Specifically, as shown in Fig.~\ref{img} (Part II), $\left( M + N + 1\right) $ transmissive elements are deployed on the RIS in an $L$ shape, and active sensors are placed behind the transmissive elements to receive the signals modulated by the RIS.
\begin{rem}\label{L1}
The reason for using the $L$-shaped array is that such a structure achieves more accurate 2D AoA estimation than other structures, e.g., cross, linear, and rectangular arrays.
This conclusion can be obtained by comparing the Cram\'er-Rao bound metrics of different structures~\cite{hua1989shaped,zheng2021coupled}.
\end{rem}
Accordingly, the signal incident on the $q^{\rm th}$ transmissive element can be transmitted and reflected as~\cite{mu2021simultaneously}
\begin{equation}
{\beta_{i,q}}{{\rm exp}{\left( j{\delta_{i,q}}\right) }}, \qquad i \in \left\{ {T,R} \right\},
\end{equation}
where $i = T$ is for transmission coefficients and $i = R$ is for reflection coefficients. Note that, for each element, the responses of the RIS in the transmission and reflection modes can be designed independently from each other~\cite{xu2021star}. In the following, we focus on the sensing function, which only uses the transmitted signals. The reflection coefficients can be designed independently, which is outside the scope of this paper. Thus, after one path of the signal penetrates the $q^{\rm th}$ transmissive element on the RIS, the amplitude of the signal is multiplied by ${\beta_{T,q}}$, and the phase is increased by ${\delta_{T,q}}$.

In much of the literature, the amplitude and phase response of each element on the RIS is assumed to be constant over the signal bandwidth~\cite{wu2021intelligent}. Although this assumption is acceptable when the bandwidth is narrow, it may become inaccurate when receiving multiple sub-carriers with frequencies spread over a large range~\cite{wu2021intelligent}. In our system model, we consider that the transmitter sends wireless signals modulated by the OFDM technology onto $K$ sub-carriers\footnote{OFDM is a widely used modulation method, which makes our analysis general. Moreover, OFDM can provide multi-carrier information, which is useful for signal parameter estimation.}. Because $K$ might be large, e.g., $2048$ OFDM sub-carriers are used to transmit data in the IEEE 802.11ax protocol, we consider the practical case in which each element on the RIS has different responses to signals with different frequencies. Thus, the amplitude and phase response matrices of the $L$-shaped transmissive elements to the $K$ sub-carriers at time $t$ can be expressed as
\begin{equation}\label{CodeA}
{\bf \Phi}_A^{\left(t\right)} = \left[ {\begin{array}{*{20}{c}}
{\beta _{{f_1}}^{[1,0]}}& \cdots &{\beta _{{f_1}}^{[M,0]}}&{\beta _{{f_1}}^{[0,0]}}&{\beta _{{f_1}}^{[0,1]}}& \cdots &{\beta _{{f_1}}^{[0,N]}} \\
 \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
{\beta _{{f_K}}^{[1,0]}}& \cdots &{\beta _{{f_K}}^{[M,0]}}&{\beta _{{f_K}}^{[0,0]}}&{\beta _{{f_K}}^{[0,1]}}& \cdots &{\beta _{{f_K}}^{[0,N]}}
\end{array}} \right],
\end{equation}
and
\begin{equation}\label{CodeP}
{\bf \Phi}_P^{\left(t\right)} = \left[ {\begin{array}{*{20}{c}}
{\delta _{{f_1}}^{[1,0]}}& \cdots &{\delta _{{f_1}}^{[M,0]}}&{\delta _{{f_1}}^{[0,0]}}&{\delta _{{f_1}}^{[0,1]}}& \cdots &{\delta _{{f_1}}^{[0,N]}} \\
 \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
{\delta _{{f_K}}^{[1,0]}}& \cdots &{\delta _{{f_K}}^{[M,0]}}&{\delta _{{f_K}}^{[0,0]}}&{\delta _{{f_K}}^{[0,1]}}& \cdots &{\delta _{{f_K}}^{[0,N]}}
\end{array}} \right],
\end{equation}
respectively, where $f_{i}$ denotes the frequency of the $i^{\rm th}$ sub-carrier, and $\left[x,y\right]$ denotes the location of the active element.
As shown in Fig.~\ref{img} (Part I), $0 \leqslant x \leqslant M$ and $0 \leqslant y \leqslant N$ indicate the locations of the transmissive elements in the $X$-direction and the $Y$-direction of the $L$-shaped array, respectively. In the following, we analyze the sensing signal model and propose the design scheme for the amplitude and phase response matrices.

\subsection{Sensing Signal Model}\label{S3B}
To better understand the phase differences of the incident signals from different directions, we separately analyze the signals that impinge on the transmissive elements at different positions, i.e., the origin, the $X$-direction, and the $Y$-direction elements. At time $t$, the CFR of the multipath signals corresponding to the $k$-th OFDM sub-carrier obtained by the transmissive element located at the origin of the $L$-shaped array can be expressed as
\begin{align}\label{fml1}
h_{f_k}^{\left[ {0,0} \right]}\left( t \right) = \sum\limits_{i = 1}^I {\alpha _i^{\left[ {0,0} \right]}{e^{ - j2\pi {f_k}{\tau _i}}}},
\end{align}
where $\left[ {0,0} \right]$ represents the origin point, ${\alpha _i}$ and ${\tau _i}$ are the complex signal amplitude attenuation and the time delay of the $i$-th propagation path, respectively, ${f_k}$ is the frequency of the $k$-th sub-carrier, and $I$ is the total number of multipaths. Since different elements are placed at different positions in the $L$-shaped array, the signal needs to travel different distances to arrive at each transmissive element. Therefore, taking $h_{f_k}^{\left[ {0,0} \right]}$ as a reference, the CFR obtained by the $m$-th $X$-direction transmissive element can be expressed as
\begin{align}\label{fml2}
h_{f_k}^{\left[ {m,0} \right]}\left( t \right) = \sum\limits_{i = 1}^I {\alpha _i^{[m,0]}{e^{ - j2\pi {f_k}\left( {{\tau _i} + m\frac{{d\cos \left( {{\theta _i}} \right)\sin \left( {{\varphi _i}} \right)}}{c}} \right)}}},
\end{align}
where $\alpha _i^{[m,0]}$ is the signal amplitude attenuation, $d$ is the antenna spacing, which equals half the wavelength, ${\theta _i}$ and ${\varphi _i}$ represent the elevation angle and the azimuth angle of the incident signal, respectively, as shown in Fig.~\ref{img} (Part I), and $c$ is the signal propagation speed in the air. Similarly, we can obtain the CFR of the $n$-th $Y$-direction transmissive element as
\begin{align}\label{fml3}
h_{f_k}^{\left[ {0,n} \right]}\left( t \right) = \sum\limits_{i = 1}^I {\alpha _i^{[0,n]}{e^{ - j2\pi {f_k}\left( {{\tau _i} + n\frac{{d\sin \left( {{\theta _i}} \right)\sin \left( {{\varphi _i}} \right)}}{c}} \right)}}}.
\end{align}
Therefore, at time $t$, the overall CFR obtained by the $L$-shaped transmissive element array on the RIS can be expressed as a CFR matrix as follows:
\begin{align}\label{fml6}
&{{\bm{H}}_{xoy}^{\left( t\right) }} = \left[ {{{\bm{H}}_x^{\left( t\right) }}\;{{\bm{H}}_0^{\left( t\right) }}\;{{\bm{H}}_y^{\left( t\right) }}} \right] \notag\\&
= \left[ {\underbrace {\begin{array}{*{20}{c}}
{h_{{f_1}}^{\left[ {1,0} \right]}}& \cdots &{h_{{f_1}}^{\left[ {M,0} \right]}} \\
 \vdots & \ddots & \vdots \\
{h_{{f_K}}^{\left[ {1,0} \right]}}& \cdots &{h_{{f_K}}^{\left[ {M,0} \right]}}
\end{array}}_{{{\bm{H}}_x}}\underbrace {\begin{array}{*{20}{c}}
{h_{{f_1}}^{\left[ {0,0} \right]}} \\
 \vdots \\
{h_{{f_K}}^{\left[ {0,0} \right]}}
\end{array}}_{{{\bm{H}}_0}}\underbrace {\begin{array}{*{20}{c}}
{h_{{f_1}}^{\left[ {0,1} \right]}}& \cdots &{h_{{f_1}}^{\left[ {0,N} \right]}} \\
 \vdots & \ddots & \vdots \\
{h_{{f_K}}^{\left[ {0,1} \right]}}& \cdots &{h_{{f_K}}^{\left[ {0,N} \right]}}
\end{array}}_{{{\bm{H}}_y}}} \right].
\end{align}
Note that ${{\bm{H}}_{xoy}^{\left( t\right) }}$ is the original sensing data that can support various sensing tasks. Because of the high available sampling frequency of the sensing device, e.g., $300$ samples per second~\cite{liu2019wireless}, and novel services that require long-term sensing, a large amount of ${{\bm{H}}_{xoy}^{\left( t\right) }}$ would be collected. To reduce the resources consumed to store and transmit ${{\bm{H}}_{xoy}^{\left( t\right) }}$, we propose the inverse semantic-aware encoding and decoding methods in the following sections, respectively.

\section{Inverse Semantic-aware Encoding}\label{S4ra}
In this section, we introduce the inverse semantic-aware RIS-aided encoding method to compress multiple signal spectrums into one. Two steps, i.e., differential encoding and shifting addition compression, are discussed. Moreover, we propose a semantic hash sampling method to select the task-related signal spectrums to record.

\subsection{Encoding Method}\label{CM}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{Cormpress-eps-converted-to.pdf}
\caption{The process of modulating the amplitude and phase spectrums through the RIS, and then performing differential encoding and shifting addition.}
\label{Compress}
\end{figure}
One can observe from \eqref{fml6} that every element in the CFR matrix is a complex number, which denotes the amplitude and phase of the CFR. Taking ${{\bm{H}}_x^{\left( t\right) }}$ as an example, it can be further decomposed into the amplitude and phase spectrums as
\begin{align}\label{fml21}
& {{\bm{H}}_x^{\left( t\right) }} \to \left\{ {{{\bm{H}}_{x_a}^{\left( t\right) }},{{\bm{H}}_{x_p}^{\left( t\right) }}} \right\} \notag\\&
= \left\{ {\underbrace {\left[ {\begin{array}{*{20}{c}}
{\left\| {h_{f_1}^{\left[ {1,0} \right]}} \right\|}& \cdots &{\left\| {h_{f_1}^{\left[ {M,0} \right]}} \right\|} \\
 \vdots & \ddots & \vdots \\
{\left\| {h_{f_K}^{\left[ {1,0} \right]}} \right\|}& \cdots &{\left\| {h_{f_K}^{\left[ {M,0} \right]}} \right\|}
\end{array}} \right]}_{\textit{amplitude matrix}},\underbrace {\left[ {\begin{array}{*{20}{c}}
{\angle h_{f_1}^{\left[ {1,0} \right]}}& \cdots &{\angle h_{f_1}^{\left[ {M,0} \right]}} \\
 \vdots & \ddots & \vdots
 \\
{\angle h_{f_K}^{\left[ {1,0} \right]}}& \cdots &{\angle h_{f_K}^{\left[ {M,0} \right]}}
\end{array}} \right]}_{\textit{phase matrix}}} \right\},
\end{align}
where $\cdot\!\to\!\cdot$ denotes the amplitude and phase extraction operation, $\left\{\cdot \right\}$ represents a set of matrices, $\left\| \cdot \right\|$ is the Euclidean norm operator, and $\angle h_{f_k}^{\left[ {m,0} \right]}$ denotes the signal phase of $h_{f_k}^{\left[ {m,0} \right]}$. In the same way, ${{\bm{H}}_0}$ and ${{\bm{H}}_y}$ can be decomposed into amplitude and phase spectrums. Hence, the CFR extracted from the $L$-shaped transmissive element array on the RIS can be expressed as
\begin{align}\label{fml22}
{{\bm{H}}_{xoy}^{\left( t\right) }} = \left[ {{{\bm{H}}_x^{\left( t\right) }}{\ }{{\bm{H}}_0^{\left( t\right) }}{\ }{{\bm{H}}_y^{\left( t\right) }}} \right]\to\left\{{{\bm{H}}_A^{\left(t\right)}}, {{\bm{H}}_P^{\left(t\right)}}\right\},
\end{align}
where ${{\bm{H}}_A^{\left(t\right)}} = \left[{{\bm{H}}_{x_a}}\;{{\bm{H}}_{0_a}}\;{{\bm{H}}_{y _a}}\right]$ and ${{\bm{H}}_P^{\left(t\right)}} = \left[{{\bm{H}}_{x_p}}\;{{\bm{H}}_{0_p}}\;{\bm{H}}_{y_p}\right]$ denote the overall amplitude and phase spectrums at time $t$, respectively. As shown in Fig.~\ref{Compress} (Parts I and II), after the modulation by the transmissive elements, we can express the $T$ amplitude and phase spectrums received by the $L$-shaped active sensor array as two sets, i.e.,
\begin{equation}\label{AmMask}
{\bm{Y}}_A = \left\{ {\bm{H}}_A^{\left(1\right)} \circ {\bm{\Phi}} _A^{\left(1\right)},\ldots,{\bm{H}}_A^{\left(T\right)} \circ {\bm{\Phi}} _A^{\left(T\right)}\right\},
\end{equation}
and
\begin{equation}\label{PhMask}
{\bm{Y}}_P = \left\{{\bm{H}}_P^{\left(1\right)} + {\bm{\Phi}}_P^{\left(1\right)},\ldots,{\bm{H}}_P^{\left(T\right)} + {\bm{\Phi}}_P^{\left(T\right)}\right\},
\end{equation}
respectively, where $ \circ $ is the Hadamard product operator, i.e., the element-wise product. In the following, we encode the 3D data ${\bm{Y}}_A$ and ${\bm{Y}}_P$ into 2D measurements.

The encoding idea is inspired by the SCI system, which compresses several optical spectrums of an object over multiple wavelengths into one spectrum, or several frames of a high-speed video into one frame. Specifically, the 3D data is first modulated by a coded aperture, then spectrally dispersed by a dispersing element, and finally integrated across the spectral dimension into a 2D measurement. For the 3D sensing data ${\bm{Y}}_A$ and ${\bm{Y}}_P$, although the spectral dispersion process can be performed by low-power computing elements, there are several difficulties in directly adopting the compression scheme of the SCI system:
\begin{enumerate}
\item[D1)] The fixed coded aperture in the SCI system is ill-suited to encoding signal spectrums that change dramatically on the time scale. On the other hand, the time-varying coded aperture scheme~\cite{qiao2020snapshot} increases the hardware cost and consumes more storage space to record the patterns.
\item[D2)] It is difficult for system designers to strike a balance between decoding performance and resource consumption. The inverse decoding problem is hard to solve with traditional methods. Deep learning methods, e.g., convolutional neural networks, require expensive well-labeled datasets and a long training time~\cite{zhang2018ffdnet}, which prevents the aperture patterns from being changed frequently.
\item[D3)] Signal spectrums are more sensitive than spectral images or video frames.
We find from the experiments that the decoded sensing signal spectrums may lead to errors when performing some sensing tasks that are sensitive to deviations in the signal phase values, e.g., localization.
\end{enumerate}

To overcome the aforementioned difficulties, we rethink the SCI system from the hardware design to the software algorithms. For (D1) and (D2), we can observe from \eqref{AmMask} that the amplitude response matrix of the RIS has the potential to perform a function similar to that of the coded aperture in the SCI system. It has been shown that the reconfiguration time for the RIS to change the response matrix is around $33$ ns~\cite{cui2020information}. Therefore, by changing the response matrix over time, the low-cost transmissive elements on the RIS can encode the sensing signals. In addition, the response values can be obtained from the hardware design parameters, which saves the storage resources otherwise needed to record a large number of original response values. Moreover, the amplitude and phase response values are discrete numbers, whose resolution is determined by the number of coding bits. For example, $4$-bit coding brings $16$ different available response values. Following that, we propose a self-supervised decoding algorithm for arbitrary RIS response matrices, which is discussed in Section~\ref{SS5}. Decoding results with negligible error are achieved without consuming resources on pre-training.

To solve (D3), we compress the differential matrices of the amplitude and phase spectrums instead of the original spectrums to ensure the sensing performance. Unlike channel estimation, which focuses on accurately obtaining the CSI to better perform channel equalization, wireless sensing focuses on extracting information describing the physical environment from the CSI, e.g., the 2D AoA and the ToF. This information is hidden in the value differences of the amplitude and phase spectrums obtained by the sensors at different locations. For example, the phase differences between active sensors support the signal AoA estimation. Another advantage of encoding the differential spectrum is that the differential spectrum tends to be smoother than the original spectrum, due to the existing correlation, which results in improved decoding performance. We show that real images can also benefit from differential encoding in Fig.~\ref{realplot}, using the dataset in~\cite{perrin2020eyetrackuav2}. The differential encoding and shifting addition compression methods are presented in the following.

\subsubsection{Differential Encoding}
We first focus on the amplitude spectrum set ${\bm{Y}}_A$. Let ${\bm{Y}}_A\left\{i\right\}$ denote the $i^{\rm th}$ matrix in ${\bm{Y}}_A$. Each column in ${\bm{Y}}_A\left\{i\right\}$ represents the amplitude values of the signals at different frequencies received by one active sensor, after the amplitude modulation by the transmissive element on the RIS. Let ${\bm{Y}}_{A'}\left\{i\right\}$ denote ${\bm{Y}}_A\left\{i\right\}$ after the differential encoding. Specifically, we let the $j^{\rm th}$ column in ${\bm{Y}}_{A'}\left\{i\right\}$ store the difference values of the $j^{\rm th}$ column and the $\left(j-1\right) ^{\rm th}$ column in ${\bm{Y}}_A\left\{i\right\}$ as
\begin{equation}\label{q1}
{{\bm{Y}}_{A'}}\left\{ i \right\}\left[ {:,j} \right] = {{\bm{Y}}_A}\left\{ i \right\}\left[ {:,j} \right] - {{\bm{Y}}_A}\left\{ i \right\}\left[ {:,j - 1} \right],
\end{equation}
where $j = 2,\ldots,L$, and $\left[ {:,j} \right]$ denotes the $j^{\rm th}$ column of a matrix. The first columns in ${\bm{Y}}_{A'}\left\{i\right\}$ and ${\bm{Y}}_{A}\left\{i\right\}$ are the same.
Then, we have
\begin{equation}\label{q2}
{{\bm{Y}}_{A'}}\left\{ i \right\}\left[ {:,1} \right] = {{\bm{Y}}_A}\left\{ i \right\}\left[ {:,1} \right].
\end{equation}
A similar differential encoding method can be used for the received phase spectrum set ${\bm{Y}}_P$. For the $i^{\rm th}$ matrix in ${\bm{Y}}_P$, i.e., ${{\bm{Y}}_{P}}\left\{ i \right\}$, we obtain the differentially encoded matrix ${{\bm{Y}}_{P'}}\left\{ i \right\}$ by
\begin{equation}\label{q3}
{{\bm{Y}}_{P'}}\left\{ i \right\}\left[ {:,j} \right] = {{\bm{Y}}_P}\left\{ i \right\}\left[ {:,j} \right] - {{\bm{Y}}_P}\left\{ i \right\}\left[ {:,j - 1} \right],
\end{equation}
where $j = 2,\ldots,L$. Considering that the phase response value of the RIS is added to the signal phase value, we let the first column in ${\bm{Y}}_{P'}\left\{i\right\}$ be the first column in ${\bm{Y}}_{P}\left\{i\right\}$ minus the phase response of the first transmissive element as
\begin{equation}\label{q4}
{{\bm{Y}}_{P'}}\left\{ i \right\}\left[ {:,1} \right] = {{\bm{Y}}_P}\left\{ i \right\}\left[ {:,1} \right] - {\bm{\Phi}}_P^{\left(i\right)}\left[ {:,1} \right].
\end{equation}
To use the amplitude response matrix of the RIS as the prior knowledge, we multiply ${\bm{Y}}_{P'}\left\{i\right\}$ element-wise by the amplitude response matrix of the RIS at the $i^{\rm th}$ moment as
\begin{equation}\label{q5}
{{\bm{Y}}_{P'}}\left\{ i \right\} = {{\bm{Y}}_{P'}}\left\{ i \right\} \circ {\bm{\Phi}}_A^{\left(i\right)}.
\end{equation}
In addition to the steps in \eqref{q1}, \eqref{q2}, \eqref{q3}, \eqref{q4}, and \eqref{q5}, the transmissive elements on the RIS should be designed by following Remark~\ref{rem1} to make the amplitude response matrix of the RIS available as a special coded aperture, i.e., the prior knowledge used in decoding.
\begin{rem}\label{rem1}
To achieve differential encoding, we should let every transmissive element on the RIS have the same hardware structure. Thus, different transmissive elements have the same amplitude and phase responses to the signals with the same frequency, as shown in Fig.~\ref{Compress} (Part I). Specifically, every column in \eqref{CodeA} and \eqref{CodeP} is the same. This ensures that each column of ${\bm{Y}}_{A'}\left\{i\right\}$ can be represented as the signal amplitude difference values multiplied by the amplitude response values of the RIS as in \eqref{re1a}.
\end{rem}
Then, we can express ${\bm{Y}}_{A'}\left\{i\right\}$ and ${\bm{Y}}_{P'}\left\{i\right\}$ as
\begin{equation}\label{re1a}
{\bm{Y}}_{A'}\left\{i\right\} = {\bm{H}}_{A'}^{\left(i\right)} \circ {\bf \Phi}_{A}^{\left(i\right)},
\end{equation}
and
\begin{equation}
{\bm{Y}}_{P'}\left\{i\right\} = {\bm{H}}_{P'}^{\left(i\right)} \circ {\bf \Phi}_{A}^{\left(i\right)},
\end{equation}
where ${\bm{H}}_{A'}^{\left(i\right)}$ and ${\bm{H}}_{P'}^{\left(i\right)}$ are the $i^{\rm th}$ differentially encoded amplitude and phase spectrums, respectively, and ${\bf \Phi}_{A}^{\left(i\right)}$ can be regarded as the corresponding codebook.
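To make the factorization in \eqref{re1a} concrete, the following minimal Python sketch reproduces the differential encoding steps \eqref{q1}--\eqref{q5}; the dimensions and the randomly drawn spectrums and response matrices are purely illustrative assumptions, not values from our experiments.
\begin{verbatim}
import numpy as np

K, L = 64, 11                        # toy sub-carrier and sensor counts
H_A = np.abs(np.random.randn(K, L))  # illustrative amplitude spectrum
H_P = np.random.uniform(-np.pi, np.pi, (K, L))  # illustrative phases
# Remark 2: all transmissive elements share one response per frequency,
# so each response matrix has identical columns
Phi_A = np.tile(np.random.uniform(0.5, 1.0, (K, 1)), (1, L))
Phi_P = np.tile(np.random.uniform(0.0, np.pi, (K, 1)), (1, L))

Y_A = H_A * Phi_A                    # RIS amplitude modulation
Y_P = H_P + Phi_P                    # RIS phase modulation

# Differential encoding across the sensor columns; the first column
# of the amplitude branch is kept unchanged
Y_Ad = Y_A.copy()
Y_Ad[:, 1:] -= Y_A[:, :-1]
# Phase branch: difference, subtract the first element's phase
# response, then embed the amplitude prior knowledge
Y_Pd = Y_P.copy()
Y_Pd[:, 1:] -= Y_P[:, :-1]
Y_Pd[:, 0] -= Phi_P[:, 0]
Y_Pd *= Phi_A

# Check the factorizations Y_A' = H_A' o Phi_A and Y_P' = H_P' o Phi_A
H_Ad = H_A.copy(); H_Ad[:, 1:] -= H_A[:, :-1]
H_Pd = H_P.copy(); H_Pd[:, 1:] -= H_P[:, :-1]
assert np.allclose(Y_Ad, H_Ad * Phi_A)
assert np.allclose(Y_Pd, H_Pd * Phi_A)
\end{verbatim}
The final assertions hold precisely because of Remark~\ref{rem1}: since all columns of ${\bf \Phi}_A^{\left(i\right)}$ are identical, differencing adjacent columns of the masked spectrum commutes with the masking itself.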
\begin{algorithm}[t]
{\small
\caption{The algorithm for inverse semantic-aware encoding.}
\label{Algorithm1}
\hspace*{0.02in} {\bf Input:}
The received amplitude and phase spectrums in the active sensors: ${\bm{Y}}_{A}$ and ${\bm{Y}}_{P}$\\
\hspace*{0.02in} {\bf Output:}
The amplitude and phase MetaSpectrums: $\bm{Z}_A$ and $\bm{Z}_P$
\begin{algorithmic}[1]
\State {\textit{\#\# Achieve differential encoding}}
\For{Every ${\bm{Y}}_A\left\{i\right\}$ in ${\bm{Y}}_{A}$}
\State Obtain ${\bm{Y}}_{A'}\left\{i\right\}$ according to \eqref{q1} and \eqref{q2}
\EndFor
\For{Every ${\bm{Y}}_P\left\{i\right\}$ in ${\bm{Y}}_{P}$}
\State Obtain ${\bm{Y}}_{P'}\left\{i\right\}$ according to \eqref{q3}, \eqref{q4}, and \eqref{q5}
\EndFor
\State {\textit{\#\# Achieve shifting addition compression}}
\State Use ${{\bm{Y}}_{A'}}$ to obtain ${\bm X}_A$ according to \eqref{XA}
\State Use ${{\bm{Y}}_{P'}}$ to obtain ${\bm X}_P$ according to \eqref{XP}
\State Obtain the amplitude MetaSpectrum $\bm{Z}_A$ according to \eqref{sum}
\State Obtain the phase MetaSpectrum $\bm{Z}_P$ according to \eqref{sumP}
\State \Return $\bm{Z}_A$ and $\bm{Z}_P$
\end{algorithmic}}
\end{algorithm}
\subsubsection{Shifting Addition}
To reproduce the spatial shifting operation on the object spectrum that is performed by a dispersing lens in the SCI system, we apply zero padding to the amplitude and phase spectrums as follows:
\begin{equation}\label{XA}
{{\bm{X}}_A} = \left\{ {\left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_1}\left( 1 \right)} \\
{{{\bm{Y}}_{A'}}\left\{ 1 \right\}} \\
{{{\bm{Q}}_2}\left( 1 \right)}
\end{array}} \right], \ldots ,\left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_1}\left( i \right)} \\
{{{\bm{Y}}_{A'}}\left\{ i \right\}} \\
{{{\bm{Q}}_2}\left( i \right)}
\end{array}} \right], \ldots ,\left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_1}\left( T \right)} \\
{{{\bm{Y}}_{A'}}\left\{ T \right\}} \\
{{{\bm{Q}}_2}\left( T \right)}
\end{array}} \right]} \right\},
\end{equation}
where $ {\bm{Q}_1}\left( i \right) \in {\mathbb{R}^{\left( {i - 1} \right) D \times L}} $, $ {\bm{Q}_2}\left( i \right) \in {\mathbb{R}^{ \left( {T - i} \right) D \times L}} $, ${{\bm{X}}_A} \in {\mathbb{R}^{\left( D\left( {T - 1} \right) + K\right) \times L}}$, every element in both ${\bm{Q}_1}$ and ${\bm{Q}_2}$ is zero, and $D$ is the unit displacement step. Thus, the amplitude MetaSpectrum, $\bm{Z}_A$, can be obtained by
\begin{equation}\label{sum}
\bm{Z}_A = \sum\limits_{i = 1}^T {{{\bm{X}}_A}\left\{ i \right\}},
\end{equation}
where $\bm{Z}_A \in {\mathbb{R}^{\left( K + \left( {T - 1} \right) D\right) \times L}} $ can be transmitted or stored. Similarly, the phase MetaSpectrum, $\bm{Z}_P$, can be expressed as
\begin{equation}\label{sumP}
\bm{Z}_P = \sum\limits_{i = 1}^T {{{\bm{X}}_P}\left\{ i \right\}},
\end{equation}
where
\begin{equation}\label{XP}
{{\bm{X}}_P} = \left\{ {\left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_1}\left( 1 \right)} \\
{{{\bm{Y}}_{P'}}\left\{ 1 \right\}} \\
{{{\bm{Q}}_2}\left( 1 \right)}
\end{array}} \right], \ldots ,\left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_1}\left( i \right)} \\
{{{\bm{Y}}_{P'}}\left\{ i \right\}} \\
{{{\bm{Q}}_2}\left( i \right)}
\end{array}} \right], \ldots ,\left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_1}\left( T \right)} \\
{{{\bm{Y}}_{P'}}\left\{ T \right\}} \\
{{{\bm{Q}}_2}\left( T \right)}
\end{array}} \right]} \right\}.
\end{equation}
The overall RIS-aided encoding method is summarized in {\bf{Algorithm~\ref{Algorithm1}}}, which has polynomial complexity. After the RIS-aided encoding, we observe that the sensing data volume is significantly reduced. To indicate the efficiency of the data compression, we define the data compression ratio $\rho $ as the ratio of the number of elements in the coded MetaSpectrums to that in the received amplitude and phase spectrums. The analysis of $\rho $ is given in {\bf{Proposition~\ref{P1}}}.
\begin{prop}\label{P1}
The data compression ratio $\rho $ of our proposed inverse semantic-aware coding method is approximately ${1}/{T}$.
\begin{IEEEproof}
The number of elements in the $T$ recorded signal amplitude and phase spectrums is $2KLT$. The number of elements in $\bm{Z}_A$ and $\bm{Z}_P$ is $2{\left( {K + \left( {T - 1} \right)D} \right) \times L}$. Thus, $\rho $ can be expressed as
\begin{equation}\label{tario}
\rho = \frac{2{\left( {K + \left( {T - 1} \right)D} \right) \times L}}{2{KLT}}{\text{ = }}\frac{1}{T} + \left( {1 - \frac{1}{T}} \right)\frac{D}{K}.
\end{equation}
Since $D$ is small, especially compared to $K$, e.g., $D = 1$ in \cite{meng2021self} and $K=2048$ in the IEEE 802.11ax protocol, the term $\left( {1 - \frac{1}{T}} \right)\frac{D}{K}$ in~\eqref{tario} can be ignored. Thus, the value of $\rho$ is close to ${1}/{T}$, which completes the proof.
\end{IEEEproof}
\end{prop}
Note that in the above discussion, we encode $T$ amplitude or phase spectrums into one spectrum. However, the $T$ spectrums do not need to be, and should not be, sensed continuously in time. The reason is that the wireless channel remains stable during the channel coherence time. Specifically, as the moving or action speed of people is limited, the CSI within the channel coherence time can be considered constant without loss of precision~\cite{vasisht2016decimeter}. Considering that the maximal available sensing frequency of the active sensors is much higher than the required frequency, we next propose a sampling scheme that, within each channel coherence time, selects the spectrum most relevant to the completion of sensing tasks for recording and encoding.

\subsection{Semantic Hash Sampling}
\begin{figure}[t]
\centering
\includegraphics[width=0.44\textwidth]{hash-eps-converted-to.pdf}
\caption{The process of generating the resized matrices from the amplitude and phase spectrums, and then obtaining the semantic hash fingerprint.}
\label{hash}
\end{figure}
We divide the time into segments. Without loss of generality, we consider that the active sensors can perform $T_N$ sensing operations in one time segment. From each time segment, one pair of amplitude and phase spectrums is selected to record. In the $i^{\rm th}$ time segment, we express the $T_N$ received amplitude and $T_N$ received phase spectrums as two sets, i.e., {\small ${\bm{S}}_{A_i} = \left\{ {\bm{H}}_{A_i}^{\left(k\right)} \circ {\bm{\Phi}} _{A_i}^{\left(k\right)}\right\}$} $\left( k = 1,\ldots,T_N\right) $ and {\small ${\bm{S}}_{P_i} = \left\{ {\bm{H}}_{P_i}^{\left(k\right)} + {\bm{\Phi}} _{P_i}^{\left(k\right)}\right\}$}, respectively. The recorded amplitude and phase spectrums that are selected from the $i^{\rm th}$ time segment are ${\bm{Y}}_{A}{\left\{i\right\}}$ $\left( i = 1,\ldots,T\right) $ and ${\bm{Y}}_{P}{\left\{i\right\}}$, respectively. To remove the information that is not relevant to the task, the traditional method is uniform sampling, which selects the first pair of amplitude and phase spectrums in each time segment to record.
However, we cannot guarantee that the first pair in every segment is always the most informative one. Therefore, a better solution is to use an indicator to judge the semantic information richness of a pair of spectrums. As we discussed in Section~\ref{CM}, the information related to sensing tasks is contained in the changes of the amplitude and phase spectrums. Therefore, we can select the pair of spectrums that has the largest change compared to the previous signal spectrums in each time segment. Note that the MSE is not recommended as the indicator to compare the difference between spectrums, for the following reasons:
\begin{itemize}
\item The MSE is calculated using the absolute values of the signal amplitude. However, the absolute values are not important for sensing tasks; the critical information is in the changing process of the signal amplitude over time~\cite{niu2022rethinking}.
\item The results of the MSE may be affected by outliers, i.e., signal amplitude fluctuations at certain times caused by interference.
\item Because the number of elements in the signal spectrum is large, calculating the MSE brings large resource consumption.
\end{itemize}
Therefore, we propose a new indicator to characterize the semantic information richness of signal spectrums. Considering the success of the perceptual image hashing method~\cite{du2020perceptual} in the field of image retrieval, we aim to use a string of characters, i.e., a fingerprint, to characterize the amplitude and phase spectrums. Perceptual image hashing~\cite{du2020perceptual} is a family of algorithms that generate content-based image hash fingerprints. Then, the Hamming distance between two fingerprints can be used to quantify the similarity of two images. The larger the Hamming distance, the lower the similarity between the images. Although hash fingerprints can be calculated efficiently with a low energy cost, they cannot be applied directly to the similarity detection of sensing data. The reason is that, at each moment, we have one amplitude spectrum and one phase spectrum, as shown in Fig.~\ref{hash}, which are both required to achieve sensing tasks. Thus, we propose a novel four-level semantic hash sampling method in {\bf{Algorithm~\ref{Algorithm2}}} to select the task-related signal spectrums for encoding, which is used before {\bf{Algorithm~\ref{Algorithm1}}}.

As shown in {\bf{Algorithm~\ref{Algorithm2}}}, to obtain the semantic hash matrices, the first step is to resize the $T_N$ amplitude and phase spectrums. The purpose is to produce a small data size, which shortens the processing time~\cite{tuncer2020novel} while preserving the features of the spectrums. Similar to the image pHash method~\cite{khanam2018implementation}, we calculate the average values of the resized amplitude and phase matrices. Different from the conventional hash method, we define four values, i.e., 0, 1, 2, and 3, as the values in the hash fingerprints. Thus, we perform the operations shown in lines $8 - 15$ of {\bf{Algorithm~\ref{Algorithm2}}} to convert the spectrums into semantic hash fingerprints with polynomial complexity. For the $k^{\rm th}$ pair of amplitude and phase spectrums, we use the Hamming distance, which measures the number of differing values, between the $k^{\rm th}$ hash fingerprint and the $\left( k-1\right) ^{\rm th}$ one to indicate the semantic information richness of the $k^{\rm th}$ pair of spectrums.
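The following minimal Python sketch illustrates the four-level fingerprint of lines $8 - 15$ of {\bf{Algorithm~\ref{Algorithm2}}} and the Hamming-distance indicator; the block-averaging resize and the $8\times 8$ fingerprint size are illustrative assumptions rather than the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np

def resize(H, Rx=8, Ry=8):
    # Illustrative block-averaging resize to an Rx-by-Ry matrix
    # (assumes the spectrum is at least Rx-by-Ry)
    K, L = H.shape
    H = H[:K - K % Rx, :L - L % Ry]
    return H.reshape(Rx, K // Rx, Ry, L // Ry).mean(axis=(1, 3))

def fingerprint(H_A, H_P):
    # Four-level hash: compare each resized entry with the mean of
    # its own matrix, jointly for the amplitude and phase spectrums
    hA, hP = resize(H_A), resize(H_P)
    bA = (hA >= hA.mean()).astype(int)   # amplitude comparison bit
    bP = (hP >= hP.mean()).astype(int)   # phase comparison bit
    return 2 * bA + bP                   # values 3, 2, 1, 0

def hamming(F1, F2):
    # Number of positions where two fingerprints differ
    return int(np.sum(F1 != F2))
\end{verbatim}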
Therefore, we can record the pair of spectrums that has the largest Hamming distance to the previous pair of spectrums.
\begin{algorithm}[t]
{\small
\caption{The algorithm for semantic hash sampling}
\label{Algorithm2}
\hspace*{0.02in} {\bf Input:}
\begin{itemize}
\item The received amplitude and phase spectrum sets in the $i^{\rm th}$ time segment: ${\bm{S}}_{A_i}$ and ${\bm{S}}_{P_i}$
\item The dimensions of the resized matrices: $R_x$ and $R_y$
\end{itemize}
\hspace*{0.02in} {\bf Output:} The selected amplitude and phase spectrums: ${\bm{Y}}_{A}{\left\{i\right\}}$ and ${\bm{Y}}_{P}{\left\{i\right\}}$
\begin{algorithmic}[1]
\State {\textit{\#\# Obtain the semantic hash matrix set}}
\State Create an empty matrix set ${\bm{\mathcal{H}}}_{i} \in {\mathbb{R}}^{R_x\times{R_y}\times T_N} $ to record the semantic hash values
\For{Every {\small ${\bm{S}}_{A_i}{\left\{k\right\}}$} in {\small ${\bm{S}}_{A_i}$}}
\State Obtain the amplitude and phase spectrums ${{\bm{H}}}_{A_i}^{\left(k\right)}$ and ${\bm{H}}_{P_i}^{\left(k\right)}$ with the prior knowledge ${\bm{\Phi}} _{A_i}^{\left(k\right)}$ and ${\bm{\Phi}} _{P_i}^{\left(k\right)}$, respectively
\State Resize ${\bm{H}}_{A_i}^{\left(k\right)}$ and ${\bm{H}}_{P_i}^{\left(k\right)}$ into small matrices ${\bm{h}}_{A_i}^{\left(k\right)}\in {\mathbb{R}}^{R_x\times{R_y}}$ and ${\bm{h}}_{P_i}^{\left(k\right)}\in {\mathbb{R}}^{R_x\times{R_y}}$, respectively
\State Calculate the average values of ${\bm{h}}_{A_i}^{\left(k\right)}$ and ${\bm{h}}_{P_i}^{\left(k\right)}$, denoted as ${{h}}_{A_i}^{\left(k\right)}$ and ${{h}}_{P_i}^{\left(k\right)}$, respectively
\For{Every element pair in ${\bm{h}}_{A_i}^{\left(k\right)}$ and ${\bm{h}}_{P_i}^{\left(k\right)}$}
\If{${{\bm{h}}}_{A_i}^{\left(k\right)}\left[ {x,y} \right] \geqslant {{h}}_{A_i}^{\left(k\right)}$ and ${{\bm{h}}}_{P_i}^{\left(k\right)}\left[ {x,y} \right] \geqslant {{h}}_{P_i}^{\left(k\right)}$}
\State Let ${\bm{\mathcal{H}}}_{i}{\left\{k\right\}}\left[ {x,y} \right] \leftarrow 3$
\ElsIf{${{\bm{h}}}_{A_i}^{\left(k\right)}\left[ {x,y} \right] \geqslant {{h}}_{A_i}^{\left(k\right)}$ and ${{\bm{h}}}_{P_i}^{\left(k\right)}\left[ {x,y} \right] < {{h}}_{P_i}^{\left(k\right)}$}
\State Let ${\bm{\mathcal{H}}}_{i}{\left\{k\right\}}\left[ {x,y} \right] \leftarrow 2$
\ElsIf{${{\bm{h}}}_{A_i}^{\left(k\right)}\left[ {x,y} \right] < {{h}}_{A_i}^{\left(k\right)}$ and ${{\bm{h}}}_{P_i}^{\left(k\right)}\left[ {x,y} \right] \geqslant {{h}}_{P_i}^{\left(k\right)}$}
\State Let ${\bm{\mathcal{H}}}_{i}{\left\{k\right\}}\left[ {x,y} \right] \leftarrow 1$
\Else
\State Let ${\bm{\mathcal{H}}}_{i}{\left\{k\right\}}\left[ {x,y} \right] \leftarrow 0$
\EndIf
\EndFor
\EndFor
\vspace{0.01cm}
\State {\textit{\#\# Calculate the Hamming distance}}
\State Create an empty vector ${\bm{\mathcal{D}}} \in {\mathbb{R}}^{1\times{T_N}}$ to record the Hamming distance values
\For{Every ${\bm{\mathcal{H}}}_{i}{\left\{k\right\}}$ in ${\bm{\mathcal{H}}}_{i}$}
\State Create a temporary variable $d$ and initialize it as $d \leftarrow 0$
\For{Every element in ${\bm{\mathcal{H}}}_{i}{\left\{k\right\}}$}
\If{The element value is different from the element value at the same position in ${\bm{\mathcal{H}}}_{i}{\left\{k-1\right\}}$}
\State $d \leftarrow d+1$
\EndIf
\EndFor
\State ${\bm{\mathcal{D}}} \left( k \right) \leftarrow d $
\EndFor
\vspace{0.01cm}
\State {\textit{\#\# Select the spectrums and record the information richness}}
\State Find $k_{\max}$ that maximizes ${\bm{\mathcal{D}}} \left( k \right)$, i.e., ${\bm{\mathcal{D}}} \left( k_{\max} \right) = \max\left\{ {\bm{\mathcal{D}}} \right\}$
\State Record the value of ${\bm{\mathcal{D}}} \left( k_{\max} \right)$
\State Let ${\bm{Y}}_{A}{\left\{i\right\}} \leftarrow {\bm{S}}_{A_i}{\left\{k_{\max}\right\}}$ and ${\bm{Y}}_{P}{\left\{i\right\}} \leftarrow {\bm{S}}_{P_i}{\left\{k_{\max}\right\}}$
\State \Return ${\bm{Y}}_{A}{\left\{i\right\}}$ and ${\bm{Y}}_{P}{\left\{i\right\}}$
\end{algorithmic}}
\end{algorithm}

\section{Inverse Semantic-aware Decoding}\label{SS5}
In this section, we propose the inverse semantic-aware self-supervised decoding method. We also introduce how to use the recovered signal spectrums for 2D AoA and ToF estimation, which supports various sensing tasks.

\subsection{Objective Function}
We first rewrite the amplitude and phase MetaSpectrums $\bm{Z}_A$ and $\bm{Z}_P$ in vectorized formulations. Let $\rm{vec}\left(\cdot\right) $ denote the matrix vectorization operation that concatenates the rows of a matrix into one vector, and $\rm{diag}\left({\bf{a}}\right) $ denote the operation of converting the vector $\bf{a}$ into a diagonal matrix whose diagonal is $\bf{a}$. As such, we rewrite the matrix formulations \eqref{sum} and \eqref{sumP} as
\begin{equation}\label{vec}
{\bf z}_A = {\bf \Phi} {\bf x}_A,
\end{equation}
and
\begin{equation}\label{vecP}
{\bf z}_P = {\bf \Phi} {\bf x}_P,
\end{equation}
respectively, where $\bm{z}_A = {\rm{vec}}\left({\bm{Z}}_A\right)$, ${\bm{z}}_P = {\rm{vec}}\left({\bm{Z}}_P\right)$, $\bm{z}_A$ and $\bm{z}_P \in {\mathbb{R}^{\left( K + \left( {T - 1} \right) D\right) L \times 1}} $, $\bm{x}_A = \left[\bm{x}_{1,A}^{\rm T} \cdots {\bm{x}}_{i,A}^{\rm T} \cdots \bm{x}_{T,A}^{\rm T}\right]^{\rm T}$, $\bm{x}_P = \left[\bm{x}_{1,P}^{\rm T} \cdots {\bm{x}}_{i,P}^{\rm T} \cdots \bm{x}_{T,P}^{\rm T}\right]^{\rm T}$, $\bm{x}_{i,A} = {\rm{vec}}\left({{{\bm{H}}_{A'}^{\left( i\right) }}} \right)$, $\bm{x}_{i,P} = {\rm{vec}}\left({{{\bm{H}}_{P'}^{\left( i\right) }}} \right)$, ${\bm{x}}_A$ and ${\bm{x}}_P \in {\mathbb{R}^{TKL\times 1}} $,
\begin{equation}\label{xianyan}
{\bf \Phi} = \left[ {\begin{array}{*{20}{c}}
{{{\bm{Q}}_3}\left( 1 \right)}& \cdots &{{{\bm{Q}}_3}\left( i \right)}& \cdots &{{{\bm{Q}}_3}\left( T \right)} \\
{{\bm \phi} _A^{\left( 1 \right)}}& \ddots &{{\bm \phi} _A^{\left( i \right)}}& \ddots &{{\bm \phi} _A^{\left( T \right)}} \\
{{{\bm{Q}}_4}\left( 1 \right)}& \cdots &{{{\bm{Q}}_4}\left( i \right)}& \cdots &{{{\bm{Q}}_4}\left( T \right)}
\end{array}} \right],
\end{equation}
${{{\bm{Q}}_3}\left( i \right)} \in {\mathbb{R}^{\left(i-1\right) DL\times KL}}$, ${{{\bm{Q}}_4}\left( i \right)} \in {\mathbb{R}^{\left(T-i\right) DL \times KL}}$, every element in both ${\bm{Q}_3}$ and ${\bm{Q}_4}$ is zero, ${\bm \phi}_A^{\left( i \right)} = {\rm diag}\left( {{\rm vec}\left( {{\bf \Phi}_A^{\left( i \right)}} \right)} \right) $, ${\bm \phi}_A^{\left( i \right)} \in {\mathbb{R}^{KL \times KL}}$, and ${\bf \Phi} \in {\mathbb{R}^{{\left( K + \left( {T - 1} \right) D\right) L} \times TKL}}$. Note that ${\bf \Phi}$ can be obtained using \eqref{xianyan} and the prior knowledge, i.e., the amplitude response matrices of the RIS, and ${\bf z}_A$ and ${\bf z}_P$ are the known vectorized amplitude and phase MetaSpectrums, respectively. Our goal is to decode ${\bf x}_A$ and ${\bf x}_P$ from ${\bf z}_A$ and ${\bf z}_P$, respectively.
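As a sanity check on this vectorized model, the following minimal Python sketch builds ${\bf \Phi}$ as in \eqref{xianyan} from toy-sized RIS amplitude response matrices and verifies numerically that \eqref{vec} reproduces the shifting addition of \eqref{sum}; all dimensions and values are illustrative assumptions only.
\begin{verbatim}
import numpy as np

K, L, T, D = 8, 4, 3, 1
R = K + (T - 1) * D                    # rows of one MetaSpectrum
Phi_A = [np.random.uniform(0.5, 1.0, (K, L)) for _ in range(T)]
H = [np.random.randn(K, L) for _ in range(T)]  # differential spectrums

# Shifting addition: place the i-th masked spectrum at row offset
# (i-1)D and accumulate
Z = np.zeros((R, L))
for i in range(T):
    Z[i * D:i * D + K, :] += H[i] * Phi_A[i]

# Vectorized model: for the i-th column block, Phi stacks (i-1)DL zero
# rows, the diagonalized RIS amplitude response, and (T-i)DL zero rows
Phi = np.zeros((R * L, T * K * L))
for i in range(T):
    phi_i = np.diag(Phi_A[i].flatten())   # row-wise vec, then diag
    Phi[i * D * L:i * D * L + K * L, i * K * L:(i + 1) * K * L] = phi_i

x = np.concatenate([Hi.flatten() for Hi in H])  # x = [vec(H'_1); ...]
assert np.allclose(Z.flatten(), Phi @ x)        # z = Phi x holds
\end{verbatim}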
Although this problem is related to CS, most theories developed for CS cannot be used here because the matrix ${\bf \Phi}$ follows the very specific structure in~\eqref{xianyan}. Fortunately, it has been theoretically proven that both ${\bf x}_A$ in \eqref{vec} and ${\bf x}_P$ in \eqref{vecP} can be recovered even when $T>1$~\cite{jalali2019snapshot}. The decoding objective function can be formulated as
\begin{equation}\label{utility}
\mathop {\min }\limits_{{{\bm{x}}_A},{{\bm{x}}_P}} \alpha_1 \left\| {{{\bm{z}}_A} - {{\bm{\Phi }}}{{\bm{x}}_A}} \right\|^2 + \alpha_2 \left\| {{{\bm{z}}_P} - {{\bm{\Phi }}}{{\bm{x}}_P}} \right\|^2,
\end{equation}
where $\alpha_1$ and $\alpha_2$ are balance parameters that can be selected according to the specific wireless sensing task. For example, heartbeat and breathing detection requires higher accuracy for the phase spectrum~\cite{liu2019wireless}, while the amplitude spectrum is more significant in sensing tasks such as intrusion or fall detection~\cite{ramadan2020efficient}. We propose the algorithm for solving \eqref{utility} as follows.

\subsection{Self-supervised Decoding Method}
To solve~\eqref{utility}, although different hand-crafted priors, e.g., total variation and sparsity, can be added as regularization terms to improve the decoding performance, it is hard to choose a suitable prior that fits the differentially encoded amplitude and phase spectrums ${{\bm{x}}_A}$ and ${{\bm{x}}_P}$. Motivated by the success of deep convolutional neural networks (ConvNets) in inverse problems such as single-image super-resolution~\cite{ledig2017photo} and denoising~\cite{lefkimmiatis2017non}, we use the implicit prior captured by ConvNets, e.g., the deep image prior~\cite{ulyanov2018deep,mataev2019deepred}, to achieve self-supervised decoding\footnote{Another solution is to design a suitable explicit regularization term for decoding sensing signal spectrums and use the explicit and implicit priors jointly~\cite{mataev2019deepred}. This is left for future work.}. By considering the unknown amplitude and phase spectrums as the outputs of neural networks, i.e., ${{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)$ and ${{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right)$, respectively, the decoding problem~\eqref{utility} can be re-written as
\begin{equation}\label{utility2}
\begin{array}{*{20}{l}}
{\mathop {\min }\limits_{{{\bm{\Theta}} _A},{{\bm{\Theta}} _P}}}&{{\alpha _1}\left\| {{{\bm{z}}_A} - {{\bm{\Phi }}}{{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)} \right\|^2 + {\alpha _2}\left\| {{{\bm{z}}_P} - {{\bm{\Phi }}}{{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right)} \right\|^2,} \\
{\:\:\:{\rm{s.t.}}}&{{{{\bm{\hat x}}}_A} = {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)}, \\
{}&{{{{\bm{\hat x}}}_P} = {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right)},
\end{array}
\end{equation}
where ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$ are the parameters of the networks to be learned, and ${\bm{e}}$ is a random vector. Since the training of ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$ is part of the decoding process, this procedure is self-supervised and no pre-training is required. To solve the problem~\eqref{utility2}, we introduce two auxiliary variables ${\bm{t}_1}$ and ${\bm{t}_2}\in {\mathbb R}^{TKL}$, and the corresponding weight parameters $\beta_1$ and $\beta_2$.
Then, the constraints can be turned into penalty terms using the augmented Lagrangian method~\cite{afonso2010fast} as
\begin{equation}\label{finalpro}
\begin{array}{*{20}{l}}
{\mathop {\min }\limits_{{\bm{\Theta}}_A,{\bm{\Theta}}_P,{\bm{x}}_A,{\bm{x}}_P}} &{\bm{\mathcal{F}}}_1\left({\bm{\Theta}}_A, {\bm{x}}_A\right) + {\bm{\mathcal{F}}}_2\left({\bm{\Theta}}_P, {\bm{x}}_P\right),
\end{array}
\end{equation}
where
\begin{equation}
{\bm{\mathcal{F}}}_1 = {\alpha _1}\left( \left\| {{{\bm{z}}_A} - {{\bm{\Phi }}}{{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)} \right\|^2 +{\beta _1}\left\| {{{\bm{x}}_A} - {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_1} \right\|^2\right),
\end{equation}
and
\begin{equation}
{\bm{\mathcal{F}}}_2 = {\alpha _2}\left( \left\| {{{\bm{z}}_P} - {{\bm{\Phi }}}{{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right)} \right\|^2 +{\beta _2}\left\| {{{\bm{x}}_P} - {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_2} \right\|^2\right).
\end{equation}
With the help of the alternating direction method of multipliers (ADMM)~\cite{boyd2011distributed}, the problem~\eqref{finalpro} can be solved by a sequential update of the six variables, i.e., ${\bm{\Theta}}_A$, ${\bm{\Theta}}_P$, ${\bm{x}}_A$, ${\bm{x}}_P$, ${\bm{t}}_1$, and ${\bm{t}}_2$.

{\it{1) The update of ${\bm{\Theta}}_A$ while fixing the other variables:}}
\begin{equation}\label{s1}
\begin{array}{*{20}{l}}
{\mathop {\min }\limits_{{\bm{\Theta}}_A}} & \left\| {{{\bm{z}}_A} - {{\bm{\Phi }}}{{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)} \right\|^2 +{\beta _1}\left\| {{{\bm{x}}_A} - {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_1} \right\|^2,
\end{array}
\end{equation}
which can be solved using the steepest descent and back-propagation optimization methods~\cite{ulyanov2018deep}. Note that the term ${\beta _1}\left\| {{{\bm{x}}_A} - {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_1} \right\|^2$ in~\eqref{s1} can be regarded as the denoising of ${{\bm{x}}_A} - {{\bm{t}}_1}$, which also serves as a proximity regularization that forces ${{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)$ to be close to ${{\bm{x}}_A} - {{\bm{t}}_1}$. This second term provides an additional stabilizing and robustifying effect to the back-propagation method.

{\it{2) The update of ${\bm{x}}_A$ while fixing the other variables:}}
\begin{equation}\label{Q2}
\begin{array}{*{20}{l}}
{\mathop {\min }\limits_{{\bm{x}}_A}} &\left\| {{{\bm{x}}_A} - {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_1} \right\|^2,
\end{array}
\end{equation}
which can be regarded as a denoising problem for ${{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right) + {\bm{t}}_1$. Thus, we have
\begin{equation}\label{S2}
{\bm{\hat x}}_A ={\bm {\mathcal D}}\left({{{\bm{\mathcal T}}_{{{\bm{\Theta}}_A }}}}\!\!\left( {\bm{e}} \right) + {\bm{t}}_1\right),
\end{equation}
where ${\bm {\mathcal D}}\left(\cdot\right) $ represents the denoising operator, which could be a well-studied plug-and-play algorithm~\cite{sreehari2016plug} or a simpler steepest-descent (SD) operator.
We present the update equation for the SD method as
\begin{equation}\label{S2P}
{\bm{ x}}_A^{\left(j + 1\right) } ={\bm{ x}}_A^{\left(j \right) } - s \left({\bm{ x}}_A^{\left(j\right) } - {{{\bm{\mathcal T}}_{{{\bm{\Theta}}_A }}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_1\right),
\end{equation}
where $s$ is the steepest-descent step size, and $j$ is the inner loop iteration number.

{\it{3) The update of ${\bm{t}}_1$ while fixing the other variables:}} Because ${\bm{t}}_1$ can be regarded as the Lagrange multiplier vector, ${\bm{t}}_1$ can be updated according to the augmented Lagrangian method~\cite{afonso2010fast} as
\begin{equation}\label{S3}
{\bm{t}}_1^{\left( k+1\right) } = {\bm{t}}_1^{\left( k\right) } + {{\bm{\mathcal T}}_{{{\bm{\Theta}}_A^{\left( k\right) }}}}\!\!\left( {\bm{e}} \right) - {\bm{x}}_A^{\left( k\right) },
\end{equation}
where $k$ denotes the outer loop iteration number.

{\it{4) The update of ${\bm{\Theta}}_P$ while fixing the other variables:}} Because the network with the parameters ${\bm{\Theta}}_P$ is trained independently, we can update ${\bm{\Theta}}_P$ by solving
\begin{equation}\label{S4}
\begin{array}{*{20}{l}}
{\mathop {\min }\limits_{{\bm{\Theta}}_P}} & \left\| {{{\bm{z}}_P} - {{\bm{\Phi }}}{{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right)} \right\|^2 +{\beta _2}\left\| {{{\bm{x}}_P} - {{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right) - {\bm{t}}_2} \right\|^2,
\end{array}
\end{equation}
with the same method as in~\eqref{s1}.

{\it{5) The update of ${\bm{x}}_P$ while fixing the other variables:}} To minimize the difference between ${\bm{x}}_P$ and ${{\bm{\mathcal T}}_{{{\bm{\Theta}}_P}}}\!\!\left( {\bm{e}} \right) + {\bm{t}}_2$, we can update ${\bm{x}}_P$ as
\begin{equation}\label{S5}
{\bm{\hat x}}_P ={\bm {\mathcal D}}\left({{{\bm{\mathcal T}}_{{{\bm{\Theta}}_P}}}}\!\!\left( {\bm{e}} \right) + {\bm{t}}_2\right),
\end{equation}
where ${\bm {\mathcal D}}$ is the same kind of denoising operator as in~\eqref{S2}.

{\it{6) The update of ${\bm{t}}_2$ while fixing the other variables:}} According to the augmented Lagrangian method~\cite{afonso2010fast}, ${\bm{t}}_2$ can be updated as
\begin{equation}\label{S6}
{\bm{t}}_2^{\left( k+1\right) } = {\bm{t}}_2^{\left( k\right) } + {{\bm{\mathcal T}}_{{{\bm{\Theta}}_P^{\left( k\right) }}}}\!\!\left( {\bm{e}} \right) - {\bm{x}}_P^{\left( k\right) }.
\end{equation}
\begin{algorithm}[t]
{\small
\caption{The algorithm for inverse semantic-aware decoding}
\label{Algorithm3}
\hspace*{0.02in} {\bf Input:}
\begin{itemize}
\item The weight parameters: $\beta_1$ and $\beta_2$
\item The number of inner iterations of the denoising operator for updating ${\bm{x}}_A$ and ${\bm{x}}_P$: $N_J$
\item The steepest-descent parameters for updating ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$, respectively
\end{itemize}
\hspace*{0.02in} {\bf Output:} The original amplitude and phase spectrums, i.e., ${\bm{H}}_{A}{\left\{i\right\}}$ and ${\bm{H}}_{P}{\left\{i\right\}}$ $\left( i = 1,\ldots,T\right) $
\begin{algorithmic}[1]
\State {\textit{\#\# Reconstruction of ${\bm{x}}_A$ and ${\bm{x}}_P$}}
\State Initialize the iteration number $k=0$
\State Set ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$ randomly
\While{Not converged}
\State Update ${\bm{\Theta}}_A$ by solving~\eqref{s1} using the steepest descent and back-propagation methods
\State Update ${\bm{x}}_A$ according to~\eqref{S2}
\State Update ${\bm{t}}_1$ according to~\eqref{S3}
\State Update ${\bm{\Theta}}_P$ by solving~\eqref{S4} using the steepest descent and back-propagation methods
\State Update ${\bm{x}}_P$ according to~\eqref{S5}
\State Update ${\bm{t}}_2$ according to~\eqref{S6}
\State Let $k \leftarrow k+1$
\EndWhile
\State Record ${\bm{x}}_A$ and ${\bm{x}}_P$ after convergence
\vspace{0.02cm}
\State {\textit{\#\# Differential decoding}}
\State Recover ${\bm{H}}_{A'}{\left\{i\right\}}$ and ${\bm{H}}_{P'}{\left\{i\right\}}$ $\left( i = 1,\ldots,T\right) $ according to the definitions of ${\bm{x}}_A$ and ${\bm{x}}_P$, i.e.,~\eqref{vec} and \eqref{vecP}
\For{Every ${\bm{H}}_{A'}{\left\{i\right\}}$ and ${\bm{H}}_{P'}{\left\{i\right\}}$}
\State Create empty ${\bm{H}}_{A}\left\{i\right\}$ and ${\bm{H}}_{P}\left\{i\right\}$ to record the decoded results
\State Obtain ${\bm{H}}_{A}\left\{i\right\}$ and ${\bm{H}}_{P}\left\{i\right\}$ according to \eqref{req1}, \eqref{req2}, \eqref{req3}, and \eqref{req4}
\EndFor
\State \Return ${\bm{H}}_{A}{\left\{i\right\}}$ and ${\bm{H}}_{P}{\left\{i\right\}}$ $\left( i = 1,\ldots,T\right) $
\end{algorithmic}}
\end{algorithm}
{\bf{Algorithm~\ref{Algorithm3}}} summarizes the steps to perform the aforementioned decoding method and then recover the original amplitude and phase spectrums. Specifically, after decoding, we obtain the estimated ${\bm{H}}_{A'}\left\{i\right\}$ and ${\bm{H}}_{P'}\left\{i\right\}$. We let the first columns in ${\bm{H}}_{A}\left\{i\right\}$ and ${\bm{H}}_{P}\left\{i\right\}$ be the same as those of ${\bm{H}}_{A'}\left\{i\right\}$ and ${\bm{H}}_{P'}\left\{i\right\}$ as
\begin{equation}\label{req1}
{{\bm{H}}_{A}}\left\{ i \right\}\left[ {:,1} \right] = {{\bm{H}}_{A'}}\left\{ i \right\}\left[ {:,1} \right],
\end{equation}
and
\begin{equation}\label{req2}
{{\bm{H}}_{P}}\left\{ i \right\}\left[ {:,1} \right] = {{\bm{H}}_{P'}}\left\{ i \right\}\left[ {:,1} \right].
\end{equation}
For the second to the last columns ($j = 2,\ldots,L$), we invert the differential encoding \eqref{q1} and \eqref{q3} cumulatively as
\begin{equation}\label{req3}
{{\bm{H}}_{A}}\left\{ i \right\}\left[ {:,j} \right] = {{\bm{H}}_{A}}\left\{ i \right\}\left[ {:,j - 1} \right] + {{\bm{H}}_{A'}}\left\{ i \right\}\left[ {:,j} \right],
\end{equation}
and
\begin{equation}\label{req4}
{{\bm{H}}_{P}}\left\{ i \right\}\left[ {:,j} \right] = {{\bm{H}}_{P}}\left\{ i \right\}\left[ {:,j - 1} \right] + {{\bm{H}}_{P'}}\left\{ i \right\}\left[ {:,j} \right].
\end{equation} Note that because of the independent iterative training of the two networks and the use of the ADMM method, $\alpha_1$ and $\alpha_2$ have no effect on the objective function. The running time is mainly spent on updating ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$, since the inner denoising operators work efficiently. In Section~\ref{SS6}, we set the numbers of inner iterations of the denoising operators for updating ${\bm{x}}_A$ and ${\bm{x}}_P$ both to $600$, and the maximal number of outer loop iterations to $18$, i.e., $18$ ADMM iterations. The average running time for decoding one MetaSpectrum, which is obtained by encoding $20$ original amplitude spectrums, is about $1$ minute with the experimental setting in Section~\ref{SS6}. Although the self-supervised method is not suitable for decoding sensing data under strict real-time constraints, our method can be used for sensing tasks that rely on large amounts of historical data for analysis, e.g., healthcare monitoring, sleeping position detection, and historical intrusion or walking behavior analysis. With the decoded ${\mathbf{H}}_{A}{\left\{i\right\}}$ and ${\mathbf{H}}_{P}{\left\{i\right\}}$ $\left( i = 1,\ldots,T\right)$, the original signal at each moment can be recovered in the form of complex matrices. Then, the 2D AoA and ToF can be jointly estimated~\cite{hua1989shaped}, which can be used to complete a series of sensing tasks. To this end, the steering matrices of the $L$-shaped array in the $x$ and $y$ directions, which describe how the sensor array uses each individual element to select a spatial path for the transmission, can be expressed as \begin{align}\label{fml7} {{\bf{A}}_x} = \left[ {\begin{array}{*{20}{c}} 1 & \cdots & 1\\ {{e^{ - j2\pi {f_k}d\cos \left( {{\theta _1}} \right)\sin \left( {{\varphi _1}} \right)}}} & \cdots & {{e^{ - j2\pi {f_k}d\cos \left( {{\theta _I}} \right)\sin \left( {{\varphi _I}} \right)}}}\\ \vdots & \ddots & \vdots \\ {{e^{ - j2\pi {f_k}md\cos \left( {{\theta _1}} \right)\sin \left( {{\varphi _1}} \right)}}} & \cdots & {{e^{ - j2\pi {f_k}md\cos \left( {{\theta _I}} \right)\sin \left( {{\varphi _I}} \right)}}} \end{array}} \right], \end{align} and \begin{align}\label{fml8} {{\bf{A}}_y} = \left[ {\begin{array}{*{20}{c}} 1 & \cdots & 1\\ {{e^{ - j2\pi {f_k}d\sin \left( {{\theta _1}} \right)\sin \left( {{\varphi _1}} \right)}}} & \cdots & {{e^{ - j2\pi {f_k}d\sin \left( {{\theta _I}} \right)\sin \left( {{\varphi _I}} \right)}}}\\ \vdots & \ddots & \vdots \\ {{e^{ - j2\pi {f_k}nd\sin \left( {{\theta _1}} \right)\sin \left( {{\varphi _1}} \right)}}} & \cdots & {{e^{ - j2\pi {f_k}nd\sin \left( {{\theta _I}} \right)\sin \left( {{\varphi _I}} \right)}}} \end{array}} \right], \end{align} respectively.
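For concreteness, \eqref{fml7} can be assembled in a few lines of NumPy. The sketch below is illustrative only: the element index running over $0,\ldots,m$ and the absorption of the wavelength normalization into $d$ follow our reading of \eqref{fml7} and are assumptions of this sketch, not part of the system design.

\begin{verbatim}
import numpy as np

def steering_matrix_x(theta, phi, f_k, d, m):
    """One column per propagation path i = 1, ..., I; row p carries the
    phase exp(-j 2 pi f_k p d cos(theta_i) sin(phi_i)), mirroring the
    rows 1, e^{-j2pi f_k d ...}, ..., e^{-j2pi f_k m d ...} of (fml7)."""
    p = np.arange(m + 1).reshape(-1, 1)        # antenna element index
    spatial = np.cos(theta) * np.sin(phi)      # shape (I,), one entry per path
    return np.exp(-1j * 2 * np.pi * f_k * d * p * spatial.reshape(1, -1))

# The y-direction matrix (fml8) is obtained by replacing cos(theta) with
# sin(theta) in the spatial frequency term.
\end{verbatim}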
Inspired by \cite{kotaru2015spotfi}, we take multiple subcarriers into consideration and extend the 2D AoA estimation into three dimensions, to achieve joint 2D AoA and ToF estimation via the following Proposition~\ref{L2}: \begin{prop}\label{L2} The signal 2D AoA and ToF at time $t$ can be estimated using \begin{align}\label{fml9} {P_{3D}}\left( {\theta ,\varphi ,\tau , t} \right){\rm{ = }}\frac{1}{{{\bf{A}}_{0x'y'}^{\rm{H}}{{\bf{E}}_N}\left( t\right){\bf{E}}_N^{\rm{H}}\left( t\right){{\bf{A}}_{0x'y'}}}}, \end{align} where $P_{3D}$ describes the signal magnitude for a given set of $\left( {\theta ,\varphi ,\tau } \right)$, the superscript ${\rm{H}}$ denotes the conjugate transpose operator, ${{\bf{E}}_N}\left( t\right) $ is the noise subspace obtained by decomposing the auto-correlation matrix of the smoothed original signal at time $t$~\cite{kotaru2015spotfi}, and ${{\bf{A}}_{0x'y'}}$ is the steering matrix that is obtained using~\eqref{fml7} and \eqref{fml8} as \begin{align} &{{\bf{A}}_{0x'y'}} = {\left[ {{{\bf{A}}_0}\;{{\bf{A}}_{x'}}\;{{\bf{A}}_{y'}}} \right]^{\rm{T}}} \notag\\& = {\left[ {\underbrace {\begin{array}{*{20}{c}} 1 \\ \vdots \\ 1 \\ \vdots \\ 1 \\ \vdots \\ 1 \end{array}}_{{{\mathbf{A}}_0}}\underbrace {\begin{array}{*{20}{c}} {{e^{ - j2\pi {f_1}d\cos \left( \theta \right)\sin \left( \varphi \right)}}} \\ \vdots \\ {{e^{ - j2\pi {f_{{k'}}}d\cos \left( \theta \right)\sin \left( \varphi \right)}}} \\ \vdots \\ {{e^{ - j2\pi {f_1}\left( {{m'} - 1} \right)d\cos \left( \theta \right)\sin \left( \varphi \right)}}} \\ \vdots \\ {{e^{ - j2\pi {f_{{k'}}}\left( {{m'} - 1} \right)d\cos \left( \theta \right)\sin \left( \varphi \right)}}} \end{array}}_{{{\mathbf{A}}_{x'}}}\underbrace {\begin{array}{*{20}{c}} {{e^{ - j2\pi {f_1}d\sin \left( \theta \right)\sin \left( \varphi \right)}}} \\ \vdots \\ {{e^{ - j2\pi {f_{{k'}}}d\sin \left( \theta \right)\sin \left( \varphi \right)}}} \\ \vdots \\ {{e^{ - j2\pi {f_1}\left( {{n'} - 1} \right)d\sin \left( \theta \right)\sin \left( \varphi \right)}}} \\ \vdots \\ {{e^{ - j2\pi {f_{{k'}}}\left( {{n'} - 1} \right)d\sin \left( \theta \right)\sin \left( \varphi \right)}}} \end{array}}_{{{\mathbf{A}}_{y'}}}} \right]^{\text{T}}}, \end{align} $0 < {k'} < K$, $0 < {m'} < M$, and $0 < {n'} < N$. \end{prop} This completes the description of the inverse semantic-aware wireless sensing framework. Specifically, we use {\bf{Algorithm~\ref{Algorithm2}}} to sample the task-related signal spectrums. With the RIS, {\bf{Algorithm~\ref{Algorithm1}}} can encode the sensing data, thus greatly reducing the data volume to be stored or transmitted. We use the self-supervised decoding {\bf{Algorithm~\ref{Algorithm3}}} to recover the original sensing data. Finally, with the help of Proposition~\ref{L2}, various sensing tasks can be performed. For example, intrusion detection can be achieved by detecting the change of the estimated 2D AoA, and the human walking trajectory can be tracked by estimating the 2D AoA and the ToF of the signals.
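As a minimal illustration of how Proposition~\ref{L2} is used in practice, the pseudo-spectrum \eqref{fml9} can be evaluated on a grid of candidate $\left( {\theta ,\varphi ,\tau } \right)$ triples as sketched below. The noise subspace ${\bf{E}}_N$ and the steering-vector constructor are assumed to be given (built as in the proposition); the grid search itself is a standard MUSIC-style peak search and the grid resolution is an arbitrary choice of this sketch.

\begin{verbatim}
import numpy as np

def p3d_value(a, En):
    """P_3D = 1 / (a^H E_N E_N^H a), cf. (fml9), for one steering vector a."""
    denom = a.conj() @ En @ En.conj().T @ a
    return 1.0 / np.real(denom)

def p3d_scan(build_steering, En, thetas, phis, taus):
    """Exhaustive grid search; peaks of the returned array over
    (theta, phi, tau) give the estimated 2D AoA and ToF."""
    P = np.empty((len(thetas), len(phis), len(taus)))
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            for k, ta in enumerate(taus):
                P[i, j, k] = p3d_value(build_steering(th, ph, ta), En)
    return P
\end{verbatim}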
\section{Experimental Results}\label{SS6} Since the key contribution of this paper is to achieve the inverse semantic-aware encoding and decoding of the sensing data with the help of RIS, we aim to answer the following research questions via experiments: \begin{enumerate} \item[{\textbf{Q1)}}] Can the proposed self-supervised decoding scheme recover the original signal spectrums and ensure the accomplishment of sensing tasks? \item[{\textbf{Q2)}}] Can the amplitude response matrix of the RIS, i.e., the codebook, encrypt the sensing data? \item[{\textbf{Q3)}}] Compared with the existing uniform sampling method, can the proposed semantic hash sampling method help to achieve more accurate completions of the sensing tasks? \end{enumerate} We first present the experimental platform and the parameter settings of our proposed algorithms, and then answer the above questions through experimental evaluations. \subsection{Experimental Setting} \begin{figure}[t] \centering \includegraphics[width=0.41\textwidth]{experiment-eps-converted-to.pdf} \caption{Test scenario and hardware of the receiver.} \label{experiment} \end{figure} To collect sensing data from a real-world scenario, we use three access points (APs) based on the IEEE 802.11ax protocol to build a test platform~\cite{gringoli2022ax}. The collected sensing data is used to conduct a comprehensive evaluation of our proposed algorithms. The specific experimental scenario and hardware equipment are shown in Fig.~\ref{experiment}. Specifically, the test scenario is a conference room with tables and chairs. Inside the room, one AP acts as a transmitter to send OFDM wireless signals with a total bandwidth of $160$ MHz and $2048$ sub-carriers. The center frequency of the sub-carriers is $5.805$ GHz. As shown in Fig.~\ref{experiment} (Part II), the other two APs form a receiver with an $L$-shaped active sensor array via a power splitter to receive signals. Since the investigation of STAR-RIS hardware is still at a very early stage, we simulate the amplitude and phase response matrices of the transmissive elements using a signal processor~\cite{tang2022transmissive,mu2021simultaneously,xu2021star}. During the experiment, the data packet transmission rate, i.e., the transmission frequency, is $100$ Hz, which means that $100$ packets are transmitted per second. The human target walks along the preset trajectory to complete the data collection. The experimental platform for running our proposed algorithms is built on a generic Ubuntu 20.04 system with an AMD Ryzen Threadripper PRO 3975WX 32-Core CPU and an NVIDIA RTX A5000 GPU. In the self-supervised decoding {\bf{Algorithm~\ref{Algorithm3}}}, two U-nets without skip connections~\cite{ulyanov2018deep} are used as the self-supervised neural networks, i.e., ${{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_A}}}\!\!\left( {\bm{e}} \right)$ and ${{\bm{\mathcal T}}\!_{{{\bm{\Theta }}_P}}}\!\!\left( {\bm{e}} \right)$. The input to the networks, i.e., ${\bm e}$, is a random vector that has the same size as the ${\bm{x}}_A$ and ${\bm{x}}_P$ to be recovered. During the decoding of one MetaSpectrum, ${\bm e}$ is fixed in each ADMM iteration. In addition, to prevent the networks from remaining stuck in a local minimum reached in the previous iteration, ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$ are reset to zero when each ADMM iteration is finished. In other words, both ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$ are re-trained in each iteration.
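To make the decoding loop concrete, the following PyTorch-style sketch shows one outer ADMM iteration of {\bf{Algorithm~\ref{Algorithm3}}} for the amplitude branch (the phase branch is symmetric). It is an illustrative simplification under stated assumptions: \texttt{make\_net} is a small stand-in for the skip-free U-net, \texttt{denoise} is a placeholder for the denoising operator ${\bm {\mathcal D}}$, and all shapes and hyperparameters are placeholders rather than the values used in our experiments.

\begin{verbatim}
import torch

def make_net(dim):
    # Placeholder for the skip-free U-net T_Theta; rebuilt at the start of
    # every outer ADMM iteration ("re-trained in each iteration").
    return torch.nn.Sequential(torch.nn.Linear(dim, dim), torch.nn.ReLU(),
                               torch.nn.Linear(dim, dim))

def denoise(v):
    return v  # stand-in for the denoising operator D in (S2)/(S5)

def outer_iteration(e, z, Phi, x, t, beta, inner_iters=600):
    net = make_net(e.numel())                # fresh network parameters
    opt = torch.optim.SGD(net.parameters(), lr=1e-3)
    for _ in range(inner_iters):             # steepest-descent inner loop
        opt.zero_grad()
        Te = net(e)
        loss = ((z - Phi @ Te) ** 2).sum() + beta * ((x - Te - t) ** 2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        Te = net(e)
        x = denoise(Te + t)                   # primal update, cf. (S5)
        t = t + Te - x                        # multiplier update, cf. (S6)
    return x, t
\end{verbatim}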
\subsection{Experimental Performance Analysis} \subsubsection{Effectiveness and Efficiency of the proposed inverse semantic decoding method (Q1)} We first set the data compression ratio to $10\%$. As shown in Fig.~\ref{res} (Part I), starting from $3$ seconds, we select one pair of amplitude and phase spectrums in each $0.1$ second time segment by using {\bf{Algorithm~\ref{Algorithm2}}}, for RIS-aided encoding. Using the encoding {\bf{Algorithm~\ref{Algorithm1}}} presented in Section~\ref{CM}, we can obtain one amplitude MetaSpectrum and one phase MetaSpectrum, as shown in Fig.~\ref{res} (Part III), for every $10$ pairs of signal spectrums. The decoded results after $15$ iterations of the outer loop are shown in Fig.~\ref{res} (Part II). For both amplitude and phase spectrums, we observe that the difference between the decoded and the original spectrums is basically negligible. We present a detailed comparison of the decoded and the original amplitude spectrums in Fig.~\ref{res} (Part IV). This demonstrates the effectiveness of our encoding and decoding methods. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{gradchange-eps-converted-to.pdf} \caption{The variation, with the number of outer loop decoding iterations, of the decoded amplitude and phase spectrums at time $4.5$ s in the experiment, the corresponding semantic hash matrix, and the 2D AoA spectrum estimated by Proposition~\ref{L2}, where the data compression ratio is $5\%$.} \label{gradchange} \end{figure*} In addition to the visual comparison, we show in Fig.~\ref{gradchange} how the proposed semantic hash matrix changes with the number of outer loop decoding iterations. Here we set the data compression ratio to $5\%$. We observe that, as the number of outer loop iterations increases, both the decoded amplitude and phase spectrums at time $4.5$ seconds gradually approach the ground truth spectrums. Moreover, the Hamming distance between the semantic hash matrices of the decoded pair of amplitude and phase spectrums and that of the original signal spectrums gradually decreases. Specifically, we can see that $12$ iterations reduce the Hamming distance to only $2$, which takes about $40$ seconds on average. Furthermore, the estimated 2D AoA values using the decoded spectrum after $12$ iterations are very close to the true values, which basically has no effect on the practical sensing tasks. This demonstrates the efficiency of our encoding and decoding methods. \subsubsection{Effectiveness of using the amplitude response matrix of the RIS as the codebook (Q2)} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{iteration-eps-converted-to.pdf} \caption{The PSNR values versus the number of outer loop decoding iterations, with or without the codebook.} \label{iter} \end{figure} Figure~\ref{iter} depicts the average peak signal-to-noise ratio (PSNR) values over $10$ experiments versus the number of outer loop decoding iterations, with or without the codebook ${\bf \Phi}_{A}^{\left(i\right)}$. If the codebook is available, we observe that the PSNR values of both the amplitude and phase spectrums increase as the number of iterations increases, and gradually reach a plateau after about $10$ iterations. However, if no codebook is available or the codebook is wrong, the PSNR values decrease as the number of iterations increases. The reason is that the parameters of the two decoding networks, i.e., ${{\bm{\Theta }}_A}$ and ${{\bm{\Theta }}_P}$, are learned according to a wrong objective function.
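For reference, the PSNR reported in Fig.~\ref{iter} follows the standard definition; a minimal sketch is given below, where taking the peak value as the maximum magnitude of the reference spectrum is our assumption.

\begin{verbatim}
import numpy as np

def psnr_db(reference, decoded):
    # Peak signal-to-noise ratio (dB) between original and decoded spectrums.
    mse = np.mean((reference - decoded) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)
\end{verbatim}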
\subsubsection{Effectiveness of the proposed semantic hash sampling method (Q3)} \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{trace-eps-converted-to.pdf} \caption{Comparison between different sampling methods and the ground truth in terms of 2D AoA changes with the movement of the human target.} \label{trace} \end{figure} Based on the sensing data extracted via two different sampling methods, i.e., the red line for the uniform sampling method and the blue line for the semantic hash sampling method, Fig.~\ref{trace} displays the estimated elevation and azimuth AoA changes over time. Note that the estimation results under the two sampling schemes are obtained using the decoded amplitude and phase spectrums with a data compression ratio of $5\%$. First, we observe that both the elevation and azimuth AoA at every moment can be accurately estimated using the decoded data. This further validates the effectiveness of our proposed encoding and decoding algorithms (for Q1). Furthermore, by comparing the blue and red lines, it can be seen that the proposed semantic hash sampling method is more efficient and effective than uniform sampling in describing the details of AoA changes, as shown in the enlarged part of Fig.~\ref{trace}. Because these changes are typically more informative, this shows the effectiveness of our proposed semantic hash sampling method. To compare the two schemes numerically, we consider the MSE between the ground truth and the 2D AoA estimation results after interpolation. The estimation error of the semantic hash sampling scheme is $0.89$, which is $67\%$ lower than that of the uniform sampling scheme, whose estimation error is $2.7$. \begin{figure}[t] \centering \includegraphics[width=0.47\textwidth]{SP-eps-converted-to.pdf} \caption{Comparison of azimuth AoA estimation results that are obtained by using the original and decoded signals, respectively.} \label{SPFIG} \end{figure} In addition to the walking human, stationary objects such as tables and chairs in the conference room also reflect wireless signals. Thus, the information that can be extracted from the signal spectrums at any given moment is rich. Taking the azimuth AoA as an example, Fig.~\ref{SPFIG} shows the comparison of the azimuth AoA estimation results that are obtained by using the original and decoded signals, respectively. The data compression ratio is $5\%$. One can see from Fig.~\ref{SPFIG} that our encoding and decoding methods preserve semantic information related to the sensing tasks, which can be illustrated from two aspects. First, the relative magnitude characteristics among the different azimuth AoAs estimated from the decoded signals are consistent with the ground truth, i.e., the azimuth AoAs estimated from the original signals. For example, the ground truth shows that the azimuth AoAs of the stronger signals lie in $10^\circ - 40^\circ$ and $110^\circ - 150^\circ$, as indicated by the red and blue boxes, respectively. In addition, the signals with AoA in $40^\circ - 110^\circ$ are weaker. The above features are almost completely preserved in the estimation results obtained using the decoded data. Second, we observe that the AoA estimation results of the several strongest signals are almost unchanged before and after the inverse semantic-aware encoding and decoding, e.g., the signals marked by the red and blue boxes in Fig.~\ref{SPFIG}, respectively. This indicates that our proposed algorithms can effectively preserve the phase characteristics (for Q1).
\section{Conclusion and Future Directions}\label{SF} We have designed an inverse semantic-aware wireless sensing framework. The amplitude response matrix of the RIS can be effectively used to generate the codebook as prior knowledge for decoding. We have shown that our proposed RIS-aided encoding method can achieve effective data compression. When selecting the signal spectrums to be encoded, our proposed semantic hash sampling method is significantly better than the widely used uniform sampling method. Moreover, the self-supervised decoding method can recover the signal amplitude and phase spectrums to support various wireless sensing tasks without degrading their performance. Since the decoding method does not require any pre-training, it can greatly save network resources. As the demand for sensing data increases, our proposed framework can contribute to building a resource-friendly next-generation Internet. \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{plot-eps-converted-to.pdf} \caption{Early-stop decompression results for the real video frames in~\cite{perrin2020eyetrackuav2}, using the method proposed in [25] and our method, respectively. Six images are compressed into one image.} \label{realplot} \end{figure*} There are two potential future research directions. \begin{itemize} \item {\textit{Inverse Semantic-aware Transmission of Images}}. We can consider the inverse semantic-aware encoding and decoding of images or audio. In a surveillance application, a camera films a bay to detect boats. The surveillance videos take up a large amount of storage resources. The idea is to compress several video frames, e.g., six frames as shown in Fig.~\ref{realplot}, into one frame. The original frames can be recovered by using the proposed self-supervised decoding algorithm. \item {\textit{Cantor or Szudzik Pairing Compression}}. In this paper, we encoded the amplitude and the phase spectrums separately. A possible improvement is to use pairing functions, e.g., the Cantor~\cite{lisi2007some} or Szudzik~\cite{szudzik2006elegant} pairing functions, to combine the two spectrums into one (see the sketch after this list). As shown in Fig.~\ref{res}, the pairing compression can be used as an operation after obtaining the amplitude and phase spectrums to further compress the sensing data. \end{itemize} % \bibliographystyle{IEEEtran}
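As a pointer for the second direction, a minimal sketch of the Szudzik pairing function and its inverse is given below. Applying it to spectrums presupposes quantizing the amplitude and phase values to non-negative integers, which is an assumption of this sketch.

\begin{verbatim}
import math

def szudzik_pair(a, b):
    # Bijectively maps two non-negative integers to a single one.
    return a * a + a + b if a >= b else b * b + a

def szudzik_unpair(z):
    # Inverse of szudzik_pair.
    s = math.isqrt(z)
    r = z - s * s
    return (r, s) if r < s else (s, r - s)

# Round-trip check on a small grid of values.
assert all(szudzik_unpair(szudzik_pair(a, b)) == (a, b)
           for a in range(50) for b in range(50))
\end{verbatim}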
\section{Introduction} In this paper, we develop Schauder and bootstrapping theory for solutions to fourth order nonlinear elliptic equations of the following double divergence form \begin{equation} \int_{\Omega}a^{ij,kl}(D^{2}u)u_{ij}\eta_{kl}dx=0,\text{ }\forall\eta\in C_{0}^{\infty}(\Omega) \label{eq1} \end{equation} in $B_{1}=B_{1}(0).$ For the Schauder theory, we require the standard Legendre-Hadamard ellipticity condition \begin{equation} a^{ij,kl}(D^{2}u(x))\xi_{ij}\xi_{kl}\geq\Lambda|\xi_{rs}|^{2}, \label{LH} \end{equation} while in order to bootstrap, we will require the following condition: \begin{equation} b^{ij,kl}(D^{2}u(x))=a^{ij,kl}(D^{2}u(x))+\frac{\partial a^{pq,kl}}{\partial u_{ij}}(D^{2}u(x))u_{pq}(x) \label{Bdef} \end{equation} satisfies \begin{equation} b^{ij,kl}(D^{2}u(x))\xi_{ij}\xi_{kl}\geq\Lambda_{1}\left\Vert \xi\right\Vert ^{2}. \label{Bcondition} \end{equation} Our main result is the following: Suppose that conditions (\ref{LH}) and (\ref{Bcondition}) are met on some open set~$U\subseteq S^{n\times n}$ (the space of symmetric matrices). If $u$ is a $C^{2,\alpha}$ solution with $D^{2}u(B_{1})\subset U$, then $u$ is smooth on the interior of the domain $B_{1}.$ One example of such an equation is the Hamiltonian Stationary Lagrangian equation, which governs Lagrangian surfaces that minimize the area functional \begin{equation} \int_{\Omega}\sqrt{\det(I+\left( D^{2}u\right) ^{T}D^{2}u)}dx \label{HS} \end{equation} among potential functions $u$ (cf. \cite{MR1202805}, \cite[Proposition 2.2]{SW03}). The minimizer satisfies a fourth order equation that, for smooth solutions, can be factored into a Laplace type operator applied to a nonlinear quantity. Recently in \cite{CW}, it is shown that a $C^{2}$ solution is smooth. The results in \cite{CW} are the combination of an initial regularity boost, followed by applications of the second order Schauder theory as in \cite{CC}. More generally, for a function $F$ on the space of matrices, one may consider a functional of the form \[ \int_{M}F(D^{2}u)dx. \] The Euler-Lagrange equation will generically be of the following double-divergence type: \begin{equation} \frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left(\frac{\partial F}{\partial u_{ij}}(D^{2}u)\right)=0. \label{generic} \end{equation} Equation (\ref{generic}) need not factor into second order operators, so it may be genuinely a fourth order double-divergence elliptic type equation. It should be noted that in general, (\ref{generic}) need not take the form of (\ref{eq1}). It does when $F(D^{2}u)$ can be written as a function of $\left(D^{2}u\right)^{T}D^{2}u$ (as for example (\ref{HS})). Our results in this paper apply to a class of Euler-Lagrange equations arising from such functionals. In particular, we will show that if $F$ is a convex function of $D^{2}u$ and a function of $\left(D^{2}u\right)^{T}D^{2}u$ (such as (\ref{HS}) when $\left\vert D^{2}u\right\vert \leq1$), then $C^{2,\alpha}$ solutions will be smooth. The Schauder theory for second order divergence and non-divergence type elliptic equations is by now well-developed, see \cite{HL}, \cite{GT} and \cite{CC}. For higher order non-divergence equations, Schauder theory is available, see \cite{Simon}. However, for higher order equations in divergence form, much less is known.
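Before comparing with the second order theory, it may help to record the simplest model case as a consistency check of conditions (\ref{LH}) and (\ref{Bcondition}); this example is included for orientation only. If $a^{ij,kl}=\delta^{ik}\delta^{jl}$ is constant, then (\ref{eq1}) is the weak form of the biharmonic equation $\Delta^{2}u=0$. Condition (\ref{LH}) holds with $\Lambda=1$, since
\[
a^{ij,kl}\xi_{ij}\xi_{kl}=\xi_{kl}\xi_{kl}=\left\vert \xi\right\vert ^{2},
\]
and, the coefficients being independent of $D^{2}u$, the second term in (\ref{Bdef}) vanishes, so $b^{ij,kl}=a^{ij,kl}$ and (\ref{Bcondition}) holds with $\Lambda_{1}=1$.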
One expects the results to be different: For second order equations, solutions to divergence type equations with $C^{\alpha}$ coefficients are known to be $C^{1,\alpha}$, \cite[Theorem 3.13]{HL}, whereas for non-divergence equations, solutions will be $C^{2,\alpha}$ \cite[Chapter 6]{GT}. Recently, Dong and Zhang \cite{DZ} have obtained general Schauder theory results for parabolic equations (of order $2m$) in divergence form, where the time coefficients are allowed to be merely measurable. Their proof (like ours) is in the spirit of Campanato techniques, but requires smooth initial conditions. Our result is aimed at showing that weak solutions are in fact smooth. Classical Schauder theory for general systems has been developed, \cite[Chapters 5, 6]{Morrey66}. However, it is non-trivial to apply the general classical results to obtain the result we are after. Even so, it is useful to focus on a specific class of fourth order double-divergence operators, and offer ready access to the non-linear Schauder theory for these cases. Regularity for fourth order equations remains an important developing area of geometric analysis. Our proof goes as follows: We start with a~$C^{2,\alpha}$ solution of (\ref{eq1}) whose coefficient matrix is a smooth function of the Hessian of $u.$ We first prove that~$u\in W^{3,2}$ by taking a difference quotient of (\ref{eq1}) and give a $W^{3,2}$ estimate of $u$ in terms of its~$C^{2,\alpha}$ norm. Again by taking a difference quotient and using the fact that now~$u\in W^{3,2},$ we prove that~$u\in C^{3,\alpha}$. Next, we make a more general proposition where we prove a $W^{3,2}$ estimate for $u\in W^{2,\infty}$ satisfying a uniformly elliptic equation of the form \[ \int(c^{ij,kl}u_{ij}+h^{kl})\eta_{kl}dx=0 \] in $B_{1}(0),$ where $c^{ij,kl},h^{kl}\in W^{1,2}(B_{1})$ and~$\eta$ is a test function in $B_{1}$. Using the fact that $u\in W^{3,2},$ we prove that $u\in C^{3,\alpha}$ and also derive a $C^{3,\alpha}$ estimate of $u$ in terms of its $W^{3,2}$ norm. Finally, using difference quotients and dominated convergence, we achieve all higher orders of regularity. \begin{definition} We say an equation of the form (\ref{eq1}) is \textbf{regular on }$U\subseteq S^{n\times n}$ when the coefficients of the equation satisfy the following conditions on $U$: 1. The coefficients $a^{ij,kl}$ depend smoothly on $D^{2}u$. 2. The coefficients $a^{ij,kl}$ satisfy (\ref{LH}). 3. Either $b^{ij,kl}$ or $-b^{ij,kl}$ (given by (\ref{Bdef})) satisfies (\ref{Bcondition}). \end{definition} The following is our main result. \begin{theorem} Suppose that $u\in C^{2,\alpha}(B_{1})$ satisfies the following fourth order equation \begin{align*} \int_{B_{1}(0)}a^{ij,kl}(D^{2}u(x))u_{ij}(x)\eta_{kl}(x)dx & =0\\ \forall\eta & \in C_{0}^{\infty}(B_{1}(0)) \end{align*} If $a^{ij,kl}$ is regular on an open set containing $D^{2}u(B_{1}(0)),$ then $u$ is smooth on $B_{r}(0)$ for $r<1$. \end{theorem} To prove this, we will need the following two Schauder type estimates. \begin{proposition} \label{prop3}~Suppose $u\in W^{2,\infty}(B_{1})$ satisfies the following \begin{align} \int_{B_{1}(0)}\left[ c^{ij,kl}(x)u_{ij}(x)+f^{kl}(x)\right] \eta_{kl}(x)dx & =0\label{Cequation}\\ \forall\eta & \in C_{0}^{\infty}(B_{1}(0))\nonumber \end{align} where $c^{ij,kl},f^{kl}\in W^{1,2}(B_{1}),$ and~$c^{ij,kl}$ satisfies (\ref{LH}).
Then $u\in W^{3,2}(B_{1/2})$ and \[ \left\Vert D^{3}u\right\Vert _{L^{2}(B_{1/2})}\leq C(||u||_{W^{2,\infty}(B_{1})},\left\Vert f^{kl}\right\Vert _{W^{1,2}(B_{1})},\left\Vert c^{ij,kl}\right\Vert _{W^{1,2}(B_{1})},\Lambda). \] \end{proposition} \begin{proposition} \label{prop4}Suppose $u\in C^{2,\alpha}(B_{1})$ satisfies (\ref{Cequation}) in $B_{1}$ where $c^{ij,kl},f^{kl}\in C^{1,\alpha}(B_{1})$ and~$c^{ij,kl}$ satisfies (\ref{LH}). Then we have $u\in C^{3,\alpha}(B_{1/2})$ with \[ ||D^{3}u||_{C^{0,\alpha}(B_{1/4})}\leq C(1+||D^{3}u||_{L^{2}(B_{3/4})}) \] where~$C=C(|c^{ij,kl}|_{C^{\alpha}(B_{1})},\Lambda,\alpha)$ is a positive constant. \end{proposition} We note that the above estimates are appropriately scaling invariant; thus we can use them to obtain interior estimates for a solution in the interior of any sized domain. \section{Preliminaries} We begin by considering a constant coefficient double divergence equation. \begin{theorem} \label{five} Suppose $w\in H^{2}(B_{r})$ satisfies the constant coefficient equation \begin{align} \int c_{0}^{ik,jl}w_{ik}\eta_{jl}dx & =0\label{ccoef}\\ \forall\eta & \in C_{0}^{\infty}(B_{r}(0)).\nonumber \end{align} Then for any~$0<\rho\leq r$ there holds \begin{align*} \int_{B_{\rho}}|D^{2}w|^{2} & \leq C_{1}(\rho/r)^{n}||D^{2}w||_{L^{2}(B_{r})}^{2}\\ \int_{B_{\rho}}|D^{2}w-(D^{2}w)_{\rho}|^{2} & \leq C_{2}(\rho/r)^{n+2}\int_{B_{r}}|D^{2}w-(D^{2}w)_{r}|^{2}. \end{align*} Here $(D^{2}w)_{\rho}$ is the average value of $D^{2}w$ on the ball of radius $\rho$. \end{theorem} \begin{proof} By dilation we may consider $r=1$. We restrict our consideration to the range $\rho\in(0,a]$, noting that the statement is trivial for $\rho\in\lbrack a,1]$, where $a$ is some constant in $(0,1/2).$ First, we note that $w$ is smooth \cite[Theorem 33.10]{Driver03}. Recall \cite[Lemma 2, Section 4, applied to the elliptic case]{DongKimARMA}: For an elliptic $4$th order operator $L_{0}$, \begin{align*} L_{0}u & =0\text{ on }B_{R}\\ & \implies\left\Vert Du\right\Vert _{L^{\infty}(B_{R/4})}\leq C_{3}(\Lambda,n)\left\Vert u\right\Vert _{L^{2}(B_{R})}. \end{align*} We may apply this to the second derivatives of $w$ to conclude that \begin{equation} \left\Vert D^{3}w\right\Vert _{L^{\infty}(B_{a})}^{2}\leq C_{4}(\Lambda,n)\int_{B_{1}}\left\Vert D^{2}w\right\Vert ^{2} \label{dong} \end{equation} for small enough $a<1.$ Now \begin{align*} \int_{B_{\rho}}\left\vert D^{2}w\right\vert ^{2} & \leq C_{5}(n)\rho^{n}\left\Vert D^{2}w\right\Vert _{L^{\infty}(B_{a})}^{2}\\ & =C_{5}\rho^{n}\inf_{x\in B_{a}}\sup_{y\in B_{a}}\left\vert D^{2}w(x)+D^{2}w(y)-D^{2}w(x)\right\vert ^{2}\\ & \leq C_{5}\rho^{n}\inf_{x\in B_{a}}\left[ \left\vert D^{2}w(x)\right\vert +2a\left\Vert D^{3}w\right\Vert _{L^{\infty}(B_{a})}\right] ^{2}\\ & \leq2C_{5}\rho^{n}\left[ \inf_{x\in B_{a}}\left\Vert D^{2}w(x)\right\Vert ^{2}+4a^{2}\left\Vert D^{3}w\right\Vert _{L^{\infty}(B_{a})}^{2}\right] \\ & \leq2C_{5}\rho^{n}\left[ \frac{1}{|B_{a}|}||D^{2}w||_{L^{2}(B_{a})}^{2}+4a^{2}C_{4}||D^{2}w||_{L^{2}(B_{1})}^{2}\right] \\ & \leq C_{6}(a,n)\rho^{n}||D^{2}w||_{L^{2}(B_{1})}^{2}. \end{align*} Similarly \begin{align} \int_{B_{\rho}}\left\vert D^{2}w-(D^{2}w)_{\rho}\right\vert ^{2} & \leq \int_{B_{\rho}}\left\vert D^{2}w-D^{2}w(x_{0})\right\vert ^{2}\nonumber\\ & \leq\int_{S^{n-1}}\int_{0}^{\rho}r^{2}\left\Vert D^{3}w\right\Vert _{L^{\infty}(B_{a})}^{2}r^{n-1}drd\phi\nonumber\\ & =C_{7}\rho^{n+2}\left\Vert D^{3}w\right\Vert _{L^{\infty}(B_{a})}^{2}.
\label{fred} \end{align} Next, observe that (\ref{ccoef}) is purely fourth order, so the equation still holds when a second order polynomial is added to the solution. In particular, we may choose \[ D^{2}\bar{w}=D^{2}w-\left( D^{2}w\right) _{1} \] for $\bar{w}$ also satisfying the equation. Then \[ D^{3}\bar{w}=D^{3}w, \] so \begin{align} \left\Vert D^{3}w\right\Vert _{L^{\infty}(B_{a})}^{2} & =\left\Vert D^{3}\bar{w}\right\Vert _{L^{\infty}(B_{a})}^{2}\label{ed}\\ & \leq C_{4}\int_{B_{1}}\left\Vert D^{2}\bar{w}\right\Vert ^{2}=C_{4}\int_{B_{1}}\left\Vert D^{2}w-\left( D^{2}w\right) _{1}\right\Vert ^{2}.\nonumber \end{align} We conclude from (\ref{ed}) and (\ref{fred}) \[ \int_{B_{\rho}}\left\vert D^{2}w-(D^{2}w)_{\rho}\right\vert ^{2}\leq C_{7}\rho^{n+2}C_{4}\int_{B_{1}}\left\Vert D^{2}w-\left( D^{2}w\right) _{1}\right\Vert ^{2}. \] \end{proof} Next, we have a corollary to the above theorem. \begin{corollary} \label{Cor2} Suppose $w$ is as in Theorem \ref{five} and set $v=u-w$. Then for any~$u\in H^{2}(B_{r})$ and for any~$0<\rho\leq r,$ there hold \begin{equation} \int_{B_{\rho}}\left\vert D^{2}u\right\vert ^{2}\leq4C_{1}(\rho/r)^{n}\left\Vert D^{2}u\right\Vert _{L^{2}(B_{r})}^{2}+\left( 2+8C_{1}\right) \left\Vert D^{2}v\right\Vert _{L^{2}(B_{r})}^{2} \label{twothree} \end{equation} and \begin{align} \int_{B_{\rho}}\left\vert D^{2}u-(D^{2}u)_{\rho}\right\vert ^{2} & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}u-(D^{2}u)_{r}\right\vert ^{2}\label{twofive}\\ & +\left( 8+16C_{2}\right) \int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}\nonumber \end{align} \end{corollary} \begin{proof} Recalling that $v=u-w,$ (\ref{twothree}) follows from direct computation: \begin{align*} \int_{B_{\rho}}|D^{2}u|^{2} & \leq2\int_{B_{\rho}}|D^{2}w|^{2}+2\int_{B_{\rho}}|D^{2}v|^{2}\\ & \leq2C_{1}(\rho/r)^{n}||D^{2}w||_{L^{2}(B_{r})}^{2}+2\int_{B_{r}}|D^{2}v|^{2}\\ & \leq4C_{1}(\rho/r)^{n}\left[ ||D^{2}v||_{L^{2}(B_{r})}^{2}+||D^{2}u||_{L^{2}(B_{r})}^{2}\right] +2\int_{B_{r}}|D^{2}v|^{2}\\ & =4C_{1}(\rho/r)^{n}\left\Vert D^{2}u\right\Vert _{L^{2}(B_{r})}^{2}+2[1+4C_{1}(\rho/r)^{n}]\left\Vert D^{2}v\right\Vert _{L^{2}(B_{r})}^{2}. \end{align*} Similarly \begin{align*} \int_{B_{\rho}}\left\vert D^{2}u-(D^{2}u)_{\rho}\right\vert ^{2} & \leq 2\int_{B_{\rho}}\left\vert D^{2}w-(D^{2}w)_{\rho}\right\vert ^{2}+2\int_{B_{\rho}}\left\vert D^{2}v-(D^{2}v)_{\rho}\right\vert ^{2}\\ & \leq2\int_{B_{\rho}}\left\vert D^{2}w-(D^{2}w)_{\rho}\right\vert ^{2}+8\int_{B_{\rho}}\left\vert D^{2}v\right\vert ^{2}\\ & \leq2C_{2}(\rho/r)^{n+2}\int_{B_{r}}|D^{2}w-(D^{2}w)_{r}|^{2}+8\int_{B_{\rho}}\left\vert D^{2}v\right\vert ^{2}\\ & \leq2C_{2}(\rho/r)^{n+2}\left\{ \begin{array}[c]{c} 2\int_{B_{r}}\left\vert D^{2}u-(D^{2}u)_{r}\right\vert ^{2}\\ +2\int_{B_{r}}\left\vert D^{2}v-(D^{2}v)_{r}\right\vert ^{2} \end{array} \right\} +8\int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}\\ & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}u-(D^{2}u)_{r}\right\vert ^{2}\\ & +\left( 8+16C_{2}(\rho/r)^{n+2}\right) \int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}. \end{align*} The statement follows, noting that $\rho/r\leq1.$ \end{proof} We will be using the following Lemma frequently, so we state it here for the reader's convenience. \begin{lemma} \cite[Lemma 3.4]{HL}.
Let $\phi$ be a nonnegative and nondecreasing function on $[0,R].$ Suppose that \[ \phi(\rho)\leq A\left[ \left( \frac{\rho}{r}\right) ^{\alpha}+\varepsilon\right] \phi(r)+Br^{\beta} \] for any $0<\rho\leq r\leq R,$ with $A,B,\alpha,\beta$ nonnegative constants and $\beta<\alpha.$ Then for any $\gamma\in(\beta,\alpha),$ there exists a constant $\varepsilon_{0}=\varepsilon_{0}(A,\alpha,\beta,\gamma)$ such that if $\varepsilon<\varepsilon_{0}$ we have for all $0<\rho\leq r\leq R$ \[ \phi(\rho)\leq c\left[ \left( \frac{\rho}{r}\right) ^{\gamma}\phi(r)+Br^{\beta}\right] \] where $c$ is a positive constant depending on $A,\alpha,\beta,\gamma.$ In particular, we have for any $0<r\leq R$ \[ \phi(r)\leq c\left[ \frac{\phi(R)}{R^{\gamma}}r^{\gamma}+Br^{\beta}\right] . \] \end{lemma} \section{Proofs of the propositions} We begin by proving Proposition \ref{prop3}. \begin{proof} By approximation, (\ref{Cequation}) holds for $\eta\in W_{0}^{2,2}.$ We are assuming that $u\in W^{2,\infty}$, so (\ref{Cequation}) must hold for the test function \[ \eta=-[\tau^{4}u^{h_{p}}]^{-h_{p}} \] where $\tau\in C_{c}^{\infty}$ is a cutoff function in $B_{1}$ that is $1$ on~$B_{1/2}$, and the superscript $h_{p}$ refers to taking a difference quotient in the $e_{p}$ direction. We choose $h$ small enough after having fixed~$\tau$, so that~$\eta$ is well defined. We have \[ \int_{B_{1}}(c^{ij,kl}u_{ij}+f^{kl})[\tau^{4}u^{h_{p}}]_{kl}^{-h_{p}}dx=0. \] For $h$ small we can integrate by parts with respect to the difference quotient to get \[ \int_{B_{1}}(c^{ij,kl}u_{ij}+f^{kl})^{h_{p}}[\tau^{4}u^{h_{p}}]_{kl}dx=0. \] Using the product rule for difference quotients we get \[ \int_{B_{1}}[(c^{ij,kl}(x))^{h_{p}}u_{ij}(x)+c^{ij,kl}(x+he_{p})u_{ij}^{h_{p}}+(f^{kl})^{h_{p}}][\tau^{4}u^{h_{p}}]_{kl}dx=0. \] Letting $v=u^{h_{p}},$ differentiating the second factor gives \begin{multline*} \int_{B_{1}}\left[ (c^{ij,kl}(x))^{h_{p}}u_{ij}(x)+c^{ij,kl}(x+he_{p})v_{ij}(x)+(f^{kl})^{h_{p}}(x)\right] \\ \times\left[ \begin{array}[c]{c} \tau^{4}v_{kl}+4\tau^{3}\tau_{k}v_{l}+4\tau^{3}\tau_{l}v_{k}\\ +4v(\tau^{3}\tau_{kl}+3\tau^{2}\tau_{k}\tau_{l}) \end{array} \right] (x)dx=0, \end{multline*} from which \begin{align} \int_{B_{1}}\tau^{4}c^{ij,kl}(x+he_{p})v_{ij}v_{kl}dx & =\nonumber\\ & -\int_{B_{1}}\left[ (c^{ij,kl}(x))^{h_{p}}u_{ij}(x)+c^{ij,kl}(x+he_{p})v_{ij}(x)+(f^{kl})^{h_{p}}(x)\right] \nonumber\\ & \times\left[ \begin{array}[c]{c} 4\tau^{3}\tau_{k}v_{l}+4\tau^{3}\tau_{l}v_{k}\\ +4v(\tau^{3}\tau_{kl}+3\tau^{2}\tau_{k}\tau_{l}) \end{array} \right] dx\label{star}\\ & -\int_{B_{1}}\left[ (c^{ij,kl}(x))^{h_{p}}u_{ij}(x)+(f^{kl})^{h_{p}}(x)\right] \tau^{4}v_{kl}dx\nonumber \end{align} First we bound the terms on the right side of (\ref{star}).
Starting at the top: \begin{align} & \int_{B_{1}}\left[ (c^{ij,kl}(x))^{h_{p}}u_{ij}(x)+(f^{kl})^{h_{p}}(x)\right] \times\left[ \begin{array}[c]{c} 4\tau^{3}\tau_{k}v_{l}+4\tau^{3}\tau_{l}v_{k}\\ +4v(\tau^{3}\tau_{kl}+3\tau^{2}\tau_{k}\tau_{l}) \end{array} \right] dx\nonumber\\ & \leq\left[ \left\Vert u\right\Vert _{W^{2,\infty}(B_{1})}^{2}+1\right] \int_{B_{1}}\left( \left\vert (c^{ij,kl}(x))^{h_{p}}\right\vert ^{2}+\left\vert (f^{kl})^{h_{p}}(x)\right\vert ^{2}\right) dx\label{star0}\\ & +C_{8}(\tau,D\tau,D^{2}\tau)\int_{B_{1}}\left( |Dv|^{2}+|v|^{2}\right) dx.\nonumber \end{align} Next, by Young's inequality we have: \begin{align} & \int_{B_{1}}c^{ij,kl}(x+he_{p})v_{ij}(x)\times\nonumber\\ & \lbrack4\tau^{3}\tau_{k}v_{l}+4\tau^{3}\tau_{l}v_{k}+4v(\tau^{3}\tau_{kl}+3\tau^{2}\tau_{k}\tau_{l})]dx\nonumber\\ & \leq\frac{C_{9}(\tau,D\tau,D^{2}\tau,c^{ij,kl})}{\varepsilon}\int_{B_{1}}\left( |Dv|^{2}+v^{2}\right) dx+\varepsilon\int_{B_{1}}\tau^{4}\left\vert D^{2}v\right\vert ^{2}dx \label{tryagain} \end{align} and also \begin{align} & \int_{B_{1}}\left[ (c^{ij,kl}(x))^{h_{p}}u_{ij}(x)+(f^{kl})^{h_{p}}(x)\right] \tau^{4}v_{kl}dx\nonumber\\ & \leq\varepsilon\int_{B_{1}}\tau^{4}\left\Vert D^{2}v\right\Vert ^{2}dx\nonumber\\ & +\frac{C_{10}}{\varepsilon}(||u||_{W^{2,\infty}(B_{1})}^{2},|\tau|_{L^{\infty}(B_{1})})\int_{B_{1}}[|(c^{ij,kl})^{h_{p}}|^{2}+|(f^{kl})^{h_{p}}|^{2}]dx. \label{star3} \end{align} Now by uniform ellipticity (\ref{LH}), the left hand side of (\ref{star}) is bounded below: \begin{equation} \Lambda\int_{B_{1}}\tau^{4}\left\Vert D^{2}v\right\Vert ^{2}dx\leq\int_{B_{1}}\tau^{4}c^{ij,kl}(x+he_{p})v_{ij}(x)v_{kl}(x)dx. \label{star2} \end{equation} Combining (\ref{star}), (\ref{star0}), (\ref{tryagain}), (\ref{star3}) and (\ref{star2}), and choosing $\varepsilon$ appropriately, we get \begin{align*} & \frac{\Lambda}{2}\int_{B_{1}}\tau^{4}\left\Vert D^{2}v\right\Vert ^{2}dx\\ & \leq C_{11}(||\tau||_{W^{2,\infty}(B_{1})},||u||_{W^{2,\infty}(B_{1})}^{2})(\int_{B_{1}}|(f^{kl})^{h_{p}}|^{2}+|c^{ij,kl}|^{2}+|(c^{ij,kl})^{h_{p}}|^{2})\\ & \leq C_{12}(||\tau||_{W^{2,\infty}(B_{1})},||u||_{W^{2,\infty}(B_{1})}^{2},||f^{kl}||_{W^{1,2}(B_{1})}^{2},\left\Vert c^{ij,kl}\right\Vert _{W^{1,2}(B_{1})}^{2},\Lambda). \end{align*} Now this estimate is uniform in $h$ and the direction $e_{p}$, so we conclude that the difference quotients of $u$ are uniformly bounded in $W^{2,2}(B_{1/2})$. Hence $u\in W^{3,2}(B_{1/2})$ and \begin{align*} & ||D^{3}u||_{L^{2}(B_{1/2})}\\ & \leq\frac{2C_{12}}{\Lambda}(||\tau||_{W^{2,\infty}(B_{1})},||u||_{W^{2,\infty}(B_{1})}^{2},||f^{kl}||_{W^{1,2}(B_{1})}^{2},\left\Vert c^{ij,kl}\right\Vert _{W^{1,2}(B_{1})}^{2},\Lambda). \end{align*} \end{proof} We now prove Proposition \ref{prop4}. \begin{proof} We begin by taking a difference quotient of the equation \[ \int(c^{ij,kl}u_{ij}+f^{kl})\eta_{kl}dx=0 \] in the direction $e_{m}$. This gives \[ \int[(c^{ij,kl}(x))^{h_{m}}u_{ij}(x)+c^{ij,kl}(x+he_{m})u_{ij}^{h_{m}}(x)+(f^{kl})^{h_{m}}]\eta_{kl}(x)dx=0, \] which gives us the following PDE in $u_{ij}^{h_{m}}:$ \[ \int c^{ij,kl}(x+he_{m})u_{ij}^{h_{m}}(x)\eta_{kl}(x)dx=\int q(x)\eta_{kl}(x)dx, \] where \[ q(x)=-(f^{kl})^{h_{m}}(x)-(c^{ij,kl}(x))^{h_{m}}u_{ij}(x). \] Note that~$q\in C^{\alpha}(B_{1})$ and~$c^{ij,kl}(x+he_{m})$ is still elliptic for all $x$ in~$B_{1}$. For compactness of notation we denote \begin{equation} g=u^{h_{m}} \label{defineg} \end{equation} and replace $c^{ij,kl}(x+he_{m})$ with $c^{ij,kl},$ as the difference is immaterial.
Our equation reduces to \begin{equation} \int c^{ij,kl}g_{ij}\eta_{kl}dx=\int q\eta_{kl}dx. \label{eq787} \end{equation} Using integration by parts we have \begin{align*} \int c^{ij,kl}g_{ij}\eta_{kl}dx & =-\int q_{l}\eta_{k}dx\\ & =-\int(q-q(0))_{l}\eta_{k}dx\\ & =\int(q-q(0))\eta_{kl}dx. \end{align*} Now for each fixed $r<1$ we write $g=v+w$ where $w$ satisfies the following constant coefficient PDE on~$B_{r}\subseteq B_{1}:$ \begin{align} \int_{B_{r}(0)}c^{ij,kl}(0)w_{ij}\eta_{kl}dx & =0\label{testv}\\ \forall\eta & \in C_{0}^{\infty}(B_{r}(0))\nonumber\\ w & =g\text{ on }\partial B_{r}\nonumber\\ \nabla w & =\nabla g\text{ on }\partial B_{r}.\nonumber \end{align} By the Lax--Milgram theorem, the above PDE with the given boundary conditions has a unique solution. By combining (\ref{eq787}) and (\ref{testv}) we conclude \begin{equation} \int_{B_{r}}c^{ij,kl}(0)v_{ij}\eta_{kl}dx=\int_{B_{r}}(c^{ij,kl}(0)-c^{ij,kl}(x))g_{ij}\eta_{kl}dx+\int_{B_{r}}(q-q(0))\eta_{kl}dx. \label{testv2} \end{equation} Now $w$ is smooth (again see \cite[Theorem 33.10]{Driver03}), and $g=u^{h_{m}}$ is $C^{2,\alpha},$ so $v=g-w$ is $C^{2,\alpha}$ and can be well approximated by smooth test functions in $H_{0}^{2}(B_{r}).$ It follows that $v$ can be used as a test function in (\ref{testv2}): On the left hand side we have by (\ref{LH}) \[ \left[ \int_{B_{r}}c^{ij,kl}(0)v_{ij}v_{kl}dx\right] ^{2}\geq\left[ \Lambda\int_{B_{r}}|D^{2}v|^{2}dx\right] ^{2}. \] Defining \begin{equation} \zeta(r)=\sup\{| c^{ij,kl}(x)-c^{ij,kl}(y)|:x,y\in B_{r}\} \label{ccalpha} \end{equation} and using the Cauchy-Schwarz inequality we get \[ \left[ \int_{B_{r}}(c^{ij,kl}(0)-c^{ij,kl}(x))g_{ij}v_{kl}dx\right] ^{2}\leq\zeta^{2}(r)\int_{B_{r}}|D^{2}g|^{2}dx\int_{B_{r}}|D^{2}v|^{2}dx. \] Using H\"{o}lder's inequality, \[ \left[ \int_{B_{r}}\left\vert (q(x)-q(0))v_{kl}(x)\right\vert dx\right] ^{2}\leq\int_{B_{r}}|q(x)-q(0)|^{2}dx\int_{B_{r}}|D^{2}v|^{2}dx. \] This gives us \[ \Lambda^{2}\left[ \int_{B_{r}}|D^{2}v|^{2}dx\right] ^{2}\leq\zeta^{2}(r)\int_{B_{r}}|D^{2}g|^{2}dx\int_{B_{r}}|D^{2}v|^{2}dx+\int_{B_{r}}|q(x)-q(0)|^{2}dx\int_{B_{r}}|D^{2}v|^{2}dx, \] which implies \begin{equation} \Lambda^{2}\int_{B_{r}}|D^{2}v|^{2}dx\leq\zeta^{2}(r)\int_{B_{r}}|D^{2}g|^{2}dx+\int_{B_{r}}|q(x)-q(0)|^{2}dx. \label{eq799} \end{equation} Using Corollary \ref{Cor2}, for any $0<\rho\leq r$ we get \begin{equation} \int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2}dx\leq4C_{1}(\rho/r)^{n}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}+\left( 2+8C_{1}\right) \left\Vert D^{2}v\right\Vert _{L^{2}(B_{r})}^{2}. \label{eq800} \end{equation} Now combining (\ref{eq800}) and (\ref{eq799}) we get \begin{align} \int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2}dx & \leq4C_{1}(\rho/r)^{n}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}\nonumber\\ & +\frac{\left( 2+8C_{1}\right) }{\Lambda^{2}}\left[ \zeta^{2}(r)\int_{B_{r}}|D^{2}g|^{2}dx+\int_{B_{r}}|q(x)-q(0)|^{2}dx\right] \nonumber\\ & =\left[ \frac{\left( 2+8C_{1}\right) \zeta^{2}(r)}{\Lambda^{2}}+4C_{1}(\rho/r)^{n}\right] \int_{B_{r}}|D^{2}g|^{2}dx\nonumber\\ & +\frac{\left( 2+8C_{1}\right) }{\Lambda^{2}}\int_{B_{r}}|q(x)-q(0)|^{2}dx.
\label{A0} \end{align} Also from Corollary \ref{Cor2}, \begin{align*} \int_{B_{\rho}}\left\vert D^{2}g-(D^{2}g)_{\rho}\right\vert ^{2}dx & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}dx\\ & +\left( 8+16C_{2}\right) \int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}dx\\ & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}dx\\ & +\frac{\left( 8+16C_{2}\right) }{\Lambda^{2}}\left[ \zeta^{2}(r)\int_{B_{r}}|D^{2}g|^{2}dx+\int_{B_{r}}|q(x)-q(0)|^{2}dx\right] . \end{align*} Because $c^{ij,kl}\in C^{1,\alpha}$ we have from (\ref{ccalpha}) that \begin{equation} \zeta(r)^{2}\leq C_{13}r^{2\alpha}. \end{equation} Again $q$ is a~$C^{\alpha}$ function, which implies \[ \left\vert q(x)-q(0)\right\vert \leq\left\Vert q\right\Vert _{C^{\alpha}(B_{1})}|x|^{\alpha} \] and \[ \int_{B_{r}}|q-q(0)|^{2}dx\leq C_{14}\left\Vert q\right\Vert _{C^{\alpha}(B_{1})}^{2}r^{n+2\alpha}. \] So we have \begin{align} & \int_{B_{\rho}}|D^{2}g-(D^{2}g)_{\rho}|^{2}\label{A1}\\ & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}\nonumber\\ & +\frac{\left( 8+16C_{2}\right) }{\Lambda^{2}}C_{13}r^{2\alpha}\int_{B_{r}}|D^{2}g|^{2}\nonumber\\ & +\frac{\left( 8+16C_{2}\right) }{\Lambda^{2}}C_{14}\left\Vert q\right\Vert _{C^{\alpha}(B_{1})}^{2}r^{n+2\alpha}.\nonumber \end{align} For $r<r_{0}<1/4$ to be determined, we have from (\ref{A0}) \[ \int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2}\leq C_{15}\left\{ [(\rho/r)^{n}+r^{2\alpha}]\int_{B_{r}}\left\vert D^{2}g\right\vert ^{2}+r_{0}^{2\alpha+2\delta}r^{n-2\delta}\right\} , \] where $\delta$ is some positive number. Now we apply \cite[Lemma 3.4]{HL}. In particular, take \begin{align*} \phi(\rho) & =\int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2}\\ A & =C_{15}\\ B & =r_{0}^{2\alpha+2\delta}\\ \alpha & =n\\ \beta & =n-2\delta\\ \gamma & =n-\delta. \end{align*} There exists $\varepsilon_{0}(A,\alpha,\beta,\gamma)$ such that if \begin{equation} r_{0}^{2\alpha}\leq\varepsilon_{0} \label{rnot} \end{equation} we have \[ \phi(\rho)\leq C_{15}\left\{ [(\rho/r)^{n}+\varepsilon_{0}]\phi(r)+r_{0}^{2\alpha+2\delta}r^{n-2\delta}\right\} \] and the conclusion of \cite[Lemma 3.4]{HL} says that for $\rho<r_{0}$ \begin{align*} \phi(\rho) & \leq C_{16}\left\{ (\rho/r_{0})^{\gamma}\phi(r_{0})+r_{0}^{2\alpha+2\delta}\rho^{n-2\delta}\right\} \\ & \leq C_{16}\frac{1}{r_{0}^{n-\delta}}\rho^{n-\delta}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r_{0}})}^{2}+r_{0}^{2\alpha+2\delta}\rho^{n-2\delta}\\ & \leq C_{17}\rho^{n-2\delta}. \end{align*} This $C_{17}$ depends on $r_{0}$, which is chosen by (\ref{rnot}), and on $\left\Vert D^{2}g\right\Vert _{L^{2}(B_{3/4})}$. So there is a positive uniform radius upon which this holds for points well in the interior. In particular, we choose $r_{0}\in(0,1/4)$ so that the estimate can be applied uniformly at points centered in $B_{1/2}(0)$ whose balls remain in $B_{3/4}(0)$.
Turning back to (\ref{A1}), we now have \begin{align*} \int_{B_{\rho}}|D^{2}g-(D^{2}g)_{\rho}|^{2} & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}+C_{18}r^{2\alpha}r^{n-2\delta}\\ & +C_{19}\left\Vert q\right\Vert _{C^{\alpha}(B_{1})}^{2}r^{n+2\alpha}\\ & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}+C_{20}r^{n+2\alpha-2\delta}. \end{align*} Again we apply \cite[Lemma 3.4]{HL}: This time, take \begin{align*} \phi(\rho) & =\int_{B_{\rho}}|D^{2}g-(D^{2}g)_{\rho}|^{2}\\ A & =4C_{2}\\ B & =C_{20}\\ \alpha & =n+2\\ \beta & =n+2\alpha-2\delta\\ \gamma & =n+2\alpha \end{align*} and conclude that for any $r<r_{0}$ \begin{align*} \int_{B_{r}}|D^{2}g-(D^{2}g)_{r}|^{2} & \leq C_{21}\left\{ \frac{1}{r_{0}^{n+2\alpha}}\int_{B_{r_{0}}}|D^{2}g-(D^{2}g)_{r_{0}}|^{2}r^{n+2\alpha}+C_{20}r^{n+2\alpha-2\delta}\right\} \\ & \leq C_{22}r^{n+2\alpha-2\delta} \end{align*} with $C_{22}$ depending on $r_{0},\left\Vert D^{2}g\right\Vert _{L^{2}(B_{3/4})}$, $\left\Vert q\right\Vert _{C^{\alpha}(B_{1})}$ etc. It follows by \cite[Theorem 3.1]{HL} that $D^{2}g\in C^{\alpha-\delta}(B_{1/4});$ in particular, $D^{2}g$ must be bounded locally: \begin{equation} \left\Vert D^{2}g\right\Vert _{L^{\infty}(B_{1/4})}\leq C_{23}\left\{ 1+\left\Vert D^{2}g\right\Vert _{L^{2}(B_{1/2})}\right\} . \label{repeatlater} \end{equation} This allows us to bound \[ \int_{B_{r}}|D^{2}g|^{2}\leq C_{24}r^{n}, \] which we can plug back into (\ref{A1}): \begin{align*} \int_{B_{\rho}}|D^{2}g-(D^{2}g)_{\rho}|^{2} & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}+C_{25}r^{2\alpha}C_{24}r^{n}\\ & +C_{19}\left\Vert q\right\Vert _{C^{\alpha}(B_{1})}^{2}r^{n+2\alpha}\\ & \leq C_{26}r^{n+2\alpha}. \end{align*} This is precisely the hypothesis in \cite[Theorem 3.1]{HL}. We conclude that \[ \left\Vert D^{2}g\right\Vert _{C^{\alpha}(B_{1/4})}\leq C_{27}\left\{ \sqrt{C_{26}}+\left\Vert D^{2}g\right\Vert _{L^{2}(B_{1/2})}\right\} . \] Recalling (\ref{defineg}), we see that $u$ must enjoy uniform $C^{3,\alpha}$ estimates on the interior, and the result follows. \end{proof} \section{Proof of the Theorem} The propositions in the previous section allow us to prove the following Corollary, from which the Main Theorem will follow. \begin{corollary} Suppose~$u\in C^{N,\alpha}(B_{1})$, $N\geq2,$ satisfies the following regular (recall (\ref{Bdef})) fourth order equation \[ \int_{\Omega}a^{ij,kl}(D^{2}u)u_{ij}\eta_{kl}dx=0,\text{ }\forall\eta\in C_{0}^{\infty}(\Omega). \] Then \[ \left\Vert u\right\Vert _{C^{N+1,\alpha}(B_{r})}\leq C(n,b,\left\Vert u\right\Vert _{W^{N,\infty}(B_{1})}). \] In particular, \[ u\in C^{N,\alpha}(B_{1})\implies u\in C^{N+1,\alpha}(B_{r}). \] \end{corollary} \textbf{Case 1} $N=2.$ The function $u$ is in $C^{2,\alpha}\left( B_{1}\right) $ and hence also in $W^{2,\infty}\left( B_{1}\right)$. By approximation (\ref{eq1}) holds for $\eta\in W_{0}^{2,\infty},$ in particular, for \[ \eta=-[\tau^{4}u^{h_{m}}]^{-h_{m}} \] where $\tau\in C_{c}^{\infty}\left( B_{1}\right) $ is a cutoff function in $B_{1}$ that is $1$ on~$B_{1/2}$, and the superscript $h_{m}$ refers to the difference quotient. As before, we have chosen $h$ small enough (depending on~$\tau$) so that~$\eta$ is well defined. We have \[ \int_{\Omega}a^{ij,kl}(D^{2}u)u_{ij}\left[ \tau^{4}u^{h_{m}}\right] _{kl}dx=0.
\] Integrating by parts as before with respect to the difference quotient, we get \[ \int_{B_{1}}[a^{ij,kl}(D^{2}u)u_{ij}]^{h_{m}}[\tau^{4}u^{h_{m}}]_{kl}dx=0. \] Let $v=u^{h_{m}}$. Observe that the first difference quotient can be expressed as \begin{align} \lbrack a^{ij,kl}(D^{2}u)u_{ij}]^{h_{m}}(x) & =a^{ij,kl}(D^{2}u(x+he_{m}))\frac{u_{ij}(x+he_{m})-u_{ij}(x)}{h}\label{diff_of_a}\\ & +\frac{1}{h}\left[ a^{ij,kl}(D^{2}u(x+he_{m}))-a^{ij,kl}(D^{2}u(x))\right] u_{ij}(x)\nonumber\\ & =a^{ij,kl}(D^{2}u(x+he_{m}))v_{ij}(x)\nonumber\\ & +\left[ \int_{0}^{1}\frac{\partial a^{ij,kl}}{\partial u_{pq}}(tD^{2}u(x+he_{m})+(1-t)D^{2}u(x))dt\right] v_{pq}(x)u_{ij}(x).\nonumber \end{align} We get \begin{equation} \int_{B_{1}}\tilde{b}^{ij,kl}v_{ij}[\tau^{4}v]_{kl}dx=0 \label{dq3} \end{equation} where \begin{equation} \tilde{b}^{ij,kl}(x)=a^{ij,kl}(D^{2}u(x+he_{m}))+\left[ \int_{0}^{1}\frac{\partial a^{pq,kl}}{\partial u_{ij}}(tD^{2}u(x+he_{m})+(1-t)D^{2}u(x))dt\right] u_{pq}(x). \label{btwid} \end{equation} Expanding derivatives of the second factor in (\ref{dq3}) and collecting terms gives us \[ \int_{B_{1}}\tilde{b}^{ij,kl}v_{ij}\tau^{4}v_{kl}dx\leq\int_{B_{1}}\left\vert \tilde{b}^{ij,kl}\right\vert \left\vert v_{ij}\right\vert \tau^{2}C_{28}(\tau,D\tau,D^{2}\tau)\left( 1+|v|+|Dv|\right) dx. \] Now for $h$ small, $\tilde{b}^{ij,kl}$ closely approximates $b^{ij,kl}$, so for $h$ small enough $\tilde{b}^{ij,kl}$ also satisfies (\ref{Bcondition}), with a slightly smaller constant that we continue to denote by $\Lambda_{1}$. Applying (\ref{Bcondition}) and Young's inequality, \[ \int_{B_{1}}\tau^{4}\Lambda_{1}|D^{2}v|^{2}\leq C_{28}\sup\tilde{b}^{ij,kl}\int_{B_{1}}\left( \varepsilon\tau^{4}|D^{2}v|^{2}+C_{32}\frac{1}{\varepsilon}(1+|v|+|Dv|)^{2}\right) dx. \] That is, \[ \int_{B_{1/2}}|D^{2}v|^{2}\leq C_{29}\int_{B_{1}}(1+|v|+|Dv|)^{2}dx. \] Now this estimate is uniform in $h$ (for $h$ small enough) and the direction $e_{m}$, so we conclude that the difference quotients $u^{h_{m}}$ are uniformly bounded in $W^{2,2}(B_{1/2})$, and hence $u\in W^{3,2}(B_{1/2})$. This also shows that \[ ||D^{3}u||_{L^{2}(B_{1/2})}\leq C_{30}\left( ||Du||_{L^{2}(B_{1})},\left\Vert D^{2}u\right\Vert _{L^{2}(B_{1})}\right) . \] Remark: We only used uniform continuity of $D^{2}u$ to allow us to take the limit; we did not require a precise modulus of continuity. For the next step, we are not quite able to use Proposition \ref{prop4} because the coefficients $a^{ij,kl}$ are only known to be $W^{1,2}$. So we proceed by hand. We begin by taking a single difference quotient \[ \int_{B_{1}}[a^{ij,kl}(D^{2}u)u_{ij}]^{h_{m}}\eta_{kl}dx=0 \] and, arriving at the equation in the same fashion as (\ref{dq3}) above (this time letting $g=u^{h_{m}}$), we have \[ \int_{B_{1}}\tilde{b}^{ij,kl}g_{ij}(x)\eta_{kl}dx=0. \] Inspecting (\ref{btwid}) we see that $\tilde{b}^{ij,kl}$ is $C^{\alpha}:$ \[ \left\Vert \tilde{b}^{ij,kl}(x)-\tilde{b}^{ij,kl}(y)\right\Vert \leq C_{31}\left\vert x-y\right\vert ^{\alpha} \] where $C_{31}$ depends on $\left\Vert D^{2}u\right\Vert _{C^{\alpha}}$ and on bounds of $Da^{ij,kl}$ and $D^{2}a^{ij,kl}.$ As in the proof of Proposition \ref{prop4}, for a fixed $r<1$ we let $w$ solve the boundary value problem \begin{align*} \int\tilde{b}^{ij,kl}(0)w_{ij}\eta_{kl}dx & =0,\forall\eta\in C_{0}^{\infty}(B_{r})\\ w & =g\text{ on }\partial B_{r}\\ \nabla w & =\nabla g\text{ on }\partial B_{r}. \end{align*} Let $v=g-w.$ Note that \[ \int\tilde{b}^{ij,kl}(0)v_{ij}\eta_{kl}dx=\int\left( \tilde{b}^{ij,kl}(0)-\tilde{b}^{ij,kl}(x)\right) g_{ij}\eta_{kl}dx. \] Now $v$ vanishes to second order on the boundary, and we may use $v$ as a test function.
We get \[ \int\tilde{b}^{ij,kl}(0)v_{ij}v_{kl}dx=\int\left( \tilde{b}^{ij,kl}(0)-\tilde{b}^{ij,kl}(x)\right) g_{ij}v_{kl}dx. \] As before, \[ \left( \Lambda\int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}dx\right) ^{2}\leq\left[ \sup_{x\in B_{r}}\left\vert \tilde{b}^{ij,kl}(0)-\tilde{b}^{ij,kl}(x)\right\vert \right] ^{2}\int_{B_{r}}\left\vert D^{2}g\right\vert ^{2}dx\int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}dx. \] Defining \begin{align} \zeta(r) & =\sup\{\left\vert \tilde{b}^{ij,kl}(x)-\tilde{b}^{ij,kl}(y)\right\vert :x,y\in B_{r}\}\label{alphab}\\ & \leq4^{\alpha}C_{31}r^{\alpha},\nonumber \end{align} then \[ \left( \int_{B_{r}}(\tilde{b}^{ij,kl}(0)-\tilde{b}^{ij,kl}(x))g_{ij}v_{kl}dx\right) ^{2}\leq\zeta^{2}(r)\int_{B_{r}}\left\vert D^{2}g\right\vert ^{2}\int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}. \] So now we have \[ \int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}\leq\frac{\zeta^{2}(r)}{\Lambda^{2}}\int_{B_{r}}\left\vert D^{2}g\right\vert ^{2}. \] Using Corollary \ref{Cor2}, for any $0<\rho\leq r$ we get \begin{align} \int_{B_{\rho}}\left\vert D^{2}g-(D^{2}g)_{\rho}\right\vert ^{2} & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}\nonumber\\ & +\left( 8+16C_{2}\right) \int_{B_{r}}\left\vert D^{2}v\right\vert ^{2}\nonumber\\ & \leq4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}+\frac{\left( 8+16C_{2}\right) \zeta^{2}(r)}{\Lambda^{2}}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}. \label{fromcor6} \end{align} Also by Corollary \ref{Cor2}, \begin{align*} \int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2} & \leq4C_{1}(\rho/r)^{n}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}+\left( 2+8C_{1}\right) \left\Vert D^{2}v\right\Vert _{L^{2}(B_{r})}^{2}\\ & \leq4C_{1}(\rho/r)^{n}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}+\left( 2+8C_{1}\right) \frac{\zeta^{2}(r)}{\Lambda^{2}}\left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}. \end{align*} This implies \[ \int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2}\leq\left( 4C_{1}(\rho/r)^{n}+\frac{\left( 2+8C_{1}\right) 4^{2\alpha}C_{31}^{2}}{\Lambda^{2}}r^{2\alpha}\right) \left\Vert D^{2}g\right\Vert _{L^{2}(B_{r})}^{2}. \] Now we can apply \cite[Lemma 3.4]{HL} again, this time with \begin{align*} \phi(\rho) & =\int_{B_{\rho}}\left\vert D^{2}g\right\vert ^{2}\\ A & =4C_{1}\\ \alpha & =n\\ B,\beta & =0\\ \gamma & =n-2\delta\\ \varepsilon & =\left( 2+8C_{1}\right) 4^{2\alpha}C_{31}^{2}r^{2\alpha}/\Lambda^{2}. \end{align*} There exists a constant $\varepsilon_{0}(A,\alpha,\gamma)$ such that by choosing \[ r_{0}^{2\alpha}\leq\frac{\varepsilon_{0}\Lambda^{2}}{\left( 2+8C_{1}\right) 4^{2\alpha}C_{31}^{2}}<\frac{1}{4} \] we may conclude that for $0<r\leq r_{0}$ \begin{equation} \int_{B_{r}}\left\vert D^{2}g\right\vert ^{2}\leq C_{32}r^{n-2\delta}\frac{\int_{B_{r_{0}}}\left\vert D^{2}g\right\vert ^{2}}{r_{0}^{n-2\delta}}. \label{conclusion5} \end{equation} Next, for small $\rho<r<r_{0}$, combining (\ref{fromcor6}), (\ref{alphab}) and (\ref{conclusion5}) we have \begin{align} \int_{B_{\rho}}\left\vert D^{2}g-(D^{2}g)_{\rho}\right\vert ^{2} & \leq 4C_{2}(\rho/r)^{n+2}\int_{B_{r}}\left\vert D^{2}g-(D^{2}g)_{r}\right\vert ^{2}\label{bigintegral}\\ & +\frac{\left( 8+16C_{2}\right) 4^{2\alpha}C_{31}^{2}}{\Lambda^{2}}\frac{\int_{B_{r_{0}}}\left\vert D^{2}g\right\vert ^{2}}{r_{0}^{n-2\delta}}C_{32}r^{n-2\delta}r^{2\alpha}\nonumber\\ & \leq C_{33}r^{n+2\alpha-2\delta}\nonumber \end{align} with $C_{33}$ depending on $\left\Vert D^{2}g\right\Vert _{L^{2}(B_{3/4})},r_{0},\varepsilon_{0}$.
Again, we apply \cite[Theorem 3.1]{HL} to conclude $D^{2}g\in C^{\alpha-\delta}(B_{1/4}).$ From here, the argument is identical to the argument following (\ref{repeatlater}). We conclude that \[ \left\Vert D^{2}g\right\Vert _{C^{\alpha}(B_{1/4})}\leq C_{34}\left\{ 1+\left\Vert D^{2}g\right\Vert _{L^{2}(B_{3/4})}\right\} . \] Substituting $g=u^{h_{m}}$, we see that $u$ must enjoy uniform $C^{3,\alpha}$ estimates on the interior, and the result follows. \textbf{Case 2} $N=3.$ We may take a difference quotient of (\ref{eq1}) directly \[ \int_{\Omega}\left[ a^{ij,kl}(D^{2}u)u_{ij}\right] ^{h_{m}}\eta_{kl}dx=0,\text{ }\forall\eta\in C_{0}^{\infty}(\Omega). \] (To be more clear, we are using a slightly offset test function $\eta(x+he_{m})$ and then using a change of variables, subtracting, and dividing by $h.$) We get \[ \int_{B_{1}}\left[ a^{ij,kl}(D^{2}u(x+he_{m}))u_{ij}^{h_{m}}(x)+\frac{\partial a^{ij,kl}}{\partial u_{pq}}(M^{\ast}(x))u_{pq}^{h_{m}}(x)u_{ij}(x)\right] \eta_{kl}dx=0, \] where $M^{\ast}(x)=t^{\ast}D^{2}u(x+he_{m})+(1-t^{\ast})D^{2}u(x)$ and $t^{\ast}\in\lbrack0,1]$. Now we are assuming that $u\in C^{3,\alpha}(B_{1}),$ so the first and second derivatives of the difference quotient will converge to the second and third derivatives of $u$, uniformly. We can then apply dominated convergence, passing the limit as $h\rightarrow0$ inside the integral; writing $v=u_{m}$ as before, we obtain \[ \int_{B_{1}}\left[ a^{ij,kl}(D^{2}u(x))v_{ij}(x)+\frac{\partial a^{pq,kl}}{\partial u_{ij}}\left( D^{2}u(x)\right) v_{ij}(x)u_{pq}(x)\right] \eta_{kl}dx=0, \] that is, \begin{equation} \int_{B_{1}}b^{ij,kl}(D^{2}u(x))v_{ij}(x)\eta_{kl}(x)dx=0,\text{ \ \ }\forall\eta\in C_{0}^{\infty}(\Omega). \label{eqb3} \end{equation} It follows that $v\in C^{2,\alpha}$ satisfies a fourth order double divergence equation, with coefficients in $C^{1,\alpha}.$ First, we apply Proposition \ref{prop3}: \[ \left\Vert D^{3}v\right\Vert _{L^{2}(B_{1/2})}\leq C_{35}\left( ||v||_{W^{2,\infty}(B_{1})}\right) (1+||b^{ij,kl}||_{W^{1,2}(B_{1})}). \] In particular, $u\in W^{4,2}(B_{1/2}).$ Next, we apply Proposition \ref{prop4}: \begin{align*} ||D^{3}v||_{C^{0,\alpha}(B_{1/4})} & \leq C(1+||D^{3}v||_{L^{2}(B_{1/2})})\leq C(||u||_{W^{2,\infty}(B_{1})},||b^{ij,kl}||_{W^{1,2}(B_{1})})\\ & \leq C_{36}(n,b,\left\Vert u\right\Vert _{C^{3,\alpha}(B_{1})}). \end{align*} We conclude that $u\in C^{4,\alpha}(B_{r})$ for any $r<1.$ \textbf{Case 3} $N\geq4$. Let $v=D^{\alpha}u$ for some multi-index $\alpha$ with $\left\vert \alpha\right\vert =N-2.$ Observe that taking the first difference quotient and then taking a limit yields (\ref{eqb3}), when $u\in C^{3,\alpha}.$ Now if $u\in C^{4,\alpha}$ we may take a difference quotient and limit of (\ref{eqb3}) to obtain \[ \int_{B_{1}}\left[ b^{ij,kl}(D^{2}u(x))u_{ijm_{1}m_{2}}(x)+\frac{\partial b^{ij,kl}}{\partial u_{pq}}(D^{2}u(x))u_{pqm_{2}}u_{ijm_{1}}\right] \eta_{kl}(x)dx=0,\text{ \ \ }\forall\eta\in C_{0}^{\infty}(\Omega), \] and if $u\in C^{N,\alpha}$, then $v\in C^{2,\alpha}$, so we may take $N-2$ difference quotients to obtain \begin{equation} \int_{B_{1}}\left[ b^{ij,kl}(D^{2}u(x))v_{ij}(x)+f^{kl}(x)\right] \eta_{kl}(x)dx=0,\text{ \ \ }\forall\eta\in C_{0}^{\infty}(\Omega), \label{inductiveeq} \end{equation} where \[ f^{kl}=D^{\alpha}\left( b^{ij,kl}(D^{2}u(x))u_{ij}\right) -b^{ij,kl}(D^{2}u(x))D^{\alpha}u_{ij}.
One can check, by applying the chain rule repeatedly, that $f^{kl}$ is $C^{1,\alpha}$. So we may apply Proposition \ref{prop3} to (\ref{inductiveeq}) and obtain
\[
\left\Vert D^{3}v\right\Vert _{L^{2}(B_{1/2})}\leq C_{37}(\left\Vert v\right\Vert _{W^{2,\infty}(B_{1})})(1+||b^{ij,kl}||_{W^{1,2}(B_{1})}),
\]
that is,
\[
\left\Vert u\right\Vert _{W^{N+1,2}(B_{r})}\leq C_{38}(n,b,\left\Vert u\right\Vert _{W^{N,\infty}(B_{1})}).
\]
Now apply Proposition \ref{prop4}:
\[
||D^{3}v||_{C^{0,\alpha}(B_{1/4})}\leq C_{39}(1+||D^{3}v||_{L^{2}(B_{3/4})}),
\]
that is,
\[
\left\Vert u\right\Vert _{C^{N+1,\alpha}(B_{r})}\leq C_{40}(n,b,\left\Vert u\right\Vert _{W^{N,\infty}(B_{1})}).
\]
The Main Theorem follows.

\section{Critical Points of Convex Functions of the Hessian}

Suppose that $F(D^{2}u)$ is either a convex or a concave function of $D^{2}u$, and that we have found a critical point of
\begin{equation}
\int_{\Omega}F(D^{2}u)dx\label{Ffunc}
\end{equation}
for some $\Omega\subset\mathbb{R}^{n}$, where we restrict to compactly supported variations, so that the Euler--Lagrange equation is (\ref{generic}). If we suppose that $F$ also satisfies the additional structure condition
\begin{equation}
\frac{\partial F(D^{2}u)}{\partial u_{ij}}=a^{pq,ij}(D^{2}u)u_{pq}\label{structc}
\end{equation}
for some $a^{ij,kl}$ satisfying (\ref{LH}), then we can derive smoothness from $C^{2,\alpha}$, as follows.

\begin{corollary}
Suppose $u\in C^{2,\alpha}(B_{1})$ is a critical point of (\ref{Ffunc}), where $F$ is a smooth function satisfying (\ref{structc}) with $a^{ij,kl}$ satisfying (\ref{LH}), and $F$ is uniformly convex or uniformly concave on $U\subseteq S^{n\times n}$, where $U$ is the range of $D^{2}u(B_{1})$ in the Hessian space. Then $u\in C^{\infty}(B_{r})$ for all $r<1$.
\end{corollary}

\begin{proof}
If $u$ is a critical point of (\ref{Ffunc}), then it satisfies the weak equation (\ref{eq1}) for the $a^{ij,kl}$ in (\ref{structc}). To apply the Main Theorem, all we need to show is that
\[
b^{ij,kl}(D^{2}u(x))=a^{ij,kl}(D^{2}u(x))+\frac{\partial a^{pq,kl}}{\partial u_{ij}}(D^{2}u(x))u_{pq}(x)
\]
satisfies (\ref{LH}). From (\ref{structc}),
\begin{equation}
\frac{\partial}{\partial u_{kl}}\left( \frac{\partial F(D^{2}u)}{\partial u_{ij}}\right) =a^{kl,ij}(D^{2}u)+\frac{\partial a^{pq,ij}(D^{2}u)}{\partial u_{kl}}u_{pq}.
\end{equation}
So
\[
b^{ij,kl}(D^{2}u(x))\xi_{ij}\xi_{kl}=\frac{\partial}{\partial u_{kl}}\left( \frac{\partial F(D^{2}u)}{\partial u_{ij}}\right) \xi_{ij}\xi_{kl}\geq\Lambda\left\vert \xi\right\vert ^{2}
\]
for some $\Lambda>0$, because $F$ is uniformly convex. If $F$ is concave, then $u$ is also a critical point of $-F$, and the same argument applies.
\end{proof}

\bigskip

We mention one special case.

\begin{lemma}
Suppose $F(D^{2}u)=f(w)$, where $w=(D^{2}u)^{T}(D^{2}u)$. Then
\begin{equation}
\frac{\partial F(D^{2}u)}{\partial u_{ij}}=a^{ij,kl}(D^{2}u)u_{kl}\label{squares}
\end{equation}
for some $a^{ij,kl}$, given explicitly in the proof.
\end{lemma}

\begin{proof}
Let
\[
w_{kl}=u_{ka}\delta^{ab}u_{bl}.
\]
Then
\begin{align*}
\frac{\partial F(D^{2}u)}{\partial u_{ij}} & =\frac{\partial f(w)}{\partial w_{kl}}\frac{\partial w_{kl}}{\partial u_{ij}}\\
& =\frac{\partial f(w)}{\partial w_{kl}}\left( \delta_{ka,ij}\delta^{ab}u_{bl}+u_{ka}\delta^{ab}\delta_{bl,ij}\right) \\
& =\frac{\partial f(w)}{\partial w_{kl}}\left( \delta_{ki}u_{jl}+u_{ki}\delta_{lj}\right) \\
& =\frac{\partial f(w)}{\partial w_{il}}\delta_{jm}u_{ml}+\frac{\partial f(w)}{\partial w_{kj}}u_{km}\delta_{im}\\
& =\frac{\partial f(w)}{\partial w_{il}}\delta_{jk}u_{kl}+\frac{\partial f(w)}{\partial w_{kj}}u_{kl}\delta_{il}.
\end{align*}
This shows (\ref{squares}) for
\[
a^{ij,kl}=\frac{\partial f(w)}{\partial w_{il}}\delta_{jk}+\frac{\partial f(w)}{\partial w_{kj}}\delta_{il}.
\]
\end{proof}

\bibliographystyle{amsalpha}
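As a simple sanity check of the lemma (included only as an illustration), take $f(w)=\operatorname{tr}(w)$, so that $F(D^{2}u)=\left\vert D^{2}u\right\vert ^{2}$. The formula above gives $a^{ij,kl}=2\delta_{il}\delta_{jk}$, and indeed
\[
a^{ij,kl}u_{kl}=2u_{ij}=\frac{\partial F(D^{2}u)}{\partial u_{ij}}.
\]
Since this $F$ is uniformly convex, the Corollary applies; of course, in this case the Euler--Lagrange equation is simply the biharmonic equation.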